CN114327225A - User interface for sharing contextually relevant media content - Google Patents


Publication number
CN114327225A
CN114327225A
Authority
CN
China
Prior art keywords
media items
user
media
input
suggested
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111244490.6A
Other languages
Chinese (zh)
Inventor
L·迪瓦恩
W·A·索伦蒂诺
G·铃木
M·勃兰特
C·西尔克雷斯
C·勒布朗
J·温格福斯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apple Inc
Original Assignee
Apple Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from DKPA201870385A (DK180171B1)
Application filed by Apple Inc
Publication of CN114327225A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/54Indexing scheme relating to G06F9/54
    • G06F2209/545Gui

Abstract

The invention provides a user interface for sharing contextually relevant media content. The present disclosure relates generally to managing and sharing contextually relevant media content. In some embodiments, a device receives an input and, in response, displays a set of suggested media items for sharing with a recipient, wherein the set is related to a message conversation with the recipient. After displaying the set of suggestions, the device transmits a message to the recipient as part of a message conversation, the message providing access to at least a portion of the set of suggested media items. In some implementations, a device receives, from an external device, an indication that a first user has shared a first set of media items with a second user. After receiving an indication that the first user has shared the first set of media items with the second user, the device outputs a prompt to share one or more suggested media items with the first user.

Description

User interface for sharing contextually relevant media content
The present application is a divisional application of the invention patent application No. 201811136445.7, entitled "User interface for sharing contextually relevant media content," filed on September 28, 2018.
Technical Field
The present disclosure relates generally to computer user interfaces, and more particularly to techniques for viewing and sharing related media items.
Background
The size of user media libraries and the amount of media shared among device users continue to grow. Therefore, it is increasingly desirable for devices to have sophisticated user interfaces for handling such activities.
Disclosure of Invention
However, some techniques for viewing and sharing related media items with electronic devices are generally cumbersome and inefficient. For example, some prior art techniques use complex and time-consuming user interfaces, which may require multiple key presses or keystrokes. The prior art requires more time than necessary, wasting user time and device energy. This latter consideration is particularly important in battery-powered devices.
Thus, the present technology provides faster, more efficient methods and interfaces for electronic devices to view and share related media items. Such methods and interfaces optionally complement or replace other methods for viewing or sharing related media items. Such methods and interfaces reduce the cognitive burden placed on the user and result in a more efficient human-machine interface. For battery-driven computing devices, such methods and interfaces conserve power and increase the time interval between battery charges. Further, such methods and interfaces reduce the number of unnecessary, extraneous, or repeated inputs by the user.
According to some embodiments, the method is performed on a device having a display and one or more input devices. The method comprises the following steps: receiving a first input via one or more input devices; in response to receiving the first input, displaying on the display a set of suggested media items for sharing with a recipient, wherein the set of suggestions relates to a message conversation with the recipient; after displaying the suggested set of media items, receiving, via the one or more input devices, a second input representing a request to transmit at least a portion of the suggested set of media items to the recipient; and in response to receiving the second input, transmitting a message to the recipient as part of a message conversation, the message providing access to at least a portion of the set of suggested media items.
According to some embodiments, a non-transitory computer-readable storage medium is described. The non-transitory computer readable storage medium stores one or more programs configured for execution by one or more processors of an electronic device with a display and one or more input devices, the one or more programs including instructions for: receiving a first input via one or more input devices; in response to receiving the first input, displaying on the display a set of suggested media items for sharing with a recipient, wherein the set of suggestions relates to a message conversation with the recipient; after displaying the suggested set of media items, receiving, via the one or more input devices, a second input representing a request to transmit at least a portion of the suggested set of media items to the recipient; and in response to receiving the second input, transmitting a message to the recipient as part of a message conversation, the message providing access to at least a portion of the set of suggested media items.
According to some embodiments, an electronic device is described. The electronic device includes: a display; one or more input devices; one or more processors; and memory storing one or more programs configured for execution by the one or more processors, the one or more programs including instructions for: receiving a first input via the one or more input devices; in response to receiving the first input, displaying on the display a set of suggested media items for sharing with a recipient, wherein the set of suggestions is related to a message conversation with the recipient; after displaying the suggested set of media items, receiving, via the one or more input devices, a second input representing a request to transmit at least a portion of the suggested set of media items to the recipient; and in response to receiving the second input, transmitting a message to the recipient as part of a message conversation, the message providing access to at least a portion of the set of suggested media items.
According to some embodiments, an electronic device is described. The electronic device includes: a display; one or more input devices; means for receiving a first input via the one or more input devices; means for displaying, on the display, a suggested set of media items for sharing with a recipient in response to receiving the first input, wherein the suggested set is related to a message conversation with the recipient; means for receiving, via the one or more input devices, a second input representing a request to transmit at least a portion of the suggested set of media items to the recipient after displaying the suggested set of media items; and means for sending a message to the recipient as part of the message conversation in response to receiving the second input, the message providing access to at least a portion of the suggested set of media items.
According to some embodiments, a transitory computer-readable storage medium is described. The transitory computer-readable storage medium stores one or more programs configured for execution by one or more processors of an electronic device with a display and one or more input devices, the one or more programs including instructions for: receiving a first input via one or more input devices; in response to receiving the first input, displaying on the display a set of suggested media items for sharing with a recipient, wherein the set of suggestions relates to a message conversation with the recipient; after displaying the suggested set of media items, receiving, via the one or more input devices, a second input representing a request to transmit at least a portion of the suggested set of media items to the recipient; and in response to receiving the second input, transmitting a message to the recipient as part of a message conversation, the message providing access to at least a portion of the set of suggested media items.
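As a loose illustration of the method recited above, the two-input flow (first input → display suggested set related to the conversation; second input → transmit a message granting access) can be sketched as follows. This is a minimal, hypothetical model: the names `MediaItem`, `Conversation`, `suggest_for_conversation`, and `share` are illustrative and are not part of any actual device API, and "related to the conversation" is modeled by one plausible criterion (the item depicts the recipient).

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class MediaItem:
    item_id: str
    people: frozenset  # people detected in the media item
    location: str

@dataclass
class Conversation:
    recipient: str
    messages: list = field(default_factory=list)

def suggest_for_conversation(library, conversation):
    """Handle the first input: return a suggested set of media items
    related to the conversation (here: items depicting the recipient)."""
    return [m for m in library if conversation.recipient in m.people]

def share(conversation, items):
    """Handle the second input: transmit a message, as part of the
    conversation, that provides access to the selected items."""
    message = {
        "link": "shared://collection",  # placeholder access link
        "items": [m.item_id for m in items],
    }
    conversation.messages.append(message)
    return message

library = [
    MediaItem("img1", frozenset({"Ann", "Bo"}), "beach"),
    MediaItem("img2", frozenset({"Cy"}), "city"),
    MediaItem("img3", frozenset({"Ann"}), "beach"),
]
conv = Conversation(recipient="Ann")
suggested = suggest_for_conversation(library, conv)  # img1 and img3
sent = share(conv, suggested)
```

Note that the message carries an access link rather than the media items themselves, mirroring the claim language "providing access to at least a portion of the set of suggested media items."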
According to some embodiments, the method is performed on a device having a display and one or more input devices. The method comprises the following steps: receiving, from an external device, an indication that a first user shares a first set of media items with a second user; after receiving an indication that the first user shares a first set of media items with the second user, outputting a prompt to share one or more suggested media items associated with the second user with the first user, the media items related to the first set of media items based on a context, wherein the context is determined based on the first set of media items and the one or more suggested media items are not included in the first set.
According to some embodiments, a non-transitory computer-readable storage medium is described. A non-transitory computer-readable storage medium stores one or more programs configured for execution by one or more processors of an electronic device with a display and one or more input devices, the one or more programs including instructions for receiving, from an external device, an indication that a first set of media items is shared by a first user with a second user; after receiving an indication that the first user shares a first set of media items with the second user, outputting a prompt to share one or more suggested media items associated with the second user with the first user, the media items related to the first set of media items based on a context, wherein the context is determined based on the first set of media items and the one or more suggested media items are not included in the first set.
According to some embodiments, an electronic device is described. The electronic device includes: a display; one or more input devices; one or more processors; and memory storing one or more programs configured for execution by the one or more processors, the one or more programs including instructions for receiving, from an external device, an indication that the first user has shared a first set of media items with a second user; after receiving an indication that the first user shares a first set of media items with the second user, outputting a prompt to share one or more suggested media items associated with the second user with the first user, the media items related to the first set of media items based on a context, wherein the context is determined based on the first set of media items and the one or more suggested media items are not included in the first set.
According to some embodiments, an electronic device is described. The electronic device includes: a display; one or more input devices; means for receiving, from an external device, an indication that a first user shares a first set of media items with a second user; means for outputting, after receiving an indication that the first user shares a first set of media items with the second user, a prompt to share one or more suggested media items associated with the second user with the first user, the media items related to a first set of media items based on a context, wherein the context is determined based on the first set of media items and the one or more suggested media items are not included in the first set.
According to some embodiments, a transitory computer-readable storage medium is described. A transitory computer-readable storage medium stores one or more programs configured for execution by one or more processors of an electronic device with a display and one or more input devices, the one or more programs including instructions for receiving, from an external device, an indication that a first set of media items is shared by a first user with a second user; after receiving an indication that the first user shares a first set of media items with the second user, outputting a prompt to share one or more suggested media items associated with the second user with the first user, the media items related to the first set of media items based on a context, wherein the context is determined based on the first set of media items and the one or more suggested media items are not included in the first set.
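The second method family above (receiving an indication that a first user shared a set, then prompting a reciprocal share of contextually related items not in that set) can likewise be sketched. Again a hypothetical model: `MediaItem`, `reciprocal_suggestions`, and the use of location as the shared-set context are illustrative assumptions, not a description of any actual implementation.

```python
from collections import namedtuple

# Illustrative media item; "location" stands in for the context signal.
MediaItem = namedtuple("MediaItem", ["item_id", "location"])

def reciprocal_suggestions(received_set, own_library):
    """Suggest items from the recipient's own library that match the
    context of the set just received (here, its locations) and that
    are not already included in the received set."""
    context_locations = {m.location for m in received_set}
    received_ids = {m.item_id for m in received_set}
    return [
        m for m in own_library
        if m.location in context_locations and m.item_id not in received_ids
    ]

# User A shared two Tahoe photos with user B:
received = [MediaItem("a1", "Tahoe"), MediaItem("a2", "Tahoe")]
# B's own library holds one matching item not in the received set:
own = [MediaItem("b1", "Tahoe"), MediaItem("b2", "Paris")]
suggested = reciprocal_suggestions(received, own)
prompt = f"Share {len(suggested)} item(s) back with the sender?"
```

The key constraints from the claim are both visible in the filter: the context is determined from the first set of media items, and the suggested items are excluded from that first set.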
Executable instructions for performing these functions are optionally included in a non-transitory computer-readable storage medium or other computer program product configured for execution by one or more processors. Executable instructions for performing these functions are optionally included in a transitory computer-readable storage medium or other computer program product configured for execution by one or more processors.
Thus, devices are provided with faster, more efficient methods and interfaces for viewing and sharing related media items, thereby increasing the effectiveness, efficiency, and user satisfaction of such devices. Such methods and interfaces may complement or replace other methods for viewing or sharing related media items.
Drawings
For a better understanding of the various described embodiments, reference should be made to the following detailed description taken in conjunction with the following drawings, wherein like reference numerals designate corresponding parts throughout the figures.
FIG. 1A is a block diagram illustrating a portable multifunction device with a touch-sensitive display in accordance with some embodiments.
Fig. 1B is a block diagram illustrating exemplary components for event processing, according to some embodiments.
FIG. 2 illustrates a portable multifunction device with a touch screen in accordance with some embodiments.
Fig. 3 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments.
Figure 4A illustrates an exemplary user interface for a menu of applications on a portable multifunction device according to some embodiments.
FIG. 4B illustrates an exemplary user interface for a multifunction device with a touch-sensitive surface separate from a display, in accordance with some embodiments.
Fig. 5A illustrates a personal electronic device, according to some embodiments.
Fig. 5B is a block diagram illustrating a personal electronic device, in accordance with some embodiments.
Fig. 5C-5D illustrate exemplary components of a personal electronic device with a touch-sensitive display and an intensity sensor, according to some embodiments.
Fig. 5E-5H illustrate exemplary components and user interfaces of a personal electronic device, according to some embodiments.
Figs. 6A-6AAC illustrate exemplary techniques and interfaces for viewing and sharing related media items, according to some embodiments.
Figs. 7A-7J are flow diagrams illustrating processes for viewing and sharing related media items, according to some embodiments.
Figs. 8A-8AQ illustrate exemplary techniques and interfaces for viewing and sharing related media items, according to some embodiments.
Figs. 9A-9G are flow diagrams illustrating processes for viewing and sharing related media items, according to some embodiments.
Detailed Description
The following description sets forth exemplary methods, parameters, and the like. It should be recognized, however, that such description is not intended as a limitation on the scope of the present disclosure, but is instead provided as a description of exemplary embodiments.
There is a need for electronic devices that provide more efficient methods and interfaces for viewing and sharing related media items. For example, there is a need to provide electronic devices that quickly and easily access relevant sharing suggestions, as well as improved interfaces for interacting with media of sharing suggestions. In addition, there is a need for devices that provide an improved interface for managing and saving received shared media. Such techniques may reduce the cognitive burden on users accessing media items for sharing, thereby increasing productivity. Moreover, such techniques may reduce processor power and battery power that would otherwise be wasted on redundant user inputs.
Figs. 1A-1B, 2, 3, 4A-4B, and 5A-5H below provide descriptions of exemplary devices for performing techniques for viewing and sharing related media items. Figs. 6A-6AAC illustrate exemplary user interfaces for viewing and sharing related media items. Figs. 7A-7J are flow diagrams illustrating methods of viewing and sharing related media items, according to some embodiments. The user interfaces in Figs. 6A-6AAC are used to illustrate the processes described below, including the processes in Figs. 7A-7J. Figs. 8A-8AQ illustrate exemplary user interfaces for viewing and sharing related media items. Figs. 9A-9G are flow diagrams illustrating methods of viewing and sharing related media items, according to some embodiments. The user interfaces in Figs. 8A-8AQ are used to illustrate the processes described below, including the processes in Figs. 9A-9G.
Although the following description uses the terms "first," "second," etc. to describe various elements, these elements should not be limited by the terms. These terms are only used to distinguish one element from another. For example, a first touch may be named a second touch and similarly a second touch may be named a first touch without departing from the scope of various described embodiments. The first touch and the second touch are both touches, but they are not the same touch.
The terminology used in the description of the various described embodiments herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description of the various described embodiments and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Depending on the context, the term "if" is optionally interpreted to mean "when," "upon," "in response to determining," or "in response to detecting." Similarly, the phrase "if it is determined" or "if [a stated condition or event] is detected" is optionally interpreted to mean "upon determining," "in response to determining," "upon detecting [the stated condition or event]," or "in response to detecting [the stated condition or event]," depending on the context.
Embodiments of electronic devices, user interfaces for such devices, and related processes for using such devices are described herein. In some embodiments, the device is a portable communication device, such as a mobile phone, that also contains other functions, such as PDA and/or music player functions. Exemplary embodiments of portable multifunction devices include, but are not limited to, the iPhone®, iPod touch®, and iPad® devices from Apple Inc. Other portable electronic devices, such as laptop or tablet computers with touch-sensitive surfaces (e.g., touch screen displays and/or touchpads), are optionally used. It should also be understood that in some embodiments, the device is not a portable communication device, but is a desktop computer with a touch-sensitive surface (e.g., a touch screen display and/or touchpad).
In the following discussion, an electronic device including a display and a touch-sensitive surface is described. However, it should be understood that the electronic device optionally includes one or more other physical user interface devices, such as a physical keyboard, mouse, and/or joystick.
The device typically supports various applications, such as one or more of the following: a mapping application, a presentation application, a word processing application, a website creation application, a disk authoring application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an email application, an instant messaging application, a fitness support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, and/or a digital video player application.
Various applications executing on the device optionally use at least one common physical user interface device, such as a touch-sensitive surface. One or more functions of the touch-sensitive surface and corresponding information displayed on the device are optionally adjusted and/or varied for different applications and/or within respective applications. In this way, a common physical architecture of the device (such as a touch-sensitive surface) optionally supports various applications with a user interface that is intuitive and clear to the user.
Attention is now directed to embodiments of portable devices having touch sensitive displays. FIG. 1A is a block diagram illustrating a portable multifunction device 100 with a touch-sensitive display system 112 in accordance with some embodiments. Touch-sensitive display 112 is sometimes referred to as a "touch screen" for convenience, and is sometimes referred to or called a "touch-sensitive display system". Device 100 includes memory 102 (which optionally includes one or more computer-readable storage media), a memory controller 122, one or more processing units (CPUs) 120, a peripherals interface 118, RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, an input/output (I/O) subsystem 106, other input control devices 116, and an external port 124. The device 100 optionally includes one or more optical sensors 164. Device 100 optionally includes one or more contact intensity sensors 165 for detecting the intensity of contacts on device 100 (e.g., a touch-sensitive surface, such as touch-sensitive display system 112 of device 100). Device 100 optionally includes one or more tactile output generators 167 for generating tactile outputs on device 100 (e.g., generating tactile outputs on a touch-sensitive surface such as touch-sensitive display system 112 of device 100 or touch panel 355 of device 300). These components optionally communicate over one or more communication buses or signal lines 103.
As used in this specification and claims, the term "intensity" of a contact on a touch-sensitive surface refers to the force or pressure (force per unit area) of a contact (e.g., a finger contact) on the touch-sensitive surface, or to a substitute (surrogate) for the force or pressure of a contact on the touch-sensitive surface. The intensity of the contact has a range of values that includes at least four different values and more typically includes hundreds of different values (e.g., at least 256). The intensity of the contact is optionally determined (or measured) using various methods and various sensors or combinations of sensors. For example, one or more force sensors below or adjacent to the touch-sensitive surface are optionally used to measure forces at different points on the touch-sensitive surface. In some implementations, force measurements from multiple force sensors are combined (e.g., a weighted average) to determine the estimated contact force. Similarly, the pressure-sensitive tip of the stylus is optionally used to determine the pressure of the stylus on the touch-sensitive surface. Alternatively, the size of the contact area detected on the touch-sensitive surface and/or changes thereof, the capacitance of the touch-sensitive surface in the vicinity of the contact and/or changes thereof and/or the resistance of the touch-sensitive surface in the vicinity of the contact and/or changes thereof are optionally used as a substitute for the force or pressure of the contact on the touch-sensitive surface. In some implementations, the surrogate measurement of contact force or pressure is used directly to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is described in units corresponding to the surrogate measurement). 
In some implementations, the surrogate measurement of contact force or pressure is converted into an estimated force or pressure, and the estimated force or pressure is used to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is a pressure threshold measured in units of pressure). The intensity of the contact is used as a property of the user input, allowing the user to access additional device functionality that would otherwise be inaccessible to the user on a smaller sized device with limited real estate for displaying affordances (e.g., on a touch-sensitive display) and/or receiving user input (e.g., via a touch-sensitive display, a touch-sensitive surface, or physical/mechanical controls, such as knobs or buttons).
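The passage above describes combining measurements from multiple force sensors (e.g., as a weighted average) into an estimated contact force and comparing it against an intensity threshold. A minimal sketch of that arithmetic, with made-up sensor readings and weights (nothing here reflects actual device firmware):

```python
def estimated_intensity(readings, weights):
    """Combine per-sensor force readings into a single estimated
    contact force via a weighted average, as described above."""
    total_weight = sum(weights)
    assert len(readings) == len(weights) and total_weight > 0
    return sum(r * w for r, w in zip(readings, weights)) / total_weight

def exceeds_threshold(readings, weights, threshold):
    """Compare the estimated contact force against an intensity
    threshold expressed in the same units as the measurement."""
    return estimated_intensity(readings, weights) > threshold

# Three sensors; the middle one, nearest the contact, is weighted higher:
force = estimated_intensity([0.2, 0.6, 0.4], [1.0, 2.0, 1.0])  # 0.45
deep_press = exceeds_threshold([0.2, 0.6, 0.4], [1.0, 2.0, 1.0], 0.4)
```

Weighting sensors by proximity to the contact is one plausible choice; the text only requires that multiple measurements be combined before the threshold comparison.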
As used in this specification and claims, the term "tactile output" refers to a physical displacement of a device relative to a previous position of the device, a physical displacement of a component of the device (e.g., a touch-sensitive surface) relative to another component of the device (e.g., a housing), or a displacement of a component relative to a center of mass of the device that will be detected by a user with the user's sense of touch. For example, where the device or a component of the device is in contact with a surface of the user that is sensitive to touch (e.g., a finger, palm, or other portion of the user's hand), the tactile output generated by the physical displacement will be interpreted by the user as a tactile sensation corresponding to a perceived change in a physical characteristic of the device or component of the device. For example, movement of a touch-sensitive surface (e.g., a touch-sensitive display or trackpad) is optionally interpreted by the user as a "down click" or "up click" of a physical actuation button. In some cases, a user will feel a tactile sensation such as a "down click" or "up click" even when there is no movement of a physical actuation button associated with the touch-sensitive surface that is physically pressed (e.g., displaced) by the user's movements. As another example, movement of the touch-sensitive surface is optionally interpreted or sensed by the user as "roughness" of the touch-sensitive surface even when there is no change in the smoothness of the touch-sensitive surface. While such interpretation of touch by a user will be limited by the user's individualized sensory perception, many sensory perceptions of touch are common to most users.
Thus, when a haptic output is described as corresponding to a particular sensory perception of a user (e.g., "click down," "click up," "roughness"), unless otherwise stated, the generated haptic output corresponds to a physical displacement of the device or a component thereof that would generate the sensory perception of a typical (or ordinary) user.
It should be understood that device 100 is merely one example of a portable multifunction device, and that device 100 optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of these components. The various components shown in fig. 1A are implemented in hardware, software, or a combination of both hardware and software, including one or more signal processing and/or application specific integrated circuits.
The memory 102 optionally includes high-speed random access memory, and also optionally includes non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Memory controller 122 optionally controls access to memory 102 by other components of device 100.
Peripheral interface 118 may be used to couple the input and output peripherals of the device to CPU 120 and memory 102. The one or more processors 120 run or execute various software programs and/or sets of instructions stored in the memory 102 to perform various functions of the device 100 and to process data. In some embodiments, peripherals interface 118, CPU 120, and memory controller 122 are optionally implemented on a single chip, such as chip 104. In some other embodiments, they are optionally implemented on separate chips.
RF (radio frequency) circuitry 108 receives and transmits RF signals, also referred to as electromagnetic signals. The RF circuitry 108 converts electrical signals to/from electromagnetic signals and communicates with communication networks and other communication devices via electromagnetic signals. RF circuitry 108 optionally includes well-known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a codec chipset, a Subscriber Identity Module (SIM) card, memory, and so forth. RF circuitry 108 optionally communicates with networks such as the internet, also known as the World Wide Web (WWW), intranets, and/or wireless networks such as cellular telephone networks, wireless Local Area Networks (LANs), and/or Metropolitan Area Networks (MANs), and other devices via wireless communication. RF circuitry 108 optionally includes well-known circuitry for detecting Near Field Communication (NFC) fields, such as by short-range communication radios. 
The wireless communication optionally uses any of a number of communication standards, protocols, and techniques, including, but not limited to, Global System for Mobile communications (GSM), Enhanced Data GSM Environment (EDGE), High Speed Downlink Packet Access (HSDPA), High Speed Uplink Packet Access (HSUPA), Evolution, Data-Only (EV-DO), HSPA+, Dual-Cell HSPA (DC-HSPDA), Long Term Evolution (LTE), Near Field Communication (NFC), Wideband Code Division Multiple Access (W-CDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Bluetooth Low Energy (BTLE), Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, IEEE 802.11n, and/or IEEE 802.11ac), Voice over Internet Protocol (VoIP), Wi-MAX, email protocols (e.g., Internet Message Access Protocol (IMAP) and/or Post Office Protocol (POP)), instant messaging (e.g., Extensible Messaging and Presence Protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document.
Audio circuitry 110, speaker 111, and microphone 113 provide an audio interface between a user and device 100. The audio circuitry 110 receives audio data from the peripheral interface 118, converts the audio data to electrical signals, and transmits the electrical signals to the speaker 111. The speaker 111 converts the electrical signals into sound waves audible to humans. The audio circuitry 110 also receives electrical signals converted from sound waves by the microphone 113. The audio circuit 110 converts the electrical signals to audio data and transmits the audio data to the peripheral interface 118 for processing. Audio data is optionally retrieved from and/or transmitted to memory 102 and/or RF circuitry 108 by peripheral interface 118. In some embodiments, the audio circuit 110 also includes a headset jack (e.g., 212 in fig. 2). The headset jack provides an interface between the audio circuitry 110 and a removable audio input/output peripheral such as an output-only headset or a headset having both an output (e.g., a monaural headset or a binaural headset) and an input (e.g., a microphone).
The I/O subsystem 106 couples input/output peripheral devices on the device 100, such as a touch screen 112 and other input control devices 116, to a peripheral interface 118. The I/O subsystem 106 optionally includes a display controller 156, an optical sensor controller 158, an intensity sensor controller 159, a haptic feedback controller 161, and one or more input controllers 160 for other input or control devices. The one or more input controllers 160 receive/transmit electrical signals from/to other input control devices 116. Other input control devices 116 optionally include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slide switches, joysticks, click wheels, and the like. In some alternative embodiments, one or more input controllers 160 are optionally coupled to (or not coupled to) any of: a keyboard, an infrared port, a USB port, and a pointing device such as a mouse. The one or more buttons (e.g., 208 in fig. 2) optionally include an up/down button for volume control of the speaker 111 and/or microphone 113. The one or more buttons optionally include a push button (e.g., 206 in fig. 2).
A quick press of the push button optionally disengages the lock on touch screen 112 or optionally begins a process of unlocking the device using gestures on the touch screen, as described in U.S. Patent No. 7,657,849, entitled "Unlocking a Device by Performing Gestures on an Unlock Image," filed December 23, 2005, which is hereby incorporated by reference in its entirety. A long press of the push button (e.g., 206) optionally turns the device 100 on or off. The functionality of one or more of the buttons is optionally customizable by the user. Touch screen 112 is used to implement virtual or soft buttons and one or more soft keyboards.
Touch-sensitive display 112 provides an input interface and an output interface between the device and the user. Display controller 156 receives electrical signals from touch screen 112 and/or transmits electrical signals to touch screen 112. Touch screen 112 displays visual output to a user. The visual output optionally includes graphics, text, icons, video, and any combination thereof (collectively "graphics"). In some embodiments, some or all of the visual output optionally corresponds to a user interface object.
Touch screen 112 has a touch-sensitive surface, sensor, or group of sensors that accept input from a user based on tactile and/or haptic contact. Touch screen 112 and display controller 156 (along with any associated modules and/or sets of instructions in memory 102) detect contact (and any movement or breaking of the contact) on touch screen 112 and convert the detected contact into interaction with user interface objects (e.g., one or more soft keys, icons, web pages, or images) displayed on touch screen 112. In an exemplary embodiment, the point of contact between touch screen 112 and the user corresponds to a finger of the user.
Touch screen 112 optionally uses LCD (liquid crystal display) technology, LPD (light emitting polymer display) technology, or LED (light emitting diode) technology, although other display technologies are used in other embodiments. Touch screen 112 and display controller 156 optionally detect contact and any movement or breaking thereof using any of a variety of touch sensing technologies now known or later developed, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch screen 112. In an exemplary embodiment, projected mutual capacitance sensing technology is used, such as that found in the iPhone® and iPod Touch® from Apple Inc.
The touch-sensitive display in some embodiments of touch screen 112 is optionally similar to the multi-touch sensitive trackpads described in the following U.S. patents: 6,323,846 (Westerman et al.), 6,570,557 (Westerman et al.), and/or 6,677,932 (Westerman), and/or U.S. patent publication 2002/0015024A1, each of which is hereby incorporated by reference in its entirety. However, touch screen 112 displays visual output from device 100, whereas touch-sensitive trackpads do not provide visual output.
In some embodiments, the touch-sensitive display of touch screen 112 is as described in the following patent applications: (1) U.S. patent application No. 11/381,313 entitled "Multipoint Touch Surface Controller," filed May 2, 2006; (2) U.S. patent application No. 10/840,862 entitled "Multipoint Touchscreen," filed May 6, 2004; (3) U.S. patent application No. 10/903,964 entitled "Gestures For Touch Sensitive Input Devices," filed July 30, 2004; (4) U.S. patent application No. 11/048,264 entitled "Gestures For Touch Sensitive Input Devices," filed January 31, 2005; (5) U.S. patent application No. 11/038,590 entitled "Mode-Based Graphical User Interfaces For Touch Sensitive Input Devices," filed January 18, 2005; (6) U.S. patent application No. 11/228,758 entitled "Virtual Input Device Placement On A Touch Screen User Interface," filed September 16, 2005; (7) U.S. patent application No. 11/228,700 entitled "Operation Of A Computer With A Touch Screen Interface," filed September 16, 2005; (8) U.S. patent application No. 11/228,737 entitled "Activating Virtual Keys Of A Touch-Screen Virtual Keyboard," filed September 16, 2005; and (9) U.S. patent application No. 11/367,749 entitled "Multi-Functional Hand-Held Device," filed March 3, 2006. All of these applications are incorporated herein by reference in their entirety.
Touch screen 112 optionally has a video resolution in excess of 100 dpi. In some embodiments, the touch screen has a video resolution of about 160 dpi. The user optionally makes contact with touch screen 112 using any suitable object or appendage, such as a stylus, finger, or the like. In some embodiments, the user interface is designed to work primarily with finger-based contacts and gestures, which may not be as accurate as stylus-based input due to the larger contact area of the finger on the touch screen. In some embodiments, the device translates the rough finger-based input into a precise pointer/cursor position or command for performing the action desired by the user.
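The translation from a rough finger-based input to a precise pointer position described above can be sketched as follows. This is an illustrative sketch only, not the implementation of device 100; the pressure-weighted centroid, the snapping rule, the function names, and the 20-pixel snap radius are all assumptions introduced for illustration.

```python
# Illustrative sketch: estimate a precise pointer position from a coarse
# finger contact by (1) taking the pressure-weighted centroid of the contact
# patch, and (2) snapping it to the nearest on-screen target within a radius.
def centroid(contact_patch):
    """contact_patch: list of (x, y, pressure) samples covering the finger's area."""
    total = sum(p for _, _, p in contact_patch)
    x = sum(x * p for x, _, p in contact_patch) / total
    y = sum(y * p for _, y, p in contact_patch) / total
    return x, y

def snap_to_target(point, targets, max_dist=20.0):
    """Snap the estimated point to the closest target center within max_dist;
    otherwise return the point unchanged."""
    px, py = point
    best, best_d = point, max_dist
    for tx, ty in targets:
        d = ((px - tx) ** 2 + (py - ty) ** 2) ** 0.5
        if d < best_d:
            best, best_d = (tx, ty), d
    return best
```

For example, a patch spanning two equally pressed samples at x = 0 and x = 2 yields a centroid at x = 1, which is then snapped to a nearby soft-button center if one lies within the radius.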
In some embodiments, in addition to the touch screen, device 100 optionally includes a trackpad (not shown) for activating or deactivating particular functions. In some embodiments, the trackpad is a touch-sensitive area of the device that, unlike the touch screen, does not display visual output. The trackpad is optionally a touch-sensitive surface separate from touch screen 112 or an extension of the touch-sensitive surface formed by the touch screen.
The device 100 also includes a power system 162 for powering the various components. Power system 162 optionally includes a power management system, one or more power sources (e.g., battery, Alternating Current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a Light Emitting Diode (LED)), and any other components associated with the generation, management, and distribution of power in a portable device.
The device 100 optionally further includes one or more optical sensors 164. FIG. 1A shows an optical sensor coupled to an optical sensor controller 158 in the I/O subsystem 106. The optical sensor 164 optionally includes a Charge Coupled Device (CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The optical sensor 164 receives light from the environment projected through one or more lenses and converts the light into data representing an image. In conjunction with imaging module 143 (also called a camera module), optical sensor 164 optionally captures still images or video. In some embodiments, an optical sensor is located on the back of device 100, opposite touch screen display 112 on the front of the device, so that the touch screen display can be used as a viewfinder for still and/or video image acquisition. In some embodiments, an optical sensor is located on the front of the device so that images of the user are optionally acquired for the video conference while the user views other video conference participants on the touch screen display. In some implementations, the position of the optical sensor 164 can be changed by the user (e.g., by rotating a lens and sensor in the device housing) such that a single optical sensor 164 is used with the touch screen display for both video conferencing and still image and/or video image capture.
Device 100 optionally further comprises one or more contact intensity sensors 165. FIG. 1A shows a contact intensity sensor coupled to an intensity sensor controller 159 in the I/O subsystem 106. Contact intensity sensor 165 optionally includes one or more piezoresistive strain gauges, capacitive force sensors, electrical force sensors, piezoelectric force sensors, optical force sensors, capacitive touch-sensitive surfaces, or other intensity sensors (e.g., sensors for measuring the force (or pressure) of a contact on a touch-sensitive surface). Contact intensity sensor 165 receives contact intensity information (e.g., pressure information or a proxy for pressure information) from the environment. In some implementations, at least one contact intensity sensor is collocated with or proximate to a touch-sensitive surface (e.g., touch-sensitive display system 112). In some embodiments, at least one contact intensity sensor is located on the back of device 100, opposite touch screen display 112, which is located on the front of device 100.
The device 100 optionally further includes one or more proximity sensors 166. Fig. 1A shows a proximity sensor 166 coupled to the peripheral interface 118. Alternatively, the proximity sensor 166 is optionally coupled to the input controller 160 in the I/O subsystem 106. The proximity sensor 166 optionally performs as described in the following U.S. patent applications: 11/241,839, entitled "Proximity Detector In Handheld Device"; 11/240,788, entitled "Proximity Detector In Handheld Device"; 11/620,702, entitled "Using Ambient Light Sensor To Augment Proximity Sensor Output"; 11/586,862, entitled "Automated Response To And Sensing Of User Activity In Portable Devices"; and 11/638,251, entitled "Methods And Systems For Automatic Configuration Of Peripherals," which are hereby incorporated by reference in their entirety. In some embodiments, the proximity sensor turns off and disables the touch screen 112 when the multifunction device is placed near the user's ear (e.g., when the user is making a phone call).
Device 100 optionally further comprises one or more tactile output generators 167. FIG. 1A shows a tactile output generator coupled to a tactile feedback controller 161 in I/O subsystem 106. Tactile output generator 167 optionally includes one or more electro-acoustic devices, such as speakers or other audio components; and/or an electromechanical device that converts energy into linear motion, such as a motor, solenoid, electroactive polymer, piezoelectric actuator, electrostatic actuator, or other tactile output generating component (e.g., a component that converts an electrical signal into a tactile output on the device). Tactile output generator 167 receives haptic feedback generation instructions from haptic feedback module 133 and generates haptic output on device 100 that can be felt by a user of device 100. In some embodiments, at least one tactile output generator is collocated with or proximate to a touch-sensitive surface (e.g., touch-sensitive display system 112), and optionally generates tactile output by moving the touch-sensitive surface vertically (e.g., into/out of the surface of device 100) or laterally (e.g., back and forth in the same plane as the surface of device 100). In some embodiments, at least one tactile output generator sensor is located on the back of device 100, opposite touch screen display 112, which is located on the front of device 100.
Device 100 optionally also includes one or more accelerometers 168. Fig. 1A shows accelerometer 168 coupled to peripherals interface 118. Alternatively, accelerometer 168 is optionally coupled to input controller 160 in I/O subsystem 106. Accelerometer 168 optionally performs as described in the following U.S. patent publications: U.S. patent publication 20050190059, entitled "Acceleration-based Detection System For Portable Electronic Devices," and U.S. patent publication 20060017692, entitled "Methods And Apparatuses For Operating A Portable Device Based On An Accelerometer," both of which are incorporated herein by reference in their entirety. In some embodiments, information is displayed in a portrait view or a landscape view on the touch screen display based on analysis of data received from the one or more accelerometers. Device 100 optionally includes a magnetometer (not shown) and a GPS (or GLONASS or other global navigation system) receiver (not shown) in addition to accelerometer 168 for obtaining information about the position and orientation (e.g., portrait or landscape) of device 100.
In some embodiments, the software components stored in memory 102 include an operating system 126, a communication module (or set of instructions) 128, a contact/motion module (or set of instructions) 130, a graphics module (or set of instructions) 132, a text input module (or set of instructions) 134, a Global Positioning System (GPS) module (or set of instructions) 135, and an application program (or set of instructions) 136. Further, in some embodiments, memory 102 (fig. 1A) or 370 (fig. 3) stores device/global internal state 157, as shown in fig. 1A and 3. Device/global internal state 157 includes one or more of: an active application state indicating which applications (if any) are currently active; display state indicating what applications, views, or other information occupy various areas of the touch screen display 112; sensor status, including information obtained from the various sensors of the device and the input control device 116; and location information regarding the location and/or pose of the device.
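The device/global internal state 157 enumerated above can be sketched as a simple record. This is a hypothetical sketch for illustration only; the field names and types are assumptions, not taken from this disclosure.

```python
from dataclasses import dataclass, field
from typing import Optional, Tuple

# Hypothetical sketch of device/global internal state 157: one record holding
# active-application state, display state, sensor state, and location/pose.
@dataclass
class DeviceGlobalState:
    active_applications: list = field(default_factory=list)  # which applications, if any, are active
    display_state: dict = field(default_factory=dict)        # screen region -> occupying application or view
    sensor_state: dict = field(default_factory=dict)         # sensor name -> latest reading
    location: Optional[Tuple[float, float]] = None           # (latitude, longitude), if known
    orientation: str = "portrait"                            # device pose: "portrait" or "landscape"
```

Modules that need to know, say, which view occupies the status area consult `display_state`, while the window manager updates it; keeping these in one shared record mirrors the single state 157 described above.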
Operating system 126 (e.g., Darwin, RTXC, LINUX, UNIX, OS X, iOS, WINDOWS, or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communication between various hardware and software components.
The communication module 128 facilitates communication with other devices through one or more external ports 124 and also includes various software components for processing data received by the RF circuitry 108 and/or the external ports 124. External port 124 (e.g., Universal Serial Bus (USB), FireWire, etc.) is adapted to couple directly to other devices or indirectly via a network (e.g., the internet, wireless LAN, etc.). In some embodiments, the external port is a multi-pin (e.g., 30-pin) connector that is the same as, or similar to and/or compatible with, the 30-pin connector used on iPod® (trademark of Apple Inc.) devices.
Contact/motion module 130 optionally detects contact with touch screen 112 (in conjunction with display controller 156) and other touch-sensitive devices (e.g., a trackpad or physical click wheel). The contact/motion module 130 includes various software components for performing various operations related to contact detection, such as determining whether contact has occurred (e.g., detecting a finger-down event), determining contact intensity (e.g., the force or pressure of the contact, or a substitute for the force or pressure of the contact), determining whether there is movement of the contact and tracking the movement across the touch-sensitive surface (e.g., detecting one or more finger-dragging events), and determining whether contact has ceased (e.g., detecting a finger-up event or a break in contact). The contact/motion module 130 receives contact data from the touch-sensitive surface. Determining movement of the point of contact, which is represented by a series of contact data, optionally includes determining speed (magnitude), velocity (magnitude and direction), and/or acceleration (a change in magnitude and/or direction) of the point of contact. These operations are optionally applied to single contacts (e.g., one-finger contacts) or to multiple simultaneous contacts (e.g., "multi-touch"/multiple-finger contacts). In some embodiments, the contact/motion module 130 and the display controller 156 detect contact on a trackpad.
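The speed, velocity, and acceleration determinations described above can be sketched from a series of timestamped contact samples. This is purely illustrative; the sample format and finite-difference scheme are assumptions, not the method of this disclosure.

```python
# Illustrative sketch: estimate speed, velocity, and acceleration of a point
# of contact from a series of (time, x, y) contact-data samples, using simple
# finite differences over the most recent samples.
def motion_from_samples(samples):
    """samples: list of (t, x, y), oldest first; returns (speed, velocity, acceleration)."""
    (t0, x0, y0), (t1, x1, y1) = samples[-2], samples[-1]
    dt = t1 - t0
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt       # velocity: magnitude and direction
    speed = (vx ** 2 + vy ** 2) ** 0.5            # speed: magnitude only
    if len(samples) >= 3:                         # acceleration needs a previous velocity
        tp, xp, yp = samples[-3]
        vxp, vyp = (x0 - xp) / (t0 - tp), (y0 - yp) / (t0 - tp)
        ax, ay = (vx - vxp) / dt, (vy - vyp) / dt
    else:
        ax = ay = 0.0
    return speed, (vx, vy), (ax, ay)
```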
In some embodiments, the contact/motion module 130 uses a set of one or more intensity thresholds to determine whether an operation has been performed by the user (e.g., to determine whether the user has "clicked" on an icon). In some embodiments, at least a subset of the intensity thresholds are determined as a function of software parameters (e.g., the intensity thresholds are not determined by the activation thresholds of particular physical actuators and may be adjusted without changing the physical hardware of device 100). For example, the mouse "click" threshold of the trackpad or touchscreen can be set to any one of a wide range of predefined thresholds without changing the trackpad or touchscreen display hardware. Additionally, in some implementations, a user of the device is provided with software settings for adjusting one or more intensity thresholds of a set of intensity thresholds (e.g., by adjusting individual intensity thresholds and/or by adjusting multiple intensity thresholds at once with a system-level click on an "intensity" parameter).
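The software-defined intensity thresholds described above can be sketched as plain parameters that are adjusted at runtime without any hardware change. This sketch is an illustration only; the class name, threshold names, and numeric defaults are assumptions.

```python
# Illustrative sketch: intensity thresholds held as software parameters.
# Because they are ordinary values (not properties of a physical actuator),
# they can be changed at any time without modifying the hardware.
class IntensityThresholds:
    def __init__(self, light_press=0.25, deep_press=0.6):
        self.light_press = light_press  # threshold for a "click"-like press
        self.deep_press = deep_press    # higher threshold for a deeper press

    def classify(self, intensity):
        """Map a measured contact intensity to an operation category."""
        if intensity >= self.deep_press:
            return "deep press"
        if intensity >= self.light_press:
            return "light press"
        return "contact"
```

Adjusting `deep_press` downward, for instance, makes the same physical contact register as a deep press, illustrating how a user-facing setting can tune one threshold of the set.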
The contact/motion module 130 optionally detects gesture input by the user. Different gestures on the touch-sensitive surface have different contact patterns (e.g., different motions, timings, and/or intensities of detected contacts). Thus, the gesture is optionally detected by detecting a particular contact pattern. For example, detecting a finger tap gesture includes detecting a finger-down event, and then detecting a finger-up (lift-off) event at the same location (or substantially the same location) as the finger-down event (e.g., at the location of the icon). As another example, detecting a finger swipe gesture on the touch-sensitive surface includes detecting a finger-down event, then detecting one or more finger-dragging events, and then subsequently detecting a finger-up (lift-off) event.
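The tap and swipe patterns described above can be sketched as a small classifier over the event sequence. This is an illustrative sketch, not the module's implementation; the event format, the 10-pixel tap radius, and the function name are assumptions.

```python
# Illustrative sketch of the contact patterns described above: a tap is a
# finger-down followed by a finger-up at substantially the same location;
# a swipe is finger-down, one or more finger-drag events, then finger-up.
def classify_gesture(events, tap_radius=10.0):
    """events: list of (type, x, y) with type in {'down', 'drag', 'up'}."""
    if not events or events[0][0] != "down" or events[-1][0] != "up":
        return "unknown"
    (_, x0, y0), (_, x1, y1) = events[0], events[-1]
    moved = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    has_drag = any(t == "drag" for t, _, _ in events)
    if moved <= tap_radius and not has_drag:
        return "tap"
    if has_drag:
        return "swipe"
    return "unknown"
```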
Graphics module 132 includes various known software components for rendering and displaying graphics on touch screen 112 or other display, including components for changing the visual impact (e.g., brightness, transparency, saturation, contrast, or other visual characteristics) of the displayed graphics. As used herein, the term "graphic" includes any object that may be displayed to a user, including without limitation text, web pages, icons (such as user interface objects including soft keys), digital images, videos, animations and the like.
In some embodiments, graphics module 132 stores data representing graphics to be used. Each graphic is optionally assigned a corresponding code. The graphic module 132 receives one or more codes for specifying a graphic to be displayed, if necessary together with coordinate data and other graphic attribute data from an application program or the like, and then generates screen image data to output to the display controller 156.
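The code-based graphics dispatch described above can be sketched as a registry that maps each assigned code to a render routine, invoked with coordinate and attribute data. All names here are illustrative assumptions, not the structure of graphics module 132.

```python
# Illustrative sketch: each graphic is assigned a code; applications request
# display by code, optionally supplying coordinates and attribute data, and
# the module produces screen-image data for the display controller.
class GraphicsRegistry:
    def __init__(self):
        self._graphics = {}

    def register(self, code, render_fn):
        """Assign a code to a graphic's render routine."""
        self._graphics[code] = render_fn

    def render(self, code, x=0, y=0, **attrs):
        """Look up the graphic by code and generate its screen-image data."""
        return self._graphics[code](x, y, **attrs)
```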
Haptic feedback module 133 includes various software components for generating instructions for use by haptic output generator 167 in generating haptic outputs at one or more locations on device 100 in response to user interaction with device 100.
Text input module 134, which is optionally a component of graphics module 132, provides a soft keyboard for entering text in various applications such as contacts 137, email 140, IM 141, browser 147, and any other application that requires text input.
The GPS module 135 determines the location of the device and provides this information for use in various applications (e.g., to the phone 138 for use in location-based dialing; to the camera 143 as picture/video metadata; and to applications that provide location-based services, such as weather desktop widgets, local yellow pages desktop widgets, and map/navigation desktop widgets).
Application 136 optionally includes the following modules (or sets of instructions), or a subset or superset thereof:
a contacts module 137 (sometimes referred to as an address book or contact list);
the phone module 138;
a video conferencing module 139;
an email client module 140;
an Instant Messaging (IM) module 141;
fitness support module 142;
a camera module 143 for still and/or video images;
an image management module 144;
a video player module;
a music player module;
a browser module 147;
a calendar module 148;
desktop applet module 149, optionally including one or more of: a weather desktop applet 149-1, a stock market desktop applet 149-2, a calculator desktop applet 149-3, an alarm desktop applet 149-4, a dictionary desktop applet 149-5, and other desktop applets acquired by the user, and a user created desktop applet 149-6;
a desktop applet creator module 150 for forming a user-created desktop applet 149-6;
the search module 151;
a video and music player module 152 that incorporates a video player module and a music player module;
a notepad module 153;
a map module 154; and/or
Online video module 155.
Examples of other applications 136 that are optionally stored in memory 102 include other word processing applications, other image editing applications, drawing applications, rendering applications, JAVA-enabled applications, encryption, digital rights management, voice recognition, and voice replication.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, contacts module 137 is optionally used to manage an address book or contact list (e.g., stored in application internal state 192 of contacts module 137 in memory 102 or memory 370), including: adding one or more names to the address book; deleting one or more names from the address book; associating a telephone number, email address, physical address, or other information with a name; associating an image with a name; categorizing and sorting names; providing a telephone number or email address to initiate and/or facilitate communication through telephone 138, video conferencing module 139, email 140, or IM 141; and so on.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, phone module 138 is optionally used to enter a sequence of characters corresponding to a phone number, access one or more phone numbers in contacts module 137, modify an entered phone number, dial a corresponding phone number, conduct a conversation, and disconnect or hang up when the conversation is complete. As noted above, the wireless communication optionally uses any of a variety of communication standards, protocols, and technologies.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch screen 112, display controller 156, optical sensor 164, optical sensor controller 158, contact/motion module 130, graphics module 132, text input module 134, contacts module 137, and telephony module 138, video conference module 139 includes executable instructions to initiate, conduct, and terminate video conferences between the user and one or more other participants according to user instructions.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, email client module 140 includes executable instructions to create, send, receive, and manage emails in response to user instructions. In conjunction with the image management module 144, the e-mail client module 140 makes it very easy to create and send e-mails with still images or video images captured by the camera module 143.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, instant messaging module 141 includes executable instructions for: inputting a sequence of characters corresponding to an instant message, modifying previously input characters, transmitting a corresponding instant message (e.g., using a Short Message Service (SMS) or Multimedia Messaging Service (MMS) protocol for a phone-based instant message or using XMPP, SIMPLE, or IMPS for an internet-based instant message), receiving an instant message, and viewing the received instant message. In some embodiments, the transmitted and/or received instant messages optionally include graphics, photos, audio files, video files, and/or MMS and/or other attachments supported in an Enhanced Messaging Service (EMS). As used herein, "instant message" refers to both telephony-based messages (e.g., messages sent using SMS or MMS) and internet-based messages (e.g., messages sent using XMPP, SIMPLE, or IMPS).
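The choice between the two instant-message families described above (telephony-based SMS/MMS versus internet-based XMPP, SIMPLE, or IMPS) can be sketched as a small dispatch routine. The routing rule shown (internet-reachable recipients go over XMPP; attachments over telephony go as MMS) is an assumption introduced purely for illustration, not the behavior of module 141.

```python
# Illustrative sketch: select a transport for an outgoing instant message.
# Telephony-based messages use SMS, or MMS when an attachment is present;
# internet-based messages use XMPP (SIMPLE or IMPS would be analogous).
def choose_transport(recipient_internet_reachable, has_attachment):
    if recipient_internet_reachable:
        return "XMPP"
    return "MMS" if has_attachment else "SMS"
```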
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, GPS module 135, map module 154, and music player module, workout support module 142 includes executable instructions to create a workout (e.g., having time, distance, and/or calorie burning goals); communicating with fitness sensors (sports equipment); receiving fitness sensor data; calibrating a sensor for monitoring fitness; selecting and playing music for fitness; and displaying, storing and transmitting fitness data.
In conjunction with touch screen 112, display controller 156, one or more optical sensors 164, optical sensor controller 158, contact/motion module 130, graphics module 132, and image management module 144, camera module 143 includes executable instructions for: capturing still images or video (including video streams) and storing them in the memory 102, modifying features of the still images or video, or deleting the still images or video from the memory 102.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, and camera module 143, image management module 144 includes executable instructions for arranging, modifying (e.g., editing), or otherwise manipulating, labeling, deleting, presenting (e.g., in a digital slide or album), and storing still and/or video images.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, browser module 147 includes executable instructions to browse the internet (including searching for, linking to, receiving and displaying web pages or portions thereof, and attachments and other files linked to web pages) according to user instructions.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, email client module 140, and browser module 147, calendar module 148 includes executable instructions to create, display, modify, and store calendars and data associated with calendars (e.g., calendar entries, to-do, etc.) according to user instructions.
In conjunction with the RF circuitry 108, the touch screen 112, the display system controller 156, the contact/motion module 130, the graphics module 132, the text input module 134, and the browser module 147, the desktop applet module 149 is a mini-application (e.g., a weather desktop applet 149-1, a stock market desktop applet 149-2, a calculator desktop applet 149-3, an alarm clock desktop applet 149-4, and a dictionary desktop applet 149-5) or a mini-application created by a user (e.g., a user-created desktop applet 149-6) that is optionally downloaded and used by the user. In some embodiments, a desktop applet includes an HTML (hypertext markup language) file, a CSS (cascading style sheet) file, and a JavaScript file. In some embodiments, a desktop applet includes an XML (extensible markup language) file and a JavaScript file (e.g., Yahoo! Widgets).
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, and browser module 147, the desktop applet creator module 150 is optionally used by a user to create a desktop applet (e.g., to turn a user-specified portion of a web page into the desktop applet).
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, search module 151 includes executable instructions for searching memory 102 for text, music, sound, images, video, and/or other files that match one or more search criteria (e.g., one or more user-specified search terms) in accordance with user instructions.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, audio circuitry 110, speaker 111, RF circuitry 108, and browser module 147, video and music player module 152 includes executable instructions that allow a user to download and play back recorded music and other sound files stored in one or more file formats, such as MP3 or AAC files, as well as executable instructions for displaying, rendering, or otherwise playing back video (e.g., on touch screen 112 or on an external display connected via external port 124). In some embodiments, the device 100 optionally includes the functionality of an MP3 player, such as an iPod (trademark of Apple Inc.).
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, notepad module 153 includes executable instructions to create and manage notepads, to-do-things, etc. according to user instructions.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, GPS module 135, and browser module 147, map module 154 is optionally used to receive, display, modify, and store maps and data associated with maps (e.g., driving directions, data related to stores and other points of interest at or near a particular location, and other location-based data) according to user instructions.
In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, audio circuitry 110, speaker 111, RF circuitry 108, text input module 134, email client module 140, and browser module 147, online video module 155 includes instructions that allow a user to access, browse, receive (e.g., by streaming and/or downloading), play back (e.g., on the touch screen or on an external display connected via external port 124), send an email with a link to a particular online video, and otherwise manage online videos in one or more file formats, such as H.264. In some embodiments, the link to a particular online video is sent using the instant messaging module 141 rather than the email client module 140. Additional description of online video applications can be found in U.S. Provisional Patent Application No. 60/936,562, entitled "Portable Multifunction Device, Method, and Graphical User Interface for Playing Online Videos," filed June 20, 2007, and U.S. Patent Application No. 11/968,067, entitled "Portable Multifunction Device, Method, and Graphical User Interface for Playing Online Videos," filed December 31, 2007, the contents of both of which are hereby incorporated by reference in their entirety.
Each of the modules and applications described above corresponds to a set of executable instructions for performing one or more of the functions described above as well as the methods described in this patent application (e.g., the computer-implemented methods and other information processing methods described herein). These modules (e.g., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules are optionally combined or otherwise rearranged in various embodiments. For example, the video player module is optionally combined with the music player module into a single module (e.g., the video and music player module 152 in fig. 1A). In some embodiments, memory 102 optionally stores a subset of the modules and data structures described above. Further, memory 102 optionally stores additional modules and data structures not described above.
In some embodiments, device 100 is a device on which the operation of a predefined set of functions is performed exclusively through a touch screen and/or a touchpad. By using a touch screen and/or trackpad as the primary input control device for operating the device 100, the number of physical input control devices (e.g., push buttons, dials, etc.) on the device 100 is optionally reduced.
The predefined set of functions performed exclusively through the touchscreen and/or trackpad optionally includes navigation between user interfaces. In some embodiments, the touchpad, when touched by the user, navigates device 100 to a main, home, or root menu from any user interface that is displayed on device 100. In such embodiments, a touchpad is used to implement a "menu button". In some other embodiments, the menu button is a physical push button or other physical input control device, rather than a touchpad.
Fig. 1B is a block diagram illustrating exemplary components for event processing, according to some embodiments. In some embodiments, memory 102 (FIG. 1A) or memory 370 (FIG. 3) includes event sorter 170 (e.g., in operating system 126) and a respective application 136-1 (e.g., any of the aforementioned applications 137-151, 155, 380-390).
Event sorter 170 receives the event information and determines the application 136-1, and the application view 191 of application 136-1, to which the event information is to be delivered. The event sorter 170 includes an event monitor 171 and an event dispatcher module 174. In some embodiments, application 136-1 includes an application internal state 192 that indicates one or more current application views that are displayed on touch-sensitive display 112 when the application is active or executing. In some embodiments, device/global internal state 157 is used by event sorter 170 to determine which application(s) are currently active, and application internal state 192 is used by event sorter 170 to determine the application view 191 to which to deliver event information.
In some embodiments, the application internal state 192 includes additional information, such as one or more of: resume information to be used when the application 136-1 resumes execution, user interface state information indicating information being displayed by the application 136-1 or information that is ready for display by the application 136-1, a state queue for enabling a user to return to a previous state or view of the application 136-1, and a redo/undo queue of previous actions taken by the user.
Event monitor 171 receives event information from peripheral interface 118. The event information includes information about a sub-event (e.g., a user touch on touch-sensitive display 112 as part of a multi-touch gesture). Peripherals interface 118 transmits information it receives from I/O subsystem 106 or sensors such as proximity sensor 166, accelerometer 168, and/or microphone 113 (through audio circuitry 110). Information received by peripheral interface 118 from I/O subsystem 106 includes information from touch-sensitive display 112 or a touch-sensitive surface.
In some embodiments, event monitor 171 sends requests to peripheral interface 118 at predetermined intervals. In response, peripheral interface 118 transmits event information. In other embodiments, peripheral interface 118 transmits event information only when there is a significant event (e.g., receiving input above a predetermined noise threshold and/or receiving input for more than a predetermined duration).
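The "significant event" filtering described above can be sketched as a simple predicate; a minimal illustration in Python, where the specific threshold values and function names are assumptions for the example, not values taken from this description:

```python
# Sketch of the significance filter: event information is forwarded only
# when the input rises above a noise threshold or persists longer than a
# minimum duration. The threshold values here are illustrative
# assumptions, not values specified in the text.

NOISE_THRESHOLD = 0.2   # hypothetical normalized input intensity
MIN_DURATION_S = 0.05   # hypothetical minimum duration, in seconds

def is_significant(intensity: float, duration_s: float) -> bool:
    """Return True if the event information should be transmitted."""
    return intensity > NOISE_THRESHOLD or duration_s > MIN_DURATION_S
```

A faint, brief contact is filtered out, while a firm or sustained one is transmitted to the event monitor.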
In some embodiments, event sorter 170 further includes hit view determination module 172 and/or active event recognizer determination module 173.
When touch-sensitive display 112 displays more than one view, hit view determination module 172 provides a software process for determining where within one or more views a sub-event has occurred. The view consists of controls and other elements that the user can see on the display.
Another aspect of the user interface associated with an application is a set of views, sometimes referred to herein as application views or user interface windows, in which information is displayed and touch-based gestures occur. The application view (of the respective application) in which the touch is detected optionally corresponds to a programmatic level within a programmatic or view hierarchy of applications. For example, the lowest level view in which a touch is detected is optionally referred to as a hit view, and the set of events identified as correct inputs is optionally determined based at least in part on the hit view of the initial touch that initiated the touch-based gesture.
Hit view determination module 172 receives information related to sub-events of the touch-based gesture. When the application has multiple views organized in a hierarchy, hit view determination module 172 identifies the hit view as the lowest view in the hierarchy that should handle the sub-event. In most cases, the hit view is the lowest level view in which the initiating sub-event (e.g., the first sub-event in the sequence of sub-events that form an event or potential event) occurs. Once a hit view is identified by hit view determination module 172, the hit view typically receives all sub-events related to the same touch or input source for which it was identified as a hit view.
The active event recognizer determination module 173 determines which view or views within the view hierarchy should receive a particular sequence of sub-events. In some embodiments, the active event recognizer determination module 173 determines that only the hit view should receive a particular sequence of sub-events. In other embodiments, active event recognizer determination module 173 determines that all views that include the physical location of the sub-event are actively participating views, and thus determines that all actively participating views should receive a particular sequence of sub-events. In other embodiments, even if the touch sub-event is completely confined to the area associated with a particular view, the higher views in the hierarchy will remain actively participating views.
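The hit-view search and the set of actively participating views described above can be sketched as follows. This is an illustrative Python model, not Apple's implementation; the view names and the `(x, y, width, height)` frame representation are assumptions made for the example:

```python
# Illustrative model of hit-view determination: the hit view is the
# lowest view in the hierarchy whose bounds contain the initial touch,
# and the actively participating views are that view plus its ancestors.

from dataclasses import dataclass, field

@dataclass
class View:
    name: str
    frame: tuple                 # (x, y, width, height) in screen coordinates
    subviews: list = field(default_factory=list)

    def contains(self, x, y):
        fx, fy, fw, fh = self.frame
        return fx <= x < fx + fw and fy <= y < fy + fh

def hit_view(view, x, y):
    """Return the deepest view containing (x, y), or None."""
    if not view.contains(x, y):
        return None
    for sub in view.subviews:    # search children before the view itself
        hit = hit_view(sub, x, y)
        if hit is not None:
            return hit
    return view                  # no child claims the touch: this is the hit view

def active_views(root, x, y):
    """The hit view plus every ancestor containing the touch location."""
    path, view = [], root
    while view is not None and view.contains(x, y):
        path.append(view)
        view = next((s for s in view.subviews if s.contains(x, y)), None)
    return path                  # ordered root -> hit view
```

For a button nested inside a panel inside a root view, a touch over the button yields the button as the hit view, while the actively participating views are the root, panel, and button together.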
The event dispatcher module 174 dispatches the event information to an event recognizer (e.g., event recognizer 180). In embodiments that include active event recognizer determination module 173, event dispatcher module 174 delivers event information to event recognizers determined by active event recognizer determination module 173. In some embodiments, the event dispatcher module 174 stores event information in an event queue, which is retrieved by the respective event receiver 182.
In some embodiments, operating system 126 includes event sorter 170. Alternatively, application 136-1 includes event sorter 170. In yet another embodiment, the event sorter 170 is a stand-alone module or is part of another module stored in the memory 102 (such as the contact/motion module 130).
In some embodiments, the application 136-1 includes a plurality of event handlers 190 and one or more application views 191, each of which includes instructions for processing touch events that occur within a respective view of the application's user interface. Each application view 191 of the application 136-1 includes one or more event recognizers 180. Typically, the respective application view 191 includes a plurality of event recognizers 180. In other embodiments, one or more of the event recognizers 180 are part of a separate module, such as a user interface toolkit (not shown) or a higher-level object from which application 136-1 inherits methods and other properties. In some embodiments, the respective event handlers 190 comprise one or more of: data updater 176, object updater 177, GUI updater 178, and/or event data 179 received from event sorter 170. Event handler 190 optionally utilizes or calls data updater 176, object updater 177, or GUI updater 178 to update application internal state 192. Alternatively, one or more of the application views 191 include one or more respective event handlers 190. Additionally, in some embodiments, one or more of data updater 176, object updater 177, and GUI updater 178 are included in a respective application view 191.
A respective event recognizer 180 receives event information (e.g., event data 179) from the event sorter 170 and identifies an event from the event information. Event recognizer 180 includes an event receiver 182 and an event comparator 184. In some embodiments, event recognizer 180 also includes at least a subset of: metadata 183, and event delivery instructions 188 (which optionally include sub-event delivery instructions).
The event receiver 182 receives event information from the event sorter 170. The event information includes information about a sub-event such as a touch or touch movement. According to the sub-event, the event information further includes additional information, such as the location of the sub-event. When the sub-event relates to motion of a touch, the event information optionally also includes the velocity and direction of the sub-event. In some embodiments, the event comprises rotation of the device from one orientation to another (e.g., from a portrait orientation to a landscape orientation, or vice versa), and the event information comprises corresponding information about the current orientation of the device (also referred to as the device pose).
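Deriving the optional velocity and direction fields for a touch-movement sub-event can be sketched from two successive touch samples. This is an illustrative computation; the function name and sample representation are assumptions made for the example:

```python
# Sketch of computing velocity and direction for a touch-move sub-event
# from two successive samples: positions p0, p1 and timestamps t0, t1.

import math

def motion_info(p0, t0, p1, t1):
    """Return (speed, direction_radians) between samples (p0, t0) and (p1, t1)."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    dt = t1 - t0
    if dt <= 0:
        raise ValueError("timestamps must be increasing")
    speed = math.hypot(dx, dy) / dt   # distance per second
    direction = math.atan2(dy, dx)    # angle from the +x axis, in radians
    return speed, direction
```

A contact moving from (0, 0) to (3, 4) over half a second would report a speed of 10 units per second in the direction of that displacement.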
Event comparator 184 compares the event information to predefined event or sub-event definitions and determines an event or sub-event, or determines or updates the state of an event or sub-event, based on the comparison. In some embodiments, event comparator 184 includes event definitions 186. Event definitions 186 contain definitions of events (e.g., predefined sub-event sequences), such as event 1 (187-1), event 2 (187-2), and other events. In some embodiments, sub-events in event 187 include, for example, touch start, touch end, touch move, touch cancel, and multi-touch. In one example, the definition of event 1 (187-1) is a double tap on the displayed object. For example, the double tap includes a first touch (touch start) on the displayed object for a predetermined length of time, a first lift-off (touch end) for a predetermined length of time, a second touch (touch start) on the displayed object for a predetermined length of time, and a second lift-off (touch end) for a predetermined length of time. In another example, the definition of event 2 (187-2) is a drag on the displayed object. For example, the drag includes a touch (or contact) on the displayed object for a predetermined length of time, movement of the touch on the touch-sensitive display 112, and lift-off of the touch (touch end). In some embodiments, the event also includes information for one or more associated event handlers 190.
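The event-definition matching described above can be sketched as prefix matching against predefined sub-event sequences. This is a simplified illustration, not the patent's implementation: the definitions, sub-event names, and state labels are assumptions, and the per-phase timing constraints are omitted for brevity:

```python
# Sketch of an event comparator: each event definition is a predefined
# sub-event sequence, and a recognizer tracks whether the sub-events seen
# so far are still a prefix of a definition. State labels mirror the
# possible / recognized / failed outcomes described in the text.

EVENT_DEFINITIONS = {
    "double_tap": ["touch_begin", "touch_end", "touch_begin", "touch_end"],
    "drag":       ["touch_begin", "touch_move", "touch_end"],
}

def match_state(seen, definition):
    if seen == definition:
        return "recognized"
    if definition[:len(seen)] == seen:
        return "possible"   # still a prefix; keep tracking
    return "failed"         # sequence diverged; ignore further sub-events

def classify(seen):
    """Compare the sub-events seen so far against every definition."""
    return {name: match_state(seen, d) for name, d in EVENT_DEFINITIONS.items()}
```

Matching every definition in parallel reflects the point made below that several recognizers can track the same touch until all but one fail.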
In some embodiments, event definitions 187 include definitions of events for respective user interface objects. In some embodiments, event comparator 184 performs a hit test to determine which user interface object is associated with a sub-event. For example, in an application view where three user interface objects are displayed on touch-sensitive display 112, when a touch is detected on touch-sensitive display 112, event comparator 184 performs a hit test to determine which of the three user interface objects is associated with the touch (sub-event). If each displayed object is associated with a respective event handler 190, the event comparator uses the results of the hit test to determine which event handler 190 should be activated. For example, event comparator 184 selects the event handler associated with the sub-event and the object that triggered the hit test.
In some embodiments, the definition of the respective event 187 further includes a delay action that delays the delivery of the event information until it has been determined that the sequence of sub-events does or does not correspond to the event type of the event identifier.
When the respective event recognizer 180 determines that the sequence of sub-events does not match any event in the event definitions 186, the respective event recognizer 180 enters an event impossible, event failed, or event ended state, after which subsequent sub-events of the touch-based gesture are ignored. In this case, other event recognizers (if any) that remain active for the hit view continue to track and process sub-events of the ongoing touch-based gesture.
In some embodiments, the respective event recognizer 180 includes metadata 183 with configurable attributes, flags, and/or lists that indicate how the event delivery system should perform sub-event delivery to actively participating event recognizers. In some embodiments, metadata 183 includes configurable attributes, flags, and/or lists that indicate how event recognizers interact, or are enabled to interact, with one another. In some embodiments, metadata 183 includes configurable attributes, flags, and/or lists that indicate whether sub-events are delivered to varying levels in the view or programmatic hierarchy.
In some embodiments, when one or more particular sub-events of an event are recognized, the respective event recognizer 180 activates an event handler 190 associated with the event. In some embodiments, the respective event recognizer 180 delivers event information associated with the event to event handler 190. Activating an event handler 190 is distinct from sending (and deferred sending) sub-events to a respective hit view. In some embodiments, the event recognizer 180 throws a flag associated with the recognized event, and the event handler 190 associated with the flag catches the flag and performs a predefined process.
In some embodiments, event delivery instructions 188 include sub-event delivery instructions that deliver event information about sub-events without activating an event handler. Instead, the sub-event delivery instructions deliver event information to event handlers associated with the sequence of sub-events or to actively participating views. Event handlers associated with the sequence of sub-events or with actively participating views receive the event information and perform a predetermined process.
In some embodiments, data updater 176 creates and updates data used in application 136-1. For example, the data updater 176 updates a phone number used in the contacts module 137 or stores a video file used in the video player module. In some embodiments, object updater 177 creates and updates objects used in application 136-1. For example, object updater 177 creates a new user interface object or updates the location of a user interface object. The GUI updater 178 updates the GUI. For example, GUI updater 178 prepares display information and sends the display information to graphics module 132 for display on the touch-sensitive display.
In some embodiments, event handler 190 includes or has access to data updater 176, object updater 177, and GUI updater 178. In some embodiments, data updater 176, object updater 177, and GUI updater 178 are included in a single module of a respective application 136-1 or application view 191. In other embodiments, they are included in two or more software modules.
It should be understood that the above discussion of event processing with respect to user touches on a touch sensitive display also applies to other forms of user input utilizing an input device to operate multifunction device 100, not all of which are initiated on a touch screen. For example, mouse movements and mouse button presses, optionally in conjunction with single or multiple keyboard presses or holds; contact movements on the touchpad, such as tapping, dragging, scrolling, etc.; inputting by a stylus; movement of the device; verbal instructions; detected eye movement; inputting biological characteristics; and/or any combination thereof, is optionally used as input corresponding to sub-events defining the event to be identified.
Fig. 2 illustrates a portable multifunction device 100 with a touch screen 112 in accordance with some embodiments. The touch screen optionally displays one or more graphics within the User Interface (UI) 200. In this embodiment, as well as other embodiments described below, a user can select one or more of these graphics by making gestures on the graphics, for example, with one or more fingers 202 (not drawn to scale in the figure) or one or more styluses 203 (not drawn to scale in the figure). In some embodiments, selection of one or more graphics will occur when the user breaks contact with the one or more graphics. In some embodiments, the gesture optionally includes one or more taps, one or more swipes (left to right, right to left, up, and/or down), and/or a rolling of a finger (right to left, left to right, up, and/or down) that has made contact with device 100. In some implementations, or in some cases, inadvertent contact with a graphic does not select the graphic. For example, when the gesture corresponding to the selection is a tap, a swipe gesture that sweeps over an application icon optionally does not select the corresponding application.
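The distinction drawn above, that a swipe sweeping over an icon does not select it when the selection gesture is a tap, can be sketched as a movement-distance check at lift-off. The 10-point slop value and function name here are illustrative assumptions:

```python
# Sketch of tap-versus-swipe discrimination at lift-off: a contact
# selects the graphic beneath it only if it stayed (nearly) in place.
# The slop value is a hypothetical example, not specified in the text.

import math

TAP_SLOP = 10.0  # maximum movement (in points) still treated as a tap

def selects_graphic(touch_down, touch_up):
    """A selection occurs on lift-off only if the contact did not sweep away."""
    dx = touch_up[0] - touch_down[0]
    dy = touch_up[1] - touch_down[1]
    return math.hypot(dx, dy) <= TAP_SLOP
```

A contact that lifts off a few points from where it landed selects the graphic; one that travels across the screen is treated as a swipe instead.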
Device 100 optionally also includes one or more physical buttons, such as "home" or menu button 204. As previously described, the menu button 204 is optionally used to navigate to any application 136 in a set of applications that are optionally executed on the device 100. Alternatively, in some embodiments, the menu buttons are implemented as soft keys in a GUI displayed on touch screen 112.
In some embodiments, device 100 includes touch screen 112, menu button 204, push button 206 for powering the device on/off and for locking the device, one or more volume adjustment buttons 208, a Subscriber Identity Module (SIM) card slot 210, a headset jack 212, and docking/charging external port 124. Push button 206 is optionally used to turn the power on/off on the device by depressing the button and holding the button in the depressed state for a predefined time interval; to lock the device by depressing the button and releasing the button before the predefined time interval has elapsed; and/or to unlock the device or initiate an unlocking process. In an alternative embodiment, device 100 also accepts voice input through microphone 113 for activating or deactivating certain functions. Device 100 also optionally includes one or more contact intensity sensors 165 for detecting the intensity of contacts on touch screen 112, and/or one or more tactile output generators 167 for generating tactile outputs for a user of device 100.
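The press-and-hold semantics of button 206 can be sketched as a duration check against the predefined time interval. The 2-second value below is an assumed example, not an interval specified in this description:

```python
# Sketch of the push-button 206 semantics: holding past a predefined
# interval toggles power, while releasing earlier locks the device.
# The interval value is a hypothetical example.

POWER_HOLD_S = 2.0  # assumed predefined time interval, in seconds

def button_action(held_seconds: float) -> str:
    """Classify a press of button 206 by how long it was held."""
    if held_seconds >= POWER_HOLD_S:
        return "toggle_power"
    return "lock_device"
```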
Fig. 3 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments. The device 300 need not be portable. In some embodiments, the device 300 is a laptop, desktop computer, tablet computer, multimedia player device, navigation device, educational device (such as a child learning toy), gaming system, or control device (e.g., a home controller or industrial controller). Device 300 typically includes one or more processing units (CPUs) 310, one or more network or other communication interfaces 360, memory 370, and one or more communication buses 320 for interconnecting these components. The communication bus 320 optionally includes circuitry (sometimes called a chipset) that interconnects and controls communication between system components. Device 300 includes an input/output (I/O) interface 330 with a display 340, typically a touch screen display. I/O interface 330 also optionally includes a keyboard and/or mouse (or other pointing device) 350 and touchpad 355, tactile output generator 357 (e.g., similar to tactile output generator 167 described above with reference to fig. 1A) for generating tactile outputs on device 300, sensors 359 (e.g., optical sensors, acceleration sensors, proximity sensors, touch-sensitive sensors, and/or contact intensity sensors (similar to contact intensity sensors 165 described above with reference to fig. 1A)). Memory 370 includes high speed random access memory, such as DRAM, SRAM, DDR RAM or other random access solid state memory devices; and optionally includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. Memory 370 optionally includes one or more storage devices located remotely from CPU 310. 
In some embodiments, memory 370 stores programs, modules, and data structures similar to or a subset of the programs, modules, and data structures stored in memory 102 of portable multifunction device 100 (fig. 1A). Further, memory 370 optionally stores additional programs, modules, and data structures not present in memory 102 of portable multifunction device 100. For example, memory 370 of device 300 optionally stores drawing module 380, presentation module 382, word processing module 384, website creation module 386, disk editing module 388, and/or spreadsheet module 390, while memory 102 of portable multifunction device 100 (FIG. 1A) optionally does not store these modules.
Each of the above elements in fig. 3 is optionally stored in one or more of the previously mentioned memory devices. Each of the aforementioned means corresponds to a set of instructions for performing a function described above. The modules or programs (e.g., sets of instructions) described above need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules are optionally combined or otherwise rearranged in various embodiments. In some embodiments, memory 370 optionally stores a subset of the modules and data structures described above. Further, memory 370 optionally stores additional modules and data structures not described above.
Attention is now directed to embodiments of user interfaces optionally implemented on, for example, portable multifunction device 100.
Figure 4A illustrates an exemplary user interface of an application menu on portable multifunction device 100 in accordance with some embodiments. A similar user interface is optionally implemented on device 300. In some embodiments, the user interface 400 includes the following elements, or a subset or superset thereof:
one or more signal strength indicators 402 for one or more wireless communications (such as cellular signals and Wi-Fi signals);
Time 404;
a Bluetooth indicator 405;
a battery status indicator 406;
a tray 408 having icons for commonly used applications, such as:
an icon 416 of the telephony module 138 labeled "telephony", the icon 416 optionally including an indicator 414 of the number of missed calls or voice messages;
an icon 418 of the email client module 140 labeled "mail", the icon 418 optionally including an indicator 410 of the number of unread emails;
icon 420 labeled "browser" for browser module 147; and
an icon 422 labeled "iPod" of video and music player module 152 (also referred to as iPod (trademark of Apple Inc.) module 152); and
icons for other applications, such as:
icon 424, labeled "message," of IM module 141;
icon 426 of calendar module 148 labeled "calendar";
icon 428 of image management module 144 labeled "photo";
icon 430 of camera module 143 labeled "camera";
icon 432 of online video module 155 labeled "online video";
an icon 434 of the stock market desktop applet 149-2 labeled "stock market";
Icon 436 of map module 154 labeled "map";
icon 438 labeled "weather" for weather desktop applet 149-1;
icon 440 of alarm clock desktop applet 149-4 labeled "clock";
icon 442 labeled "fitness support" for fitness support module 142;
icon 444 of notepad module 153 labeled "notepad"; and
an icon 446 of a settings application or module, labeled "settings", which provides access to the settings of the device 100 and its various applications 136.
It should be noted that the icon labels shown in fig. 4A are merely exemplary. For example, icon 422 of video and music player module 152 is labeled "music" or "music player". Other labels are optionally used for the various application icons. In some embodiments, the label of the respective application icon includes a name of the application corresponding to the respective application icon. In some embodiments, the label of a particular application icon is different from the name of the application corresponding to the particular application icon.
Fig. 4B illustrates an exemplary user interface on a device (e.g., device 300 of fig. 3) having a touch-sensitive surface 451 (e.g., tablet or touchpad 355 of fig. 3) separate from a display 450 (e.g., touchscreen display 112). Device 300 also optionally includes one or more contact intensity sensors (e.g., one or more of sensors 359) for detecting the intensity of contacts on touch-sensitive surface 451, and/or one or more tactile output generators 357 for generating tactile outputs for a user of device 300.
Although some of the examples below will be given with reference to input on touch screen display 112 (where the touch-sensitive surface and the display are combined), in some embodiments, the device detects input on a touch-sensitive surface that is separate from the display, as shown in fig. 4B. In some implementations, the touch-sensitive surface (e.g., 451 in fig. 4B) has a primary axis (e.g., 452 in fig. 4B) that corresponds to a primary axis (e.g., 453 in fig. 4B) on the display (e.g., 450). In accordance with these embodiments, the device detects contacts (e.g., 460 and 462 in fig. 4B) with the touch-sensitive surface 451 at locations that correspond to respective locations on the display (e.g., in fig. 4B, 460 corresponds to 468 and 462 corresponds to 470). As such, when the touch-sensitive surface (e.g., 451 in fig. 4B) is separated from the display (450 in fig. 4B) of the multifunction device, user inputs (e.g., contacts 460 and 462 and their movements) detected by the device on the touch-sensitive surface are used by the device to manipulate the user interface on the display. It should be understood that similar methods are optionally used for the other user interfaces described herein.
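The correspondence between locations on a separate touch-sensitive surface and locations on the display (fig. 4B) can be sketched as a proportional mapping along the corresponding primary axes. This is a simplified illustration assuming the two surfaces share the same orientation; the sizes used are arbitrary examples:

```python
# Sketch of mapping a contact on a separate touch-sensitive surface to
# the corresponding location on the display: since the primary axes
# correspond, coordinates scale proportionally along each axis.

def surface_to_display(pt, surface_size, display_size):
    """Map (x, y) on the touch surface to the corresponding display point."""
    sx, sy = surface_size
    dx, dy = display_size
    return (pt[0] * dx / sx, pt[1] * dy / sy)
```

For example, the center of a 100x50 touch surface maps to the center of a 200x100 display.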
Additionally, while the following examples are given primarily with reference to finger inputs (e.g., finger contact, single-finger tap gesture, finger swipe gesture), it should be understood that in some embodiments one or more of these finger inputs are replaced by inputs from another input device (e.g., mouse-based inputs or stylus inputs). For example, the swipe gesture is optionally replaced by a mouse click (e.g., rather than a contact), followed by movement of the cursor along the path of the swipe (e.g., rather than movement of the contact). As another example, a tap gesture is optionally replaced by a mouse click (e.g., instead of detecting a contact, followed by ceasing to detect a contact) while the cursor is over the location of the tap gesture. Similarly, when multiple user inputs are detected simultaneously, it should be understood that multiple computer mice are optionally used simultaneously, or mouse and finger contacts are optionally used simultaneously.
Fig. 5A illustrates an exemplary personal electronic device 500. The device 500 includes a body 502. In some embodiments, device 500 may include some or all of the features described with respect to devices 100 and 300 (e.g., fig. 1A-4B). In some embodiments, the device 500 has a touch-sensitive display screen 504, hereinafter referred to as a touch screen 504. Alternatively, or in addition to the touch screen 504, the device 500 has a display and a touch-sensitive surface. As with devices 100 and 300, in some embodiments, touch screen 504 (or touch-sensitive surface) optionally includes one or more intensity sensors for detecting the intensity of applied contact (e.g., touch). One or more intensity sensors of the touch screen 504 (or touch-sensitive surface) may provide output data representing the intensity of a touch. The user interface of device 500 may respond to a touch based on its intensity, meaning that touches of different intensities may invoke different user interface operations on device 500.
Exemplary techniques for detecting and processing touch intensity are found, for example, in the following related patent applications: International Patent Application Serial No. PCT/US2013/040061, titled "Device, Method, and Graphical User Interface for Displaying User Interface Objects Corresponding to an Application," filed May 8, 2013, published as WIPO Publication No. WO/2013/169849; and International Patent Application Serial No. PCT/US2013/069483, titled "Device, Method, and Graphical User Interface for Transitioning Between Touch Input to Display Output Relationships," filed November 11, 2013, published as WIPO Publication No. WO/2014/105276, each of which is hereby incorporated by reference in its entirety.
In some embodiments, the device 500 has one or more input mechanisms 506 and 508. The input mechanisms 506 and 508, if included, may be in physical form. Examples of physical input mechanisms include push buttons and rotatable mechanisms. In some embodiments, device 500 has one or more attachment mechanisms. Such attachment mechanisms, if included, may allow for attachment of the device 500 with, for example, a hat, glasses, earrings, necklace, shirt, jacket, bracelet, watchband, bracelet, pants, belt, shoe, purse, backpack, and the like. These attachment mechanisms allow the user to wear the device 500.
Fig. 5B illustrates an exemplary personal electronic device 500. In some embodiments, device 500 may include some or all of the components described with respect to figs. 1A, 1B, and 3. Device 500 has a bus 512 that operatively couples an I/O section 514 with one or more computer processors 516 and a memory 518. I/O section 514 may be connected to display 504, which may have a touch-sensitive component 522 and, optionally, an intensity sensor 524 (e.g., a contact intensity sensor). In addition, I/O section 514 may be connected with communication unit 530 for receiving application and operating system data using Wi-Fi, Bluetooth, near field communication (NFC), cellular, and/or other wireless communication techniques. Device 500 may include input mechanisms 506 and/or 508. For example, input mechanism 506 is optionally a rotatable input device or a depressible and rotatable input device. In some examples, input mechanism 508 is optionally a button.
In some examples, the input mechanism 508 is optionally a microphone. Personal electronic device 500 optionally includes various sensors, such as a GPS sensor 532, an accelerometer 534, an orientation sensor 540 (e.g., a compass), a gyroscope 536, a motion sensor 538, and/or combinations thereof, all of which may be operatively connected to I/O portion 514.
The memory 518 of the personal electronic device 500 may include one or more non-transitory computer-readable storage media for storing computer-executable instructions that, when executed by the one or more computer processors 516, may, for example, cause the computer processors to perform the techniques described below, including processes 700 and 900 (figs. 7A-7J and 9A-9G). A computer-readable storage medium may be any medium that can tangibly contain or store computer-executable instructions for use by or in connection with an instruction execution system, apparatus, or device. In some examples, the storage medium is a transitory computer-readable storage medium. In some examples, the storage medium is a non-transitory computer-readable storage medium. The non-transitory computer-readable storage medium may include, but is not limited to, magnetic storage devices, optical storage devices, and/or semiconductor storage devices. Examples of such storage devices include magnetic disks, optical discs based on CD, DVD, or Blu-ray technologies, and persistent solid-state memory such as flash memory, solid-state drives, and the like. The personal electronic device 500 is not limited to the components and configuration of fig. 5B, but may include other or additional components in multiple configurations.
As used herein, the term "affordance" refers to a user-interactive graphical user interface object that is optionally displayed on a display screen of device 100, 300, and/or 500 (fig. 1A, 3, and 5A-5B). For example, images (e.g., icons), buttons, and text (e.g., hyperlinks) optionally each constitute an affordance.
As used herein, the term "focus selector" refers to an input element that is used to indicate the current portion of the user interface with which the user is interacting. In some implementations that include a cursor or other position marker, the cursor acts as a "focus selector" such that when an input (e.g., a press input) is detected on a touch-sensitive surface (e.g., touchpad 355 in fig. 3 or touch-sensitive surface 451 in fig. 4B) while the cursor is over a particular user interface element (e.g., a button, window, slider, or other user interface element), the particular user interface element is adjusted according to the detected input. In some implementations that include a touch screen display (e.g., touch-sensitive display system 112 in fig. 1A or touch screen 112 in fig. 4A) that enables direct interaction with user interface elements on the touch screen display, a detected contact on the touch screen serves as a "focus selector" such that when an input (e.g., a press input by the contact) is detected at a location of a particular user interface element (e.g., a button, window, slider, or other user interface element) on the touch screen display, the particular user interface element is adjusted in accordance with the detected input. In some implementations, the focus is moved from one area of the user interface to another area of the user interface without corresponding movement of a cursor or movement of a contact on the touch screen display (e.g., by moving the focus from one button to another using tab or arrow keys); in these implementations, the focus selector moves according to movement of the focus between different regions of the user interface. 
Regardless of the particular form taken by the focus selector, the focus selector is typically a user interface element (or contact on a touch screen display) that is controlled by the user to deliver the user's intended interaction with the user interface (e.g., by indicating to the device the element with which the user of the user interface desires to interact). For example, upon detection of a press input on a touch-sensitive surface (e.g., a touchpad or touchscreen), the location of a focus selector (e.g., a cursor, contact, or selection box) over a respective button will indicate that the user desires to activate the respective button (as opposed to other user interface elements shown on the device display).
As used in the specification and in the claims, the term "characteristic intensity" of a contact refers to a characteristic of the contact based on one or more intensities of the contact. In some embodiments, the characteristic intensity is based on a plurality of intensity samples. The characteristic intensity is optionally based on a predefined number of intensity samples or a set of intensity samples acquired during a predetermined time period (e.g., 0.05 seconds, 0.1 seconds, 0.2 seconds, 0.5 seconds, 1 second, 2 seconds, 5 seconds, 10 seconds) relative to a predefined event (e.g., after detecting contact, before detecting contact liftoff, before or after detecting contact start movement, before or after detecting contact end, before or after detecting an increase in intensity of contact, and/or before or after detecting a decrease in intensity of contact). The characteristic intensity of the contact is optionally based on one or more of: a maximum value of the intensity of the contact, a mean value of the intensity of the contact, an average value of the intensity of the contact, a value at the top 10% of the intensity of the contact, a half-maximum value of the intensity of the contact, a 90% maximum value of the intensity of the contact, and the like. In some embodiments, the duration of the contact is used in determining the characteristic intensity (e.g., when the characteristic intensity is an average of the intensity of the contact over time). In some embodiments, the characteristic intensity is compared to a set of one or more intensity thresholds to determine whether the user has performed an operation. For example, the set of one or more intensity thresholds optionally includes a first intensity threshold and a second intensity threshold. 
In this example, a contact whose characteristic intensity does not exceed the first threshold results in a first operation, a contact whose characteristic intensity exceeds the first intensity threshold but does not exceed the second intensity threshold results in a second operation, and a contact whose characteristic intensity exceeds the second threshold results in a third operation. In some embodiments, a comparison between the characteristic intensity and one or more thresholds is used to determine whether to perform one or more operations (e.g., whether to perform a respective operation or forgo performing the respective operation), rather than being used to determine whether to perform a first operation or a second operation.
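The threshold comparison described above can be sketched briefly; this is an illustrative example only, and the function names and threshold values below are hypothetical assumptions, not drawn from the patent.

```python
# Hypothetical sketch of selecting among three operations by comparing a
# contact's characteristic intensity against a first and second threshold.
# Threshold values and names here are illustrative assumptions.

def characteristic_intensity(samples):
    """Reduce intensity samples to one characteristic value (here, the maximum)."""
    return max(samples)

def select_operation(samples, first_threshold=1.0, second_threshold=2.0):
    """Map a contact's characteristic intensity to one of three operations."""
    intensity = characteristic_intensity(samples)
    if intensity > second_threshold:
        return "third operation"    # characteristic intensity exceeds second threshold
    if intensity > first_threshold:
        return "second operation"   # exceeds first but not second threshold
    return "first operation"        # does not exceed the first threshold

print(select_operation([0.2, 0.6, 0.9]))   # first operation
print(select_operation([0.5, 1.4, 1.1]))   # second operation
print(select_operation([1.0, 2.5, 1.8]))   # third operation
```

The same structure extends to the variant mentioned above, where the comparison decides whether to perform an operation at all rather than which of several operations to perform.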
FIG. 5C illustrates detecting a plurality of contacts 552A-552E on the touch-sensitive display screen 504 using a plurality of intensity sensors 524A-524D. FIG. 5C additionally includes intensity diagrams that show the current intensity measurements of the intensity sensors 524A-524D relative to units of intensity. In this example, the intensity measurements of intensity sensors 524A and 524D are each 9 units of intensity, and the intensity measurements of intensity sensors 524B and 524C are each 7 units of intensity. In some implementations, the cumulative intensity is the sum of the intensity measurements of the plurality of intensity sensors 524A-524D, which in this example is 32 intensity units. In some embodiments, each contact is assigned a respective intensity that is a portion of the cumulative intensity. FIG. 5D illustrates assigning the cumulative intensity to contacts 552A-552E based on their distance from the center of force 554. In this example, each of contacts 552A, 552B, and 552E is assigned an intensity of contact of 8 intensity units of the cumulative intensity, and each of contacts 552C and 552D is assigned an intensity of contact of 4 intensity units of the cumulative intensity. More generally, in some implementations, each contact j is assigned a respective intensity Ij that is a portion of the cumulative intensity A in accordance with a predefined mathematical function, Ij = A·(Dj/ΣDi), where Dj is the distance of the respective contact j from the center of force, and ΣDi is the sum of the distances of all the respective contacts (e.g., i = 1 to last) from the center of force. The operations described with reference to figs. 5C-5D may be performed using an electronic device similar or identical to device 100, 300, or 500. In some embodiments, the characteristic intensity of a contact is based on one or more intensities of the contact. In some embodiments, the intensity sensors are used to determine a single characteristic intensity (e.g., a single characteristic intensity of a single contact).
It should be noted that the intensity map is not part of the displayed user interface, but is included in fig. 5C-5D to assist the reader.
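The apportionment function Ij = A·(Dj/ΣDi) described above can be illustrated with a short sketch; the distance values below are hypothetical, chosen only so the result reproduces the 8/8/4/4/8-unit split in the figure's example.

```python
# Sketch of the predefined function Ij = A * (Dj / sum(Di)) described above,
# where A is the cumulative intensity and Dj is the distance of contact j
# from the center of force. Distances here are illustrative assumptions.

def distribute_intensity(cumulative, distances):
    """Apportion cumulative intensity A across contacts by relative distance."""
    total = sum(distances)
    return [cumulative * d / total for d in distances]

# Cumulative intensity of 32 units across five contacts (552A-552E): three
# contacts receive 8 units each and two receive 4 units each, as in FIG. 5D.
print(distribute_intensity(32, [2, 2, 1, 1, 2]))  # [8.0, 8.0, 4.0, 4.0, 8.0]
```

Note that by construction the apportioned intensities always sum to the cumulative intensity A, matching the figure's 32-unit total.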
In some implementations, a portion of a gesture is identified for purposes of determining a characteristic intensity. For example, a touch-sensitive surface optionally receives a continuous swipe contact transitioning from a start location and reaching an end location, at which point the intensity of the contact increases. In this example, the characteristic intensity of the contact at the end location is optionally based on only a portion of the continuous swipe contact, and not the entire swipe contact (e.g., only the portion of the swipe contact at the end location). In some embodiments, a smoothing algorithm is optionally applied to the intensities of the swipe contact prior to determining the characteristic intensity of the contact. For example, the smoothing algorithm optionally includes one or more of: an unweighted sliding-average smoothing algorithm, a triangular smoothing algorithm, a median filter smoothing algorithm, and/or an exponential smoothing algorithm. In some circumstances, these smoothing algorithms eliminate narrow spikes or dips in the intensities of the swipe contact for purposes of determining the characteristic intensity.
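Two of the smoothing options named above can be sketched as follows; the window size and sample values are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch: an unweighted sliding-average smoother and a median
# filter smoother applied to swipe-contact intensity samples, eliminating a
# narrow spike before the characteristic intensity is determined.
import statistics

def sliding_average_smooth(samples, window=3):
    """Unweighted sliding average over a centered window."""
    half = window // 2
    return [
        sum(samples[max(0, i - half):i + half + 1])
        / len(samples[max(0, i - half):i + half + 1])
        for i in range(len(samples))
    ]

def median_filter_smooth(samples, window=3):
    """Median filter over a centered window; discards isolated narrow spikes."""
    half = window // 2
    return [
        statistics.median(samples[max(0, i - half):i + half + 1])
        for i in range(len(samples))
    ]

samples = [1.0, 1.1, 5.0, 1.2, 1.0]   # one narrow spike at index 2
print(median_filter_smooth(samples))  # the 5.0 spike is eliminated
```

The median filter removes the spike entirely, while the sliding average only flattens it; either keeps a brief sensor glitch from dominating a maximum-based characteristic intensity.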
The intensity of a contact on the touch-sensitive surface is optionally characterized relative to one or more intensity thresholds, such as a contact-detection intensity threshold, a light press intensity threshold, a deep press intensity threshold, and/or one or more other intensity thresholds. In some embodiments, the light press intensity threshold corresponds to an intensity at which the device will perform operations typically associated with clicking a button of a physical mouse or a trackpad. In some embodiments, the deep press intensity threshold corresponds to an intensity at which the device will perform operations that are different from operations typically associated with clicking a button of a physical mouse or a trackpad. In some embodiments, when a contact is detected with a characteristic intensity below the light press intensity threshold (e.g., and above a nominal contact-detection intensity threshold, below which the contact is no longer detected), the device will move a focus selector in accordance with movement of the contact on the touch-sensitive surface without performing an operation associated with the light press intensity threshold or the deep press intensity threshold. Generally, unless otherwise stated, these intensity thresholds are consistent between different sets of user interface figures.
The increase in the characteristic intensity of the contact from an intensity below the light press intensity threshold to an intensity between the light press intensity threshold and the deep press intensity threshold is sometimes referred to as a "light press" input. An increase in the characteristic intensity of a contact from an intensity below the deep press intensity threshold to an intensity above the deep press intensity threshold is sometimes referred to as a "deep press" input. An increase in the characteristic intensity of the contact from an intensity below the contact detection intensity threshold to an intensity between the contact detection intensity threshold and the light press intensity threshold is sometimes referred to as detecting a contact on the touch surface. The decrease in the characteristic intensity of the contact from an intensity above the contact detection intensity threshold to an intensity below the contact detection intensity threshold is sometimes referred to as detecting liftoff of the contact from the touch surface. In some embodiments, the contact detection intensity threshold is zero. In some embodiments, the contact detection intensity threshold is greater than zero.
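The four transitions defined above can be sketched as a small classifier; the threshold constants below are hypothetical values chosen only for illustration.

```python
# Hypothetical sketch naming the intensity transitions defined above.
# IT_DETECT, IT_LIGHT, and IT_DEEP are illustrative values for the
# contact-detection, light press, and deep press intensity thresholds.
IT_DETECT, IT_LIGHT, IT_DEEP = 0.1, 1.0, 2.0

def classify_transition(prev, curr):
    """Name the transition between two successive characteristic intensities."""
    if prev < IT_DETECT <= curr < IT_LIGHT:
        return "contact detected on touch surface"
    if prev < IT_LIGHT <= curr < IT_DEEP:
        return "light press input"
    if prev < IT_DEEP <= curr:
        return "deep press input"
    if prev >= IT_DETECT > curr:
        return "liftoff of contact"
    return None  # no threshold crossed

print(classify_transition(0.05, 0.5))  # contact detected on touch surface
print(classify_transition(0.5, 1.5))   # light press input
print(classify_transition(1.5, 2.5))   # deep press input
print(classify_transition(0.5, 0.05))  # liftoff of contact
```

A contact-detection threshold of zero, as the last sentence above allows, would simply make the "contact detected" and "liftoff" branches unreachable for non-negative intensities.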
In some embodiments described herein, one or more operations are performed in response to detecting a gesture that includes a respective press input or in response to detecting a respective press input performed with a respective contact (or contacts), wherein the respective press input is detected based at least in part on detecting an increase in intensity of the contact (or contacts) above a press input intensity threshold. In some embodiments, the respective operation is performed in response to detecting an increase in intensity of the respective contact above a press input intensity threshold (e.g., a "down stroke" of the respective press input). In some embodiments, the press input includes an increase in intensity of the respective contact above a press input intensity threshold and a subsequent decrease in intensity of the contact below the press input intensity threshold, and the respective operation is performed in response to detecting a subsequent decrease in intensity of the respective contact below the press input threshold (e.g., an "up stroke" of the respective press input).
FIGS. 5E-5H illustrate detection of a gesture that includes a press input corresponding to an increase in the intensity of contact 562 from an intensity below the light press intensity threshold (e.g., "ITL") in FIG. 5E to an intensity above the deep press intensity threshold (e.g., "ITD") in FIG. 5H. The gesture performed with contact 562 is detected on touch-sensitive surface 560 while cursor 576 is displayed over application icon 572B corresponding to application 2, on a displayed user interface 570 that includes application icons 572A-572D displayed in predefined region 574. In some implementations, the gesture is detected on touch-sensitive display 504. The intensity sensors detect the intensity of contacts on touch-sensitive surface 560. The device determines that the intensity of contact 562 peaked above the deep press intensity threshold (e.g., "ITD"). Contact 562 is maintained on touch-sensitive surface 560. In response to detecting the gesture, and in accordance with the intensity rising above the deep press intensity threshold (e.g., "ITD") during the gesture, reduced-scale representations 578A-578C (e.g., thumbnails) of recently opened documents for application 2 are displayed, as shown in figs. 5F-5H. In some embodiments, the intensity that is compared to the one or more intensity thresholds is the characteristic intensity of the contact. It should be noted that the intensity diagram for contact 562 is not part of a displayed user interface, but is included in figs. 5E-5H to assist the reader.
In some embodiments, the display of representations 578A-578C includes an animation. For example, representation 578A is initially displayed in proximity to application icon 572B, as shown in fig. 5F. As the animation proceeds, representation 578A moves upward and representation 578B is displayed in proximity to application icon 572B, as shown in fig. 5G. Representation 578A then moves upward, 578B moves upward toward representation 578A, and representation 578C is displayed in proximity to application icon 572B, as shown in fig. 5H. Representations 578A-578C form an array above icon 572B. In some embodiments, the animation progresses in accordance with the intensity of contact 562, as shown in figs. 5F-5G, where representations 578A-578C appear and move upward as the intensity of contact 562 increases toward the deep press intensity threshold (e.g., "ITD"). In some embodiments, the intensity on which the progress of the animation is based is the characteristic intensity of the contact. The operations described with reference to figs. 5E-5H may be performed using an electronic device similar or identical to device 100, 300, or 500.
In some embodiments, the device employs intensity hysteresis to avoid accidental input sometimes referred to as "jitter," where the device defines or selects a hysteresis intensity threshold having a predefined relationship to the press input intensity threshold (e.g., the hysteresis intensity threshold is X intensity units lower than the press input intensity threshold, or the hysteresis intensity threshold is 75%, 90%, or some reasonable proportion of the press input intensity threshold). Thus, in some embodiments, the press input includes an increase in intensity of the respective contact above a press input intensity threshold and a subsequent decrease in intensity of the contact below a hysteresis intensity threshold corresponding to the press input intensity threshold, and the respective operation is performed in response to detecting a subsequent decrease in intensity of the respective contact below the hysteresis intensity threshold (e.g., an "upstroke" of the respective press input). Similarly, in some embodiments, a press input is detected only when the device detects an increase in intensity of the contact from an intensity at or below the hysteresis intensity threshold to an intensity at or above the press input intensity threshold and optionally a subsequent decrease in intensity of the contact to an intensity at or below the hysteresis intensity, and a corresponding operation is performed in response to detecting the press input (e.g., an increase in intensity of the contact or a decrease in intensity of the contact, depending on the circumstances).
For ease of explanation, the descriptions of operations performed in response to a press input associated with a press input intensity threshold, or in response to a gesture that includes a press input, are optionally triggered in response to detecting any of the following: an increase in the intensity of a contact above the press input intensity threshold, an increase in the intensity of a contact from an intensity below the hysteresis intensity threshold to an intensity above the press input intensity threshold, a decrease in the intensity of a contact below the press input intensity threshold, and/or a decrease in the intensity of a contact below the hysteresis intensity threshold corresponding to the press input intensity threshold. Additionally, in examples where an operation is described as being performed in response to detecting a decrease in the intensity of a contact below the press input intensity threshold, the operation is optionally performed in response to detecting a decrease in the intensity of the contact below a hysteresis intensity threshold that corresponds to, and is lower than, the press input intensity threshold.
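The hysteresis scheme described above can be sketched as a small state machine; the threshold values and the 75% ratio below are illustrative assumptions consistent with the examples given.

```python
# Hypothetical sketch of intensity hysteresis: a press is recognized when the
# intensity rises to the press-input threshold, but released only when it
# falls to a lower hysteresis threshold, suppressing accidental "jitter".
PRESS_THRESHOLD = 1.0
HYSTERESIS_THRESHOLD = PRESS_THRESHOLD * 0.75  # e.g., 75% of the press threshold

def press_events(intensities):
    """Return (index, event) pairs for down-stroke and up-stroke transitions."""
    pressed = False
    events = []
    for i, intensity in enumerate(intensities):
        if not pressed and intensity >= PRESS_THRESHOLD:
            pressed = True
            events.append((i, "down-stroke"))
        elif pressed and intensity <= HYSTERESIS_THRESHOLD:
            pressed = False
            events.append((i, "up-stroke"))
    return events

# A dip to 0.9 (below the press threshold but above the hysteresis threshold)
# does not end the press, so no spurious up-stroke/down-stroke pair is emitted.
print(press_events([0.2, 1.1, 0.9, 1.2, 0.5]))  # [(1, 'down-stroke'), (4, 'up-stroke')]
```

Without the lower release threshold, the dip to 0.9 in this trace would register as an extra release-and-press pair, which is exactly the jitter the scheme avoids.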
As used herein, an "installed application" refers to a software application that has been downloaded onto an electronic device (e.g., devices 100, 300, and/or 500) and is ready to be launched (e.g., become opened) on the device. In some embodiments, a downloaded application becomes an installed application by way of an installation program that extracts program portions from a downloaded package and integrates the extracted portions with the operating system of the computer system.
As used herein, the term "open application" or "executing application" refers to a software application that has maintained state information (e.g., as part of device/global internal state 157 and/or application internal state 192). The open or executing application is optionally any of the following types of applications:
the active application currently displayed on the display screen of the device using the application;
a background application (or background process) that is not currently displayed but one or more processes of the application are being processed by one or more processors; and
a suspended or dormant application that is not running but is stored in memory (volatile and non-volatile, respectively) and can be used to resume execution of the application.
As used herein, the term "closed application" refers to a software application that does not have retained state information (e.g., the state information of the closed application is not stored in the memory of the device). Thus, closing an application includes stopping and/or removing the application's application process and removing the application's state information from the device's memory. Generally, while in a first application, opening a second application does not close the first application. The first application becomes the background application when the second application is displayed and the first application stops being displayed.
Attention is now directed to embodiments of a user interface ("UI") and associated processes implemented on an electronic device, such as portable multifunction device 100, device 300, or device 500.
Fig. 6A-6 AAC illustrate exemplary user interfaces for sharing a suggested set of media items, according to some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in fig. 7A-7J.
Fig. 6A illustrates an exemplary message interface 604 displayed on display 602 of electronic device 600. In some embodiments, device 600 includes one or more features of devices 100, 300, or 500. In some embodiments, device 600 includes one or more input devices (e.g., a touch-screen display, a touch-sensitive surface). Message interface 604 is associated with a messaging application (e.g., executing on device 600). For example, a messaging application is any application that can be used to send and/or receive electronic messages to a recipient. Exemplary electronic messages include messages sent using Short Message Service ("SMS"), Multimedia Messaging Service ("MMS"), and/or internet-based messaging protocols such as iMessage by Apple Inc.
In fig. 6A, the message interface 604 includes an exemplary text record 604A representing a message conversation. In some embodiments, a text record includes content (e.g., text, media, shared locations, or other data) shared between one or more parties to an electronic message conversation (also referred to as a message conversation). In the example shown in fig. 6A, the conversation is between three parties: Lynne (e.g., user 603A), William (e.g., user 603B), and Gregg (e.g., user 603C), an indicator for each of whom is displayed in area 604E. The displayed text record 604A includes messages aligned with the right side of the display, representing messages sent by the user of device 600 (in this example, user 603A, named Lynne), and messages aligned with the left side of the display, representing messages received by the user of device 600 (e.g., from William or Gregg).
At fig. 6A, the device 600 receives a user input 605 corresponding to a selection of the text entry field 604D of the message interface 604. In some embodiments, a device (e.g., 600) receives user input associated with a messaging interface and, in response, displays one or more input affordances (e.g., shown in fig. 6B and 6C, described below).
FIG. 6B illustrates message interface 604 with several exemplary input affordances. In some embodiments, the one or more input affordances include one or more input suggestions for sharing content with participants in the message conversation. For example, fig. 6B shows message interface 604 with a displayed sharing suggestion region 606 that includes input suggestion 606A and input suggestion 606B, each suggesting a text phrase for insertion. In this example, input suggestions 606A and 606B are affordances that, in response to selection, cause device 600 to insert the pre-selected phrase into the message conversation (e.g., selection of 606B causes device 600 to insert the text "ok" into text record 604A or text entry field 604D).
In some embodiments, the one or more input affordances include a keyboard (e.g., 604E). For example, keyboard 604E shown in fig. 6B can be used to insert text, emoji, or dictation into the text record of the conversation.
Fig. 6C illustrates an exemplary message interface 604 in which the input suggestions have been replaced with input suggestions of shared media content. In some embodiments, a device (e.g., 600) displays an input suggestion that suggests sharing media content (e.g., one or more sets of media items). For example, as shown in fig. 6C, device 600 has replaced the display of input suggestion 606B with exemplary input suggestion 606C. The input suggestion 606C includes the text "share photo" and represents a quick and easy method of accessing an interface that shares suggested media items with participants of the current session, as described in more detail below (e.g., with reference to fig. 6E).
In some embodiments, the suggestion is based on one or more participants in the conversation. For example, device 600 displays input suggestion 606C because there is a media item (e.g., in a media library associated with device 600) that includes a depiction of a participant (e.g., user 603A, 603B, and/or 603C) in the conversation represented by text record 604A. As another example, device 600 displays input suggestion 606C because there is a media item (e.g., in a media library associated with device 600) that is associated with an event known to have been attended by a participant (e.g., user 603A, 603B, and/or 603C) in the conversation represented by text record 604A. Thus, in the example shown in fig. 6C, device 600 displays input suggestion 606C prompting the user to share photos because the user has media items (e.g., photos and/or videos) in their media library that are suitable for sharing with the participants in the current conversation. The bases for the suggestion, and for suggesting particular media items, are discussed in greater detail below.
Fig. 6D illustrates an exemplary message interface after receiving one or more messages from a recipient (e.g., user 603B). As shown in fig. 6D, William (user 603B) has added content 604B to the text record (e.g., received by device 600), representing a collection of media items (e.g., one or more media items) that he has shared with the other participants in the current message conversation, in this example Lynne (user 603A) and Gregg (user 603C). In some implementations, an input suggestion (e.g., 606C) is displayed based on media content in the text record (e.g., 604A). For example, input suggestion 606C is displayed because William (user 603B) shared media related to media present in the media library of the user (603A) of device 600. In some implementations, an input suggestion (e.g., 606C) is displayed in response to receiving media content (e.g., such as the collection of media items from William represented by 604B) that has been shared with the device (e.g., 600). For example, device 600 can display input suggestion 606C immediately, in response to receiving representation 604B in text record 604A indicating that William shared media content.
As shown in fig. 6D, William has also inserted a message 604C into the text record requesting that another participant in the conversation (the user of device 600, in this example Lynne) share media back. Message 604C reads: "Hey! Can you send me the photos from Lake Tahoe last weekend?" In some embodiments, an input suggestion (e.g., 606C) is displayed based on textual content in the text record (e.g., 604A). In some embodiments, an input suggestion (e.g., 606C) is displayed in response to receiving textual content (e.g., text such as 604C from William, representing a request that the user of device 600 share media related to a trip to Lake Tahoe last weekend). For example, device 600 can display input suggestion 606C immediately, in response to receiving message 604C in text record 604A, which references photos (e.g., from Lake Tahoe).
In some embodiments, an input suggestion (e.g., 606C) is displayed in response to receiving an input on a keyboard (e.g., 604E). For example, as shown in FIG. 6D, the device 600 receives a user input 607 indicating selection of a key (for the letter O). In response, the letter O has been entered into the text entry field. For example, the input suggestions 606C may be displayed in response to the user (e.g., beginning) typing (e.g., user input 607).
FIG. 6E illustrates selection of exemplary input suggestion affordance 606C. In some embodiments, a device (e.g., 600) receives user input corresponding to selection of an input suggestion affordance (e.g., 606C) and, in response, displays a sharing interface (e.g., that includes a depiction of at least a portion of a suggested collection of media items for sharing). For example, at fig. 6E, device 600 receives user input 608 corresponding to selection of input suggestion affordance 606C. In response to receiving user input 608, device 600 displays sharing interface 610, as shown in fig. 6F.
FIG. 6F illustrates an exemplary sharing interface 610. Sharing interface 610 is associated with a photos application for managing a media library associated with device 600 (e.g., media items such as photos and videos, stored locally on device 600 and/or stored remotely). A user of device 600 can use sharing interface 610 to select one or more media items for sharing with one or more recipients. For example, sharing interface 610 is displayed concurrently with transcript 604A of the same conversation with William and Gray, conveniently allowing the user to select media items for sharing while continuing to view the associated message conversation transcript (e.g., to view newly received messages).
In some embodiments, the sharing interface includes a plurality of pages. For example, as shown in FIG. 6F, paging dots 610D indicate that sharing interface 610 includes three pages, where the second page (e.g., 612) is the currently selected page in this example (e.g., the second paging dot is filled). In some embodiments, the device receives user input (e.g., a swipe gesture) and, in response, replaces display of a first page with display of a second page. The skilled artisan will appreciate that other arrangements of the content are possible and are intended to fall within the scope of the present disclosure. For example, the content of the three pages in this example could be arranged in a single page and viewed, for example, by scrolling vertically through that single page.
FIG. 6F illustrates an exemplary suggested collection interface 612. Suggested collection interface 612 represents a collection of one or more media items (hereinafter also referred to as a "collection of media items" or a "suggested collection") that is available for sharing. In some embodiments, the device (e.g., 600) displays suggested collection interface 612 in response to detecting one of user inputs 608, 614, 616, or 620.
As shown in FIG. 6F, suggested collection interface 612 includes a title card 612A and descriptive elements 612B and 612C. In some embodiments, a title card and/or descriptive elements are optionally included in a suggested collection interface. Title card 612A includes a representative image from the suggested collection. Element 612B indicates a location associated with the suggested collection (Lake Tahoe). Element 612C indicates a time associated with the suggested collection (December 1 to 4). In this example, the year is not shown (e.g., because the date falls in the current year or is relatively recent, such as within the past year). In other examples, a year associated with a date can be displayed.
In some embodiments, a suggested collection (e.g., as represented by interface 612) is suggested (e.g., by device 600) based on one or more factors. In some embodiments, the suggested collection is determined to be relevant to a message conversation (e.g., represented by transcript 604A). In some embodiments, suggesting the collection includes displaying a suggested collection interface for a suggested collection that is determined to be relevant to the message conversation. In some embodiments, the collection is suggested based on content (e.g., text or one or more media items) in a transcript (e.g., 604A) of the message conversation.
In some embodiments, the content in the transcript is textual content. In some embodiments, the suggested collection is determined to be relevant to the textual content. In some embodiments, the suggested collection of media items is determined to be relevant to the message conversation with the recipient based on a geographic location referenced (e.g., included) in the transcript of the message conversation. For example, device 600 can suggest the collection of suggested collection interface 612 based on message text 604C from William, which mentions the geographic location "Lake Tahoe" (e.g., requesting photos from Lake Tahoe last weekend), as shown in FIG. 6D. Because the suggested collection of 612 corresponds to the geographic location Lake Tahoe (e.g., includes media captured at that location), device 600 suggests sharing the collection in the message conversation.
The text to which a suggested collection relates need not be geographic. In some embodiments, the suggested collection of media items is determined to be relevant to the message conversation with the recipient based on a time referenced (e.g., included) in the transcript of the message conversation (e.g., a textual reference to a particular time, date, or date range, or a relative description (e.g., last weekend)). For example, device 600 can suggest the collection of suggested collection interface 612 based on message text 604C from William, which mentions the time "last weekend" (requesting photos from Lake Tahoe last weekend), as shown in FIG. 6D. Because the suggested collection of 612 corresponds to the time referenced in the transcript (e.g., the dates of last weekend) (e.g., the suggested collection includes media captured within and/or near the referenced time), device 600 suggests sharing the collection in the message conversation.
The text to which a suggested collection relates need not be time- or geography-related. For example, other text (e.g., phrases or keywords) in a message conversation can cause a related collection of media items to be suggested for sharing. For example, a mention of the word "birthday" can result in a suggestion of a media collection from a birthday celebration (e.g., a collection that includes "birthday" in its title, or that was captured on a date known to be the birthday of one or more persons depicted in the collection). Of course, other text in the message conversation can be relevant to a collection of media items.
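Taken together, the location, time, and keyword signals described above amount to scoring a candidate collection against the transcript text. The following sketch shows one plausible scoring scheme; the weights, the "last weekend" definition, and the collection fields are assumptions for illustration, not the patented method.

```python
from datetime import date, timedelta

def start_of_last_weekend(today: date) -> date:
    # Most recent Saturday strictly before today (one possible reading of
    # the relative description "last weekend"). Monday == 0, Saturday == 5.
    days_back = (today.weekday() - 5) % 7 or 7
    return today - timedelta(days=days_back)

def relevance_score(collection: dict, transcript_text: str, today: date) -> int:
    text = transcript_text.lower()
    score = 0
    # Geographic reference, e.g. "Lake Tahoe" in message 604C.
    if collection["location"].lower() in text:
        score += 2
    # Relative time reference, e.g. "last weekend".
    if "last weekend" in text:
        saturday = start_of_last_weekend(today)
        if saturday <= collection["start"] <= saturday + timedelta(days=1):
            score += 1
    # Other keywords, e.g. "birthday" matching a collection's title or metadata.
    score += sum(1 for kw in collection.get("keywords", []) if kw in text)
    return score
```

With today taken as Wednesday, December 5, a collection starting Saturday, December 1 at Lake Tahoe matches both the location and the "last weekend" reference in message 604C.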
In some embodiments, the content in the transcript is media content. In some embodiments, the suggested collection is determined to be relevant to the media content. For example, the collection of suggested collection interface 612 can be suggested by device 600 based on the received media items shared by William. In this example, the received collection shared by William (represented by 604B) is associated with the location Lake Tahoe and the time range December 1 to 4. Notably, device 600 can suggest the collection represented by suggested collection interface 612 because that collection is also associated with the location Lake Tahoe and the time range December 1 to 4.
In some embodiments, the suggested collection of media items is determined to be relevant to the message conversation with the recipient based on the identities of one or more participants in the message conversation. In some embodiments, a participant is a user associated with a user account (e.g., of a cloud-based service or social media). In some embodiments, a participant is a contact (e.g., from an address book) associated with the device (e.g., 600) or with a user of the device. In some embodiments, the suggested collection of media items is a collection in which one or more participants in the conversation (or more than a threshold number of participants) are depicted in the media items of the collection. For example, the suggested collection of media items (612) includes media items taken during a camping trip (e.g., to Lake Tahoe) attended by some or all of the participants in the conversation (e.g., 603A, 603B, and 603C). Thus, the collection of media items is suggested for sharing with the participants in the conversation because those participants attended the camping trip.
In some embodiments, the suggested collection of media items is determined to be relevant to the message conversation based on an event known to have been attended by one or more participants in the message conversation. In some embodiments, an event includes media items captured at one or more geographic locations and within a particular time range. For example, the suggested collection of interface 612 corresponds to an event defined by the geographic location Lake Tahoe and the time range December 1 to 4, and optionally includes media items captured at the geographic location Lake Tahoe within the time range December 1 to 4. In some embodiments, the event is determined automatically (e.g., based on the geographic locations and capture times of the media items). In some embodiments, the event is user-created (e.g., a user manually creates the collection using device 600). For example, suggested collection interface 612 can be suggested based on data associated with the suggested collection indicating that a participant in the conversation attended an event associated with the collection. For example, the suggested collection of media items represented by suggested collection interface 612 can correspond to an event known to have been attended by William (e.g., media captured at Lake Tahoe from December 1 to 4) (e.g., based on identification of a face associated with William in the media items of the suggested collection, or based on other metadata associated with those media items).
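As a sketch of this event-based relevance check, the following treats an event as a (location, time range) pair and treats "known to have attended" as a face match in the collection's media items. All field names and the face-matching shortcut are illustrative assumptions, standing in for whatever metadata and face-identification machinery the device actually uses.

```python
from datetime import date

def collection_matches_event(collection: dict, location: str,
                             start: date, end: date) -> bool:
    # A collection corresponds to an event when its media was captured at the
    # event's geographic location and within the event's time range.
    return (collection["location"] == location
            and start <= collection["start"]
            and collection["end"] <= end)

def participant_attended(collection: dict, participant: str,
                         faces_by_item: dict) -> bool:
    # Stand-in for face identification: did any media item in the collection
    # contain a face identified as the participant?
    return any(participant in faces_by_item.get(item, ())
               for item in collection["items"])
```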
FIGS. 6G-6K illustrate other techniques for accessing a suggested collection interface. FIG. 6G illustrates an exemplary technique for selecting message text to view a suggested collection for sharing. In some embodiments, a portion of the message text in the transcript is selectable. For example, as shown in FIG. 6G, portion 604F of message 604C, displayed as "photos from Lake Tahoe," is visually emphasized (e.g., underlined) and is selectable. In some embodiments, a device (e.g., 600) (or a cloud-based service in communication with the device) detects that the text of a message (e.g., message 604C) includes text related to an action that can be performed by the device. For example, device 600 detects that message 604C includes text related to sharing media items, which device 600 is capable of doing (e.g., the text is associated with shareable media). In this case, text 604F of the message is selectable and is optionally displayed with a visual indication that it is selectable. As shown in FIG. 6G, the "photos from Lake Tahoe" portion 604F is displayed underlined and is selected by user input 614. In some embodiments, in response to receiving user input corresponding to selection of the selectable text (e.g., 604F), the device (e.g., 600) displays a sharing interface (e.g., 610 of FIG. 6F) and/or a suggested collection interface (e.g., as shown in FIG. 6F). In FIG. 6G, device 600 receives user input 614 corresponding to selection of portion 604F of message 604C.
At FIG. 6H (e.g., the same interface as in FIG. 6D), device 600 receives user input 616 corresponding to selection of application selection affordance 604G. Application selection affordance 604G can be used to access one or more sharing interfaces (e.g., 610) associated with one or more applications for sharing content in a message conversation. In some embodiments, in response to receiving user input 616, device 600 displays sharing interface 610 (e.g., as shown in FIG. 6I or FIG. 6F).
In some embodiments, a device (e.g., 600) displays one or more interfaces associated with applications other than the photos application. For example, application selector area 610A in FIG. 6I shows photos application affordance 610B selected (e.g., enclosed by a box). In FIG. 6I, affordance 610C is associated with a music application and is not selected. In some embodiments, user input corresponding to selection of an application affordance causes display of a sharing interface associated with the respective application. For example, in response to selection of music application affordance 610C, device 600 replaces display of sharing interface 610 with a sharing interface associated with the music application.
In some embodiments, in response to selection of the application affordance (e.g., 604G), the device (e.g., 600) displays a sharing interface that is not associated with the photos application. For example, in response to user input 616, device 600 can display an interface associated with application selector area 610A (e.g., an application storefront for downloading one or more applications) or an interface associated with affordance 610C (e.g., an interface for sharing music-related content). In some embodiments, while displaying a sharing interface not associated with the photos application, the device displays the sharing interface associated with the photos application in response to user input. For example, the user input can be one or more swipes at the location of the other sharing interface. For example, two left swipes from the music application's interface (represented by affordance 610C) cause sharing interface 610 associated with the photos application to be displayed (e.g., because the respective affordances 610B and 610C are separated by one icon). For example, if a sharing interface includes multiple pages (e.g., as in 610), a swipe received while displaying an end page, in a direction away from the other pages, causes the device to display the sharing interface of the next adjacent application. In some embodiments, the most recently used sharing interface is displayed in response to selection (e.g., 616) of an application affordance (e.g., 604G). For example, if sharing interface 610 was the most recently displayed sharing interface, device 600 displays sharing interface 610 in response to user input 616.
FIG. 6I illustrates exemplary sharing interface 610 including an exemplary recent photos page 618. In this example, recent photos page 618 includes the most recent media items of the media library associated with device 600 (e.g., ordered chronologically by capture time, most recent first). In this example, page 618 is the first page (e.g., associated with the first paging dot of 610D). At FIG. 6J, device 600 receives user input 620, a left swipe gesture at recent photos page 618. In response to receiving user input 620, device 600 replaces display of recent photos page 618 with display of the adjacent page, suggestion page 612 (also referred to as suggested collection interface 612). For example, FIG. 6K shows that user input 620, representing a contact on the touch-sensitive display of device 600, continues to be applied but has moved to the left (relative to FIG. 6J). As shown in FIG. 6K, a portion of suggested collection interface 612 is now displayed, while a portion of recent photos page 618 is no longer displayed.
In some embodiments, a device (e.g., 600) displays a suggested collection interface (e.g., 612) without displaying a sharing interface (e.g., 610). For example, as described throughout this document, device 600 can display one or more elements of a suggested collection interface (e.g., 612) without displaying a sharing interface (e.g., 610) (e.g., without displaying paging dots or application affordances).
FIGS. 6L-6N illustrate exemplary techniques for expanding a suggested collection interface (e.g., 612). As shown in FIG. 6L, suggested collection interface 612 is displayed concurrently with transcript 604A. In some embodiments, the device (e.g., 600) expands the suggested collection interface in response to user input. For example, at FIG. 6L, device 600 receives user input 622 (e.g., an upward swipe on a graphical handle associated with suggested collection interface 612). As shown in FIG. 6M, in response to detecting user input 622, device 600 expands the displayed suggested collection interface 612 (e.g., causes more of suggested collection interface 612 to be displayed). In FIG. 6M, user input 622 continues to be detected and has moved upward on the touch-sensitive display (relative to FIG. 6L), and suggested collection interface 612 has expanded while less of transcript 604A is displayed.
FIG. 6N shows the suggested collection interface fully expanded. For example, in response to completion of the upward swipe gesture of user input 622, device 600 displays suggested collection interface 612 fully expanded. In some embodiments, displaying the suggested collection interface fully expanded reduces (e.g., partially or entirely) the amount of the transcript that is displayed. For example, as shown in FIG. 6N, the transcript of the message conversation is no longer displayed concurrently with suggested collection interface 612. In some embodiments, the transcript is displayed concurrently with the fully expanded suggested collection interface. For example, some portion of the transcript can be displayed concurrently with the fully expanded suggested collection interface, where that portion is smaller than what is displayed when the suggested collection interface is not fully expanded. In some embodiments, the suggested collection of media items (e.g., as shown in interface 612) is scrollable (e.g., to display additional content or elements as described herein with respect to interface 612). For example, user input representing a request to scroll (e.g., an upward or downward swipe at 612) causes additional media items in the collection to be displayed.
FIGS. 6O-6R illustrate exemplary techniques for navigating between pages associated with the photos application in a sharing interface. While viewing the fully expanded suggested collection interface 612, the user can still move between pages associated with the photos application (e.g., of sharing interface 610). In some embodiments, a device (e.g., 600) displays a different suggested collection in response to receiving user input. For example, at FIG. 6O, device 600 receives user input 624 at suggested collection interface 612, representing a left swipe gesture at the location of interface 612 (e.g., on title card 612A). In response to receiving user input 624, device 600 replaces display of suggested collection interface 612 with display of an adjacent page associated with another suggested collection, suggestion page 626 shown in FIG. 6P (also referred to as suggested collection interface 626).
FIG. 6P illustrates an exemplary suggested collection interface 626. In some embodiments, suggested collection interface 626 includes one or more features of suggested collection interface 612 as described herein. Similar to suggested collection interface 612, suggested collection interface 626 represents a suggested collection of media items (e.g., a "second" suggested collection, different from that represented by suggested collection interface 612). Suggested collection interface 626 likewise includes a title card 626A (e.g., with a representative image from its respective suggested collection), an indication of location 626B (Tahoe Forest), and an indication of date 626C (July 3 to 8). Suggested collection interface 626 also includes representations (e.g., representation 626D) of one or more media items in the respective collection. In this example, the suggested collection represented by suggested collection interface 626 is determined to be relevant to the message conversation represented by transcript 604A (e.g., because the text of transcript 604A of the conversation with William and Gray mentions the geographic location Lake Tahoe, and the suggested collection is associated with a nearby geographic location named "Tahoe Forest"). Thus, suggested collection interface 626 provides a second suggested collection of media items that is available for sharing and that is relevant to the potential recipients William and Gray (e.g., relevant to the text of the conversation with those participants).
In some embodiments, suggested collections determined to be relevant to the message conversation are displayed in an order (e.g., of decreasing relevance). In some embodiments, the order is based on relevance to the message conversation. For example, if the suggested collection of interface 612 is relevant to transcript 604A of FIG. 6D both because it depicts participants 603B and 603C and because it relates to a geographic location mentioned in the text, while the suggested collection of interface 626 is relevant to transcript 604A based only on geographic location (e.g., Tahoe Forest, near Lake Tahoe) and does not depict the participants, then interface 612 is more relevant to the conversation. Thus, for example, interface 612 can be displayed immediately in response to selection of the input suggestion affordance (e.g., 606C), while interface 626 is accessed with a swipe from 612. Likewise, interface 612 can be assigned the second page (e.g., of paging dots 610D) while interface 626 is assigned the third (later) page.
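The ordering described here, where depicting conversation participants plus a location match outranks a location-only match, can be sketched as a weighted sort. The weights are invented for illustration and are not taken from the source.

```python
def suggestion_rank_key(collection: dict) -> int:
    # Hypothetical weighting: depicting conversation participants counts for
    # more than matching a referenced location, mirroring why the Lake Tahoe
    # collection (interface 612) precedes the Tahoe Forest one (interface 626).
    return (2 * int(collection["depicts_participants"])
            + int(collection["location_match"]))

def order_suggestions(collections: list) -> list:
    # Decreasing relevance; Python's sort is stable, so ties keep input order.
    return sorted(collections, key=suggestion_rank_key, reverse=True)
```

The most relevant collection would then occupy the earliest suggestion page and be the one shown immediately when the input suggestion affordance is selected.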
In this example, the photos requested by William in transcript 604A correspond to the suggested collection (e.g., the "first" suggested collection) represented by suggested collection interface 612. Accordingly, the user can swipe back to interface 612. At FIG. 6Q, device 600 receives user input 628 (e.g., representing a right swipe gesture) at a location associated with interface 626. In response to receiving user input 628, device 600 again displays suggested collection interface 612, as shown in FIG. 6R.
FIG. 6R illustrates exemplary suggested collection interface 612. In some embodiments, a suggested collection interface (e.g., 612) includes an indication of the number of media items in the corresponding suggested collection. For example, indicator 612D indicates that the suggested collection includes 23 photos and 1 video (e.g., 24 media items in total). In some embodiments, a suggested collection interface (e.g., 612) includes a sharing affordance for transmitting one or more media items from the corresponding suggested collection. For example, sharing affordance 612E can be selected to share one or more media items, as described below. In some embodiments, the suggested collection interface (e.g., 612) includes representations of one or more media items. For example, suggested collection interface 612 includes media items 612F and 612G, each of which represents a media item in the suggested collection of 612 (the Lake Tahoe collection). In some embodiments, the suggested collection interface (e.g., 612) includes one or more selection indicators indicating whether one or more media items are currently selected (e.g., for sharing in response to selection of the sharing affordance). In some embodiments, a selection indicator is visually associated with a currently selected media item and is optionally not displayed when that media item is not selected. For example, media items 612F and 612G each include a selection indicator 612H, indicating that media items 612F and 612G are currently selected.
In some embodiments, the device (e.g., 600) enters a selection mode (e.g., in response to user input) after displaying the suggested collection interface (e.g., 612). In some embodiments, the device (e.g., 600) is in a selection mode (e.g., initially) when the suggested collection interface (e.g., 612) is displayed. In some embodiments, the suggested collection interface (e.g., 612) includes one or more features of interface 814, described below.
FIGS. 6S-6U illustrate exemplary techniques for sharing a suggested collection of media items. In some embodiments, a device (e.g., 600) receives input (e.g., 630) representing a request to share one or more media items of the suggested collection (e.g., 612). For example, at FIG. 6S, device 600 receives user input 630 at sharing affordance 612E. As shown in FIG. 6S, when user input 630 is received, sharing affordance 612E indicates that all of the media items in the collection are currently selected (e.g., sharing affordance 612E reads "Send All"). In response to receiving user input 630, device 600 can prepare to share the selected media items of the suggested collection, in this case all of the media items.
FIG. 6T illustrates an exemplary representation 631 of the collection of media items inserted into exemplary text entry field 604D for sharing. In some embodiments, preparing to share the media items includes inserting a representation of the media items into a text entry field (e.g., 604D of FIG. 6T). For example, once the representation is inserted into text entry field 604D, the user can optionally add accompanying text (or other content) and then transmit a message that provides access to the selected media items (e.g., by inserting the media items, or a representation of the media items, into the transcript of the message conversation).
FIG. 6U shows exemplary representation 631 inserted into the transcript of the message conversation after sharing. In some embodiments, in response to receiving a request (e.g., 630) to share the suggested collection of media items, a device (e.g., 600) transmits a message providing access to at least a portion of the suggested collection. For example, device 600 can transmit a message providing access to the suggested collection of media items (e.g., represented by representation 631, as shown in FIG. 6U) in response to receiving user input 630, optionally without requiring further user input and without providing an opportunity to add accompanying content before the message is sent. Alternatively, device 600 can transmit a message providing access to the suggested collection of media items (e.g., represented by representation 631, as shown in FIG. 6U) in response to user input corresponding to selection of affordance 604I of FIG. 6T.
In some embodiments, the device (e.g., 600) shares fewer than all of the media items in the suggested collection. In some embodiments, the suggested collection interface (e.g., 612) has fewer than all of the media items initially selected. For example, a group of fewer than all of the media items can be selected and presented on initial display of interface 612 (e.g., similar to what is shown in FIG. 6W). In some embodiments, the device (e.g., 600) allows a media item to be added to the initial selection (e.g., via user input). This can be useful if the suggested collection includes a large number of media items. In some embodiments, the device (e.g., 600) allows a media item to be removed from the initial selection (e.g., via user input). In some embodiments, a group of fewer than all of the media items in the suggested collection is selected based on characteristics of individual media items (e.g., a media item is in focus, has good composition, and so forth) and/or based on characteristics of the group as a whole (e.g., duplicate or very similar media items are excluded from selection, media items are selected so that multiple identified faces are depicted, media items are selected so that multiple subjects are depicted, and so forth).
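The initial sub-selection just described (per-item quality plus de-duplication across the group) might be sketched as a greedy filter. The quality scores and the similarity predicate below are placeholders for whatever per-item and pairwise analyses the device actually performs.

```python
from typing import Callable

def initial_selection(items: list, min_quality: float = 0.5,
                      similar: Callable = lambda a, b: False) -> list:
    kept = []
    for item in items:
        # Per-item characteristic: skip out-of-focus / poorly composed items.
        if item["quality"] < min_quality:
            continue
        # Group characteristic: skip items too similar to one already kept.
        if any(similar(item, k) for k in kept):
            continue
        kept.append(item)
    return kept
```

A greedy pass like this keeps the first acceptable representative of each near-duplicate cluster, which is one simple way to realize "prevented from being selected as duplicate or very similar."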
FIGS. 6V-6AG illustrate exemplary interfaces for customizing the selection of media items of a suggested collection for sharing. In some embodiments, the device (e.g., 600) receives user input representing a request to toggle whether a media item is selected (e.g., to select an unselected item, or to deselect a selected item). For example, at FIG. 6V, device 600 receives user input 632 associated with media item 612G (e.g., at its location). In some embodiments, in response to the user input, the device (e.g., 600) toggles selection of the media item. In some embodiments, if toggling the selection results in the media item being selected, the device displays a selection indicator associated with the item. In some embodiments, if toggling the selection results in the media item not being selected, the device ceases to display the selection indicator associated with the item. In some embodiments, if toggling the selection results in the media item not being selected, the device displays an unselected indicator associated with the item (e.g., instead of the selection indicator). As shown in FIG. 6W, in response to receiving user input 632, device 600 ceases to display selection indicator 612H and optionally displays unselected indicator 612I. Thus, user input 632 causes media item 612G to no longer be selected, which is indicated visually by selection indicator 612H ceasing to be displayed.
As shown in FIG. 6V, user input 632 is received at the location of selection indicator 612H. In some embodiments, user input toggling a selection (e.g., 632) is received at a location of a media item (e.g., 612G) that does not include the selection indicator, which causes the device (e.g., 600) to toggle selection of the respective media item. That is, the user input toggling the selection need not be at the location of the (e.g., displayed) selection indicator (or the (e.g., displayed) unselected indicator) in order to toggle selection of the media item. For example, if user input 632 were instead a tap centered on the top left corner of media item 612G in FIG. 6V (e.g., not on indicator 612H), device 600 could toggle selection of media item 612G in response.
In some embodiments, a change to the selection of media items causes a displayed indication of the amount of selected media to be updated (e.g., 612E). In some embodiments, the indication of the amount of selected media is included in the sharing affordance (e.g., 612E). For example, as shown in FIG. 6W, in response to user input 632, sharing affordance 612E has been updated (it now reads "Send 23") to indicate that 23 media items (22 photos and 1 video) are selected for sharing, reflecting that photo 612G has been deselected. This contrasts with FIG. 6V, where sharing affordance 612E indicates ("Send All") that the amount of media items to be shared is all of the media items in the suggested collection (e.g., all 23 photos and 1 video indicated in title card 612A). In some embodiments, the indication identifies the type of media selected (e.g., whether photos and/or videos are selected, as in the statement "22 photos and 1 video").
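Toggling and the accompanying label update can be sketched together. The "Send All"/"Send N" strings follow the figures as described; the set-based selection state is an implementation assumption.

```python
def toggle_selection(selected: set, item_id) -> set:
    # Symmetric difference: removes the id if present, adds it if absent,
    # matching the select/deselect toggle described for user input 632.
    return selected ^ {item_id}

def share_label(selected: set, total: int) -> str:
    # "Send All" when every item in the suggested collection is selected,
    # otherwise a count, as on sharing affordance 612E.
    return "Send All" if len(selected) == total else f"Send {len(selected)}"
```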
FIG. 6X illustrates an exemplary technique for entering a one-up view of a media item of the suggested collection. As shown in FIG. 6X, the media items of the suggested collection are displayed in an exemplary grid view at interface 612. In some embodiments, the grid view includes representations of media items of substantially similar or identical size. For example, FIG. 6X shows the media items arranged as equally sized squares. In some embodiments, the grid view includes representations of media items aligned along one or more of a vertical or horizontal axis. For example, FIG. 6X shows media items aligned along both vertical and horizontal axes (e.g., the edges of each square are aligned with two horizontal axes and two vertical axes). While the media items are displayed in the grid view shown in FIG. 6X, device 600 optionally provides the user the option of examining a media item more closely, for example to decide whether to select or deselect it for sharing. In some embodiments, a device (e.g., 600) receives user input (e.g., 634) associated with a media item (e.g., in the grid view) and, in response to receiving the user input, displays the media item in a one-up view (e.g., 636 of FIG. 6Y).
FIG. 6Y illustrates an exemplary one-up view of a media item. In some embodiments, a device (e.g., 600) displays a media item in a one-up view. In some embodiments, displaying a media item in a one-up view includes one or more of: displaying only that media item on the display; displaying it substantially larger than other media items on the display; displaying it so that it extends, in at least one dimension, from one edge of the display to the opposite edge (e.g., between the left and right edges, and/or between the top and bottom edges); and/or displaying it larger (e.g., enlarged) than it was before detection of the user input that causes the media item to be displayed in the one-up view. For example, in response to receiving user input 634 on media item 612G in FIG. 6X, device 600 displays media item 612G in one-up view 636, as shown in FIG. 6Y. In this example, the user input associated with media item 612G is a pinch-out gesture (e.g., two contacts moving apart from each other by more than a threshold distance). In some embodiments, the user input associated with the media item (e.g., 612G) is a press-and-hold gesture (e.g., a contact exceeding a predetermined length of time). In some embodiments, the user input associated with the media item (e.g., 612G) is a hard press gesture (e.g., a contact with a characteristic intensity exceeding a threshold intensity, such as a threshold intensity greater than the nominal contact detection intensity threshold at which a tap input can be detected). In some embodiments, if the device (e.g., 600) is not currently in a selection mode, the user input associated with the media item (e.g., 612G) that causes display of the one-up view is a tap gesture (e.g., a contact not exceeding a predetermined length of time).
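The gesture handling in this paragraph reduces to a small dispatch rule: pinch-out, press-and-hold, and hard-press gestures open the enlarged view of a media item, while a tap does so only outside selection mode (in selection mode, a tap toggles selection instead, as described elsewhere in this section). The gesture names below are illustrative labels, not a real gesture-recognizer API.

```python
# Gestures that open the enlarged view regardless of mode.
ENLARGING_GESTURES = {"pinch_out", "press_and_hold", "hard_press"}

def opens_enlarged_view(gesture: str, in_selection_mode: bool) -> bool:
    if gesture in ENLARGING_GESTURES:
        return True
    # Outside selection mode a tap opens the enlarged view; inside selection
    # mode a tap is reserved for toggling selection of the media item.
    return gesture == "tap" and not in_selection_mode
```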
As shown in FIG. 6Y, media item 612G is displayed in an exemplary top view 636. In some embodiments, the top view (e.g., 636) includes a region (e.g., 636A) in which the media item (e.g., 612G) is displayed in the top view (e.g., displayed in a zoomed state, larger than in the grid view of suggested collection interface 612 shown in FIG. 6X). In some embodiments, the top view (e.g., 636) also includes an indication of selection (e.g., 636B, currently displaying unselected indicator 637A) indicating whether the media item in region 636A is currently selected (or unselected). As shown in FIG. 6Y, media item 612G is not currently selected (e.g., due to user input 632 in FIG. 6V). If media item 612G were currently selected, the indication of selection 636B would include, for example, a selection indicator (e.g., a checkmark, such as 637B of FIG. 6AA). In some embodiments, the top view includes a friction region (e.g., 636C) that includes representations of a plurality of media items from the suggested collection. For example, top view 636 also includes friction region 636C, which includes representations of additional media items of the collection (including media items other than media item 612G, the item currently in the top view) that are displayed smaller than media item 612G. In some embodiments, the plurality of media items (e.g., in 636C) includes an indication of whether one or more media items are currently selected. For example, the media items in friction region 636C that are selected (e.g., as shown in FIG. 6X) each include a selection indicator (e.g., a checkmark), and the representation of the single unselected media item (media item 612G) does not include a selection indicator (e.g., lacks a checkmark, but includes unselected indicator 637A). The skilled person will appreciate that other techniques for indicating whether an item is selected or unselected are possible and are intended to be within the scope of the present disclosure.
In some embodiments, the top view (e.g., 636) includes an indication of the number of currently selected media items. In some embodiments, the top view includes an indication of the total number of media items in the respective collection. For example, indicator 636D as shown in FIG. 6Y indicates the number of selected items (23 items) and the total number of media items in the collection being viewed (24 items), reading: "23 of 24 Selected".
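The indicator text can be produced by a trivial formatting helper; this sketch assumes the "N of M Selected" wording shown for indicator 636D:

```python
def selection_indicator(selected: int, total: int) -> str:
    """Format the count shown by the top view's indicator, e.g. '23 of 24 Selected'.

    Hypothetical helper; the label format follows the example indicator text.
    """
    return f"{selected} of {total} Selected"
```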
While a media item is displayed in the top view, the device optionally provides the user with the option of viewing another media item from the suggested collection in the top view without returning to the grid view. In some embodiments, the device receives user input (e.g., 638) while displaying a first media item in the top view and, in response, replaces the display of the first media item in the top view (e.g., 612G in FIG. 6Y) with the display of a second media item in the top view (e.g., 612F in FIG. 6AA). For example, at FIG. 6Z, device 600 receives user input 638 representing a right-swipe gesture while media item 612G is displayed in top view 636. In response to receiving user input 638, device 600 displays media item 612F in top view 636, as shown in FIG. 6AA. In some embodiments, the user input is associated with the friction region (e.g., 636C). For example, interface 636 as shown in FIG. 6AA may be displayed in response to selection of 612F in friction region 636C, or in response to a swipe within region 636C that moves the selection to another media item.
FIG. 6AA shows media item 612F in top view 636. Media item 612F is adjacent to, and to the left of, media item 612G (e.g., as shown in suggested collection interface 612 of FIG. 6X, and as shown in region 636C of FIG. 6Y). Thus, the single right-swipe gesture 638 causes the display of media item 612G in the top view to be replaced with the display of media item 612F in the top view. In some embodiments, the device (e.g., 600) plays a video in response to a user selection received while in the top view of a media item. For example, as shown in FIG. 6AA, device 600 displays a playback affordance 636E associated with (e.g., overlaying) media item 612F, which is a video. In response to user input representing selection of playback affordance 636E, device 600 begins playback of at least a portion of media item 612F.
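The swipe behavior corresponds to stepping through the collection's ordering, with a right swipe revealing the item to the left of the current one. A hypothetical sketch (the function name and the clamping at the ends of the collection are assumptions):

```python
def neighbor_index(current: int, count: int, direction: str) -> int:
    """Index of the media item shown after a swipe in the top view.

    A right swipe reveals the item to the left of the current one; a left
    swipe reveals the item to the right. The index is clamped so swiping
    past either end of the collection keeps the current item displayed.
    """
    step = -1 if direction == "right" else 1
    return max(0, min(count - 1, current + step))
```

For example, with media item 612G at index 6 of 24, a single right swipe yields index 5 (media item 612F, the adjacent item to the left).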
As shown in FIG. 6AA, media item 612F is currently selected; thus, media item 612F includes a selection indicator 637B (a checkmark in this example) in top view 636. In some embodiments, a device (e.g., 600) toggles whether a media item is selected in response to a user input, received in the top view of the media item, requesting to toggle its selection. For example, in FIG. 6AB, device 600 receives user input 640 representing selection of selection indicator 637B while media item 612F is selected. In response to user input 640, device 600 toggles media item 612F from selected to unselected. For example, in FIG. 6AC, media item 612F is no longer selected, so selection indicator 637B is no longer displayed; instead, unselected indicator 637A is displayed in the indication of selection 636B. Likewise, if user input 640 is received while media item 612F is not currently selected (e.g., while indicator 637B is not displayed with media item 612F, but unselected indicator 637A is), device 600 toggles the selection of media item 612F from unselected to selected in response to user input 640, and optionally displays selection indicator 637B with media item 612F. As also shown in FIG. 6AC, unselected indicator 637A is displayed in association with the reduced-size representation of media item 612F in friction region 636C (and selection indicator 637B ceases to be displayed there).
In some embodiments, in response to toggling the selection, the device updates the indication of the number of selected media items. For example, as shown in FIG. 6AC, indicator 636D has been updated in response to user input 640 and now indicates that only 22 items are selected, reading: "22 of 24 Selected".
In some embodiments, a device (e.g., 600) receives a user input and, in response to receiving the user input, replaces display of the top view with display of the grid view. For example, in FIG. 6AD, device 600 receives user input 642 corresponding to selection of done affordance 636F. As shown in FIG. 6AE, in response to receiving user input 642, device 600 ceases to display top view 636 and displays the grid view of the suggested collection interface (e.g., suggested collection interface 612).
In some embodiments, which media items are currently selected is independent of how the media items are viewed. For example, FIG. 6AE shows that sharing affordance 612E has again been updated in the grid view to reflect the number of currently selected media items. That is, because media item 612F is no longer selected, sharing affordance 612E has changed from reading "Send 23" in FIG. 6X to reading "Send 22" (e.g., reflecting that 22 of the 24 media items are currently selected, since media items 612F and 612G are unselected). Thus, media items selected in the top view remain selected in the grid view (and unselected media items remain unselected). Accordingly, as shown in FIGS. 6AF-6AG, the grid view can be used to undo selection changes made in the top view.
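Because selection is independent of the current view, both the grid view and the top view can read from one shared selection model, and each view's labels are derived from it. A minimal sketch (the class and method names are illustrative, not from the disclosure):

```python
class SuggestedCollection:
    """Selection state shared by the grid view and the top view.

    Hypothetical model: both views toggle and read the same selected set,
    so a change made in one view is reflected in the other.
    """

    def __init__(self, item_ids):
        self.item_ids = list(item_ids)
        self.selected = set(self.item_ids)   # items start selected by default

    def toggle(self, item_id):
        """Toggle one item, regardless of which view the input came from."""
        if item_id in self.selected:
            self.selected.discard(item_id)
        else:
            self.selected.add(item_id)

    def share_button_title(self) -> str:
        """Label for the sharing affordance, e.g. 'Send 22'."""
        return f"Send {len(self.selected)}"
```

Deselecting two of 24 items in the top view leaves the grid view's sharing affordance reading "Send 22"; re-selecting one in the grid view restores "Send 23", matching FIGS. 6AE-6AG.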
In FIG. 6AF, device 600 receives user input 644 corresponding to selection of the unselected media item 612F. As shown in FIG. 6AG, in response to receiving user input 644, device 600 toggles the selection of media item 612F and displays a selection indicator associated with it. Further, sharing affordance 612E has been updated, changing from "Send 22" in FIG. 6AE to "Send 23". Thus, a selection change made in the top view can be further adjusted in the grid view.
FIGS. 6AH-6AK illustrate exemplary user interfaces for sharing a suggested collection of media items with a recipient. In this example, after customizing the selection, device 600 optionally provides the user with an option to share the selected items with a participant of the message conversation (e.g., of transcript 604A). FIG. 6AH shows the same set of selected media items depicted in FIG. 6AF. In FIG. 6AH, device 600 receives user input 646 corresponding to selection of sharing affordance 612E. In some embodiments, in response to receiving a selection of a sharing affordance (e.g., 612E), a device (e.g., 600) shares the suggested collection (e.g., shown in suggested collection interface 612) with one or more recipients. In some embodiments, sharing the suggested collection includes transmitting data, to one or more recipients and optionally to a cloud-based service, that provides access to at least a portion of the suggested collection of media items. For example, in response to receiving user input 646, device 600 may transmit a message (e.g., 604J of FIG. 6AK, described below) to William and Gray that includes data representing the set of selected media items (e.g., the 23 selected media items in FIG. 6AH). In some embodiments, transmitting data that provides access to at least a portion of the media items includes transmitting one or more media items (e.g., copies, which may be reduced in size or quality). In some embodiments, transmitting data that provides access to at least a portion of the media items includes transmitting information that provides access to one or more media items of the suggested collection via one or more remote devices (e.g., remote from device 600) (e.g., transmitting one or more of a link, address, credential, password, and/or data indicating that a remote (e.g., cloud-based) service should allow access by the recipient).
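The two transmission approaches just described (sending reduced-size copies directly versus sending access information for a remote service) can be sketched as alternative message payloads. All field names, the URL, and the token scheme below are hypothetical:

```python
import secrets

def build_share_message(selected_ids, via_cloud: bool) -> dict:
    """Data transmitted to recipients for the selected media items.

    Hypothetical sketch: either the (possibly reduced-size) items themselves
    are attached, or a link/credential granting access through a remote
    (e.g., cloud-based) service is sent instead of the items.
    """
    if via_cloud:
        token = secrets.token_urlsafe(16)   # non-guessable access credential
        return {
            "type": "remote-access",
            "url": f"https://share.example.com/c/{token}",
            "item_count": len(selected_ids),
        }
    return {
        "type": "attachments",
        # Copies may be reduced in size or quality before transmission.
        "items": [f"{i}-reduced.jpg" for i in selected_ids],
    }
```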
In some embodiments, in response to receiving a selection of a sharing affordance, the device (e.g., 600) inserts a representation of the suggested collection of media items (e.g., represented by suggested collection interface 612) into a text entry field (e.g., of an instant messaging interface). For example, in response to receiving user input 646, device 600 displays message interface 604, as shown in FIG. 6AI. In FIG. 6AI, message interface 604 now includes, in text entry field 604D, a representation 604H of the suggested collection of media items represented by suggested collection interface 612.
In some embodiments, a device (e.g., 600) receives an input of text in the text entry field that includes the representation of the suggested collection. For example, in FIG. 6AJ, device 600 receives the text input "Here you go!" (e.g., entered by typing in the keyboard region).
In some embodiments, a device (e.g., 600) receives a user input (e.g., 647) selecting a send affordance (e.g., 604I) of a messaging interface (e.g., 604). In some embodiments, in response to receiving the selection of the send affordance, the device (e.g., 600) transmits a message (e.g., 604J) that provides access to the suggested collection. For example, in FIG. 6AJ, device 600 receives user input 647 corresponding to selection of send affordance 604I. As shown in FIG. 6AK, in response to receiving user input 647, device 600 sends message 604J, which includes representation 604H of the suggested collection, to the recipients William and Gray, and inserts message 604J into transcript 604A. In this example, device 600 also transmits the accompanying text "Here you go!" in response to receiving user input 647 (e.g., which may be included as part of the same message (e.g., 604J) or sent as a separate message in response to the same input). Alternatively, device 600 transmits message 604J, which provides access to the collection, without accompanying text in response to user input 647.
As shown in FIGS. 6AJ and 6AK, the input suggestions in the input suggestion region may change depending on the state of the message interface (e.g., 604) or conversation. For example, the input suggestions "I", "this", and "here" shown in FIG. 6AJ have been replaced in FIG. 6AK with "we", "you", and "let's", respectively. In some embodiments, the change to the input suggestions is based on text in the text entry field. In some embodiments, the change to the input suggestions is based on content in the transcript (e.g., a change to the content).
In some embodiments, the representation of the suggested collection includes descriptive information (e.g., title, date, geographic location). For example, representation 604H includes the text "Lake Tahoe", representing the geographic location associated with the suggested collection. This may also be considered a title of the suggested collection, although the title may differ from the geographic location associated with the collection (e.g., instead of "Lake Tahoe", the collection may have a metadata-defined (e.g., user-defined) title, such as "Lin's Birthday Celebration", associated with it). In some embodiments, the representation includes an indication that the access provided to the recipient (e.g., via a link) will expire at a particular time.
In some embodiments, the representation of the collection (e.g., 604H) includes an indication of the status of the transmission (e.g., upload, download, and upload/download progress). In some embodiments, the representation of the collection (e.g., 604H) includes a link expiration time (e.g., expires in 13 days). For example, in FIG. 6AK, 604H includes the text "Link expires January 8". In some embodiments, the representation (e.g., 604H) includes a receipt status (e.g., delivered, read, opened, viewed, etc.).
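The expiration caption can be derived from the transmission time and a fixed link lifetime. This sketch assumes a 13-day lifetime and the caption wording above; both are illustrative, not mandated by the disclosure:

```python
from datetime import datetime, timedelta

def expiration_caption(created: datetime, now: datetime,
                       lifetime: timedelta = timedelta(days=13)) -> str:
    """Caption such as 'Link expires January 8' for the collection representation.

    Hypothetical helper: the link expires a fixed lifetime after the message
    providing access is sent; once past, the caption reports expiration.
    """
    expires = created + lifetime
    if now >= expires:
        return "Link expired"
    return f"Link expires {expires.strftime('%B')} {expires.day}"
```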
FIGS. 6AL-6AW illustrate exemplary user interfaces for sharing a suggested collection of media items. As described above, the device may provide access to an interface for sharing a suggested collection of media items from an instant messaging application. In particular, the device may provide access to a suggested collection while displaying a transcript of a conversation into which the suggested collection can be shared. In some embodiments, the interface for sharing the suggested collection of media items is accessible via another application (e.g., different from an instant messaging application). In some embodiments, the other application is an application for managing and/or viewing media items (e.g., in a media library) associated with a device (e.g., device 600). Such an application is also referred to herein as a photos application.
FIG. 6AL shows an exemplary home screen 648. Home screen 648 includes a plurality of affordances, each associated with an application. In FIG. 6AL, device 600 receives user input 650 corresponding to selection of photos application affordance 648A. In response to detecting user input 650, device 600 displays an interface associated with the photos application.
FIG. 6AM illustrates an exemplary interface (e.g., 652) associated with the photos application. Interface 652 includes a plurality of representations of media items included in a media library associated with device 600. Interface 652 represents an album of photos designated as "Favorites". In some embodiments, an interface associated with the photos application includes one or more affordances for accessing a media collection (e.g., a suggested collection for sharing). For example, menu 654 is displayed with interface 652. Menu 654 includes affordances 654A-654C (also referred to as tabs). Tab 654A is currently selected and provides access to an interface for browsing the media library. Tab 654B includes the text "For You" and provides access to an interface for accessing personalized features related to the device's media library (e.g., including one or more suggestions for sharing a media collection with a recipient). Tab 654C provides access to an interface for searching for media items.
In FIG. 6AN, device 600 receives user input 656 corresponding to selection of tab 654B. In some embodiments, in response to receiving the input (e.g., 656), the device (e.g., 600) displays a personalized media interface (e.g., 658) (e.g., associated with the photos application). FIG. 6AO illustrates an exemplary personalized media interface 658. In some embodiments, the personalized media interface (e.g., 658) includes a sharing suggestion region (e.g., 658A). For example, personalized media interface 658 includes sharing suggestion region 658A. In some embodiments, the personalized media interface (e.g., 658) includes additional content. In some embodiments, the sharing suggestion region includes one or more suggestions (e.g., prompts) for sharing a media collection with one or more suggested recipients. In some embodiments, the personalized media interface includes collections of media items received from other users (e.g., through a messaging conversation). For example, interface 658 may provide access to (e.g., display a representation of) the collection received from William (e.g., represented in transcript 604A).
In some embodiments, the personalized media interface includes representations of one or more pre-selected collections of media items that are related based on context. For example, personalized media interface 658 includes a memories region 658B that includes representations of context-related collections (e.g., taken in San Diego during November 2017, or in Washington, D.C. in the summer of 2017). In some embodiments, pre-selected collections (e.g., similar to suggested collections) may be shared. In some embodiments, a pre-selected collection relates to one or more message conversations and/or one or more recipients associated with device 600.
As described above, sharing suggestion region 658A may include one or more suggestions (e.g., prompts) to share one or more suggested collections of media items with one or more suggested recipients. Sharing suggestion region 658A includes a prompt 660 to share a suggested collection of media items with four suggested potential recipients (e.g., represented by four faces under the title card). In some embodiments, the prompt includes an indication of one or more recipients. In some embodiments, the indication of a recipient includes an identified face associated with the recipient. In this example, four identified faces 660A are shown in prompt 660, each representing a suggested recipient. In some embodiments, the indication of a recipient includes a name associated with the recipient. In this example, only three of the identified faces are associated with a name, so prompt 660 includes only three names 660B: Su, Anna, and John. Thus, in this example, the fourth face has been identified as being depicted in at least one media item in the suggested collection, but the identified face has not yet been matched to a known profile (e.g., a profile including one or more of a name or contact information associated with the identified face). In some embodiments, an identified face is unmatched because no profile exists for the face. In some embodiments, an identified face is unmatched because the recipient's profile has not previously been associated with any identified face. For example, device 600 may have the correct contact information for the person whose face was identified, but that contact information has not been associated with the identified face on device 600.
In some embodiments, a suggested recipient is suggested based on an identified face associated with the recipient. As noted above, the suggested recipients displayed in prompt 660 may be recipients whose identified faces appear in one or more media items in the suggested collection. In this example, at least one media item from the suggested "Lake Tahoe" collection includes a depiction of the identified face of each of Su, Anna, and John. Further, as explained in detail below, a suggested recipient may correspond to an identified face that is not currently associated with contact information (e.g., by device 600). In this way, by presenting the user with faces depicted in the photographs, the user can be reminded of parties with whom they may wish to share, even if the device is not currently able to send to them (e.g., lacks the relevant contact information). In addition, as described below with respect to FIGS. 6AS-6AV, the device may provide the user with an interface for conveniently associating a suggested recipient with contact information.
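Building the recipient suggestions amounts to joining the faces detected in the collection against known profiles, while still surfacing unmatched faces that lack contact information. A hypothetical sketch (the data shapes, and pre-selecting recipients with known profiles, are assumptions drawn from the examples above):

```python
def suggest_recipients(detected_faces, profiles):
    """Pair each face detected in the collection with a known profile, if any.

    Hypothetical sketch: `profiles` maps face_id -> {"name": ..., "phone": ...}.
    Faces without a matching profile are still surfaced (name and contact are
    None) so the user can associate contact information later; recipients with
    known profiles are selected by default.
    """
    suggestions = []
    for face_id in detected_faces:
        profile = profiles.get(face_id)
        suggestions.append({
            "face": face_id,
            "name": profile["name"] if profile else None,
            "contact": profile["phone"] if profile else None,
            "selected": profile is not None,   # known recipients pre-selected
        })
    return suggestions
```

In the Lake Tahoe example, three of the four detected faces match profiles (Su, Anna, John) and appear named and pre-selected, while the fourth is listed without a name or contact information.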
In some embodiments, the prompt (e.g., 660) includes information associated with the suggested collection. In some embodiments, the information includes one or more of a title of the suggested collection, an indication of an event associated with the suggested collection, an indication of a geographic location associated with the suggested collection, or an indication of a time (e.g., a date range) associated with the suggested collection. For example, prompt 660 includes an indication 660C of the geographic location "Lake Tahoe" and an indication 660D of the time range "December 1 to 4". In some embodiments, the prompt includes one or more representations of one or more media items in the suggested collection. For example, prompt 660 includes a title card 660E that depicts a media item (e.g., a photo) from the Lake Tahoe collection. In other examples, the media items are videos, or images that are played in a sequence.
As can be seen in FIG. 6AO, the suggested collection represented by sharing suggestion prompt 660 is the same suggested collection of media items associated with suggested collection interface 612. Thus, in the example of prompt 660, the prompt represents a suggestion to share the previously shared collection of media items with at least one additional recipient. In other examples, however, a prompt (e.g., at interface 658) may suggest sharing one or more other collections, or sharing with recipients with whom a collection has already been shared (e.g., William and Gray in FIG. 6AK).
In some embodiments, in response to a user input (e.g., 656), the device (e.g., 600) displays a prompt for sharing a different suggested collection (e.g., different from the suggested collection of 612) with a recipient (e.g., 603B, William). For example, after navigating to a sharing interface of the photos application (e.g., the "For You" tab) (e.g., after displaying a messaging conversation in an instant messaging application), the device displays one or more additional suggestions for sending to recipients, such as William, with whom content has already been shared. In this way, the device provides a convenient path for sharing additional content (e.g., related to one or more conversations or other contexts) with a recipient with whom the user of the current device is in a sharing relationship (e.g., a recipient for whom a suggestion was previously made, or to whom access to media items was previously transmitted).
In some embodiments, in response to a user input (e.g., 656), if the first suggested collection (e.g., of 612) was previously sent to the recipient, the device (e.g., 600) displays a different suggested collection of media items (e.g., of 626). For example, after the user shares the first suggested collection of interface 612 (e.g., as shown in FIG. 6AK), the device provides a new suggestion. That is, device 600 may suggest additional collections to share with William and Gray.
In some embodiments, in response to a user input (e.g., 656), the device (e.g., 600) displays a different suggested collection of media items if the first suggested collection (e.g., of 612) was not previously sent to the recipient. For example, if the user did not act on the first suggestion, the device provides a new suggestion. In some embodiments, in response to receiving the user input (e.g., 656), the device displays a prompt for sharing the first suggested collection (e.g., of 612) with the recipient if the first suggested collection was not previously shared. For example, if the user has not previously transmitted access to the suggested collection in the instant messaging application, the device provides a prompt and an opportunity to transmit the suggested collection to the recipient again (e.g., at personalized media interface 658 of the photos application).
Using prompt 660 in sharing suggestion region 658A, the user may easily share the suggested collection of media items with one or more suggested recipients. In some embodiments, the device (e.g., 600) receives user input (e.g., 662) associated with a prompt (e.g., 660) in the sharing suggestion region (e.g., 658A). For example, in FIG. 6AP, device 600 receives user input 662 corresponding to selection of affordance 660F. In some embodiments, in response to a user input (e.g., 662) associated with a prompt (e.g., 660), the device (e.g., 600) displays a suggested collection interface associated with the collection of media items associated with the prompt. For example, in response to receiving user input 662, device 600 displays suggestion page 664 (also referred to as suggested collection interface 664), as shown in FIG. 6AQ. Suggested collection interface 664 corresponds to the same suggested collection of media items as suggested collection interface 612. In this example, suggested collection interface 664 is the same as suggested collection interface 612. In some embodiments, suggested collection interface 664 includes one or more features as described with respect to suggested collection interface 612.
In some embodiments, in response to a user input (e.g., 662) associated with a prompt (e.g., 660), a device (e.g., 600) transmits a message to one or more suggested recipients, the message providing access to the suggested collection of media items. For example, in response to user input 662, device 600 immediately shares the suggested collection (e.g., transmits a message) with one or more of the suggested recipients (e.g., Su, Anna, John).
In FIG. 6AR, device 600 receives user input 665 corresponding to selection of sharing affordance 664A. In some embodiments, in response to receiving a user input corresponding to selection of an affordance associated with the prompt (e.g., affordance 660F or 664A), the device (e.g., 600) displays a recipient confirmation interface (e.g., 666 of FIG. 6AS).
FIG. 6AS illustrates an exemplary recipient confirmation interface 666. In some embodiments, the recipient confirmation interface includes an indication of one or more faces detected in the suggested collection. For example, recipient confirmation interface 666 includes a region 666A indicating that multiple faces were detected in the suggested Lake Tahoe collection. As described with respect to prompt 660 of FIG. 6AO, region 666A indicates that four (4) faces were detected in the suggested Lake Tahoe collection. In some embodiments, the recipient confirmation interface includes one or more affordances for selecting recipients for sharing. For example, recipient confirmation interface 666 includes recipient indicators 666B-666E, each corresponding to one of the detected faces. Each of recipient indicators 666B-666D includes an indication of a profile associated with a detected face, including a name (e.g., Su, Anna, or John) and contact information (e.g., a telephone number). Notably, recipient indicator 666E is displayed but does not include a name or contact information, as the detected face is not associated with profile information. In some embodiments, recipients associated with known profile information are selected by default. For example, upon initial display of recipient confirmation interface 666, as shown in FIG. 6AS, the recipients Su, Anna, and John are selected (e.g., a checkmark, shown as selection indicator 667A, in each of recipient indicators 666B-666D), but the recipient associated with recipient indicator 666E is not selected (e.g., selection indicator 667A is not displayed in recipient indicator 666E; unselected indicator 667B is displayed instead).
In some embodiments, the recipient confirmation interface (e.g., 666) includes an affordance for adding a recipient. For example, recipient confirmation interface 666 includes affordance 666F. In this example, device 600 provides an interface for selecting one or more additional recipients in response to user input corresponding to selection of affordance 666F. The interface for selecting one or more additional recipients may include one or more features of contact selection user interface 670 of FIG. 6AU, described in more detail below.
As described above, device 600 can provide a quick and easily accessible interface for associating a suggested recipient with contact information. In some embodiments, a device (e.g., 600) receives user input associated with a suggested recipient who is not associated with contact information. For example, in FIG. 6AT, device 600 receives user input 668 corresponding to selection of recipient indicator 666E, which is not associated with contact information.
In some embodiments, in response to receiving the user input associated with the suggested recipient, the device (e.g., 600) provides (e.g., displays) an interface for associating the suggested recipient with contact information. For example, FIG. 6AU illustrates an exemplary contact selection user interface 670 displayed in response to receiving user input 668. In some embodiments, the contact selection user interface (e.g., 670) includes one or more representations of one or more contacts (e.g., 670A). For example, contact selection user interface 670 includes a plurality of representations, each associated with a contact (e.g., Andrew), including representation 670A associated with a contact named "Mary". In some embodiments, the contacts represented in the contact selection user interface (e.g., 670) are associated with contact information. In some embodiments, the contact information includes one or more of a telephone number, an email address, a link, a network address, or other data used to address or direct data to a recipient (e.g., a device or user account).
In FIG. 6AU, device 600 receives user input 672 corresponding to selection of a representation of a contact. In some embodiments, in response to detecting the user input corresponding to selection of a representation of a contact, the device (e.g., 600) associates the contact information associated with the selected contact (e.g., 670A) with the suggested recipient (e.g., 666E). For example, in response to receiving user input 672, device 600 associates the detected face associated with recipient indicator 666E with the contact information associated with the contact named Mary, represented by representation 670A.
FIG. 6AV illustrates recipient confirmation interface 666. In some embodiments, the device (e.g., 600) displays the recipient confirmation interface (e.g., 666 in FIG. 6AV) after (e.g., in response to) a user input (e.g., 672) corresponding to selection of a contact (e.g., 670A). For example, in response to receiving the selection of Mary, device 600 ceases to display contact selection interface 670 and returns to displaying recipient confirmation interface 666, as shown in FIG. 6AV. In FIG. 6AV, recipient indicator 666E now includes the name of the selected contact, Mary, and the phone number associated with the selected contact. In this example, the detected face that was not previously associated with contact information is now associated with Mary's contact information. In some embodiments, the association between the contact and the detected face is persistent. For example, if the same face is detected in another collection, Mary's name would be displayed next to the recipient indicator in the recipient confirmation interface without the need to select Mary again at the contact selection user interface. In some embodiments, the device automatically selects the suggested recipient (e.g., 666E) in response to the user input (e.g., 672) corresponding to selection of a contact (e.g., 670A). For example, in response to user input 672 selecting Mary, device 600 selects the suggested recipient Mary and displays selection indicator 667A associated with recipient indicator 666E in FIG. 6AV.
In some embodiments, a device (e.g., 600) receives a user input associated with a suggested recipient and, in response to receiving the input, toggles selection of the suggested recipient. For example, in fig. 6AV, the user may select Mary's indicator (e.g., 666E) to toggle its selection (from selected to unselected, or from unselected to selected). In some embodiments, in response to a user input (e.g., 674, described below) that causes the suggested collection of media items to be shared with the selected suggested recipients, the device (e.g., 600) forgoes sharing the suggested collection with the unselected suggested recipients.
In some embodiments, the recipient confirmation interface (e.g., 666) includes one or more affordances for sharing the respective suggested set of media items. For example, as shown in FIG. 6AW, recipient confirmation interface 666 includes sharing affordances 666G and 666H. In some embodiments, in response to receiving user input corresponding to selection of a sharing affordance, a device (e.g., 600) transmits data providing access to one or more media items of the suggested set of media items. In some embodiments, providing access comprises providing access through a particular cloud-based service, and optionally sending the recipient a message indicating that such access has been provided, where the message does not itself include data usable to access the media other than through the cloud-based service. For example, providing access through a particular cloud-based service may be done when the recipient has an account with the cloud-based service. In that case, access may be provided to the recipient's account (e.g., to media items hosted by the cloud-based service but shared by the user of device 600) while access is prevented for other accounts on the cloud-based service (or for non-users of the cloud-based service). In some embodiments, providing access includes sharing a direct link that provides access to one or more media items of the suggested set. For example, providing a direct link may include sending the recipient a publicly accessible, but non-guessable, web address. In some embodiments, sending the direct link does not prevent access by a party (e.g., a device) other than the recipient. For example, the recipient may forward the direct link to other, unintended recipients, who may view the shared collection using the link. However, the option of sending a direct link provides an alternative way to share media items, particularly with recipients unrelated to the particular cloud-based service.
For example, in response to selecting affordance 666H, device 600 transmits a direct link to the selected recipient.
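A "publicly accessible but non-guessable" direct link of the kind described above could be produced by embedding a long random token in the URL, so that anyone holding the link can open it but no one can enumerate it. This sketch is purely illustrative; the domain, path layout, and helper name are invented, not taken from the document:

```python
# Hypothetical sketch of generating a non-guessable direct share link.
import secrets

def make_direct_link(collection_id: str) -> str:
    # A long random token makes the URL effectively impossible to guess,
    # even though any party holding the link can access the collection.
    token = secrets.token_urlsafe(32)
    return f"https://share.example.com/{collection_id}/{token}"

link = make_direct_link("lake-tahoe-dec-1-4")
print(link)
```

Because possession of the link is the only credential, forwarding the link forwards access, which matches the behavior described above for unintended recipients.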
In fig. 6AW, device 600 receives a user input 674 corresponding to selection of affordance 666G. In response to receiving the user input 674, the device 600 transmits data that provides access to one or more media items of the suggested set of media items through a particular cloud-based service (e.g., as described above). In some embodiments, in response to receiving user input (e.g., 674) corresponding to selection of a sharing affordance, device 600 displays a transcript of a message conversation with the selected recipients that includes a representation of the shared suggested collection. In this example, such a message conversation would include Su, Anna, John, and Mary, and would include a representation having one or more features described with respect to representation 604H shown in FIG. 6AK.
Figs. 6AX-6AAB illustrate exemplary interfaces for ceasing to share a collection of media items. Fig. 6AX illustrates the personalized media interface 658. In fig. 6AX, device 600 receives a user input 676 corresponding to selection of affordance 658C. For example, affordance 658C may be an avatar (e.g., a face) associated with the user of device 600. In some embodiments, in response to receiving user input (e.g., 676) associated with a personalized media interface (e.g., 658), a device (e.g., 600) displays a sharing management interface (e.g., 678).
Fig. 6AY shows an exemplary sharing management interface 678. In some embodiments, the sharing management interface includes one or more options for managing one or more sets of media items that have been shared with one or more recipients. For example, the sharing management interface 678 includes a sharing activity region 678A that includes affordances 678B-678D, each associated with a collection of media items that has been shared. Affordance 678B corresponds to the "Lake Tahoe" collection of media items shared with William and Gray, as shown in FIG. 6AK.
In some implementations, the device receives a user input (e.g., 679) associated with the shared set of media items (e.g., corresponding to selecting an affordance associated with the representation). For example, in FIG. 6AZ, device 600 receives user input 679 corresponding to selecting affordance 678B.
In some embodiments, in response to receiving user input (e.g., 679) associated with a shared set of media items, a device (e.g., 600) displays a shared collection management interface (e.g., 680). For example, in response to receiving user input 679, device 600 displays an exemplary shared collection management interface 680, as shown in fig. 6AAA. In some embodiments, the shared collection management interface (e.g., 680) includes information related to the shared collection, including one or more of the following (as shown in fig. 6AAA): a geographic location (e.g., Lake Tahoe), a time (e.g., December 1 to 4), an amount of shared media items (e.g., 22 photos and 1 video), an expiration time (e.g., January 8), and recipient information (e.g., the collection is shared with William and Gray).
The device optionally provides the user with an option to stop sharing a previously shared collection of media items that is still accessible by one or more recipients (e.g., access to the shared collection has not yet expired). In some implementations, an interface (e.g., 680) associated with the personalized media interface (e.g., 658) includes an option (e.g., 680A) to stop sharing the set of media items. For example, shared collection management interface 680 includes affordance 680A for ceasing to share the associated Lake Tahoe collection.
At FIG. 6AAB, the device 600 receives a user input 682 corresponding to selection of affordance 680A, representing a request to stop (also referred to as cease) sharing the associated set of media items. In some embodiments, in response to receiving user input representing a request to stop sharing a set of media items, a device (e.g., 600) ceases providing access to one or more recipients of the shared set. For example, if device 600 previously provided access to a user account via a particular cloud-based service, such access (e.g., rights to access the media items) may be revoked by transmitting a message to the cloud-based service (e.g., a server). For example, if device 600 previously provided a direct link for accessing the media items, the device may cause that link to be deactivated. For example, if the media items are hosted on a server and accessed via the link, device 600 may send a message that causes the server to stop providing access to the media items at the address in the link (e.g., deactivating the link).
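The revocation behavior just described, in which a server stops honoring a previously shared link once the owner stops sharing, might look like the following sketch. The class and its methods are hypothetical stand-ins for the cloud-based service, invented here for illustration:

```python
# Hypothetical sketch of server-side link deactivation: once the owner
# revokes a token, later fetches through that token are refused.

class ShareServer:
    def __init__(self):
        self._active_links = {}  # token -> collection_id

    def publish(self, token, collection_id):
        # Called when the device shares a collection via a direct link.
        self._active_links[token] = collection_id

    def revoke(self, token):
        # Called when the device sends a "stop sharing" message.
        self._active_links.pop(token, None)

    def fetch(self, token):
        # Called when any party opens the link.
        if token not in self._active_links:
            raise PermissionError("link has been deactivated")
        return self._active_links[token]


server = ShareServer()
server.publish("tok-123", "lake-tahoe")
print(server.fetch("tok-123"))  # lake-tahoe
server.revoke("tok-123")        # owner stops sharing; the link is dead
```

After `revoke`, any `fetch` with the old token fails, which models recipients losing access without the media ever leaving the server.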
FIG. 6AAC illustrates an exemplary representation of an expired shared set of media items. For example, FIG. 6AAC illustrates the message interface 604, which includes a transcript 604A and a representation 604H representing the Lake Tahoe set of media items shared by the user of device 600, Lynn. In some implementations, a shared set of media items expires upon reaching an expiration time. For example, in fig. 6AK, representation 604H includes an expiration time of January 8. Thus, on January 9, after the expiration date, the representation 604H may appear expired in the transcript, as shown in fig. 6AAC. In some implementations, the shared set of media items expires in response to a user input (e.g., 682) requesting to stop sharing. For example, in response to receiving user input 682, the shared set may cease to be shared, as described above, resulting in representation 604H being displayed as expired in the transcript, as shown in fig. 6AAC.
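The two expiration paths just described (a passed expiration date, or an explicit stop-sharing request) can be modeled together in one place. This is a hedged sketch with invented names, not the device's actual logic:

```python
# Hypothetical sketch: a shared collection is expired if its expiration
# date has passed OR the owner explicitly stopped sharing it.

from datetime import date

class SharedCollection:
    def __init__(self, expires_on: date):
        self.expires_on = expires_on
        self.revoked = False

    def stop_sharing(self):
        # Corresponds to the user input requesting to stop sharing.
        self.revoked = True

    def is_expired(self, today: date) -> bool:
        return self.revoked or today > self.expires_on


share = SharedCollection(expires_on=date(2018, 1, 8))
print(share.is_expired(date(2018, 1, 9)))  # True: past the January 8 expiration
```

A message client could use `is_expired` to decide whether to render the shared-collection representation in its expired appearance.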
Figs. 7A-7J are flow diagrams illustrating a method of sharing a suggested set of media items using an electronic device, according to some embodiments. Method 700 is performed at a device (e.g., 100, 300, 500, 600) having a display and one or more input devices (e.g., a touch-sensitive display, a touch-sensitive surface, a mouse). Some operations in method 700 are optionally combined, the order of some operations is optionally changed, and some operations are optionally omitted.
As described below, the method 700 provides an intuitive way for sharing a suggested set of media items. The method reduces the cognitive burden on users sharing a collection of media items, thereby creating a more efficient human-machine interface. For battery-driven computing devices, enabling users to share a set of media items faster and more efficiently conserves power and increases the time interval between battery charges.
An electronic device (e.g., 600) receives (702) a first input (e.g., 608, 614, 616, 620, or 656) via the one or more input devices (e.g., one or more of a touch-sensitive display, a touch-sensitive surface, a mouse, etc.).
In response to receiving the first input, the electronic device (e.g., 600) displays (704), on a display (e.g., 602), a suggested set of media items (e.g., represented by 612) for sharing with a recipient (e.g., 603B, 603C), wherein the suggested set is relevant to a message conversation with the recipient (e.g., transcript 604A of message interface 604 in figs. 6A-6D). In some embodiments, the suggested set of media items (e.g., represented by 612) includes one or more media items (e.g., one or more photos, one or more videos, or one or more of both). In some embodiments, the suggested set includes a media item (e.g., 612G) that depicts the recipient (e.g., 603B), and/or was captured at an event known to have been attended by the recipient. For example, the event may be defined by media captured at the geographic location "Lake Tahoe" between December 1 and December 4, as shown in fig. 6N. In some embodiments, the suggested set includes a subset of media items (e.g., photos or videos) that satisfy selection criteria (e.g., media items captured within a particular time range at a particular location or set of locations). In some embodiments, the suggested set includes all media items that meet the selection criteria. For example, the set represented by the suggested collection interface 612 includes all media items captured by the user that meet the following selection criteria: captured at the geographic location "Lake Tahoe" between December 1 and December 4.
Displaying the suggested set of media items for sharing with a recipient, where the set is relevant to the message conversation with the recipient, enables the user to quickly identify media that the user may want to share with the recipient. Performing an optimized operation when a set of conditions has been met, without requiring further user input, enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
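The selection criteria described above (all media items captured at a particular geographic location within a particular date range) can be expressed as a simple filter over library metadata. The data, field names, and function below are all illustrative assumptions:

```python
# Hypothetical sketch of the suggested-collection selection criteria:
# keep every media item whose location and capture date match the event.

from datetime import date

def suggest_collection(media_items, location, start, end):
    return [
        item for item in media_items
        if item["location"] == location and start <= item["date"] <= end
    ]

library = [
    {"id": 1, "location": "Lake Tahoe", "date": date(2017, 12, 2)},
    {"id": 2, "location": "Lake Tahoe", "date": date(2017, 12, 10)},  # wrong dates
    {"id": 3, "location": "Cupertino", "date": date(2017, 12, 3)},    # wrong place
]
suggested = suggest_collection(library, "Lake Tahoe",
                               date(2017, 12, 1), date(2017, 12, 4))
print([item["id"] for item in suggested])  # [1]
```

A production system would match on richer metadata (geofences, time zones, detected faces), but the shape of the filter is the same.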
After displaying the suggested set of media items, the electronic device (e.g., 600) receives (706) a second input (e.g., 630 of FIG. 6S, 646 of FIG. 6AH, or 647 of FIG. 6 AJ) via the one or more input devices, representing a request to transmit at least a portion of the suggested set of media items to a recipient.
In response to receiving the second input, the electronic device (e.g., 600) transmits (708) a message (e.g., 631 of fig. 6U or 604J of fig. 6AK) to the recipient as part of the message conversation (e.g., as represented by the transcript 604A of fig. 6U or fig. 6AK), the message providing access to at least a portion of the suggested set of media items (e.g., as represented by 612). For example, the electronic device inserts an affordance 604H representing the shared collection into the transcript of the conversation, such as in fig. 6AK.
In some embodiments, further in response to receiving the first input, and while displaying at least a portion of the suggested set of media items, the electronic device (e.g., 600) displays (710) on the display an indication of selected media (e.g., 612E of fig. 6R, indicator 612H of fig. 6R) that identifies an initially selected group of media items from the suggested set that was automatically selected (e.g., without selection input) for sharing. For example, in FIG. 6R, the device 600 displays a selection indicator 612H on a media item (e.g., one of the group of media items that make up the initial selection). Also, for example, in FIG. 6R, the share affordance 612E includes an indication of the selected media items (e.g., identifying the amount of items selected in the first group), and is labeled "Send All" (e.g., all media items are selected). In some embodiments, the initially selected group of media items includes fewer than all of the media items in the suggested set (e.g., as shown in fig. 6W). For example, in FIG. 6W, fewer than all of the media items are selected; media item 612G is not selected. Also in FIG. 6W, the share affordance 612E includes an indication of the selected media items (e.g., the amount of items selected in the first group), and is labeled "Send 23" (e.g., 23 media items are selected).
Displaying an indication of selected media that identifies the initially selected group of media items allows a user to quickly view a pre-selected group of media items from a set of related media items, thereby reducing the number of inputs required to select and transmit media items to a recipient. Performing an optimized operation when a set of conditions has been met, without requiring further user input, enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, while displaying the indication of selected media that identifies the initially selected group of media items, the electronic device (e.g., 600) receives (712), via the one or more input devices, a second input (e.g., 630, or a selection (e.g., 647) of 604I) representing a request to transmit at least a portion of the suggested set of media items to the recipient. In response to receiving the second input, the electronic device (e.g., 600) transmits (714) a message (e.g., 631 as shown in fig. 6U, or 604J as shown in fig. 6AK) to the recipient (e.g., 603B, 603C) as part of the message conversation, providing access to the initially selected group of media items. In some embodiments, if fewer than all of the media items are selected, the electronic device provides access to the selected media items (e.g., fewer than all).
Transmitting to the recipient, as part of the message conversation, a message that provides access to the initially selected group of media items enables a user to quickly send a pre-selected group of media items from a related set, thereby reducing the number of inputs required to select and transmit media items to the recipient. Performing an optimized operation when a set of conditions has been met, without requiring further user input, enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the indication of the selected media is a first affordance (e.g., 612E), the second input (e.g., 630, 646) corresponds to selection of the first affordance (716), and the first affordance includes an indication of the amount of media items included in the initially selected group. In some implementations, the amount corresponds to the total number of selected media items. For example, affordance 612E includes the exemplary amount "All" in fig. 6S (e.g., all media items are currently selected). For example, affordance 612E includes the exemplary amount "22" in fig. 6AH (e.g., 22 media items are currently selected).
In some implementations, the initially selected group of media items includes (718) fewer than all of the media items in the suggested set. For example, the suggested collection interface 612 of FIG. 6AH depicts fewer than all of the media items in the suggested set as being selected.
In some implementations, the electronic device (e.g., 600) receives (722), via the one or more input devices, an input (e.g., 632 of fig. 6V, 640 of fig. 6AB, 642 of fig. 6AD, 644 of fig. 6AF) representing a change (e.g., an additional selection or a removal of a selection) to the initially selected group of media items from the suggested set, to form a user-selected group of media items from the suggested set, wherein the user-selected group differs from the initially selected group with respect to the selection of at least one media item. The electronic device (e.g., 600) updates (724) the indication of the selected media (e.g., updates 612E as shown in fig. 6W, updates 636D as shown in fig. 6AC, updates 612E as shown in fig. 6AG, or replaces 612H with 612I as shown in fig. 6W) based on the user-selected group of media items. For example, the electronic device changes the send affordance 612E from "Send All" to "Send 23" to identify the number of items in the user's selection, and/or updates the selection indicator associated with a media item. While displaying the indication of the selected media updated based on the user-selected group of media items, the electronic device (e.g., 600) receives (726) a second input (e.g., 630 of fig. 6S, 646 of fig. 6AH, or 647 of fig. 6AJ) via the one or more input devices. For example, the second input represents a request to provide the recipient with access to (e.g., to transmit) at least a portion of the suggested set of media items. In response to receiving the second input, the electronic device (e.g., 600) transmits (728) a message (e.g., 631 of fig. 6U, or 604J of fig. 6AK) to the recipient as part of the message conversation, the message providing access to the user-selected group of media items.
Receiving input representing a change to the group of media items initially selected from the suggested set, to form a user-selected group of media items from the suggested set, and transmitting to the recipient, as part of the message conversation, a message that provides access to the user-selected group, allows the user to quickly modify a pre-selected group of media items from a set of related media items, thereby reducing the number of inputs required to select and transmit media items to the recipient. Performing an optimized operation when a set of conditions has been met, without requiring further user input, enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the indication of the selected media (e.g., 612E) includes (730) an indication of the amount of currently selected media items from the suggested set, and updating the indication of the selected media includes updating the displayed amount to reflect the number of media items in the user-selected group (e.g., 612E as shown in figs. 6V-6W). In some embodiments, the indication of the selected media is a second affordance (e.g., 612E of fig. 6AH), and the second input (e.g., 646 of fig. 6AH) corresponds to selection of the second affordance.
Updating the indication to display the number of media items in the user-selected group provides the user with visual feedback regarding the amount of selected media items. Providing the user with improved visual feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
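The "Send All" / "Send N" labeling behavior described above can be sketched as a small selection model whose label depends on whether every item is still selected. The class is a hypothetical illustration, not the device's actual code:

```python
# Hypothetical sketch of the selection model behind the share affordance:
# all items start selected, and toggling updates the affordance label.

class SelectionModel:
    def __init__(self, all_item_ids):
        self.all_items = set(all_item_ids)
        self.selected = set(all_item_ids)  # initial selection: everything

    def toggle(self, item_id):
        # Flip one item between selected and unselected.
        if item_id in self.selected:
            self.selected.discard(item_id)
        else:
            self.selected.add(item_id)

    def affordance_label(self):
        if self.selected == self.all_items:
            return "Send All"
        return f"Send {len(self.selected)}"


model = SelectionModel(range(24))   # 24 media items in the suggested set
print(model.affordance_label())     # Send All
model.toggle(7)                     # the user deselects one media item
print(model.affordance_label())     # Send 23
```

Toggling the item back restores the "Send All" label, mirroring the figures where 612E changes between the two states.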
In some implementations, while the suggested set of media items is displayed, and while in a media item selection mode, the electronic device (e.g., 600) receives (734), via the one or more input devices, a third input (e.g., 632 or 634) associated with a first media item (e.g., 612G) of the suggested set of media items (e.g., represented by interface 612). In accordance with a determination (736) that the third input associated with the first media item is a first gesture (e.g., user input 632) (e.g., a tap on a single media item), the electronic device: toggles (738) whether the first media item is selected, without displaying the first media item in a one-up view (e.g., as shown in FIG. 6W); in accordance with the toggle causing the first media item to be selected, displays (740) on the display a selection indicator (e.g., 612H of fig. 6V) associated with the first media item (e.g., 612G of fig. 6V); and in accordance with the toggle causing the first media item to be deselected, ceases to display (742) on the display the selection indicator associated with the first media item (e.g., 612H is no longer displayed, as shown in fig. 6W). In accordance with a determination that the third input associated with the first media item is a second gesture (e.g., user input 634) different from the first gesture (e.g., a deep press gesture, a press-and-hold gesture, or a pinch-out gesture), the electronic device (e.g., 600): displays (744) on the display the first media item in a one-up view (e.g., 612G in one-up view 636, as shown in fig. 6Y) without toggling whether the first media item is selected (e.g., 612G remains unselected, as shown in fig. 6Y). In some implementations, in response to receiving a subsequent user input (e.g., corresponding to the first gesture, such as a tap, similar to user input 640 of fig. 6AB) received at a location associated with the first media item displayed in the one-up view, the electronic device toggles selection of the first media item. In some embodiments, in response to receiving a subsequent user input (e.g., corresponding to a gesture such as the pinch gesture shown as user input 845 in fig. 8AG) received at a location associated with the first media item displayed in the one-up view, the electronic device exits the one-up view and optionally displays a grid view.
Toggling whether a media item is selected, or displaying the media item in a one-up view, depending on whether the input is the first gesture or the second gesture, provides the user with more control of the device by allowing different gestures to produce different results. Providing additional control over the device without cluttering the UI with additional displayed controls enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the first gesture (e.g., user input 632) is a tap gesture, and the second gesture (e.g., user input 634) is selected from: a de-pinch gesture (e.g., two contacts moving apart by more than a threshold distance), a long press gesture (e.g., a contact exceeding a predetermined length of time), and a hard press gesture (e.g., a contact whose characteristic intensity exceeds a threshold intensity, such as a threshold intensity greater than the nominal contact detection intensity threshold at which a tap input can be detected).
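One way to distinguish the gestures just listed is by contact count, peak pressure, and contact duration. The following sketch is purely illustrative; the thresholds and function name are invented, not taken from the document:

```python
# Hypothetical sketch of classifying touch input into the gestures above.
# Thresholds are illustrative placeholders.

def classify_gesture(duration_s, peak_intensity, contact_count,
                     long_press_s=0.5, hard_press_intensity=1.0):
    if contact_count == 2:
        return "de-pinch"          # two contacts moving apart
    if peak_intensity >= hard_press_intensity:
        return "hard press"        # intensity above the press threshold
    if duration_s >= long_press_s:
        return "long press"        # contact held past the time threshold
    return "tap"                   # everything else: a quick single contact

print(classify_gesture(0.1, 0.3, 1))  # tap -> toggles selection
print(classify_gesture(0.8, 0.3, 1))  # long press -> opens the one-up view
```

In the behavior described above, a "tap" result would toggle selection in the grid, while any of the other three results would open the one-up view without changing the selection.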
In some embodiments, while displaying the first media item in the one-up view, and while in the media item selection mode (746), the electronic device (e.g., 600): receives (748), via the one or more input devices, a fourth input (e.g., 640) associated with the first media item displayed in the one-up view (e.g., 612F as shown in FIG. 6Y, or 612F as shown in FIG. 6AA). In response to receiving the fourth input (750), the electronic device: toggles whether the first media item is selected (e.g., from unselected to selected, or from selected to unselected); in accordance with the toggle causing the first media item to be selected, displays on the display a selection indicator (e.g., 637B as shown in FIG. 6AA) associated with the first media item; and in accordance with the toggle causing the first media item to be deselected, ceases to display on the display the selection indicator associated with the first media item (e.g., as shown in fig. 6AC, 637B is no longer displayed).
In some embodiments, the suggested set of media items is determined (720) to be relevant to the message conversation with the recipient based on an identified face associated with the recipient (e.g., the face displayed for recipient 603B of fig. 6A), and one or more media items in the suggested set are selected for sharing with the recipient based on the identified face associated with the recipient, where the identified face is identified in at least a portion of the one or more media items in the suggested set. For example, the collection represented by suggested collection interface 612 is suggested because media item 612G (e.g., as shown in fig. 6N) includes a depiction of the identified face of recipient 603B. In some implementations, the electronic device selects the media items. In some embodiments, a cloud-based service selects the media items (e.g., which are accessible to, or transmitted to, the electronic device).
Selecting media items based on an identified face associated with the recipient allows a user to quickly identify media that the user may want to share with the recipient. Performing an optimized operation when a set of conditions has been met, without requiring further user input, enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the suggested set of media items is determined (732) to be relevant to the message conversation with the recipient based on an event known to have been attended by the recipient, and one or more media items in the suggested set are selected for sharing with the recipient based on metadata indicating that the one or more media items are associated with the event known to have been attended by the recipient (e.g., depicting the event, or identified by time and place of capture). For example, the suggested set represented by suggested collection interface 612 (e.g., as shown in FIG. 6F) includes media captured at an event defined by the geographic location Lake Tahoe and the time period December 1 to December 4, an event known to have been attended by recipient 603B, named William (e.g., as identified by 612B and 612C of FIG. 6F). In some embodiments, the set is suggested because the event is mentioned in the transcript (e.g., 604A) of the conversation. For example, in FIG. 6F, the transcript 604A includes a mention of the event defined by the geographic location Lake Tahoe and the time period December 1 to December 4: "Hey! Can you send me the photos from Lake Tahoe last weekend?" (e.g., in this example, the last weekend corresponds to December 1 to December 4). In some embodiments, the event is determined from a time period and/or geographic location associated with the media items, and is automatically determined based on metadata.
Selecting media items based on an event known to have been attended by the recipient allows the user to quickly identify media that the user may wish to share with the recipient. Performing an optimized operation when a set of conditions has been met, without requiring further user input, enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
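Deciding that a collection is relevant because the transcript mentions its event could, in the simplest case, reduce to matching the event's location against the conversation text. This sketch is a deliberately naive illustration with invented names; a real implementation would use natural-language analysis of dates and places:

```python
# Hypothetical sketch: a collection is relevant if the conversation
# transcript mentions the collection's event location.

def collection_matches_transcript(collection, transcript_messages):
    text = " ".join(transcript_messages).lower()
    return collection["location"].lower() in text

collection = {"location": "Lake Tahoe",
              "start": "2017-12-01", "end": "2017-12-04"}
messages = ["Hey! Can you send me the photos from Lake Tahoe last weekend?"]
print(collection_matches_transcript(collection, messages))  # True
```

Resolving a relative phrase like "last weekend" to the December 1-4 date range would require interpreting the message's timestamp, which this sketch deliberately omits.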
In some embodiments, receiving the first input includes receiving (752) the first input while displaying, on the display, a transcript (e.g., 604A) of a message conversation with the recipient. For example, device 600 receives user input 608 in fig. 6E while displaying the transcript of the conversation. For example, device 600 receives user input 614 in fig. 6G while displaying the transcript of the conversation. For example, device 600 receives user input 616 in FIG. 6H while displaying the transcript of the conversation. For example, the transcript is a representation of at least a portion of the messages exchanged between two users, such as the user of the first device (e.g., a user account) and the recipient (e.g., a user account or a device of the recipient).
Receiving the input while displaying a transcript of a message conversation with the recipient (which results in displaying a suggested set of media items to share with the recipient) allows a user to quickly view and share relevant media with the recipient by reducing the number of inputs required to access that media. Reducing the number of inputs required to perform an operation enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user errors in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the suggested set of media items is determined (754) to be relevant to the message conversation with the recipient based on the identity of one or more participants (e.g., 603B and/or 603C) in the message conversation. For example, a participant may be a user associated with a user account (e.g., of a cloud-based service or a social media service), or a contact (e.g., from a contact address book) associated with the electronic device (e.g., 600) or with a user (e.g., an account) of the electronic device. In some embodiments, the suggested set is a set of media items in which one or more participants (or more than a threshold number of participants) in the conversation appear. For example, the set includes photographs and/or videos of some or all of the conversation participants on a camping trip, and the set is therefore suggested for sharing with the conversation participants because they took part in the camping trip.
Suggesting a set of media items determined to be relevant to a recipient's message conversation participant allows a user to quickly identify media that the user may want to share with the recipient. Performing the optimization operation without further input when a set of conditions has been met enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the suggested set of media items (e.g., represented by interface 612) is determined (756) to be relevant to the message conversation with the recipient (e.g., 603B) based on the content (e.g., text or media items (images, videos)) of the text record (e.g., 604A of FIG. 6D) of the message conversation. For example, the suggested set corresponding to interface 612 of FIG. 6F is related to the text of message 604C of FIG. 6G: "Hey! Can you send me the photos from Lake Tahoe last weekend?" As another example, the suggested set corresponding to interface 612 of FIG. 6F is related to the received shared set corresponding to representation 604B in text record 604A shown in FIG. 6D (e.g., which is also associated with the location "Lake Tahoe" and the dates December 1 through December 4, since it represents media items captured by William on the same trip attended by the user of device 600).
Suggesting a set of media items determined to be relevant to the content of the text records of the message conversation with the recipient allows the user to quickly identify media that the user may want to share with the recipient. Performing the optimization operation without further input when a set of conditions has been met enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the suggested set of media items (e.g., represented by interface 612) is determined (758) to be relevant to the message conversation with the recipient (e.g., 603B) based on text (e.g., 604C) of a message included in the text record (e.g., 604A of FIG. 6G) of the message conversation. For example, the suggested set corresponding to interface 612 of FIG. 6F is related to the text of message 604C of FIG. 6G: "Hey! Can you send me the photos from Lake Tahoe last weekend?"
In some embodiments, the suggested set of media items (e.g., represented by interface 612) is determined (760) to be relevant to the message conversation with the recipient (e.g., 603B) based on a time referenced (e.g., included) in a text record (e.g., 604A of FIG. 6G) of the message conversation. In some embodiments, a reference to time is a textual reference to a particular time, date, or range of dates, or a relative description of time (e.g., last week, last month). For example, the suggested set corresponding to interface 612 of FIG. 6F is related to the "last weekend" mentioned in the text of message 604C of FIG. 6G: "Hey! Can you send me the photos from Lake Tahoe last weekend?"
In some embodiments, the suggested set of media items (e.g., represented by interface 612) is determined (762) to be relevant to the message conversation with the recipient (e.g., 603B) based on a geographic location (e.g., "Lake Tahoe" in message 604C) mentioned (e.g., included) in the text record (e.g., 604A of FIG. 6G) of the message conversation. For example, the suggested set corresponding to interface 612 of FIG. 6F is related to "Lake Tahoe" as mentioned in the text of message 604C of FIG. 6G: "Hey! Can you send me the photos from Lake Tahoe last weekend?"
In some embodiments, a suggested set of media items (e.g., represented by interface 612) is determined (764) to be relevant to a message conversation with a recipient (603B) based on a set of media items received from the recipient and represented in a textual record of the message conversation (e.g., representation 604B). In some embodiments, the electronic device receives, from the recipient, a media item in a collection of media items represented in a textual record. In some embodiments, the electronic device receives a link or other data from a remote device (e.g., a server) for accessing a media item in a collection of media items from a recipient represented in a textual record.
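The relevance signals described above (conversation participants appearing in the media, transcript content, referenced times, and mentioned locations) could be combined into a single score. The following Python sketch is purely illustrative; the class, function, and field names and the weights are assumptions and do not appear in the disclosure.

```python
from dataclasses import dataclass

@dataclass
class MediaCollection:
    # Hypothetical stand-in for a sharable collection of media items.
    title: str
    location: str
    dates: set       # e.g. {"2018-12-01"}
    faces: set       # identified faces appearing in the items

def relevance_score(collection, participants, transcript_text,
                    mentioned_locations, mentioned_dates):
    """Combine the relevance signals described above; weights are illustrative."""
    score = 0
    # (754): participants in the conversation appear in the media items
    score += 2 * len(collection.faces & set(participants))
    # (762): a geographic location mentioned in the text record matches
    if collection.location in mentioned_locations:
        score += 3
    # (760): a time referenced in the text record overlaps the collection's dates
    if collection.dates & set(mentioned_dates):
        score += 3
    # (756)/(758): the collection's location appears in the message text itself
    if collection.location.lower() in transcript_text.lower():
        score += 1
    return score
```

Under this sketch, a collection from the same trip as the conversation's participants, matching the location and weekend mentioned in a message, would outrank unrelated collections.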
In some embodiments, prior to displaying (e.g., FIG. 6B) the sharing affordance (e.g., 606C of FIG. 6C) corresponding to the suggested set of media items, the electronic device (e.g., 600) displays (766), on the display, a text record (e.g., 604A of FIG. 6B) of the message conversation concurrently with a keyboard region (e.g., 604E) that includes a suggestion region (e.g., 606) populated with input suggestions (e.g., 606A and 606B). For example, the input suggestions can include one or more of: autocorrect suggestions, autocomplete suggestions, and the like. While displaying the text record of the message conversation concurrently with the keyboard region (e.g., as shown in FIG. 6B), the electronic device (e.g., 600) replaces (768) display of an input suggestion (e.g., 606B of FIG. 6B) in the keyboard region with the sharing affordance (e.g., 606C of FIG. 6C), which is displayed concurrently with the text record of the message conversation, wherein receiving the first input (e.g., 608 of FIG. 6E) includes receiving an input corresponding to selection of the sharing affordance. In some embodiments, the sharing affordance is displayed concurrently with the keyboard region. In some embodiments, the sharing affordance is displayed in response to receiving a message from the recipient. For example, sharing affordance 606C can be displayed in response to receiving an indication that the recipient (e.g., 603B) has shared one or more media items (e.g., upon receiving representation 604B of FIG. 6D), thereby providing the user of the first device with an interface that simplifies sharing media items back with the recipient.
Displaying an affordance concurrently with displaying the text record allows the user to quickly access media that the user may want to share with the recipient. Reducing the number of inputs required to perform an operation enhances the operability of the device and makes the user device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user errors in operating/interacting with the device), thereby further reducing power usage and extending the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, prior to displaying the sharing affordance corresponding to the suggested set of media items, and while displaying the text record (e.g., 604A of FIG. 6D) of the message conversation with the recipient (770): the electronic device (e.g., 600) receives (772), via the one or more input devices, input (e.g., 607 of FIG. 6D) associated with a text entry field (e.g., 604D) displayed concurrently with the text record of the message conversation. For example, the electronic device receives a tap on the text entry field 604D, or receives a character input (e.g., 607). In response to receiving the input associated with the text entry field, the electronic device (e.g., 600) displays (774), on the display, the sharing affordance (e.g., 606C) concurrently with the text record of the message conversation.
In some embodiments, receiving the first input includes receiving (776) input (e.g., 614) corresponding to selection of a portion (e.g., 604F of FIG. 6G) of a message (e.g., 604C) in the text record (e.g., 604A) of the message conversation. In some embodiments, the portion of the message (e.g., 604F) is determined to be related to a sharable collection of media items (e.g., the collection represented by interface 612). In some embodiments, the electronic device makes the portion of the message selectable. For example, the device displays the message received from the recipient, "Hey! Can you send me the photos from Lake Tahoe last weekend?", in the text record. For example, in FIG. 6G, the portion "photos from Lake Tahoe" is selectable and is displayed with a visual indication that it is selectable (e.g., underlining). In response to user selection of the selectable text (e.g., user input 614), the device displays the suggested set of media items (e.g., in suggested collection interface 612 as shown in FIG. 6F or 6N, or in a one-up view 636).
Receiving input corresponding to selecting a portion of a message in a text record allows a user to quickly access media that the user may want to share with recipients associated with the selected portion, thereby reducing the amount of input required. Reducing the number of inputs required to perform an operation enhances the operability of the device and makes the user device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user errors in operating/interacting with the device), thereby further reducing power usage and extending the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, prior to receiving the first input (e.g., 620 of FIG. 6J), and while displaying the text record (e.g., 604A) of the message conversation with the recipient (e.g., 603B), the electronic device (e.g., 600) displays (780) a plurality of affordances (e.g., 610A of FIG. 6I), including an affordance (e.g., 610B) associated with a first application (e.g., a photo application) and an affordance (e.g., 610C) associated with a second application (e.g., a music application) that is different from the first application. The electronic device (e.g., 600) receives (782), via the one or more input devices, an input corresponding to selection of the affordance (e.g., 610B) associated with the first application. In response to receiving the input corresponding to selection of the affordance associated with the first application, and while continuing to display the text record of the message conversation with the recipient, the electronic device (e.g., 600) displays (784), on the display, an interface (e.g., 610 of FIG. 6J) associated with the first application. While displaying the interface associated with the first application, the electronic device (e.g., 600) displays (786), on the display, the suggested set of media items (e.g., 612 of FIG. 6F or 6N, or in a one-up view 636) for sharing with the recipient. In some embodiments, the electronic device displays the suggested set of media items (e.g., 612 of FIG. 6F or 6N) for sharing with the recipient in response to user input received while displaying the interface (e.g., 610) associated with the first application. For example, while displaying page 618 of interface 610, as shown in FIG. 6J, the device receives a swipe gesture input (e.g., 620), and in response displays page 612 as shown in FIG. 6L.
In some embodiments, the input corresponding to selection of the affordance (e.g., 610B) associated with the photo application is the first input, and displaying the photo application user interface (e.g., 610) includes displaying the suggested set of media items (e.g., 612) for sharing with the recipient (e.g., 603B). For example, the electronic device can display interfaces 610 and 612 as shown in FIG. 6F in response to user input 616, optionally without requiring a swipe from interface 618. In some implementations, the first input (e.g., 620) is received while the photo application user interface (e.g., 610 of FIG. 6J) is displayed (e.g., a swipe from the "most recent photos" interface 618 to interface 612, as shown in FIG. 6J). In some embodiments, in response to additional input (e.g., 622 of FIG. 6L) received while displaying the photo application user interface (e.g., a swipe up on the drawer handle), the device expands the size of the displayed photo application user interface (e.g., it becomes larger, occupies the entire display, or replaces the text record). In some embodiments, the displayed set of media items (e.g., as shown in interface 612) is scrollable. For example, user input representing a request to scroll (e.g., a swipe up or down at 612) causes additional media items in the collection to be displayed.
In some embodiments, the text record (e.g., 604A) of the message conversation with the recipient is displayed simultaneously with the set of suggested media items for sharing with the recipient (e.g., represented by interface 612) (778). For example, the text record 604A is displayed concurrently with the suggested collection interface 612 in FIG. 6F.
In some embodiments, transmitting, to the recipient as part of the message conversation, the message that provides access to at least a portion of the suggested set of media items includes inserting (7102) a representation of the suggested set (e.g., 631 of FIG. 6U, or 604H of FIG. 6AK) into the text record (e.g., 604A). In some embodiments, the representation includes an indication of a transmission status (e.g., upload or download) (e.g., uploading 2 of 23, downloading 10 of 23, uploading, downloading, and the like). In some embodiments, the representation includes an expiration time (e.g., of a link or of access) (e.g., expires in 13 days, and the like). In some embodiments, the representation includes a receipt status (e.g., delivered, read, opened, viewed, and the like). In some embodiments, the electronic device receives text to accompany the representation, and the text is inserted into the text record with the representation. For example, at FIG. 6AK, device 600 inserts the accompanying text "Here you go" as part of message 604J along with representation 604H. In some embodiments, the device inserts the representation of the collection into the text entry field (e.g., prior to transmission to a third party), then receives the accompanying text, and subsequently transmits and inserts the representation and the accompanying text into the text record. For example, at FIG. 6J, device 600 receives the accompanying text in a text entry field (e.g., via user input at a keyboard), and then inserts (e.g., in response to user input 647) the text and the representation into text record 604A (e.g., as shown in FIG. 6AK).
In some embodiments, transmitting, to the recipient as part of the message conversation, the message that provides access to at least a portion of the suggested set of media items includes (794): in accordance with a determination that the recipient (e.g., 603B) is eligible to receive messages through a predetermined cloud-based service (e.g., the recipient is associated with a device capable of receiving messages through a cloud-based service for sending and/or receiving electronic messages (e.g., via the Internet)), the electronic device (e.g., 600) provides access to at least a portion of the suggested set of media items through the predetermined cloud-based service, wherein the access provided through the predetermined cloud-based service restricts access by users other than the recipient; and in accordance with a determination that the recipient is not eligible to receive messages through the predetermined cloud-based service, the electronic device provides (798) access to at least a portion of the suggested set of media items by sending a link (e.g., a publicly accessible, non-guessable URL) to the recipient, wherein the access provided by sending the link does not restrict access by users other than the recipient. In some embodiments, access through the cloud-based service is provided only to the recipient, and the recipient is not permitted to, or is unable to, forward the access to another party (e.g., another user of the cloud-based service). For example, the access provided is limited to an account associated with the recipient.
Providing limited access to the media collection through a predetermined service, depending on whether the recipient is eligible to receive messages through that service, increases the security of sharing media items without requiring additional input from the user. Reducing the number of inputs required to perform an operation enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user errors in operating/interacting with the device), thereby further reducing power usage and extending the battery life of the device by enabling the user to use the device more quickly and efficiently.
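The two branches described above (restricted cloud-based access when the recipient is eligible, otherwise an unrestricted but non-guessable link) might be sketched as follows. The eligibility registry, URL format, and function names are hypothetical and not part of the disclosure.

```python
import secrets

# Hypothetical registry of accounts eligible for the cloud-based service.
CLOUD_ELIGIBLE = {"william@example.com"}

def provide_access(recipient, collection_id):
    """Return an access grant for (at least a portion of) a suggested set."""
    if recipient in CLOUD_ELIGIBLE:
        # Access is scoped to the recipient's account; other users are restricted.
        return {"type": "cloud", "restricted_to": recipient,
                "collection": collection_id}
    # Otherwise, fall back to a publicly accessible, non-guessable URL.
    token = secrets.token_urlsafe(16)
    return {"type": "link", "restricted_to": None,
            "collection": collection_id,
            "url": f"https://share.example.com/{collection_id}/{token}"}
```

The non-guessable token stands in for the "publicly accessible, non-guessable URL" mentioned above: anyone holding the link can access the set, but the link itself cannot practically be enumerated.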
In some embodiments, transmitting, to the recipient as part of the message conversation, the message that provides access to at least a portion of the suggested set of media items includes transmitting (7100), to the recipient, the media items in the at least a portion of the suggested set of media items.
In some embodiments, after transmitting, to the recipient (e.g., 603B) as part of the message conversation, the message (e.g., 604J of FIG. 6AK) that provides access to at least a portion of the suggested set of media items (788) (e.g., represented by interface 612): the electronic device (e.g., 600) receives (790), via the one or more input devices, input (e.g., 682 of FIG. 6AAB) representing revocation of the recipient's access to at least a portion of the suggested set of media items. In response to receiving the input representing revocation of the recipient's access, the electronic device transmits (792) data (e.g., to a server, such as a server of a cloud-based service for sending and/or receiving messages) that causes the access of the recipient (e.g., 603B) to at least the portion of the suggested set of media items provided in the message (e.g., 604J of FIG. 6AK) to be terminated. For example, device 600 sends an instruction to a server of the cloud-based instant messaging service to revoke the user's access rights, or to deactivate a hyperlink.
Causing the recipient's access rights to be terminated increases the security of the user's shared media items without requiring excessive input to maintain control over them. Reducing the number of inputs required to perform an operation enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user errors in operating/interacting with the device), thereby further reducing power usage and extending the battery life of the device by enabling the user to use the device more quickly and efficiently.
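The revocation flow at (790)-(792) amounts to the device instructing a server to deactivate a grant. A minimal sketch follows, with hypothetical class and method names standing in for the cloud-based service:

```python
class ShareService:
    """Hypothetical cloud-service endpoint tracking per-recipient grants."""

    def __init__(self):
        self._grants = {}  # (recipient, collection_id) -> active flag

    def grant(self, recipient, collection_id):
        # Provide the recipient access to the shared collection.
        self._grants[(recipient, collection_id)] = True

    def revoke(self, recipient, collection_id):
        # Terminate the recipient's access (or deactivate the hyperlink),
        # as requested by the device in response to the revocation input.
        self._grants[(recipient, collection_id)] = False

    def has_access(self, recipient, collection_id):
        return self._grants.get((recipient, collection_id), False)
```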
In some embodiments, after transmitting, to the recipient as part of the message conversation, the message that provides access to at least a portion of the suggested set of media items (7104): the electronic device (e.g., 600) receives (7106) a fifth input (e.g., 656 of FIG. 6AN) via the one or more input devices. For example, the fifth input is associated with a photo application, such as selection of a "For You" tab 654B in interface 652, as shown in FIG. 6AN. In response to receiving the fifth input, the electronic device (e.g., 600) displays (7108), on the display, a prompt (e.g., 660 of FIG. 6AO) to share the suggested set (e.g., represented by interface 612) with one or more suggested recipients different from the recipient (e.g., 603B) (e.g., recipient 660A of FIG. 6AO, or the recipients associated with 666B-666D of FIG. 6AS). For example, selection of the "For You" tab results in display of a sharing suggestion interface that includes the suggested set of media items and one or more additional recipients (e.g., users, user accounts, contacts) with whom to share the suggested set.
Displaying a prompt to share the set of suggestions with a recipient of the suggestions other than the recipient allows the user to quickly identify additional recipients that the user may want to share media. Performing the optimization operation without further input when a set of conditions has been met enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, in response to the fifth input, the device displays a prompt for sharing, with the recipient (e.g., 603B), a different suggested set than the suggested set displayed in response to the first input. For example, after navigating to a sharing interface (e.g., a "For You" tab) of a photo application (e.g., after displaying the message conversation in an instant messaging application), the device displays one or more additional suggestions for sending to the recipient (e.g., 603B). Thus, the device provides a convenient path for sharing additional content (e.g., related to the conversation or another relationship) with a recipient with whom the user of the current device has a sharing relationship (e.g., a recipient of a previous suggestion, or a user to whom access to media items was previously transmitted). In some embodiments, the device displays a different suggested media set if the suggested set was previously transmitted to the recipient. For example, after the user acts on the first suggestion (e.g., of suggested collection interface 612) (e.g., sends it to recipient 603B), the device provides a new suggestion. In some embodiments, the device displays a different suggested media set if the suggested set has not previously been transmitted to the recipient (e.g., 603B). For example, if the first suggestion is not acted on by the user, the device provides a new suggestion. In some embodiments, if the first suggestion was not previously acted on by the user, the device displays, in response to receiving the fifth input, a prompt for sharing the suggested set with the recipient. For example, if the user has not previously transmitted access to the suggested set in the instant messaging application, the device provides a prompt and an opportunity to transmit the suggested set to the recipient again (e.g., at a sharing interface of a photo application).
In some embodiments, the suggested recipients are suggested based on (7110) one or more of: an identified face associated with the suggested recipient (e.g., the faces of Sue, Anna, and John shown in 660A of FIG. 6AO) being identified in at least a portion of the media items of the suggested set; and a known event attended by the suggested recipient (e.g., where the suggested set includes media captured at the event), wherein the suggested set is associated with (e.g., depicts, includes media captured at the time and place of, or is tagged with metadata of) the known event attended by the suggested recipient.
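The two criteria at (7110) (an identified face appearing in the collection, or attendance at the event the collection is associated with) might be combined as in the sketch below; the data shapes and names are assumptions, not part of the disclosure.

```python
def suggest_recipients(collection_faces, collection_event, known_people):
    """Suggest a person if their identified face appears in the collection's
    media items, or if they are known to have attended the associated event.
    `known_people` maps a person's name to the list of events they attended."""
    suggestions = []
    for name, attended_events in known_people.items():
        if name in collection_faces or collection_event in attended_events:
            suggestions.append(name)
    return suggestions
```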
In some implementations, while displaying an interface (e.g., 666 of FIG. 6AS) associated with the prompt (e.g., 660) to share the suggested set with the suggested recipients: the electronic device (e.g., 600) displays (7112), on the display, an indication (e.g., 666E) of a suggested recipient, the indication including a depiction of the identified face associated with the suggested recipient.
In some embodiments, the suggested recipient (e.g., associated with 666E of FIG. 6AS) is not associated with contact information associated with the device (7114), and the identified face associated with the suggested recipient is shown in at least a portion of the media items of the suggested set. While displaying the indication of the suggested recipient, the indication including a depiction of the identified face associated with the suggested recipient, the electronic device (e.g., 600) receives (7116), via the one or more input devices, a sixth input (e.g., 668 of FIG. 6AT) representing selection of the suggested recipient. In response to receiving the sixth input, the electronic device displays (7118), on the display, a prompt (e.g., 670 of FIG. 6AU) to match the suggested recipient (e.g., represented by 666E of FIG. 6AT) with contact information associated with the device. The electronic device receives (7120), via the one or more input devices, a seventh input (e.g., 672 of FIG. 6AU) representing selection of an item of contact information (e.g., 670A) from the contact information associated with the device. In response to receiving the seventh input, the electronic device associates the suggested recipient with the item of contact information (7122). In some embodiments, the electronic device updates the indication of the suggested recipient based on the item of contact information (e.g., to include the contact name, as shown in FIG. 6AV). After associating the suggested recipient with the item of contact information, in response to selection (e.g., user input 674 or 665) of an affordance (e.g., 664A of FIG. 6AR, or 666G or 666H of FIG. 6AW) included in an interface (e.g., 664 or 666) associated with the prompt, the electronic device uses the item of contact information to provide the suggested recipient with access to at least a portion of the suggested set of media items (e.g., via a message such as 604J of FIG. 6AK).
It should be noted that the details of the process described above with respect to method 700 (e.g., fig. 7A-7J) also apply in a similar manner to the methods described below. For example, method 900 optionally includes one or more features of the various methods described above with reference to method 700. For the sake of brevity, these details are not repeated below.
FIGS. 8A-8AQ illustrate exemplary user interfaces for sharing a suggested set of media items related to a received set of media items, according to some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in FIGS. 9A-9G.
FIGS. 8A-8AL illustrate exemplary user interfaces for viewing and managing a received shared collection of media items. FIG. 8A illustrates an exemplary message interface 804 (described above) displayed on the display 602 of the electronic device 600. In some embodiments, the message interface 804 includes one or more features of the message interface 604. Message interface 804 includes a text record 804A of the message conversation between the parties shown in participants area 804B. In this example, the conversation is between users named Lynne (e.g., user 805A), William (e.g., user 805B), and Gregg (e.g., user 805C). In this example, Lynne is the user of device 600 (e.g., it is logged in to an account associated with Lynne), communicating with William and Gregg (e.g., each using their own device remote from device 600).
In some implementations, a device (e.g., 600) receives an indication that a set of media items has been shared with it. For example, device 600 can receive a message that includes information for accessing the shared collection (e.g., the media items themselves, a link, or other data for accessing the shared collection). In some embodiments, a device (e.g., 600) displays a representation (e.g., 804C) of the received shared collection. In some implementations, the device displays the representation of the received shared media collection in a text record (e.g., 804A) of the message conversation. For example, the message interface 804 also includes an exemplary representation 804C of a collection of media items that William has shared with Lynne. For example, William has provided Lynne access to a shared collection of media items (e.g., as described above) (e.g., provided access to a device or account associated with Lynne through a cloud-based service). Techniques for providing access to a collection of media items are discussed above and incorporated herein.
In some embodiments, the representation (e.g., 804C) of the received shared set includes an indication of a transmission status. For example, the indication of the transmission status can include an indication that the received shared collection is currently being uploaded or downloaded. As shown in FIG. 8A, representation 804C includes an indication of the upload status, including the text "William is uploading 2 of 30," which indicates that the sender/sharer (William) is uploading the media items of the shared collection, and the progress of that upload (2 of the 30 media items have been uploaded). As shown in FIG. 8D, described in more detail below, representation 804C includes an indication of the download status, including the text "Downloading 2 of 30," indicating that device 600 is currently downloading media items of the shared collection, and the progress of that download.
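Status strings like those shown in representation 804C could be produced by a small formatter. The function name and exact wording below mirror the examples in the figures but are otherwise assumptions:

```python
def transfer_status_text(sender, direction, done, total):
    """Format the transmission-status indication for a shared-collection
    representation: upload progress on the sender's side, or download
    progress on the receiving device."""
    if direction == "upload":
        return f"{sender} is uploading {done} of {total}"
    return f"Downloading {done} of {total}"
```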
FIGS. 8B-8C illustrate exemplary techniques for previewing a received shared set of media items whose transmission (e.g., download or upload) is incomplete. In some embodiments, a device (e.g., 600) displays a preview (e.g., 808) of one or more media items in the received shared set in response to user input (e.g., 806) received while the set of media items is still being transmitted. For example, in FIG. 8B, device 600 receives user input 806 corresponding to selection of representation 804C. As can be seen in FIG. 8B, William (the sender/sharer) is still uploading the shared set of media items. In response to receiving user input 806, device 600 displays preview interface 808, as shown in FIG. 8C.
FIG. 8C illustrates an exemplary preview interface 808. In some embodiments, the preview interface (e.g., 808) includes an indication of the transmission status (e.g., 808A). For example, the preview interface 808 includes an indication 808A that William is currently uploading 30 photos. In some embodiments, the preview interface includes an indication (e.g., 808B) of the identity of the sender/sharer (e.g., including his name and an avatar showing his face). For example, the preview interface 808 includes an indication 808B that William is the sender/sharer. In some embodiments, the preview interface includes representations of one or more media items in the shared collection (e.g., 808C). In some embodiments, a representation in the preview interface presents a media item differently, in at least one respect, from a media item whose download (e.g., by device 600) is complete. For example, as shown in FIG. 8C, the media item 808C is displayed as darkened (e.g., with a gray tone, also referred to as "grayed out") to indicate that the media item is still being transmitted. In other examples, the representation can be a reduced-quality version of the actual media item. In other examples, the media items can be displayed but not selectable (e.g., such that, upon receiving input attempting to select or view a displayed representation of a media item, the device does not perform an action such as toggling selection of the media item or displaying the media item in a one-up view). Thus, previewing the collection while it is being transmitted allows the user to view some of the media items in the collection.
FIGS. 8D-8I illustrate various interfaces for downloading and accessing a received shared collection of media items. FIG. 8D illustrates an exemplary representation 804C of a received shared collection of media items while it is being downloaded (e.g., by device 600). In some embodiments, the device (e.g., 600) initiates download of media items of the received shared set of media items in response to user input (e.g., selection of representation 804C). For example, in response to user input 806 of FIG. 8B representing a tap on representation 804C (e.g., before or after William has completed uploading the collection), device 600 begins downloading the shared collection and displays representation 804C with the download status, as shown in FIG. 8D.
The representation (e.g., 804C) of the shared collection may cease to be displayed while the received shared collection is still downloading (e.g., due to user action, or due to a new message being inserted into a text record that includes the representation). Accordingly, device 600 can display a notification (e.g., a pop-up window or banner) to provide the user with easy access to the collection upon completion of the download. Thus, the user may continue to use device 600 and need not continuously monitor the download or perform an additional number of inputs to find (e.g., in order to select) the original representation of the shared collection. Figures 8E-8H illustrate examples.
Fig. 8E shows message interface 804 in which a new message has been received from William (e.g., user 805B) and added to text record 804A. As shown in Fig. 8E, the new message "Hey! Can you send me the photos from the lake last week too?" has been received by device 600 from William (e.g., from a device used by William) and displayed in text record 804A, pushing the representation up within the text record.
In Fig. 8F, device 600 receives another new message in text record 804A, this time from Gray (e.g., from a device used by Gray), reading: "No problem. I can't right now, but I will send them later." As can be seen in Fig. 8F, the new message from Gray causes representation 804C to cease being displayed. In this example, newer messages in text record 804A cause older content (including representation 804C) to be pushed off the display in favor of displaying the new content in the text record (e.g., in chronological order of message transmission time). In some embodiments, the text record is scrollable. For example, device 600 can receive user input representing a request to scroll the content of the text record (e.g., scroll up) and cause representation 804C to be displayed again.
In Fig. 8G, device 600 has completed downloading the received shared collection represented by 804C. In some embodiments, in response to detecting that the download of the received shared collection of media items is complete, the device (e.g., 600) displays an affordance (e.g., 809) for accessing the downloaded collection. For example, in Fig. 8G, device 600 displays notification 809, an exemplary affordance. In some embodiments, the affordance (e.g., 809) for accessing the downloaded collection is displayed in accordance with a determination that a representation (e.g., 804C) of the downloaded collection is not currently displayed on a display (e.g., display 602 of device 600). For example, in Fig. 8G, device 600 determines that representation 804C is no longer displayed, and therefore displays notification 809. In some embodiments, if representation 804C is displayed when the download completes, device 600 forgoes displaying notification 809 in response to the completion.
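The conditional display of the notification described above reduces to a single predicate. The sketch below is illustrative only; the function and parameter names are assumptions, not from this disclosure:

```python
def should_show_notification(download_complete: bool,
                             representation_visible: bool) -> bool:
    """Decide whether to surface an affordance (e.g., notification 809).

    The notification appears only when the download has finished AND the
    transcript representation (e.g., 804C) has scrolled off the display;
    otherwise the device forgoes displaying it.
    """
    return download_complete and not representation_visible
```

For example, when newer messages have pushed representation 804C off screen and the download then completes, the predicate returns `True` and the banner is shown.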
In some embodiments, a device (e.g., 600) receives user input (e.g., 810) associated with an affordance (e.g., 809) and, in response, displays one or more representations of media items in the received collection of media items. For example, in Fig. 8H, device 600 receives user input 810 corresponding to selection of notification 809 and, in response, displays shared collection interface 814 in a one-up view (e.g., as shown in Fig. 8J, described in more detail below) or in a grid view (e.g., as shown in Fig. 8V, described in more detail below).
Fig. 8I illustrates an exemplary representation 804C of a collection of media items that has completed downloading. For example, upon completion of a download initiated by user input (e.g., user input 806 of Fig. 8B) selecting representation 804C before the collection had been downloaded, device 600 displays representation 804C as shown in Fig. 8I. In Fig. 8I, device 600 receives user input 812 corresponding to selection of representation 804C and, in response, displays shared collection interface 814 in a one-up view (e.g., as shown in Fig. 8J, described in more detail below) or in a grid view (e.g., as shown in Fig. 8V, described in more detail below).
In response to receiving the shared collection (e.g., 804C), device 600 optionally provides the user with the option of quickly and easily sharing media back with other participants in the conversation. In some embodiments, a device (e.g., 600) displays one or more prompts (e.g., 811A, 813) to share media back in response to receiving an indication that another user (e.g., 805B) has shared media with the user of the device (e.g., 805A). In some embodiments, a prompt (e.g., 811A, 813) is displayed in the message interface (e.g., 804) concurrently with a text record (e.g., 804A) of the message conversation. For example, Fig. 8I shows prompt 811A, an input suggestion affordance in input suggestion region 811. Prompt 811A includes the text "Share Back", prompting the user to reciprocate the shared media. Input suggestion affordance 811A includes one or more features of input suggestion affordance 606C described above. Fig. 8I also shows prompt 813 displayed next to representation 804C. In some embodiments, the device (e.g., 600) displays a prompt (e.g., 813) associated with (e.g., near, adjacent to) the displayed representation (e.g., 804C) of the received shared collection. For example, prompt 813 may continue to be displayed alongside 804C so that the user always has a quick entry point to an interface for sharing back media specifically relevant to the received shared collection represented by 804C. Thus, even after 804C has ceased to be displayed, the device allows the user to scroll through the text record and select prompt 813. Additionally, if the input suggestion region updates and the device stops displaying 811A (e.g., replacing it with other suggestions), the user may still use prompt 813.
In some embodiments, user input corresponding to selection of a prompt (e.g., 811A, 813) to share back media causes display of a suggested collection interface (e.g., including one or more features described above with respect to interfaces 612 and/or 634). In some embodiments, user input corresponding to selection of a prompt (e.g., 811A, 813) to share back media causes another prompt interface (e.g., 854, described below) to be displayed.
Figures 8J-8AL illustrate exemplary interfaces for viewing and managing media items in a received shared collection of media items.
Fig. 8J illustrates an exemplary shared collection interface 814 that represents the received shared collection shared by William (e.g., represented by 804C in Fig. 8I). For example, shared collection interface 814 may be displayed in response to user input 810 or 812. In some embodiments, the shared collection interface (e.g., 814) includes representations of one or more media items in the shared collection of media items. For example, Fig. 8J shows a one-up view of a media item 816A from the received shared collection represented by 804C.
A shared collection interface such as 814 can be used to view media items. For example, still referring to Fig. 8J, shared collection interface 814 includes a one-up region 814A for viewing media items in a one-up style view (e.g., as described above). Shared collection interface 814 also includes scrubbing region 814B, which includes reduced-size representations of other media items in the received shared collection (e.g., reduced relative to their actual size and/or relative to the media item displayed in the one-up view). In some embodiments, user input corresponding to selection of a media item in the scrubbing region (e.g., 814B) causes a device (e.g., 600) to display the respective media item in the one-up region (e.g., 814A). In some embodiments, user input corresponding to a directional swipe gesture (e.g., a left swipe or a right swipe) in the scrubbing region (e.g., 814B) causes a device (e.g., 600) to scrub through the collection of media items and display a different media item depending on where the scrubbing action ends. For example, after the scrubbing action completes, device 600 displays in one-up region 814A the media item at the center of the scrubbing region, or the one adjacent to an indicator (e.g., the dot shown below region 814B).
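One plausible way to model the scrubbing behavior, using hypothetical names not drawn from this disclosure, is to map the horizontal position at which the scrub gesture ends to an index into the collection:

```python
def item_after_scrub(items: list, end_fraction: float) -> str:
    """Return the media item to display in the one-up region (e.g., 814A)
    after a scrub gesture ends at `end_fraction` across the scrubbing
    region (0.0 = far left, 1.0 = far right). Illustrative sketch only."""
    if not items:
        raise ValueError("collection is empty")
    # Clamp so a gesture ending at the far right maps to the last item.
    index = min(int(end_fraction * len(items)), len(items) - 1)
    return items[index]
```

For a 30-item collection, a scrub ending two-thirds of the way across would select roughly the 20th item; the clamp keeps the far-right edge on the final item.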
The device receiving the shared collection of media items optionally provides the user with the option of saving one or more media items to a media library associated with the user. This may be beneficial because, as described above, access to the received shared media may expire. In some embodiments, a recipient (e.g., of a shared collection) can add one or more media items from the shared collection to a media library. In some embodiments, the recipient may access the media items in the media library even after the access provided by the sharer expires (e.g., after an expiration time or after access is revoked). For example, the access provided by the sharer to the shared media may be hosted by a third-party cloud service, which facilitates sharing between the sharer and the recipient. This may provide the benefit of simplifying the sharing process. For example, the sharer does not have to worry about creating and maintaining a private connection with the recipient (e.g., via proximity-dependent technologies such as Wi-Fi or Bluetooth, or via non-proximity-dependent technologies such as File Transfer Protocol (FTP) or a Virtual Private Network (VPN), among others). Further, the sharer need not worry about arranging third-party hosting, such as handling manual configuration or hosting an environment (e.g., a website) for sharing. However, hosting may be temporary, thus limiting the time during which the shared collection can be accessed.
In view of the foregoing, interfaces for saving media items are discussed below. In some embodiments, the shared collection interface provides the ability to save media items from the received shared collection. In some embodiments, the shared collection interface (e.g., 814) includes an affordance for saving one or more media items to a media library associated with a user of a device (e.g., 600). For example, still referring to Fig. 8J, shared collection interface 814 includes affordance 814C. In this example, affordance 814C includes the text "Add All to Library". Affordance 814C may be used to quickly and easily add (e.g., save) all media items in the received shared collection to the media library. Thus, the user of device 600 need not perform additional inputs to select media items individually in order to add them all to their media library. Further, the user may decide to save all of the media items in the received collection after viewing all of them, or none of them (e.g., all media may be saved while the first media item 816A is displayed, as shown in Fig. 8J, without scrubbing to other media items). In some embodiments, the shared collection interface (e.g., 814) includes an indication of the amount of media in the collection of media items. For example, shared collection interface 814 includes indicator 814D, which indicates that the shared collection being viewed includes a total of 30 media items (e.g., photos).
In Fig. 8K, device 600 receives user input 818 corresponding to selection of affordance 814C. In some embodiments, in response to receiving a request to save media items (e.g., user input 818), a device (e.g., 600) causes one or more media items from the corresponding collection to be added to a media library (e.g., associated with the device, or with a user account associated with the device). For example, in response to receiving user input 818, device 600 causes all of the media items (e.g., 30 photos) to be added to the media library associated with device 600. In some embodiments, adding media to the media library includes saving the media items to local storage (e.g., on device 600). In some embodiments, adding media to the media library includes saving the media items to remote storage (e.g., a cloud-based media library maintained on a server of a cloud-based service provider). An example of a cloud-based service is "iCloud", provided by Apple Inc. of Cupertino, California.
Fig. 8L illustrates an exemplary save confirmation indicator 820. In some embodiments, a device (e.g., 600) displays a save confirmation indicator in response to a request to save one or more media items to a media library (e.g., associated with the device). For example, device 600 displays save confirmation indicator 820 in response to receiving user input 818 corresponding to selection of the "Add All to Library" affordance 814C in Fig. 8K. In some embodiments, the save confirmation indicator includes an indication of the amount of media saved. For example, save confirmation indicator 820 indicates that 30 media items (e.g., photos) were added to the media library with the text: "30 photos added to library".
Fig. 8M illustrates shared collection interface 814 after the media items have been successfully added to the library. In some embodiments, after the media is added to the library, the affordance for adding the media to the library ceases to be displayed. For example, 814C is no longer displayed in interface 814. In some embodiments, after the media is added to the library, an affordance for viewing the media in the media library is displayed. For example, affordance 814F is now displayed, reading "View Library". Affordance 814F includes one or more features described with respect to affordance 814L, which is described in detail below with respect to Figs. 8AJ-8AL.
The device optionally provides the user with the option of adding fewer than all of the media items in the received shared collection to the media library. Figures 8N-8AI illustrate various interfaces for selecting and saving fewer than all of the media items in a received shared collection.
Fig. 8N illustrates shared collection interface 814 (e.g., the same as shown in Fig. 8J). In Fig. 8N, shared collection interface 814 includes a selection affordance 814E. In some embodiments, a device (e.g., 600) enters a selection mode in response to receiving user input (e.g., 822) representing a request to enter the selection mode. In this example, selection affordance 814E may be used to enter a selection mode in which the device allows the user to customize the selection of media items to be saved to their media library (e.g., to select fewer than all of the media items).
In Fig. 8N, device 600 receives user input 822 corresponding to selection of selection affordance 814E. In response to receiving user input 822, device 600 enters a selection mode.
Fig. 8O illustrates exemplary shared collection interface 814 in an exemplary selection mode. For example, device 600 displays shared collection interface 814 in response to receiving user input 822, as shown in Fig. 8O. In some embodiments, entering the selection mode includes displaying (e.g., on display 602 of device 600) one or more selection indicators associated with one or more media items of the collection of media items. For example, in Fig. 8O, shared collection interface 814 includes a selection indicator 824A associated with media item 816A, and a selection indicator 824A associated with media item 816B (e.g., in the scrubbing region).
In some embodiments, while in the selection mode, the device (e.g., 600) disables one or more affordances for saving media items. For example, in Fig. 8O, device 600 changes the appearance of affordance 814C (e.g., it appears greyed out) and makes it non-selectable. Disabling affordance 814C for adding all media items to the library, for example, prevents the user from selecting it inadvertently (e.g., accidentally saving all media rather than a custom selection) once the device has entered the selection mode (e.g., for making a custom selection). In some embodiments, in response to entering the selection mode, the device (e.g., 600) displays an affordance for saving a selection of media items. For example, in Fig. 8O, device 600 displays a done affordance 814G (e.g., replacing affordance 814E) that, when selected, causes the currently selected media items to be added to a media library associated with the device.
In some embodiments, while in the selection mode, the device (e.g., 600) displays an indication of the amount of currently selected media items. For example, in Fig. 8O, shared collection interface 814 includes indicator 814D, which has been updated to indicate the amount (e.g., number) of media items in the shared collection that are currently selected (e.g., "30 photos selected").
In some embodiments, in response to entering the selection mode, an initial set of media items is currently selected. In some embodiments, the initial set of media items includes all of the media items in the collection of media items. For example, in Fig. 8O, all 30 media items in the collection are selected by default (e.g., upon initially entering the selection mode for the first time). In some embodiments, the initial set of media items includes fewer than all of the media items in the collection. For example, upon entering the selection mode (as shown in Fig. 8O), one or more media items are not selected by default (e.g., their associated selection indicators are not displayed). For example, fewer than all of the media items are selected, similar to what is shown in Fig. 8S. In some embodiments, one or more media items (e.g., fewer than all) are selected based on selection criteria. The selection criteria are discussed above and are not repeated here for the sake of brevity. In some embodiments, the initial set of media items is an empty set (e.g., includes no media items). For example, upon entering the selection mode, no media items are selected by default, and the user can add to the empty set by selecting one or more media items.
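The alternative initial-selection embodiments above (all items, a criteria-based subset, or an empty set) can be expressed as a small policy function. This is an illustrative sketch only; the policy names and function signature are assumptions, not part of the disclosure:

```python
def initial_selection(media_ids, policy="all", criteria=None):
    """Return the set of media items pre-selected on entering selection mode.

    policy = "all":      every item pre-selected (the FIG. 8O default)
    policy = "criteria": only items passing a selection predicate
    policy = "none":     empty set; the user builds the selection manually
    """
    if policy == "all":
        return set(media_ids)
    if policy == "criteria":
        return {m for m in media_ids if criteria(m)}
    if policy == "none":
        return set()
    raise ValueError(f"unknown policy: {policy}")
```

Under the "all" policy, a single deselection then yields a 29-of-30 selection, matching the flow shown in Figs. 8R-8T.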
As described above with respect to one-up view 636 (e.g., Figs. 6Y-6AD), device 600 can navigate among media items displayed in the one-up view (e.g., in region 814A) and select and/or deselect media items while in the selection mode. In some embodiments, in response to user input (e.g., 826) received while displaying a first media item in the one-up view, a device (e.g., device 600) replaces the display of the first media item (e.g., 816A) in the one-up view with display of a second media item (e.g., 816B) in the one-up view. For example, in Fig. 8P, while media item 816A is displayed in region 814A, device 600 receives user input 826 representing a left swipe gesture in one-up region 814A.
Fig. 8Q illustrates an exemplary second media item from the collection displayed in the one-up view. For example, in response to receiving user input 826 representing a left swipe gesture in one-up region 814A while media item 816A is displayed in region 814A, device 600 replaces the display of media item 816A in region 814A with media item 816B. Thus, a user may navigate between media items displayed in the one-up view to examine a particular media item in detail, in order to determine whether that media item should be included in a selection.
In some embodiments, navigating between media items in the one-up view can be performed while not in the selection mode. For example, if user input 826 is received while not in the selection mode (e.g., at interface 814, as shown in Fig. 8J or 8M), device 600 replaces the display of media item 816A with media item 816B in region 814A.
Figures 8R-8U illustrate selection and saving of an exemplary custom-selected set of media items from a received shared collection of media items. In some embodiments, a device (e.g., 600) receives user input (e.g., at a location associated with a media item) (e.g., while in the selection mode) representing a request to toggle selection of the media item. In some embodiments, the user input representing a request to toggle selection corresponds to selection of a selection indicator. For example, in Fig. 8R, device 600 receives user input 828 corresponding to selection of the selection indicator 824A associated with media item 816B.
Fig. 8S illustrates an exemplary media item in the one-up view that is not currently selected. For example, in response to receiving user input 828, device 600 displays shared collection interface 814, as shown in Fig. 8S. Since user input 828 was received while media item 816B was selected, user input 828 causes it to become unselected and, accordingly, selection indicator 824A ceases to be displayed, as shown in Fig. 8S. In some embodiments, a device (e.g., 600) displays an unselected indicator (e.g., 824B) associated with a currently unselected media item (e.g., while the device is in the selection mode). For example, in Fig. 8S, unselected indicator 824B is displayed. Displaying an unselected indicator associated with a media item (e.g., an unfilled region where a selection indicator would appear) can, for example, affirmatively indicate to the user that the device is in the selection mode and/or that the corresponding media item is not selected.
In some embodiments, the user input representing a request to toggle selection of a media item corresponds to an input at the location of the media item. For example, a user touch anywhere on media item 816B (e.g., not only on the selection indicator or unselected indicator) causes device 600 to select or deselect it, depending on its current selection state.
In some embodiments, the user input representing a request to toggle selection is a selection of an unselected indicator (e.g., 824B). For example, in Fig. 8S, interface 814 includes unselected indicator 824B (e.g., an empty circle). In some embodiments, the selection is received while the unselected indicator is displayed. For example, if user input 828 of Fig. 8R is instead received while media item 816B is not currently selected (e.g., as shown in Fig. 8S), device 600 causes media item 816B to be selected and selection indicator 824A to be displayed (e.g., as shown in Fig. 8R).
In some embodiments, the device (e.g., 600) updates the displayed indication of the amount of currently selected media items after (e.g., in response to) a change in the currently selected media items. For example, in Fig. 8S, shared collection interface 814 includes indicator 814D, which has been updated to indicate the amount (e.g., number) of media items in the shared collection that are currently selected (e.g., "29 photos selected").
After customizing the selection of media items, the device optionally provides the user with an option to add the selected items to the media library (e.g., as previously discussed). In Fig. 8T, after receiving user input 828 deselecting media item 816B, device 600 receives user input 830 on done affordance 814G. In some embodiments, in response to user input (e.g., 830), the device (e.g., 600) causes one or more media items in the user-selected set of media items to be added to the media library (e.g., associated with the device, or with a user (e.g., an account) associated with the device). For example, in response to receiving user input 830, device 600 causes the 29 selected media items indicated in Fig. 8T, representing the user-selected set of media items from the received shared collection, to be added to the media library. In this example, all media items except media item 816B are added to the media library in response to user input 830.
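The toggle-then-save flow of Figs. 8R-8T can be modeled as a minimal selection state machine. This sketch uses hypothetical names (`SelectionModel`, `toggle`, `save_to_library` do not appear in the disclosure) and is illustrative only:

```python
class SelectionModel:
    """Illustrative model of the selection-mode state for a shared collection."""

    def __init__(self, media_ids):
        # All items start selected on entering selection mode (FIG. 8O default).
        self.media_ids = list(media_ids)
        self.selected = set(media_ids)

    def toggle(self, media_id):
        # A tap on an item, or on its selection/unselected indicator,
        # flips that item's selection state (FIGS. 8R-8S).
        if media_id in self.selected:
            self.selected.discard(media_id)
        else:
            self.selected.add(media_id)

    def save_to_library(self, library):
        # The done affordance (e.g., 814G) adds only the currently
        # selected items to the library; returns the count saved.
        saved = [m for m in self.media_ids if m in self.selected]
        library.extend(saved)
        return len(saved)
```

Starting from 30 selected items and toggling one off before saving reproduces the "29 photos added to library" outcome of Fig. 8U.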
Fig. 8U illustrates an exemplary save confirmation indicator 832. In some embodiments, a device (e.g., 600) displays a save confirmation indicator (e.g., 832) in response to user input (e.g., 830) representing a request to add one or more media items from a collection to a media library. Save confirmation indicator 832 indicates that the operation of adding the one or more media items was successful. In some embodiments, the save confirmation indicator (e.g., 832) includes an indication of the amount of media items added to the media library. For example, save confirmation indicator 832 indicates that 29 media items (e.g., photos) were saved to the user's media library (e.g., of device 600) with the text: "29 photos added to library". In some embodiments, if one or more media items are not successfully added to the media library, the device (e.g., 600) displays an indication of the failure.
Fig. 8U also illustrates an exemplary shared collection interface (e.g., 814) after saving one or more media items. For example, shared collection interface 814, as shown in Fig. 8U, is displayed in response to user input 830. In some embodiments, the device (e.g., 600) exits the selection mode after the one or more media items are added to the library. For example, as shown in Fig. 8U, shared collection interface 814 is no longer in the selection mode: affordance 814E is displayed again, and no selection indicators (e.g., 824A) or unselected indicators (e.g., 824B) are displayed (e.g., as they were in Fig. 8T).
Figures 8V-8AI illustrate various interfaces for viewing a collection of media items in a grid view, for moving between the grid view and the one-up view, and for selecting media items in either view.
Fig. 8V illustrates shared collection interface 814 arranged in a grid view (e.g., as described above with respect to Fig. 6X). In some embodiments, in response to user input, the device (e.g., 600) displays the shared collection interface (e.g., 814) in a grid view (e.g., Fig. 8V). For example, shared collection interface 814 may be displayed in response to a user input (e.g., 810) selecting a notification that the download has completed, or a user input (e.g., 812) selecting a representation of a collection in a message text record. That is, device 600 may initially display the shared collection either in a grid view or in a one-up view. In some embodiments, whether a collection is displayed in a grid view or a one-up view in response to user input selecting a representation of the collection is user-configurable.
In some embodiments, the shared collection interface (e.g., 814) in the grid view includes one or more features of suggested collection interface 612 as described above with reference to Fig. 6N. For example, the shared collection interface (e.g., 814) includes an indication of the amount of media items. For example, still referring to Fig. 8V, indicator 814I of interface 814 indicates that the collection includes 30 photos. In some embodiments, the shared collection interface (e.g., 814) includes a save affordance for causing one or more media items from the respective collection of media items to be saved (e.g., added to a media library). For example, save affordance 814J of interface 814 is selectable to add one or more media items to a media library, as described above. In some embodiments, the shared collection interface (e.g., 814) includes representations of one or more media items in the respective collection of media items. For example, shared collection interface 814 includes representations of media items 816A and 816B, each of which represents a media item in the received shared collection represented by shared collection interface 814 (the lake collection, Fig. 8A). In some embodiments, the displayed collection of media items (e.g., as shown in interface 814) is scrollable (e.g., to display additional content or elements as described herein with respect to interface 814). For example, user input representing a request to scroll (e.g., a swipe up or down at 814) causes additional media items in the collection to be displayed.
In some embodiments, the device (e.g., 600) transitions the displayed shared collection interface (e.g., 814) from the one-up view to the grid view. In some embodiments, the device (e.g., 600) transitions the displayed shared collection interface (e.g., 814) from the grid view to the one-up view. In some embodiments, the transition is performed in response to user input. For example, device 600 may receive user input (e.g., a tap, gesture, or key press) while in the one-up view of Fig. 8J or 8U and transition to the grid view shown in Fig. 8V. For example, the user input that causes the transition to the grid view may be a "pinch" gesture (e.g., two contacts moving toward each other by more than a threshold distance) (e.g., in one-up region 814A). Likewise, device 600 may receive user input (e.g., a tap, gesture, or key press) while in the grid view and transition to the one-up view, as shown in Figs. 8Z-8AA and 8AD-8AE. In some embodiments, the user input that causes the transition to the one-up view may be a de-pinch gesture (e.g., two contacts moving apart by more than a threshold distance) (e.g., as shown in Figs. 8Z-8AA and 8AD-8AE). In some embodiments, the user input that causes the transition to the one-up view may be a press-and-hold gesture (e.g., a contact exceeding a predetermined length of time). In some embodiments, the user input that causes the transition to the one-up view may be a hard press gesture (e.g., a contact having a characteristic intensity that exceeds a threshold intensity, such as a threshold intensity greater than a nominal contact detection intensity threshold at which a tap input can be detected).
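The view transitions described above amount to a small gesture-to-view table. The sketch below is illustrative only; the gesture and view names are assumptions, not terminology from the disclosure:

```python
def next_view(current_view: str, gesture: str) -> str:
    """Map (current view, recognized gesture) to the resulting view.

    A pinch in the one-up region collapses to the grid view; a de-pinch,
    press-and-hold, or hard press while in the grid view expands to one-up.
    """
    transitions = {
        ("one_up", "pinch"): "grid",
        ("grid", "de_pinch"): "one_up",
        ("grid", "press_and_hold"): "one_up",
        ("grid", "hard_press"): "one_up",
    }
    # Gestures with no defined transition leave the view unchanged.
    return transitions.get((current_view, gesture), current_view)
```

Keeping the table explicit makes it easy to add further embodiments (e.g., a tap on a grid thumbnail) without touching the dispatch logic.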
While displaying media items in the grid view, the device optionally provides the user with the option of selecting and adding one or more media items in the received collection to the media library. In some embodiments, while in the grid view, a device (e.g., 600) receives user input (e.g., 834) representing a request to enter a selection mode and, in response, enters the selection mode. For example, in Fig. 8V, device 600 receives user input 834 corresponding to selection of selection affordance 814E. In response, device 600 displays shared collection interface 814 in the selection mode, as shown in Fig. 8W.
Fig. 8W illustrates exemplary shared collection interface 814 in the grid view while in a selection mode. For example, device 600 displays shared collection interface 814 in response to receiving user input 834, as shown in Fig. 8W. In some embodiments, entering the selection mode includes displaying (e.g., on display 602 of device 600) one or more selection indicators associated with one or more media items of the collection of media items. In some embodiments, the shared collection interface (e.g., 814), while in the selection mode, includes one or more selection indicators indicating that one or more media items are currently selected. In some embodiments, a selection indicator is visually associated with a currently selected media item and is optionally not displayed when that same media item is not selected. In some embodiments, the shared collection interface (e.g., 814), while in the selection mode, includes one or more unselected indicators indicating that one or more media items are not currently selected. In some embodiments, an unselected indicator is visually associated with a currently unselected media item and is optionally not displayed when that same media item is selected.
For example, in Fig. 8W, shared collection interface 814 includes a selection indicator 824A associated with media item 816A, and a selection indicator 824A associated with media item 816B. Thus, both media items 816A and 816B are currently selected.
In some embodiments, in response to entering the selection mode, the device (e.g., 600) displays an affordance for saving a selection of media items. For example, in Fig. 8W, device 600 displays a done affordance 814G (e.g., replacing affordance 814E) that, when selected, causes the currently selected media items to be added to a media library associated with the device.
In some embodiments, while in the selection mode, the device (e.g., 600) displays an indication of the amount of currently selected media items. For example, in Fig. 8W, shared collection interface 814 includes indicator 814D, which has been updated to indicate the amount (e.g., number) of media items in the shared collection that are currently selected (e.g., "30 photos selected").
In some embodiments, in response to entering the selection mode, an initial set of media items is currently selected. In some embodiments, the initial set of media items includes all of the media items in the collection of media items. For example, in FIG. 8W, all 30 media items of the collection are selected by default upon entering the selection mode (e.g., when the selection mode is first entered). In some embodiments, the initial set of media items includes fewer than all of the media items in the collection. For example, upon entering the selection mode (as shown in FIG. 8W), one or more media items are not selected by default (e.g., do not have a selection indicator associated with them). In some embodiments, the initial set of media items is an empty set (e.g., includes no media items). For example, upon entering the selection mode, no media items are selected by default, but the user can add to the empty set by selecting one or more media items.
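The selection-mode behavior described above can be modeled in a minimal Python sketch. This is a hypothetical illustration (not the patent's or Apple's implementation): entering the selection mode seeds an initial selected set, which may be all items, none, or a subset, and the class and method names are invented for this example.

```python
class SharedCollectionInterface:
    """Hypothetical model of a shared collection interface with a selection mode."""

    def __init__(self, media_items):
        self.media_items = list(media_items)
        self.selection_mode = False
        self.selected = set()

    def enter_selection_mode(self, initial="all"):
        """Enter selection mode with a default initial selection."""
        self.selection_mode = True
        if initial == "all":        # e.g., all 30 items selected by default
            self.selected = set(self.media_items)
        elif initial == "none":     # empty set; the user builds the selection
            self.selected = set()
        # a "subset" variant would seed self.selected via selection criteria


ui = SharedCollectionInterface([f"816{c}" for c in "ABCDE"])
ui.enter_selection_mode(initial="all")
assert ui.selection_mode and len(ui.selected) == 5
```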
As previously described, while in the selection mode, the user may customize the selection of media items. In some embodiments, while displaying the shared collection interface (e.g., 814) in the grid view, and while in the selection mode, the device (e.g., 600) receives user input representing a request to toggle selection of a media item. For example, in FIG. 8X, the device 600 receives user input 836 representing a request to toggle selection of media item 816B, which is currently selected, while the shared collection interface 814 is in the grid view. At FIG. 8Y, in response to receiving user input 836, device 600 has toggled selection of media item 816B, which is now unselected (e.g., selection indicator 824A is no longer displayed in association with media item 816B and unselected indicator 824B is now displayed in association with media item 816B).
In some embodiments, the device (e.g., 600) updates the indication of the number of currently selected media items in response to the selection of a media item being toggled. For example, FIG. 8Y shows that indicator 814D has been updated to indicate that the number of selected media items has changed to 29 items (from 30 items in FIG. 8X). In some embodiments, the indication of the number of currently selected media items is an affordance (e.g., for saving the media items). For example, FIG. 8Y illustrates that save affordance 814J has been updated to indicate that 29 media items are selected (e.g., out of a total of 30 items in the collection, as indicated by indicator 814I), and now reads: "Add 29 to Library".
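The toggle-and-update behavior can be sketched in a few lines. This is a hypothetical model (the function names and label text are assumptions for illustration, not the patent's implementation): toggling an item flips its membership in the selected set, and the save affordance's label is re-derived from the new count.

```python
def toggle_selection(selected, item):
    """Toggle membership of item in the selected set; return the new count."""
    if item in selected:
        selected.remove(item)   # toggling a selected item deselects it
    else:
        selected.add(item)      # toggling an unselected item selects it
    return len(selected)


def save_affordance_label(selected_count):
    """Re-derive the save affordance's label from the current count."""
    return f"Add {selected_count} to Library"


selected = {f"item{i}" for i in range(30)}   # e.g., all 30 items selected
count = toggle_selection(selected, "item5")  # deselect one (FIG. 8X -> FIG. 8Y)
assert count == 29
assert save_affordance_label(count) == "Add 29 to Library"
```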
FIGS. 8Z-8AC illustrate exemplary techniques for transitioning between a grid view and a one-up view while remaining in a selection mode. As described above, in response to user input, the shared collection interface (e.g., 814) can transition between a grid view and a one-up view. For example, while in the selection mode in the grid view, the device optionally provides the user with the option of transitioning to a one-up view while remaining in the selection mode. In some embodiments, a device (e.g., 600) receives user input representing a request to transition to a one-up view. In some embodiments, the user input is associated with a location of a media item. For example, in FIG. 8Z, the device 600 receives user input 838 at a location associated with media item 816B. In this example, the user input 838 is a de-pinch gesture centered on media item 816B.
In some embodiments, in response to receiving user input representing a request to transition to a one-up view that is associated with a location of a media item, a device (e.g., 600) displays that media item in the one-up view. For example, in FIG. 8AA, the device 600 displays media item 816B in a one-up view in response to receiving user input 838. Further, because user input 838 is a de-pinch gesture associated with (e.g., centered on) the location of media item 816B, the one-up view is a one-up view of media item 816B.
In some embodiments, if a request to transition between a grid view and a one-up view is received while a device (e.g., 600) is currently in a selection mode, the device remains in the selection mode after transitioning between the grid view and the one-up view. For example, in FIG. 8AA, after transitioning to the one-up view, the shared collection interface 814 remains in the selection mode (e.g., selection indicators and unselected indicators are displayed, as is the completion affordance).
While in the selection mode, the user may wish to view a particular media item in a one-up view without toggling the selection of that media item. In some embodiments, in response to receiving user input associated with a media item that causes a transition between a one-up view and a grid view while in the selection mode, a device (e.g., 600) enters the one-up view without toggling the selection of the media item. For example, in response to receiving the user input 838 of FIG. 8Z on media item 816B (which is unselected), the device transitions to a one-up view, as shown in FIG. 8AA, but media item 816B remains unselected.
In some embodiments, if the user input associated with the location of the media item is a first gesture (e.g., 836), the device (e.g., 600) toggles selection, and if the user input (e.g., 838) is a second gesture different from the first gesture, the device transitions to a one-up view without toggling selection. For example, in response to the tap gesture represented by user input 836 (FIG. 8X), device 600 toggles the selection of media item 816B. However, in response to the de-pinch gesture represented by user input 838 (FIG. 8Z) at the location of media item 816B, device 600 transitions to a one-up view of media item 816B without toggling its selection. In this way, the user can examine a media item of interest in a one-up view without causing a selection. While in the one-up view, the user can then toggle the selection of the media item (e.g., or of other media items), as shown in FIGS. 8AB-8AC, described below.
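The gesture-dependent dispatch just described can be sketched as a small state machine. This is a hypothetical illustration under assumed names (`handle_gesture`, a `state` dict): while in selection mode, a tap at an item's location toggles that item's selection, whereas a de-pinch changes the view to a one-up view of the item (and a pinch returns to the grid view) without touching the selected set.

```python
def handle_gesture(state, gesture, item=None):
    """Dispatch a gesture while in selection mode.

    state: dict with 'selected' (set of item ids) and 'view' (kind, item).
    """
    if gesture == "tap" and item is not None:
        if item in state["selected"]:
            state["selected"].discard(item)   # tap toggles selection off
        else:
            state["selected"].add(item)       # tap toggles selection on
    elif gesture == "de-pinch" and item is not None:
        state["view"] = ("one-up", item)      # view changes; selection does not
    elif gesture == "pinch":
        state["view"] = ("grid", None)        # back to grid; selection intact
    return state


state = {"selected": {"816A"}, "view": ("grid", None)}
handle_gesture(state, "de-pinch", "816B")
assert state["view"] == ("one-up", "816B")
assert state["selected"] == {"816A"}          # unchanged by the transition
handle_gesture(state, "tap", "816B")
assert state["selected"] == {"816A", "816B"}
```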
In some embodiments, the currently selected set of media items remains unchanged in response to a transition between the grid view and a one-up view (e.g., transitioning to or from the grid view). For example, as shown in FIG. 8Z, the total number of currently selected media items is 29, and the only unselected media item is media item 816B. As shown in FIG. 8AA, after transitioning to the one-up view, the currently selected set of media items remains 29 items and the unselected media item remains 816B. Thus, the device can move between different views of the media items in the collection without inadvertently changing the currently selected set of media items.
FIGS. 8AB-8AC illustrate exemplary techniques for toggling the selection of media items while in a one-up view. Upon entering a one-up view, as shown in FIG. 8AA, the device optionally provides the user with the option of making changes to the currently selected set of media items. In FIG. 8AB, the device 600 receives user input 840 representing a request to toggle selection of media item 816B (currently unselected) in the one-up view. In FIG. 8AC, the device 600 toggles selection of media item 816B (so that it becomes selected) in response to receiving user input 840.
FIGS. 8AD-8AG illustrate exemplary techniques for simultaneously transitioning between views of a shared collection interface and entering a selection mode. In some embodiments, a device (e.g., 600) receives user input (e.g., 842) representing a request to transition between a grid view and a one-up view while not in a selection mode, and in response transitions between the grid view and the one-up view and enters the selection mode. For example, FIG. 8AD shows shared collection interface 814 when not in the selection mode (e.g., as also shown in FIG. 8V). In FIG. 8AD, device 600 receives a user input 842 representing a de-pinch gesture (e.g., as described above) and, in response, displays shared collection interface 814 in a one-up view, as shown in FIG. 8AE.
FIG. 8AE illustrates shared collection interface 814 in a one-up view while in the selection mode. For example, in response to receiving the de-pinch gesture user input 842, device 600 displays shared collection interface 814 in a one-up view and enters a selection mode. In some embodiments, the user input representing a request to transition between the grid view and a one-up view while not in the selection mode is associated with a location associated with a media item. For example, the user input 842 is a de-pinch gesture centered on the location of media item 816B, and thus a one-up view of that media item is entered. In some embodiments, in response to a user input representing a request to transition between the grid view and a one-up view while not in the selection mode, if the user input is a first gesture, the device enters a one-up view without entering the selection mode, and if the user input is a second gesture different from the first gesture, the device enters a one-up view and enters the selection mode. In some embodiments, a gesture is one of: a tap gesture, a pinch gesture, a de-pinch gesture, a deep press, or a press and hold. For example, the first gesture can be a tap gesture and the second gesture can be a de-pinch gesture. For example, as shown in FIG. 8AD, because the user input 842 is a de-pinch gesture (e.g., an exemplary second gesture), the device 600 transitions to displaying the one-up view and enters the selection mode (as shown in FIG. 8AE). If the user input 842 had been a tap gesture (e.g., an exemplary first gesture) received while not in the selection mode, the device 600 would transition to displaying the one-up view without entering the selection mode. For example, in response to a tap gesture, device 600 would display shared collection interface 814 as shown in FIG. 8J, but with media item 816B displayed in area 814A.
After entering a one-up view and a selection mode in response to the same user input (e.g., as shown in FIG. 8AE), the user can toggle selection of media items as previously described. For example, in FIG. 8AF, device 600 receives user input 844 associated with the location of media item 816B (e.g., selection of selection indicator 824A of media item 816B). As shown in FIG. 8AG, in response to receiving user input 844, device 600 toggles media item 816B to unselected (e.g., unselected indicator 824B is now displayed and selection indicator 824A is no longer displayed; indicator 814D updates from "30" to "29" to indicate that an item has been deselected).
FIGS. 8AG-8AH illustrate an exemplary technique for transitioning between a one-up view and a grid view. For example, while in a one-up view, the device optionally provides the user with the option of transitioning back to the grid view. In some embodiments, in response to receiving user input (e.g., 845) while displaying one of the media items of the collection in a one-up view, the device (e.g., 600) transitions from the one-up view to the grid view. For example, in FIG. 8AG, the device receives user input 845 representing a pinch gesture (e.g., a gesture in which two contacts move closer together), and in response displays interface 814 in the grid view, as shown in FIG. 8AH.
FIG. 8AH shows an exemplary grid view. At FIG. 8AH, the device 600 receives a user input 846 corresponding to a selection (e.g., a tap) of the save affordance 814J. In response to receiving the user input 846, the device 600 causes the selected media items to be added to a media library associated with the device, as previously described. In some embodiments, the device (e.g., 600) displays a confirmation that the media items have been successfully added. For example, in FIG. 8AI, device 600 displays confirmation indicator 848 after user input 846 is received (e.g., in response to, or after, the media items having been successfully added to the library).
FIGS. 8AJ-8AL illustrate exemplary interfaces for viewing media items that have been added to a media library. FIG. 8AJ illustrates an exemplary shared collection interface after one or more media items are added to a media library. In some embodiments, after adding one or more media items to the media library, the device (e.g., 600) displays an affordance for viewing the media items in the library (e.g., in a browsing interface of a photo application). For example, as shown in FIG. 8AJ, save affordance 814J has been replaced with affordance 814L, labeled "view library". In FIG. 8AK, a device (e.g., 600) receives a user input 850 corresponding to selection of the affordance 814L. In response to receiving the user input 850, the device 600 displays the library interface 852 of FIG. 8AL.
In some embodiments, after media items are added to the library, the device (e.g., 600) exits the selection mode. For example, turning briefly back to FIG. 8AJ, interface 814 is no longer in the selection mode (e.g., no selection indicators or unselected indicators are displayed). For example, the selection mode is no longer required because the device has caused the requested selection of media items to be added to the media library.
FIG. 8AL illustrates an exemplary library interface 852. In some embodiments, a device (e.g., 600) displays a library interface (e.g., 852) in response to a user input (e.g., 850). In this example, library interface 852 shows media items included in a media library associated with device 600. Notably, library interface 852 includes an area 852A that includes representations of one or more media items in the media library associated with device 600. As shown, area 852A includes media items from the shared collection of media items received from William and saved in FIG. 8AH (e.g., in response to user input 846). For example, area 852A includes representations of media items 816A and 816B. In some embodiments, the library interface includes representations of media items added from the received shared collection as well as representations of media items that were in the library before media items were added from the received shared collection. In this example, representations of media items from the user's library that were not added from the collection shared by William are also included (e.g., media items 854F and 854G, described below in connection with FIG. 8AM).
In some embodiments, the library interface is associated with a photo application. For example, displaying library interface 852 may include opening or launching a photo application.
After receiving the shared collection, the device optionally provides the user with an option to view a personalized media interface (e.g., as described above with respect to interface 658 of FIGS. 6O-6AAB). In some embodiments, the device receives user input representing a request to view the personalized media interface and, in response, displays the personalized media interface. For example, in response to user selection of the affordance 852B of FIG. 8AL, the device 600 displays a personalized media interface (e.g., similar to 658).
FIGS. 8AM-8AQ illustrate an exemplary interface for sharing back one or more media items. The device optionally provides the user with the option of sharing back one or more media items (e.g., a collection) after receiving a shared collection from another user (e.g., from a user account or a device associated with that user). In some embodiments, a device (e.g., 600) outputs (e.g., displays) a sharing prompt (e.g., 854) to share media items with another user (e.g., a user other than the user of device 600). FIG. 8AM illustrates an exemplary prompt 854 for sharing one or more collections of media items with the sender of a shared collection. In this example, William (user 803B) shares a media collection, represented by 804C in transcript 804A, with the user of device 600 (user 803A). Based on receiving an indication of the collection shared by William, the device 600 displays a sharing prompt 854 that suggests one or more media items (e.g., suggested media items) to share with William. Thus, the sharing prompt provides quick and easy access to a sharing suggestion (e.g., a suggested collection) in order to reciprocate the sharing of media.
In some embodiments, a prompt to share media items includes one or more features of the sharing prompt 854. In some embodiments, the prompt (e.g., 854) includes an indication of a context. In some embodiments, the context is a context that is related to both the received shared collection and the suggested collection. For example, the prompt 854 includes an area 854A that includes text indicating that the sharer (William) shared media items related to the location Lake Tahoe and the time December 1 through 4, and that suggested media items related to this context (e.g., 854F and 854G) are available to share back (e.g., "William shared photos from Lake Tahoe December 1 to 4. Would you like to share yours?"). In some embodiments, the prompt (e.g., 854) includes representations of one or more suggested media items. For example, the prompt 854 includes representations of media items 854F and 854G from the suggested collection. In some embodiments, the prompt (e.g., 854) includes one or more of the features of the interfaces 612 and/or 814, as described herein. For example, the prompt 854 includes a title card 854B (e.g., that includes an indication of a geographic location associated with the suggested collection (Lake Tahoe), an indication of a time period associated with the suggested collection (e.g., "December 1 to 4"), and a representative image (e.g., from the suggested collection)).
In some embodiments, the prompt includes an affordance for sharing the suggested collection with the recipient (e.g., the user who shared the received collection that caused the prompt to be displayed). For example, prompt 854 includes a sharing affordance 854C. In some embodiments, the affordance (e.g., 854C) for sharing includes one or more of the features described above with respect to any of affordances 604I, 612E, 664A, 666G, or 666H. For example, if the user changes the selection of media items for sharing, affordance 854C can be updated to reflect the number of media items selected (e.g., "Share 22" if 22 media items are selected).
In some embodiments, the prompt (e.g., 854) includes an affordance (e.g., 854D) for entering the selection mode. For example, prompt 854 includes a selection affordance 854D for entering a selection mode. Selection affordance 854D includes one or more features as described above with respect to affordance 814E. For example, in response to a user selection of the selection affordance 854D, the device 600 enters a selection mode (e.g., allowing the user to customize the media items selected for sharing). In some embodiments, a prompt (e.g., 854) is displayed (e.g., initially) in the selection mode. For example, the device 600 can display the prompt 854 in a selection mode without first requiring user input (e.g., selection of the affordance 854D). In some embodiments, fewer than all of the media items of the suggested collection are selected for sharing. For example, the suggested collection of FIG. 8AM includes 23 photos and 1 video (e.g., as shown by indicator 854E), and fewer than 24 media items (e.g., only 20 media items) can be initially (e.g., automatically) selected for sharing. In some embodiments, the media items are selected (e.g., automatically) based on selection criteria (e.g., as described above).
As discussed above (e.g., with respect to FIG. 6), the description of a suggested collection determined to be related to content in a transcript (e.g., a received shared collection) applies equally to a prompt (e.g., 811A, 813, or 854) to share media items after receipt of a shared collection, and is incorporated here. In some embodiments, the suggested media items are related to the received collection of media items based on a context. For example, the suggested media items are contextually related (e.g., by a time or time range, a geographic location or set of geographic locations, identified faces depicted in the media items, or other metadata). In some embodiments, the context is determined based on the received collection of media items. For example, the context is a time range and/or a geographic location, or corresponds to an identified event associated with the received shared collection. In the example of FIG. 8AM, the suggested collection relates to the context of an event defined by a geographic location and a time: the location Lake Tahoe and the time range December 1 through 4. Notably, the received shared collection from William, represented by 804C, also corresponds to the geographic location Lake Tahoe and the time range December 1 through 4 (e.g., as shown by 804C in FIG. 8A). Thus, for example, the device 600 displays a sharing prompt 854 that suggests sharing the Lake Tahoe collection because it relates to a context (e.g., a geographic location) that is also related to the received shared collection from William.
In some embodiments, the suggested media items are not included in the received shared collection. For example, the device 600 suggests sharing with William media items (e.g., the collection represented in interface 854) that were not received in the collection shared by William (e.g., in the shared collection represented by 804C). In this way, device 600 prevents the sharing or display of duplicate media items (e.g., items that have already been shared between, and are thus possessed by, both users). In some embodiments, identifying the one or more media items that share a context is performed by the device (e.g., 600). In some embodiments, the identifying is performed by a remote device (e.g., a server).
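The suggestion logic described in the last two paragraphs can be sketched as a filter over the recipient's library. This is a hypothetical Python model (the function, field names, and sample data are invented for illustration): a suggested item must match the received collection's context (here, location and date range) and must not have come from the received shared collection itself, which avoids duplicate sharing.

```python
from datetime import date


def suggest_share_back(library, received, location, start, end):
    """Return library items matching the received collection's context,
    excluding items that were themselves received from the sender."""
    received_ids = {m["id"] for m in received}
    return [
        m for m in library
        if m["id"] not in received_ids       # not already shared by the sender
        and m["location"] == location        # matches the geographic context
        and start <= m["date"] <= end        # matches the time-range context
    ]


library = [
    {"id": "854F", "location": "Lake Tahoe", "date": date(2018, 12, 2)},
    {"id": "854G", "location": "Lake Tahoe", "date": date(2018, 12, 3)},
    {"id": "900A", "location": "Cupertino",  "date": date(2018, 12, 2)},
    {"id": "816A", "location": "Lake Tahoe", "date": date(2018, 12, 1)},
]
received = [{"id": "816A"}]  # already saved from William's shared collection
suggested = suggest_share_back(library, received, "Lake Tahoe",
                               date(2018, 12, 1), date(2018, 12, 4))
assert [m["id"] for m in suggested] == ["854F", "854G"]
```

The same shape of filter could run on the device or on a remote server, as the embodiments above note.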
In some embodiments, the prompt (e.g., 854) is displayed in response to an additional user action following receipt of an indication of a shared collection from another user (e.g., 804C of FIG. 8A). For example, the prompt 854 can be displayed after the user of the device 600 accesses the personalized media interface (e.g., by selecting the affordance 852B in FIG. 8AL, a tab labeled "For You"). For example, the prompt can be displayed after the user views the interface associated with the received shared collection and adds media to the media library (e.g., after or in response to user input: 818 of FIG. 8K, 830 of FIG. 8T, or 846 of FIG. 8AH). For example, in FIG. 8AK, after adding media items from the collection received from William to the media library, the device 600 receives user input representing a request to close (e.g., stop displaying) the interface 814 and, in response, displays the prompt 854. Thus, after the user has finished viewing and/or saving media items from the received shared collection, the device displays a prompt (e.g., 854).
In some embodiments, the prompt is displayed in response to receiving an indication that the first user (e.g., 803B) has shared a first collection with the second user (e.g., of device 600). For example, as described above, in response to receiving an indication that William (e.g., user 803B) has shared the Lake Tahoe collection associated with representation 804C in FIG. 8I, device 600 displays one or more of the following exemplary prompts: sharing affordance 811A (e.g., including the text "Share Back") or sharing affordance 813. In some embodiments, the prompt (e.g., 811A and/or 813) is displayed concurrently with the transcript (e.g., 804A) of the message conversation with the recipient (e.g., user 803B). In some embodiments, the prompt is an affordance (e.g., 811A or 813 of FIG. 8I).
In FIG. 8AN, the device 600 receives a user input 856 corresponding to selection of the affordance 854C, the user input representing a request to share the suggested collection of media items. In some embodiments, in response to receiving a request to share the suggested collection of media items (e.g., 856), a device (e.g., 600) prepares to share one or more media items from the suggested collection (e.g., the selected media items).
FIG. 8AO illustrates an exemplary message interface 804. FIG. 8AO shows an exemplary representation 804D of a suggested media collection that is being shared back. In some embodiments, preparing to share the suggested collection includes inserting a representation of the media items into a text entry field (e.g., text entry field 804F of FIG. 8AO). For example, the device 600 inserts the representation 804D into the text entry field 804F in FIG. 8AO, allowing the user to optionally add accompanying text (or other content) prior to sharing (e.g., by selecting an affordance, such as 804E). In FIG. 8AP, the device receives a user input 858 corresponding to selection of affordance 804E and, in response, shares the suggested collection (e.g., the selected media items) with the other user (e.g., 803B, William).
FIG. 8AQ illustrates an exemplary representation of a suggested collection that has been shared back. In some embodiments, sharing includes transmitting a message that provides access to the media items of the media collection. In some embodiments, sharing includes inserting the media items, or a representation of the media items, into a transcript of the message conversation. For example, as shown in FIG. 8AQ, device 600 has caused representation 804D to be inserted into the transcript in response to user input 858.
In some embodiments, in response to receiving a request to share the suggested collection of media items (e.g., 856), a device (e.g., 600) immediately transmits a message providing access to the media items of the suggested collection (e.g., the selected media items). For example, the device 600 can display the message interface 804 as shown in FIG. 8AQ in response to user input 856 at prompt 854, optionally without user input 858 or an opportunity to add accompanying content.
In some embodiments, the representation of the collection of media items is associated with a displayed receipt indication. For example, in FIG. 8AQ, representation 804D is associated with a receipt indicator 804G, which indicates that the shared suggested collection (e.g., the message including the representation) has been delivered to one or more recipients (e.g., users 803B and 803C). In some embodiments, the receipt indicator includes information regarding the recipient's activity accessing and/or viewing the shared collection. For example, rather than (or in addition to) indicating "delivered", indicator 804G can include text such as: "read", "read at [time]", "viewed n times", "opened", or "not viewed".
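A receipt indicator of the kind just listed can be derived from simple recipient-activity fields. The following is a hypothetical sketch (the function name, parameters, and label precedence are assumptions, not the patent's implementation) that picks one status string from delivery, read, and view-count state.

```python
def receipt_indicator(delivered, opened, view_count=0):
    """Return the indicator text for a shared collection's message,
    preferring the most specific known recipient activity."""
    if not delivered:
        return None                          # nothing to show yet
    if view_count > 1:
        return f"Viewed {view_count} times"  # repeated viewing activity
    if opened:
        return "Read"                        # opened/read at least once
    return "Delivered"                       # delivered but not yet viewed


assert receipt_indicator(True, False) == "Delivered"
assert receipt_indicator(True, True) == "Read"
assert receipt_indicator(True, True, view_count=3) == "Viewed 3 times"
```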
FIGS. 9A-9G are flow diagrams illustrating a method 900 performed using an electronic device, according to some embodiments. The method 900 is performed at a device (e.g., 100, 300, 500, 600) having a display and one or more input devices. Some operations in method 900 are optionally combined, the order of some operations is optionally changed, and some operations are optionally omitted.
As described below, the method 900 provides an intuitive way to share a suggested collection of media items that is related to a received collection of media items. The method reduces the cognitive burden on a user when sharing a suggested collection of media items related to a received collection of media items, thereby creating a more efficient human-machine interface. For battery-operated computing devices, enabling a user to share a suggested collection of media items related to a received collection of media items faster and more efficiently conserves power and increases the time between battery charges.
An electronic device (e.g., 600) receives (902), from an external device (e.g., 100, 300, 500), an indication that a first user (e.g., a device or an account associated with the first user) has shared a first collection of media items with a second user (e.g., a device (e.g., 600) or an account associated with the second user) (e.g., provided access to the media items (e.g., via a link and/or respective permissions) or sent the actual media items). For example, in FIG. 8A, device 600 receives an indication that user 803B ("William") has shared the collection of media items represented by representation 804C, as shown in FIG. 8A.
After receiving (e.g., in response to) the indication that the first user shares the first collection of media items with the second user, the electronic device (e.g., 600) outputs (904) (e.g., displays on the display) a prompt (e.g., 811A of FIG. 8I, 813 of FIG. 8I, or 854 of FIG. 8AM) to share, with the first user (e.g., 803B) (e.g., with a device or an account associated with the first user), one or more suggested media items (e.g., 854F, 854G of FIG. 8AM) associated with the second user (e.g., 803A) (e.g., included in a media library, stored locally on the device and/or remotely, associated with the second user) that are related to the first collection of media items based on a context (906), wherein the context is determined based on the first collection of media items (e.g., the context is a time range and/or a geographic location, or corresponds to an identified event), and wherein the one or more suggested media items are not included (908) in the first collection (e.g., the one or more media items were included in the media library of the second user before the indication of the shared first collection was received).
Displaying a prompt to share with the first user one or more suggested media items, associated with the second user, that are related based on a context to the first collection of media items (shared by the first user) allows the second user to quickly identify media items that the user may want to share with the first user. Performing an operation when a set of conditions has been met, without requiring further user input, enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the prompt (e.g., 854) is output in response to a user input (e.g., selection of 811A or 813 of FIG. 8I, user input 846 of FIG. 8AH, or selection of 852B of FIG. 8AL). In some embodiments, the prompt is output in response to one or more media items in the first collection being added to a media library associated with the electronic device (e.g., after displaying interface 814, as shown in FIG. 8AI).
In some embodiments, identifying the one or more media items that share a context is performed by the electronic device (e.g., 600). In some embodiments, identifying the one or more media items that share the context is performed by one or more remote devices (e.g., servers).
In some embodiments, while displaying a representation of the one or more suggested media items associated with the second user (e.g., 854F or 854G of FIG. 8AM, 804D of FIG. 8AO), the electronic device (e.g., 600) displays (910), on the display, a first affordance (e.g., 854C of FIG. 8AM, or 804E of FIG. 8AO).
The electronic device (e.g., 600) receives (912), via the one or more input devices, a first input (e.g., 856 of FIG. 8AN, or 858 of FIG. 8AP) representing selection of the first affordance (e.g., 854C of FIG. 8AM, or 804E of FIG. 8AO).
In response to receiving the first input (e.g., 856 of FIG. 8AN, or 858 of FIG. 8AP), the electronic device (e.g., 600) transmits (914), to the first user (e.g., 803B) (e.g., as part of a message conversation with the first user), a message (e.g., a message comprising representation 804D of FIG. 8AQ) that provides access to at least a portion of the one or more suggested media items (e.g., by transmitting the media items themselves, or by providing a link or other data and/or permissions for accessing the media items).
In some embodiments, after receiving an indication (e.g., 804C of FIG. 8A) that the first user (e.g., 803B) shares the first collection of media items with the second user (e.g., 803A) and before outputting the prompt to share, the electronic device (e.g., 600) receives (916), via the one or more input devices, a second input (e.g., 818 of FIG. 8K, 830 of FIG. 8T, 846 of FIG. 8AH) representing a request to add one or more media items of the first collection of media items to a media library associated with the second user (e.g., a media library stored locally on a device (e.g., 600) associated with the second user, and/or a media library associated with the second user stored remotely (e.g., cloud-based media storage)). In response to receiving the second input, the electronic device (e.g., 600) causes (918) the one or more media items in the first collection of media items to be added to the media library associated with the second user (e.g., as indicated by 820 of FIG. 8L, 832 of FIG. 8U, or 848 of FIG. 8AI). After causing the one or more media items in the first collection to be added to the media library associated with the second user, the electronic device (e.g., 600) displays (920) a prompt (e.g., 854 of FIG. 8AM) to share the one or more suggested media items associated with the second user, wherein the one or more suggested media items associated with the second user are selected (922) from the media library associated with the second user, and wherein the one or more suggested media items associated with the second user exclude (924) the one or more media items from the first collection that have been added to the media library associated with the second user. For example, after adding media items from the collection received from William, as shown in the interface 814 of FIG. 8AJ, to the media library associated with the electronic device, the electronic device displays the prompt 854 of FIG. 8AM suggesting sharing of media items from the media library; however, the suggested media items do not include media items from William (e.g., the media items represented in the prompt of FIG. 8AM (e.g., 854F and 854G) do not include the media items from the William collection represented in FIG. 8AJ). In this way, the electronic device avoids sharing, with a sender, suggested media items that were received from that sender.
Selecting the suggested media items from the user's media library, which includes the media added from the first collection, while excluding the media items from the first collection from the suggestions, allows a user to quickly identify and send media items that were not received from the first user, without requiring excessive input or placing a high cognitive burden on the user. Performing an optimized operation when a set of conditions has been met, without requiring further user input, enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
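The exclusion behavior described above can be sketched as a simple filter over the user's library. This is a minimal, hypothetical illustration; the patent does not specify an implementation, and all function and field names here are illustrative.

```python
# Hypothetical sketch: select suggested media items from the recipient's
# library, excluding the items just received from the sender, so the sender
# is never sent back their own media.
def suggest_media_to_share(library, received_ids, is_relevant):
    """Return library items related to the received collection, omitting
    the received items themselves."""
    return [item for item in library
            if item["id"] not in received_ids and is_relevant(item)]

library = [
    {"id": 1, "event": "Lake Tahoe"},  # added from the sender's shared collection
    {"id": 2, "event": "Lake Tahoe"},  # the recipient's own photo of the same event
    {"id": 3, "event": "Birthday"},    # unrelated event
]
received_ids = {1}
suggested = suggest_media_to_share(
    library, received_ids, lambda item: item["event"] == "Lake Tahoe")
print([item["id"] for item in suggested])  # → [2]
```

Only the recipient's own related photo is suggested; the item received from the sender is filtered out even though it matches the context.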
In some embodiments, the context is determined (926) based on an identified face, the identified face being identified in: one or more media items of the first set of media items, and one or more media items of the one or more suggested media items. In some implementations, the one or more media items of the first set of media items and the one or more media items of the one or more suggested media items are different media items. For example, the exemplary first set represented in interface 814 of FIG. 8AJ, received from William (e.g., 805B), includes a media item 814K that depicts the identified face of a person (e.g., William) snowboarding, and the exemplary one or more suggested media items represented in interface 854 of FIG. 8AM include media item 854F, which depicts the same identified face of the same person skateboarding. Thus, the one or more suggested media items are determined to be relevant to the first set based on the context determined from the face common to the two media items 814K and 854F.
Determining the context using the faces identified in the first set and in the one or more suggested media items allows the user to quickly identify media that the second user may want to share with the first user, in particular media depicting faces common to both sets. Performing an optimized operation when a set of conditions has been met, without requiring further user input, enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
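The face-based relevance determination can be sketched as a set intersection over identified-face labels. This is a hypothetical illustration only; the actual face-identification pipeline is not described in the patent, and all names are assumed.

```python
# Hypothetical sketch: suggest library items that depict a face which also
# appears in the received collection (e.g., William in both 814K and 854F).
def related_by_face(received_items, library_items):
    """Return library items sharing at least one identified face with the
    received set."""
    received_faces = set().union(*(item["faces"] for item in received_items))
    return [item for item in library_items if item["faces"] & received_faces]

received = [{"id": "814K", "faces": {"william"}}]   # snowboarding photo
library = [
    {"id": "854F", "faces": {"william"}},           # same face, skateboarding
    {"id": "854X", "faces": {"greg"}},              # different person
]
print([i["id"] for i in related_by_face(received, library)])  # → ['854F']
```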
In some implementations, the context is determined (928) based on an event associated with the first set of media items and with the one or more suggested media items. For example, the exemplary first set represented in interface 814 of FIG. 8AJ, received from William (e.g., 805B), includes media items associated with the event "Lake Tahoe" occurring December 1 through December 4, and the exemplary one or more suggested media items represented in interface 854 of FIG. 8AM also include one or more media items associated with the event named "Lake Tahoe" occurring December 1 through December 4.
Determining the context based on an event associated with the first set and with the one or more suggested media items allows the user to quickly identify media that the second user may want to share with the first user. Performing an optimized operation when a set of conditions has been met, without requiring further user input, enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the event corresponds to a time range, and the first set and the one or more suggested media items each include (930) one or more media items captured within the time range. In some embodiments, the time range is a range that begins at a first particular time and/or a first particular date and ends at a second particular time and/or a second particular date. In some embodiments, a media item includes or is associated with metadata indicating that the media item was captured at a particular time within the time range. For example, the exemplary first set received from William (e.g., 805B), represented in interface 814 of fig. 8AJ, includes media items associated with the event "Lake Tahoe" occurring during the time range of December 1 to December 4, and the exemplary one or more suggested media items represented in interface 854 of fig. 8AM also include one or more media items associated with the event named "Lake Tahoe" occurring during the time range of December 1 to December 4.
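The time-range test described above amounts to checking each item's capture metadata against the event's start and end. A minimal sketch, assuming each media item carries a `captured` date (all names are hypothetical):

```python
from datetime import date

# Hypothetical sketch: keep media items whose capture date falls within the
# event's time range (e.g., the "Lake Tahoe" event, Dec 1 - Dec 4).
def captured_within(event_start, event_end, items):
    """Return items captured within [event_start, event_end]."""
    return [i for i in items if event_start <= i["captured"] <= event_end]

items = [
    {"id": "a", "captured": date(2018, 12, 2)},   # within the event range
    {"id": "b", "captured": date(2018, 12, 10)},  # outside the event range
]
matched = captured_within(date(2018, 12, 1), date(2018, 12, 4), items)
print([i["id"] for i in matched])  # → ['a']
```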
In some embodiments, the event corresponds to a geographic location, and the first set and the one or more suggested media items each include (932) one or more media items captured at the geographic location. In some embodiments, a media item includes or is associated with metadata indicating that the media item was captured at or near the geographic location (e.g., at another nearby geographic location). In some embodiments, the event corresponds to multiple geographic locations (e.g., media items taken at multiple locations during a road trip). For example, the exemplary first set received from William (e.g., 805B), represented in interface 814 of FIG. 8AJ, includes media items associated with the event "Lake Tahoe" occurring at the geographic location Lake Tahoe, and the exemplary one or more suggested media items represented in interface 854 of FIG. 8AM also include one or more media items associated with the event "Lake Tahoe" occurring at the geographic location Lake Tahoe.
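The "at or near the geographic location" check can be sketched as a distance threshold on the item's location metadata. The haversine formula and the 10 km radius below are illustrative assumptions; the patent does not define how proximity is computed.

```python
import math

# Hypothetical sketch: was a media item captured at or near the event's
# location? Uses a great-circle distance with an assumed 10 km threshold.
def near(event_loc, item_loc, max_km=10.0):
    lat1, lon1 = map(math.radians, event_loc)
    lat2, lon2 = map(math.radians, item_loc)
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371.0 * 2 * math.asin(math.sqrt(a)) <= max_km  # Earth radius in km

lake_tahoe = (39.0968, -120.0324)
print(near(lake_tahoe, (39.10, -120.04)))   # captured nearby → True
print(near(lake_tahoe, (37.77, -122.42)))   # San Francisco → False
```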
In some embodiments, after receiving an indication that the first user has shared the first set of media items with the second user, the electronic device (e.g., 600) displays (934), on the display, a second affordance associated with the first set of media items (e.g., 804C of FIG. 8A, 8D, or 8I; 814J of FIG. 8AJ). In some embodiments, the affordance associated with the first set is displayed concurrently with a transmission (e.g., download/upload) progress indicator (e.g., "downloading," "uploading," "Rory is uploading 23 photos," "downloading 23 photos," or the like). For example, as shown in fig. 8A, affordance 804C includes an indication that the respective media items are being uploaded (e.g., by a sender device associated with William) and an upload status (e.g., 2nd of 30). For example, as shown in fig. 8D, affordance 804C includes an indication that the respective media items are being downloaded (e.g., by the recipient device associated with Lynne) and a download status (e.g., 2nd of 30). In some embodiments, in response to user input (e.g., 806 of FIG. 8B) selecting the affordance while a transmission is in progress (e.g., downloading and/or uploading), a preview of one or more media items in the first collection is displayed. In some embodiments, the preview includes representations of one or more media items that are reduced in size and/or quality, displayed grayed out, and/or not selectable. For example, in response to user input 806, the electronic device displays interface 808, as shown in fig. 8C.
In some implementations, the electronic device (e.g., 600) receives (936), via the one or more input devices, a third input (e.g., 810 of fig. 8H or 812 of fig. 8I) representing a selection of the second affordance (e.g., 804C) associated with the first set of media items. In response to receiving the third input representing selection of the second affordance, the electronic device displays (938), on the display, an interface for viewing the first set (e.g., 814 shown in fig. 8J, 8O, 8V, or 8W), the interface including a depiction of at least a portion of the first set of media items (e.g., 816A, 816B). For example, the interface includes a one-up view of a single media item, or a grid of multiple photographs from the first collection.
In some embodiments, while displaying the interface for viewing the first set, the electronic device (e.g., 600) displays (940) a third affordance (e.g., 814C of fig. 8J; 814G of fig. 8T or 8V; or 814J of fig. 8AH). In some embodiments, the electronic device (e.g., 600) receives (942), via the one or more input devices, a fourth input (e.g., 818 of fig. 8K, 830 of fig. 8T, 846 of fig. 8AH) representing a selection of the third affordance. In response to receiving the fourth input representing a selection of the third affordance, the electronic device (e.g., 600) causes (944) one or more media items in the first set to be added to the media library associated with the second user. For example, the electronic device adds (e.g., saves) the media items to a media library on local storage, or transmits a command to a cloud-based service to add the media items to a remotely stored media library (e.g., at the cloud-based service).
In some embodiments, prior to receiving the fourth input representing the selection of the third affordance, and while displaying the interface for viewing the first set, the electronic device (e.g., 600) receives (946) a fifth input (e.g., 822 of fig. 8N, 834 of fig. 8V, or 842 of fig. 8AD) via the one or more input devices. In response to receiving the fifth input, the electronic device (e.g., 600) enters (948) a media item selection mode (e.g., as shown in FIG. 8O, FIG. 8W, or FIG. 8AE). While in the media item selection mode (950): while displaying a representation of a first media item of the first set of media items (e.g., 816B of fig. 8R, 816B of fig. 8X), the electronic device (e.g., 600) receives (952), via the one or more input devices, a sixth input (e.g., 828 of fig. 8R, 836 of fig. 8X) associated with the location of the displayed representation of the first media item. In response to receiving the sixth input (954), the electronic device (e.g., 600): toggles whether the first media item is selected (e.g., as shown in FIG. 8S or FIG. 8Y); in accordance with the toggle resulting in the first media item being selected, displays, on the display, a selection indicator (e.g., 824A of fig. 8R or 8W) associated with the displayed representation of the first media item; and in accordance with the toggle resulting in the first media item being deselected, ceases to display, on the display, the selection indicator associated with the representation of the first media item (e.g., as shown in fig. 8S or fig. 8Y).
In some embodiments, in accordance with a determination (956) that the sixth input is a first gesture (e.g., a tap on the media item) (e.g., user input 828 of fig. 8R or user input 836 of fig. 8X), the electronic device (e.g., 600): toggles whether the first media item is selected (e.g., as shown in FIG. 8S or FIG. 8Y); in accordance with the toggle resulting in the first media item being selected, displays, on the display, a selection indicator (e.g., 824A of fig. 8R or 8W) associated with the displayed representation of the first media item; and in accordance with the toggle resulting in the first media item being deselected, ceases to display, on the display, the selection indicator associated with the representation of the first media item. In some implementations, in accordance with the toggle resulting in the first media item being deselected, the electronic device displays an unselected indicator (e.g., 824B of fig. 8S or 8Y). In accordance with a determination that the sixth input is a second gesture (e.g., user input 838 of fig. 8Z) different from the first gesture (e.g., a deep press gesture, a press-and-hold gesture, or a de-pinch gesture centered on the media item), the electronic device displays (958), on the display, a one-up view of the first media item (e.g., 816B) (e.g., 814 as shown in FIG. 8AA) without toggling whether the first media item is selected. For example, in fig. 8X-8Y, in response to user input 836 (e.g., a tap on media item 816B), the device toggles the selection of media item 816B without entering a one-up view. In fig. 8Z-8AA, in response to user input 838 (e.g., a de-pinch centered on media item 816B), the device enters a one-up view without toggling the selection of media item 816B. In some embodiments, a subsequent user input that is a first gesture (e.g., a tap) received on a media item displayed in the one-up view toggles its selection (e.g., as in figs. 8AB-8AC) (e.g., user input 840 of fig. 8AB).
Toggling selection, or forgoing toggling and entering a one-up view of the media item, depending on whether the input is a first gesture or a second gesture, respectively, provides the user with more control over the device by allowing different results from gesture-dependent inputs. Providing additional control over the device without cluttering the UI with additional displayed controls enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
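The gesture-dependent behavior above can be sketched as a dispatch on the gesture type. This is a hypothetical sketch of the control flow only; gesture names and return values are illustrative, not the device's actual event model.

```python
# Hypothetical sketch: in selection mode, a tap (first gesture) toggles
# selection; a deep press, press-and-hold, or de-pinch (second gesture)
# opens the one-up view without changing the selection.
def handle_input_in_selection_mode(gesture, item, selected):
    if gesture == "tap":                                   # first gesture
        if item in selected:
            selected.remove(item)                          # hide selection indicator
        else:
            selected.add(item)                             # show selection indicator
        return "toggled"
    if gesture in ("deep_press", "press_and_hold", "de_pinch"):  # second gesture
        return "one_up_view"                               # selection unchanged
    return "ignored"

selected = set()
print(handle_input_in_selection_mode("tap", "816B", selected), selected)
# → toggled {'816B'}
print(handle_input_in_selection_mode("de_pinch", "816B", selected), selected)
# → one_up_view {'816B'}
```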
In some embodiments, displaying the representation of the first media item (e.g., 816B of fig. 8AA) of the first set of media items includes displaying (960), on the display, a one-up view of the first media item (e.g., as shown in fig. 8Q, 8AA, or 8AG), wherein the sixth input (e.g., 828 of fig. 8R, 840 of fig. 8AB, or 844 of fig. 8AF) is received while displaying the one-up view of the first media item.
In some embodiments, while in the media item selection mode, the electronic device (e.g., 600) receives (962), via the one or more input devices, input (e.g., 828 of fig. 8R, 836 of fig. 8X, 840 of fig. 8AB, or 844 of fig. 8AF) defining a user-selected group of media items in the first set of media items (e.g., selecting and/or deselecting one or more media items), wherein the user-selected group of media items in the first set includes fewer than all of the media items in the first set (e.g., 29 of the 30, as shown in fig. 8AH). After receiving the input defining the user-selected group of media items in the first set, the electronic device receives (964), via the one or more input devices, a fourth input (e.g., 830 of fig. 8T, or 846 of fig. 8AH) representing a selection of the third affordance (e.g., 814G or 814J). In response to receiving the fourth input representing a selection of the third affordance, the electronic device causes (966) the user-selected group of media items in the first set to be added to the media library associated with the second user (e.g., saving photos to a locally stored media library and/or causing a cloud-based service to save photos to a remotely stored media library) without causing media items in the first set that are not included in the user-selected group to be added to the media library associated with the second user. For example, in response to receiving the user input 846 of fig. 8AH, the electronic device causes the 29 selected media items to be added to the media library, and does not cause the 1 unselected media item (media item 816B) to be added to the media library.
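The partial-save behavior reduces to adding only the user-selected subset of the shared collection. A minimal sketch, with hypothetical names and a list standing in for the media library:

```python
# Hypothetical sketch: add only the user-selected subset of a shared
# collection to the library (e.g., 29 of 30 items, with one deselected).
def add_selected_to_library(first_set, user_selected_ids, library):
    """Append the selected items to the library; skip unselected ones."""
    added = [i for i in first_set if i["id"] in user_selected_ids]
    library.extend(added)
    return added

first_set = [{"id": n} for n in range(1, 31)]      # 30 shared items
selection = set(range(1, 31)) - {2}                # item 2 deselected
library = []
added = add_selected_to_library(first_set, selection, library)
print(len(added), len(library))  # → 29 29
```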
In some embodiments, the interface for viewing the first set (e.g., 814 as shown in fig. 8AD) includes (968) a plurality of representations of media items (e.g., 816A and 816B as shown in fig. 8AD) arranged in a grid (e.g., aligned along one or more vertical or horizontal axes), the plurality of representations including a representation of the first media item (e.g., 816B), and in response to receiving the fifth input (970): in accordance with a determination that the fifth input is a third gesture (e.g., a tap on the media item), the electronic device (e.g., 600) displays (972), on the display, a one-up view of the first media item without entering a media item selection mode (e.g., a one-up view such as that shown in fig. 8J, but with media item 816B displayed in area 814A). In accordance with a determination (974) that the fifth input is a fourth gesture (e.g., user input 842 of fig. 8AD) different from the third gesture (e.g., a tap gesture) (e.g., a deep press gesture, a press-and-hold gesture, or a de-pinch gesture centered on the media item), the electronic device displays (976), on the display, a one-up view of the first media item (e.g., as shown in fig. 8AE); and enters (978) a media item selection mode (e.g., as shown in fig. 8AE). In some implementations, the fifth input is received while displaying the first media item in the one-up view and while in the media item selection mode. For example, while displaying a one-up view, as shown in FIG. 8N, the electronic device can enter a selection mode in response to selection of the affordance 814F (e.g., user input 822).
Entering or forgoing entering the media item selection mode, depending on whether the input causing a one-up view of the media item to be displayed is a fourth gesture or a third gesture, provides the user with more control over the device by allowing different results from gesture-dependent inputs. Providing additional control over the device without cluttering the UI with additional displayed controls enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, the first set of media items includes a first group of media items that are not included in a media library associated with the second user and a second group of media items that are included in the media library associated with the second user, and displaying the interface for viewing the first set, including a depiction of at least a portion of the first set of media items (e.g., a single media item in a one-up view, or a title card and a plurality of photographs from the first set), includes: displaying (980) representations of the first group of media items not included in the media library associated with the second user without displaying representations of the second group of media items included in the media library associated with the second user. For example, the electronic device (e.g., 600) does not display (e.g., in interface 814) representations of media items in the first collection that were already included in the media library of the second user (e.g., 805A) before the first user (e.g., 805B) shared the first collection. This can avoid adding duplicate media items to the library (e.g., by not displaying media items that are not new to the user) and makes more efficient use of display space.
Forgoing display of the representations of the second group of media items included in the media library associated with the second user allows the user to view only media items not already included in their media library, avoiding the display and review of duplicate media items. Performing an operation without further user input when a set of conditions has been met enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate input and reducing user error in operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
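The deduplicated display described above can be sketched as filtering the shared collection against the identifiers of items already in the recipient's library. All names here are hypothetical.

```python
# Hypothetical sketch: when rendering a received shared collection, show
# only the items that are new to the recipient's library, hiding duplicates.
def items_to_display(shared_collection, library_ids):
    """Return shared items whose ids are not already in the library."""
    return [i for i in shared_collection if i["id"] not in library_ids]

shared = [{"id": "p1"}, {"id": "p2"}, {"id": "p3"}]
library_ids = {"p2"}  # p2 was already in the recipient's library before sharing
print([i["id"] for i in items_to_display(shared, library_ids)])  # → ['p1', 'p3']
```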
In some embodiments, the electronic device (e.g., 600) receives (982), via the one or more input devices, a seventh input (e.g., 806 of fig. 8B) representing a selection of the second affordance (e.g., 804C). In response to receiving the seventh input representing a selection of the second affordance, and in accordance with a determination that the first set of media items has not been downloaded (e.g., data representing at least a portion of the set of media items has not been downloaded) (e.g., the user has not previously selected 804C or otherwise viewed the shared collection), the electronic device initiates (984) a download of the first set of media items (e.g., the media items represented by the affordance 804C and shown in the interface 814). After initiating the download of the first set of media items, the electronic device detects (986) completion of the download of the first set of media items. In response to detecting (988) that the download of the first set of media items is complete: in accordance with a determination that the second affordance is not currently displayed (e.g., affordance 804C is no longer displayed in the transcript shown in fig. 8F, such as if the transcript has grown, or if the device is no longer displaying the transcript and/or the messaging application, as shown in fig. 6AL), the electronic device displays (990), on the display, a fourth affordance (e.g., 809 of fig. 8G) associated with the first set of media items; in accordance with a determination that the second affordance is currently being displayed, the electronic device forgoes (992) displaying the fourth affordance. The electronic device receives (994), via the one or more input devices, an eighth input (e.g., 810 of FIG. 8H) representing selection of the fourth affordance associated with the first set of media items. In response to receiving the eighth input, the electronic device displays (996), on the display, an interface for viewing the first set (e.g., 814 shown in fig. 8J, 8O, 8V, or 8W), the interface including a depiction of at least a portion of the first set of media items.
Displaying a fourth affordance that causes the interface for viewing the first set to be displayed, upon completion of the download and when the second affordance is no longer displayed, allows the user quick access to the first set without requiring excessive input to locate an affordance to select. Reducing the number of inputs required to perform an operation enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user errors in operating/interacting with the device), thereby further reducing power usage and extending the battery life of the device by enabling the user to use the device more quickly and efficiently.
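The download-completion logic reduces to one conditional: surface a new affordance only if the original one has scrolled away or is no longer on screen. A minimal sketch under that assumption, with illustrative names:

```python
# Hypothetical sketch: on download completion, display the fourth affordance
# (e.g., a banner like 809 of fig. 8G) only when the second affordance
# (e.g., 804C in the transcript) is no longer visible.
def on_download_complete(second_affordance_visible):
    if not second_affordance_visible:
        return "display_fourth_affordance"  # give the user a quick entry point
    return "no_additional_affordance"       # original affordance is still reachable

print(on_download_complete(False))  # → display_fourth_affordance
print(on_download_complete(True))   # → no_additional_affordance
```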
In some embodiments, the indication that the first user has shared the first set of media items with the second user includes a link for accessing the first set of media items, and displaying the second affordance (e.g., 804C of fig. 8A) associated with the first set of media items includes concurrently displaying on the display: the second affordance associated with the first set of media items; and an expiration time indicating when the link for accessing the first set of media items expires. For example, representation 804C in FIG. 8A indicates an expiration date of January 8 in the included text "Link expires January 8."
In some embodiments, a ninth input (e.g., 812 of fig. 8I) representing a selection of the second affordance (e.g., 804C of fig. 8I) associated with the first set of media items is received, via the one or more input devices, after expiration of the link for accessing the first set of media items. In response to receiving the ninth input representing a selection of the second affordance: in accordance with a determination that at least a portion of the first set of media items has previously been downloaded, the electronic device (e.g., 600) displays, on the display, an interface for viewing the first set, the interface including a depiction of at least a portion of the first set of media items (e.g., 814 shown in fig. 8AJ). For example, in response to selection of a representation such as 804C of FIG. 8I after access has expired, if one or more media items from the first collection were downloaded (e.g., before the expiration), the electronic device displays a shared collection interface such as 814 shown in FIG. 8AJ or 854 shown in FIG. 8AL. The shared collection interface can be displayed as a one-up view or as a grid view. In accordance with a determination that at least a portion of the first set of media items has not previously been downloaded, the electronic device forgoes displaying, on the display, the interface for viewing the first set that includes a depiction of at least a portion of the first set of media items. For example, selection of 804C after access has expired can result in an error message being displayed, or in no effect (e.g., no interface is displayed in response). In some embodiments, the electronic device displays (e.g., in association with the second affordance) an indication that the link has expired (e.g., as shown in representation 604H of fig. 6AAC).
Displaying an interface for viewing the first collection in response to selection of an expired link, in accordance with at least a portion of the first collection having been downloaded, provides the user with quick access to the downloaded media without requiring excessive input to find the downloaded media items in a library interface. Reducing the number of inputs required to perform an operation enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user error in operating/interacting with the device), thereby further reducing power usage and extending the battery life of the device by enabling the user to use the device more quickly and efficiently.
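The expired-link behavior above can be sketched as a three-way branch: valid link, expired link with a local copy, or expired link with nothing downloaded. A hypothetical sketch; the return values and date-based expiry check are illustrative assumptions.

```python
from datetime import date

# Hypothetical sketch: opening a shared collection after its access link
# may have expired. A previously downloaded copy remains viewable.
def open_shared_collection(link_expiry, today, downloaded_items):
    if today <= link_expiry:
        return ("view_collection", "remote")      # link still valid
    if downloaded_items:
        return ("view_collection", "local")       # expired, but local copy exists
    return ("link_expired", None)                 # expired and nothing downloaded

expiry = date(2019, 1, 8)
print(open_shared_collection(expiry, date(2019, 1, 10), ["img1"]))
# → ('view_collection', 'local')
print(open_shared_collection(expiry, date(2019, 1, 10), []))
# → ('link_expired', None)
```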
Note that the details of the process described above with respect to method 900 (e.g., fig. 9A-9G) also apply in a similar manner to the methods described below/above. For example, method 900 optionally includes one or more features of the various methods described above with reference to method 700. For the sake of brevity, these details are not repeated in the following.
The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the technology and its practical applications. Those skilled in the art are thus well able to best utilize the techniques and various embodiments with various modifications as are suited to the particular use contemplated.
Although the present disclosure and examples have been fully described with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art. It is to be understood that such changes and modifications are to be considered as included within the scope of the disclosure and examples as defined by the following claims.
As described above, one aspect of the present technology is the gathering and use of data available from various sources to improve the delivery to users of sharing suggestions or any other content that may be of interest to them. The present disclosure contemplates that, in some instances, such gathered data may include personal information data that uniquely identifies or can be used to contact or locate a specific person. Such personal information data can include demographic data, location-based data, telephone numbers, email addresses, Twitter IDs, home addresses, data or records relating to a user's health or fitness level (e.g., vital signs measurements, medication information, exercise information), date of birth, or any other identifying or personal information.
The present disclosure recognizes that the use of such personal information data in the present technology can be used to the benefit of users. For example, the personal information data can be used to deliver targeted sharing suggestions that are of greater interest to the user. Accordingly, use of such personal information data enables users to view sharing suggestions while controlling whether to share content with others. Further, other uses of personal information data that benefit the user are also contemplated by the present disclosure. For instance, health and fitness data may be used to provide insights into a user's general wellness, or may be used as positive feedback to individuals using technology to pursue wellness goals.
The present disclosure contemplates that entities responsible for collecting, analyzing, disclosing, transferring, storing, or otherwise using such personal information data will comply with well-established privacy policies and/or privacy practices. In particular, such entities should implement and consistently use privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining personal information data as private and secure. Such policies should be easily accessible by users and should be updated as the collection and/or use of data changes. Personal information from users should be collected for legitimate and reasonable uses of the entity and not shared or sold outside of those legitimate uses. Further, such collection/sharing should occur only after receiving the informed consent of the users. Additionally, such entities should consider taking any needed steps for safeguarding and securing access to such personal information data and ensuring that others with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices. In addition, policies and practices should be adapted to the particular types of personal information data being collected and/or accessed, and to applicable laws and standards, including jurisdiction-specific considerations. For instance, in the United States, collection of or access to certain health data may be governed by federal and/or state laws, such as the Health Insurance Portability and Accountability Act (HIPAA); whereas health data in other countries may be subject to other regulations and policies and should be handled accordingly. Hence, different privacy practices should be maintained for different types of personal data in each country.
Despite the foregoing, the present disclosure also contemplates embodiments in which users selectively block the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware and/or software elements can be provided to prevent or block access to such personal information data. For example, in the case of providing sharing suggestions, the present technology can be configured to allow users to select to "opt in" or "opt out" of participation in the collection of personal information data during registration for services or at any time thereafter. In another example, users can choose not to provide, or to limit, the data used to determine sharing suggestions related to their activities or devices. In addition to providing "opt in" and "opt out" options, the present disclosure contemplates providing notifications relating to the access or use of personal information. For instance, a user may be notified upon downloading an application that their personal information data will be accessed, and then reminded again just before the personal information data is accessed by the application.
Moreover, it is the intent of the present disclosure that personal information data should be managed and handled in a way to minimize risks of unintentional or unauthorized access or use. Risk can be minimized by limiting the collection of data and deleting data once it is no longer needed. In addition, and when applicable, including in certain health-related applications, data de-identification can be used to protect a user's privacy. De-identification may be facilitated, when appropriate, by removing specific identifiers (e.g., date of birth, etc.), controlling the amount or specificity of data stored (e.g., collecting location data at a city level rather than at an address level), controlling how data is stored (e.g., aggregating data across users), and/or other methods.
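The de-identification measures described above (removing direct identifiers, coarsening location data to the city level) can be sketched in a few lines; the record fields and helper name below are illustrative assumptions, not structures defined by the disclosure.

```python
def deidentify(record):
    """Return a copy of a user record with direct identifiers removed
    and location coarsened to city level (hypothetical field names)."""
    coarse = dict(record)
    # Remove direct identifiers such as name and date of birth.
    for field in ("name", "date_of_birth", "email"):
        coarse.pop(field, None)
    # Coarsen location: keep the city, drop the street-level address.
    if "address" in coarse:
        coarse["location"] = coarse.pop("address").get("city")
    return coarse

record = {
    "name": "A. User",
    "date_of_birth": "1990-01-01",
    "address": {"street": "1 Infinite Loop", "city": "Cupertino"},
    "media_count": 412,
}
safe = deidentify(record)
```

Aggregation across users, also mentioned above, would be a separate step applied after records are individually de-identified.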
Therefore, although the present disclosure broadly covers use of personal information data to implement one or more various disclosed embodiments, the present disclosure also contemplates that the various embodiments can be implemented without the need for accessing such personal information data. That is, the various embodiments of the present technology are not rendered inoperable due to the lack of all or a portion of such personal information data. For example, content can be selected and suggested to users by inferring preferences or relevance based on non-personal information data or a bare minimum amount of personal information, such as the content being requested by the device associated with a user, other non-personal information available to the sharing suggestion service, or publicly available information.

Claims (25)

1. A computer-implemented method, comprising:
at a device having a display and one or more input devices:
receiving, from an external device, an indication that a first user has shared a first set of media items with a second user;
after receiving the indication that the first user has shared the first set of media items with the second user, outputting a prompt to share, with the first user, one or more suggested media items associated with the second user that are related to the first set of media items based on a context, wherein:
the context is determined based on the first set of media items, and
the one or more suggested media items are not included in the first set.
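As a rough illustration of the selection logic in claim 1 (context derived from the received set; suggestions drawn from the recipient's own items and excluding anything already in the received set), consider the sketch below. The `context_of` callback and the item dictionaries are illustrative assumptions, not structures defined by the claims.

```python
def suggest_media(first_set, candidates, context_of):
    """Return candidate items that share a context with the received set,
    excluding any item already in that set (illustrative sketch)."""
    shared_ids = {item["id"] for item in first_set}
    # The context is determined based on the first set of media items.
    contexts = {context_of(item) for item in first_set}
    return [item for item in candidates
            if context_of(item) in contexts and item["id"] not in shared_ids]

first_set = [{"id": 1, "event": "hike"}, {"id": 2, "event": "hike"}]
candidates = [{"id": 2, "event": "hike"},   # already shared: excluded
              {"id": 3, "event": "hike"},   # same context: suggested
              {"id": 4, "event": "party"}]  # different context: excluded
suggested = suggest_media(first_set, candidates, lambda item: item["event"])
```

The exclusion of items already in the first set mirrors the final wherein clause of the claim.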
2. The method of claim 1, further comprising:
while displaying representations of the one or more suggested media items associated with the second user, displaying a first affordance on the display;
receiving, via the one or more input devices, a first input representing a selection of the first affordance;
in response to receiving the first input, transmitting a message to the first user and providing access to at least a portion of the one or more suggested media items.
3. The method of any of claims 1-2, further comprising:
after receiving the indication that the first user has shared the first set of media items with the second user, and before outputting the prompt for sharing, receiving, via the one or more input devices, a second input representing a request to add one or more media items of the first set of media items to a media library associated with the second user;
in response to receiving the second input, causing the one or more media items in the first set of media items to be added to the media library associated with the second user; and
after causing the one or more media items in the first set to be added to the media library associated with the second user, displaying a prompt to share the one or more suggested media items associated with the second user, wherein:
the one or more suggested media items associated with the second user are selected from the media library associated with the second user, and
the one or more suggested media items associated with the second user exclude the one or more media items from the first set that have been added to the media library associated with the second user.
4. The method of any of claims 1-3, wherein the context is determined based on an identified face, the identified face being identified in:
one or more media items in the first set of media items, and
one or more of the one or more suggested media items.
5. The method of any of claims 1-4, wherein the context is determined based on events associated with the first set of media items and the one or more suggested media items.
6. The method of claim 5, wherein the event corresponds to a time range, and
wherein the first set and the one or more suggested media items each include one or more media items captured within the time range.
7. The method of any of claims 5-6, wherein the event corresponds to a geographic location, and
wherein the first set and the one or more suggested media items each include one or more media items captured at the geographic location.
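Claims 5 through 7 tie the shared context to an event defined by a time range and a geographic location. A minimal membership test might look like the following; representing timestamps as plain numbers and the location as a string is a simplifying assumption for illustration only.

```python
def captured_during_event(item, time_range, location):
    """True when a media item falls within the event's time range and
    was captured at the event's geographic location (coarse comparison)."""
    start, end = time_range
    return start <= item["timestamp"] <= end and item["location"] == location

event_range = (1525651200, 1525737600)  # hypothetical one-day window
item_in = {"timestamp": 1525700000, "location": "Cupertino"}
item_out = {"timestamp": 1525800000, "location": "Cupertino"}
```

A real implementation would likely compare geohashes or coordinate distances rather than location strings, but the claim language only requires that both sets contain items captured within the range and at the location.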
8. The method of any of claims 1 to 7, further comprising:
after receiving the indication that the first user has shared the first set of media items with the second user, displaying, on the display, a second affordance associated with the first set of media items.
9. The method of claim 8, further comprising:
receiving, via the one or more input devices, a third input representing a selection of the second affordance associated with the first set of media items; and
in response to receiving the third input representing selection of the second affordance, displaying, on the display, an interface for viewing the first set that includes a description of at least a portion of the first set of media items.
10. The method of claim 9, further comprising:
while displaying the interface for viewing the first set, displaying a third affordance;
receiving, via the one or more input devices, a fourth input representing a selection of the third affordance; and
in response to receiving the fourth input representing selection of the third affordance, causing one or more media items in the first set to be added to a media library associated with the second user.
11. The method of claim 10, further comprising:
prior to receiving the fourth input representing selection of the third affordance and while displaying the interface for viewing the first set, receiving a fifth input via the one or more input devices;
in response to receiving the fifth input, entering a media item selection mode; and
while in the media item selection mode:
while displaying the representation of the first media item in the first set of media items, receiving, via the one or more input devices, a sixth input associated with a position of the displayed representation of the first media item; and
in response to receiving the sixth input:
switching whether the first media item is selected;
in accordance with a switch that causes the first media item to be selected, displaying on the display a selection indicator associated with the displayed representation of the first media item; and
in accordance with a switch that causes the first media item to be unselected, ceasing to display the selection indicator associated with the representation of the first media item on the display.
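The toggling behavior of claim 11 (an input on a displayed media item switches its selection state, and the selection indicator follows that state) reduces to a small state machine. The class below is a hedged sketch under assumed names, not an implementation from the disclosure.

```python
class SelectionMode:
    """Tracks which media items are selected while in selection mode."""

    def __init__(self):
        self.selected = set()

    def toggle(self, media_id):
        """Switch the item's selection state; return True if the selection
        indicator should now be displayed, False if it should be hidden."""
        if media_id in self.selected:
            self.selected.discard(media_id)
            return False
        self.selected.add(media_id)
        return True

mode = SelectionMode()
shown = mode.toggle("IMG_0001")   # first input selects: indicator shown
hidden = mode.toggle("IMG_0001")  # second input deselects: indicator hidden
```

The boolean return value stands in for the display/cease-to-display branches of the claim.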
12. The method of claim 11, further comprising:
in accordance with a determination that the sixth input is a first gesture:
switching whether the first media item is selected;
in accordance with a switch that causes the first media item to be selected, displaying the selection indicator associated with the displayed representation of the first media item on the display; and
in accordance with a switch that causes the first media item to be unselected, ceasing to display the selection indicator associated with the representation of the first media item on the display; and
in accordance with a determination that the sixth input is a second gesture different from the first gesture:
displaying a one-up view of the first media item on the display without switching whether the first media item is selected.
13. The method of any of claims 11-12, wherein displaying the representation of the first media item in the first set of media items comprises displaying a one-up view of the first media item on the display, wherein the sixth input is received while the one-up view of the first media item is displayed.
14. The method of any of claims 11 to 13, further comprising:
while in the media item selection mode, receiving, via the one or more input devices, input defining a user-selected group of media items in the first set of media items, wherein the user-selected group of media items in the first set includes fewer than all of the media items in the first set;
after receiving input defining the user-selected group of media items in the first set, receiving the fourth input representing selection of the third affordance via the one or more input devices; and
in response to receiving the fourth input representing selection of the third affordance, causing the user-selected group of media items in the first set to be added to a media library associated with the second user without causing media items in the first set that are not included in the user-selected group of media items to be added to the media library associated with the second user.
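Claim 14 adds only the user-selected subset of the first set to the recipient's library, leaving the unselected items out. Sketched under the same hypothetical item structure as the earlier examples:

```python
def add_selected_to_library(first_set, selected_ids, library):
    """Append only the user-selected items of the first set to the
    library, leaving unselected items out (illustrative sketch)."""
    library.extend(item for item in first_set if item["id"] in selected_ids)
    return library

first_set = [{"id": 1}, {"id": 2}, {"id": 3}]
library = add_selected_to_library(first_set, {1, 3}, [])
```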
15. The method of any of claims 11-14, wherein the interface for viewing the first set includes a plurality of representations of media items arranged in a grid, wherein the plurality of representations includes the representation of the first media item, the method further comprising:
in response to receiving the fifth input:
in accordance with a determination that the fifth input is a third gesture:
displaying a one-up view of the first media item on the display without entering the media item selection mode; and
in accordance with a determination that the fifth input is a fourth gesture that is different from the third gesture:
displaying a one-up view of the first media item on the display; and
entering the media item selection mode.
16. The method of any of claims 9-15, wherein the first set of media items includes a first group of media items included in a media library associated with the second user,
wherein the first set of media items includes a second group of media items that is not included in the media library associated with the second user, and
wherein displaying the interface for viewing the first set that includes a description of at least a portion of the first set of media items comprises:
displaying representations of the second group of media items that are not included in the media library associated with the second user, and not displaying representations of the first group of media items that are included in the media library associated with the second user.
17. The method of any of claims 8 to 16, further comprising:
receiving, via the one or more input devices, a seventh input representing a selection of the second affordance;
in response to receiving the seventh input representing selection of the second affordance and in accordance with a determination that the first set of media items has not been downloaded, initiating a download of the first set of media items;
after initiating the download of the first set of media items, detecting completion of the download of the first set of media items;
in response to detecting completion of the download of the first set of media items:
in accordance with a determination that the second affordance is not currently displayed, displaying, on the display, a fourth affordance associated with the first set of media items;
in accordance with a determination that the second affordance is currently being displayed, forgoing display of the fourth affordance;
receiving, via the one or more input devices, an eighth input representing a selection of the fourth affordance associated with the first set of media items; and
in response to receiving the eighth input, displaying, on the display, an interface for viewing the first set that includes a description of at least a portion of the first set of media items.
18. The method of any of claims 8-17, wherein the indication that the first user has shared the first set of media items with the second user includes a link to access the first set of media items, and wherein displaying the second affordance associated with the first set of media items includes concurrently displaying on the display:
the second affordance associated with the first set of media items; and
an expiration time indicating when the link for accessing the first set of media items expires.
19. The method of claim 18, further comprising:
receiving, via the one or more input devices, a ninth input representing a selection of the second affordance associated with the first set of media items after expiration of the link to access the first set of media items; and
in response to receiving the ninth input representing selection of the second affordance:
in accordance with a determination that the at least a portion of the first set of media items has been previously downloaded, displaying, on the display, an interface for viewing the first set that includes a description of the at least a portion of the first set of media items;
in accordance with a determination that the at least a portion of the first set of media items has not been previously downloaded, forgoing displaying, on the display, an interface for viewing the first set that includes a description of the at least a portion of the first set of media items.
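Claims 18 and 19 combine a link expiration time with a download-dependent fallback: the set is accessible through the link until it expires, and after expiry it remains viewable only if it was previously downloaded. A minimal sketch, where representing time as plain numbers and the download state as a boolean flag are assumptions made for illustration:

```python
def collection_viewable(now, link_expires_at, downloaded):
    """Sketch of claims 18-19: viewable through the link until it
    expires; after expiry, viewable only from a prior local download."""
    if now <= link_expires_at:
        return True
    return downloaded

before = collection_viewable(now=100, link_expires_at=200, downloaded=False)
after_no_dl = collection_viewable(now=300, link_expires_at=200, downloaded=False)
after_dl = collection_viewable(now=300, link_expires_at=200, downloaded=True)
```

The `False` branch after expiry corresponds to the claim's "forgoing displaying" limb.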
20. A non-transitory computer-readable storage medium storing one or more programs configured for execution by one or more processors of an electronic device with a display and one or more input devices, the one or more programs including instructions for:
receiving, from an external device, an indication that a first user has shared a first set of media items with a second user;
after receiving the indication that the first user has shared the first set of media items with the second user, outputting a prompt to share, with the first user, one or more suggested media items associated with the second user, the one or more suggested media items related to the first set of media items based on context, wherein:
the context is determined based on the first set of media items, and
the one or more suggested media items are not included in the first set.
21. An electronic device, comprising:
a display;
one or more input devices;
one or more processors; and
memory storing one or more programs configured for execution by the one or more processors, the one or more programs including instructions for:
receiving, from an external device, an indication that a first user has shared a first set of media items with a second user;
after receiving the indication that the first user has shared the first set of media items with the second user, outputting a prompt to share, with the first user, one or more suggested media items associated with the second user that are related to the first set of media items based on a context, wherein:
the context is determined based on the first set of media items, and
the one or more suggested media items are not included in the first set.
22. An electronic device, comprising:
a display;
one or more input devices;
means for receiving, from an external device, an indication that a first user has shared a first set of media items with a second user;
means for outputting, after receiving the indication that the first user has shared the first set of media items with the second user, a prompt to share with the first user one or more suggested media items associated with the second user that are related to the first set of media items based on a context, wherein:
the context is determined based on the first set of media items, and
the one or more suggested media items are not included in the first set.
23. A non-transitory computer-readable storage medium storing one or more programs configured for execution by one or more processors of an electronic device with a display and a touch-sensitive surface, the one or more programs comprising instructions for performing the method of any of claims 1-19.
24. An electronic device, comprising:
a display;
one or more input devices;
one or more processors; and
memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for performing the method of any of claims 1-19.
25. An electronic device, comprising:
a display;
one or more input devices; and
apparatus for performing the method of any one of claims 1 to 19.
CN202111244490.6A 2018-05-07 2018-09-28 User interface for sharing contextually relevant media content Pending CN114327225A (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US201862668018P 2018-05-07 2018-05-07
US62/668,018 2018-05-07
DKPA201870385A DK180171B1 (en) 2018-05-07 2018-06-12 USER INTERFACES FOR SHARING CONTEXTUALLY RELEVANT MEDIA CONTENT
DKPA201870385 2018-06-12
CN201811136445.7A CN110456971B (en) 2018-05-07 2018-09-28 User interface for sharing contextually relevant media content

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201811136445.7A Division CN110456971B (en) 2018-05-07 2018-09-28 User interface for sharing contextually relevant media content

Publications (1)

Publication Number Publication Date
CN114327225A true CN114327225A (en) 2022-04-12

Family

ID=68466049

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202111244490.6A Pending CN114327225A (en) 2018-05-07 2018-09-28 User interface for sharing contextually relevant media content
CN201811136445.7A Active CN110456971B (en) 2018-05-07 2018-09-28 User interface for sharing contextually relevant media content

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201811136445.7A Active CN110456971B (en) 2018-05-07 2018-09-28 User interface for sharing contextually relevant media content

Country Status (2)

Country Link
CN (2) CN114327225A (en)
WO (1) WO2019217009A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111562865B (en) * 2020-04-30 2022-04-29 维沃移动通信有限公司 Information sharing method and device, electronic equipment and storage medium
CN112748844B (en) * 2020-12-31 2022-12-20 维沃移动通信有限公司 Message processing method and device and electronic equipment
EP4341792A1 (en) * 2021-05-17 2024-03-27 Apple Inc. Devices, methods, and graphical user interfaces for displaying media items shared from distinct applications
US11875016B2 (en) 2021-05-17 2024-01-16 Apple Inc. Devices, methods, and graphical user interfaces for displaying media items shared from distinct applications
US11693553B2 (en) 2021-05-17 2023-07-04 Apple Inc. Devices, methods, and graphical user interfaces for automatically providing shared content to applications
CN113852540B (en) * 2021-09-24 2023-07-28 维沃移动通信有限公司 Information transmission method, information transmission device and electronic equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150227782A1 (en) * 2014-02-13 2015-08-13 Apple Inc. Systems and methods for sending digital images
US20160073034A1 (en) * 2014-09-04 2016-03-10 Samsung Electronics Co., Ltd. Image display apparatus and image display method
US20170093780A1 (en) * 2015-09-28 2017-03-30 Google Inc. Sharing images and image albums over a communication network
CN106575149A (en) * 2014-05-31 2017-04-19 苹果公司 Message user interfaces for capture and transmittal of media and location content

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3859005A (en) 1973-08-13 1975-01-07 Albert L Huebner Erosion reduction in wet turbines
US4826405A (en) 1985-10-15 1989-05-02 Aeroquip Corporation Fan blade fabrication system
KR100595924B1 (en) 1998-01-26 2006-07-05 웨인 웨스터만 Method and apparatus for integrating manual input
US7218226B2 (en) 2004-03-01 2007-05-15 Apple Inc. Acceleration-based theft detection system for portable electronic devices
US7688306B2 (en) 2000-10-02 2010-03-30 Apple Inc. Methods and apparatuses for operating a portable device based on an accelerometer
US6677932B1 (en) 2001-01-28 2004-01-13 Finger Works, Inc. System and method for recognizing touch typing under limited tactile feedback conditions
US6570557B1 (en) 2001-02-10 2003-05-27 Finger Works, Inc. Multi-touch system and method for emulating modifier keys via fingertip chords
US7657849B2 (en) 2005-12-23 2010-02-02 Apple Inc. Unlocking a device by performing gestures on an unlock image
US7503007B2 (en) * 2006-05-16 2009-03-10 International Business Machines Corporation Context enhanced messaging and collaboration system
US8745139B2 (en) * 2009-05-22 2014-06-03 Cisco Technology, Inc. Configuring channels for sharing media
WO2013169849A2 (en) 2012-05-09 2013-11-14 Industries Llc Yknots Device, method, and graphical user interface for displaying user interface objects corresponding to an application
WO2014105916A2 (en) * 2012-12-26 2014-07-03 Google Inc. Promoting sharing in a social network system
CN104903834B (en) 2012-12-29 2019-07-05 苹果公司 For equipment, method and the graphic user interface in touch input to transition between display output relation
US9325783B2 (en) * 2013-08-07 2016-04-26 Google Inc. Systems and methods for inferential sharing of photos
US9338242B1 (en) * 2013-09-09 2016-05-10 Amazon Technologies, Inc. Processes for generating content sharing recommendations
US20150180980A1 (en) * 2013-12-24 2015-06-25 Dropbox, Inc. Systems and methods for preserving shared virtual spaces on a content management system
US9519408B2 (en) * 2013-12-31 2016-12-13 Google Inc. Systems and methods for guided user actions
KR20160017954A (en) * 2014-08-07 2016-02-17 삼성전자주식회사 Electronic device and method for controlling transmission in electronic device
KR20160087640A (en) * 2015-01-14 2016-07-22 엘지전자 주식회사 Mobile terminal and method for controlling the same
US20170063753A1 (en) * 2015-08-27 2017-03-02 Pinterest, Inc. Suggesting object identifiers to include in a communication

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150227782A1 (en) * 2014-02-13 2015-08-13 Apple Inc. Systems and methods for sending digital images
CN106575149A (en) * 2014-05-31 2017-04-19 苹果公司 Message user interfaces for capture and transmittal of media and location content
CN107122049A (en) * 2014-05-31 2017-09-01 苹果公司 For capturing the message user interface with transmission media and location conten
US20160073034A1 (en) * 2014-09-04 2016-03-10 Samsung Electronics Co., Ltd. Image display apparatus and image display method
US20170093780A1 (en) * 2015-09-28 2017-03-30 Google Inc. Sharing images and image albums over a communication network

Also Published As

Publication number Publication date
CN110456971A (en) 2019-11-15
CN110456971B (en) 2021-11-02
WO2019217009A1 (en) 2019-11-14

Similar Documents

Publication Publication Date Title
AU2019266054B2 (en) User interfaces for sharing contextually relevant media content
CN111108740B (en) Electronic device, computer-readable storage medium, and method for displaying visual indicators of participants in a communication session
CN114706522A (en) User interface for sharing content with other electronic devices
CN110058775B (en) Displaying and updating application view sets
CN110456971B (en) User interface for sharing contextually relevant media content
CN113939793B (en) User interface for electronic voice communication
CN114327356A (en) User interface for content applications
CN117331800A (en) User interface for logging user activity
CN116508021A (en) Method and user interface for processing user request
AU2022200514B2 (en) User interfaces for sharing contextually relevant media content
KR20240019144A (en) User interfaces for messaging conversations
CN111684403A (en) Media capture lock affordance for graphical user interface
US11671554B2 (en) User interfaces for providing live video
CN110456948B (en) User interface for recommending and consuming content on electronic devices
CN116195261A (en) User interface for managing audio of media items
CN115698933A (en) User interface for transitioning between selection modes
US20240004521A1 (en) User interfaces for sharing contextually relevant media content
KR20240049307A (en) Low bandwidth and emergency communications user interfaces

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination