CN116520975A - System and method for guided instructions or support using virtual objects - Google Patents

System and method for guided instructions or support using virtual objects

Info

Publication number: CN116520975A
Authority: CN (China)
Prior art keywords: product, input, examples, annotation, view
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number: CN202310091239.3A
Other languages: Chinese (zh)
Inventors: J. Cui, O. R. Kahn
Current Assignee: Apple Inc (listed assignees may be inaccurate; Google has not performed a legal analysis)
Original Assignee: Apple Inc
Priority claimed from US18/156,342 (published as US20230245410A1)
Application filed by Apple Inc
Priority to CN202410208475.3A (published as CN118092645A)
Publication of CN116520975A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure relates to systems and methods for providing guided instructions or support using virtual objects. A method is disclosed that includes, at a first computing device in communication with one or more input devices and a second computing device: capturing one or more images using the one or more input devices, determining an identification of an object using the one or more images, and transmitting the identification of the object to the second computing device. The method further includes receiving, from the second computing device, an indication of a first input received at the second computing device, and presenting a first view of the object, the first view including a first annotation corresponding to the first input received at the second computing device.

Description

System and method for guided instructions or support using virtual objects
Technical Field
The present disclosure relates generally to systems and methods for providing guided instructions or customer support using virtual representations of products, and in particular to systems and methods for presenting views of products including annotations.
Background
Customer support represents a variety of customer services that assist customers in properly using products, and includes assistance in planning, installation, training, troubleshooting, maintenance, upgrades, and product handling. It is desirable to provide users with an improved customer support experience.
Disclosure of Invention
The present disclosure relates generally to presenting a first user with a view of a product (also referred to herein more generally as an object) containing annotations. In some examples, the annotation may be presented according to (e.g., in response to) user input by a second user (e.g., a customer support person). In some examples, the annotations may be presented on the product (e.g., overlaid on the physical product or presented in proximity to the physical object and/or on a virtual representation of the physical product). For example, in some examples, one or more images may be captured using one or more input devices at a first computing device in communication with the one or more input devices and a second computing device. In some examples, the one or more images may be used to determine an identification of the product. In some examples, an identification of the product may be sent to the second computing device. In some examples, an indication of a first input received at a second computing device may be received from the second computing device. In some examples, a first view of a product may be presented that includes a first annotation corresponding to a first input received at a second computing device.
For example, in some examples, at a first computing device in communication with one or more input devices and a second computing device, an identification of a product is received from the second computing device. In some examples, a first view of a product including a virtual representation of the product is presented, and an indication of a first input is detected using the one or more input devices, wherein the first input includes an interaction with the virtual representation of the product. In some examples, an indication of the first input or a first annotation corresponding to the first input is then sent to the second computing device.
The present disclosure also relates to user input by a first user (e.g., a customer support person) providing user input to enable presentation of annotations for a product of a second user. In some examples, at a first computing device (customer service representative device) in communication with one or more input devices and a second computing device (customer/client/user device), an identification of a product may be received from the second computing device. A first view of a product including a virtual representation of the product may be presented to a first user using a first computing device. In some examples, the indication of the first input may be detected using the one or more input devices in communication with the first computing device. The first input may include an interaction (e.g., a gesture) with a virtual representation of the product. In some examples, an indication of the first input or a first annotation corresponding to the first input may be sent to the second computing device (e.g., for displaying the annotation to a second user of the second computing device).
Drawings
FIG. 1 illustrates an exemplary block diagram of a computing system in accordance with examples of this disclosure.
Fig. 2A illustrates an environment of a first user (e.g., a user of a product) according to an example of the present disclosure.
Fig. 2B illustrates an environment of a second user (e.g., customer service representative) according to an example of the present disclosure.
Fig. 3A-3D illustrate additional views of an environment of a first user or an environment of a second user according to examples of the present disclosure.
Fig. 4A-4F illustrate additional views of an environment of a first user or an environment of a second user according to examples of the present disclosure.
Fig. 5 illustrates an exemplary process for presenting a first view of a product including annotations according to an example of the present disclosure.
Fig. 6 illustrates an example process for sending an indication or annotation corresponding to a first input according to an example of this disclosure.
Detailed Description
In the following description of the examples, reference is made to the accompanying drawings which form a part hereof, and in which is shown by way of illustration specific examples which may be practiced. It is to be understood that other examples may be utilized and structural changes may be made without departing from the scope of the disclosed examples.
The present disclosure relates generally to presenting a first user with a view of a product (also referred to herein as an object) containing annotations. In some examples, the annotation may be presented according to (e.g., in response to) user input by a second user (e.g., a customer support person). In some examples, the annotations may be presented on the product (e.g., overlaid on the physical product or presented in proximity to the physical object and/or on a virtual representation of the physical product). For example, in some examples, one or more images may be captured using one or more input devices at a first computing device (client/user device) in communication with the one or more input devices and a second computing device (customer service representative device). In some examples, the one or more images may be used to determine an identification of the product. In some examples, an identification of the product may be sent to the second computing device. In some examples, an indication of a first input received at a second computing device may be received from the second computing device. In some examples, a first view of a product may be presented that includes a first annotation corresponding to a first input received at a second computing device.
The present disclosure also relates to user input by a first user (e.g., a customer support person) providing user input to enable presentation of annotations for a product of a second user. In some examples, at a first computing device (customer service representative device) in communication with one or more input devices and a second computing device (customer/client/user device), an identification of a product may be received from the second computing device. A first view of a product including a virtual representation of the product may be presented to a first user using a first computing device. In some examples, the indication of the first input may be detected using the one or more input devices in communication with the first computing device. The first input may include an interaction (e.g., a gesture) with a virtual representation of the product. In some examples, an indication of the first input or a first annotation corresponding to the first input may be sent to the second computing device (e.g., for displaying the annotation to a second user of the second computing device).
It should be appreciated that although exemplary annotations are primarily described, these annotations may additionally or alternatively be animated. Additionally, it should be appreciated that while the examples described herein focus primarily on annotations in the context of customer service representatives and products, the systems and methods described herein may be used for annotations or animations outside of the context of customer service and products (e.g., for annotation of objects generally). Additionally, it should be appreciated that the annotation techniques described herein may be used to provide guided instructions without input from a customer service representative.
Fig. 1 illustrates an exemplary block diagram of a computing system 100 (alternatively referred to as a computing device or system) according to examples of the disclosure. In some examples, as shown in fig. 1, computing system 100 includes a processor 102, a memory 104, a display 106, a speaker 108, a microphone 110, an orientation sensor 112, a position sensor 114, an image sensor 116, a body tracking sensor 118, and a communication circuit 120, which optionally communicate over a communication bus 122 of computing system 100. In some examples, computing system 100 may include more than one processor, more than one memory, more than one display, more than one speaker, more than one microphone, more than one orientation sensor, more than one position sensor, more than one image sensor, and/or more than one body tracking sensor, optionally in communication over more than one communication bus. While FIG. 1 illustrates an exemplary computing system, it should be appreciated that in some examples, multiple instances of computing system 100 (or variations on computing system 100) may be used by multiple users, and that these different instances of the computing system may communicate (e.g., via communication circuitry 120).
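For illustration only, the component layout of FIG. 1 can be summarized in a minimal Swift sketch; every type and property name below is an assumption of this sketch rather than an actual API.

    import Foundation

    // Placeholder component types; each stands in for hardware described above.
    struct Display {}             // display generation component 106
    struct Speaker {}             // speaker 108
    struct Microphone {}          // microphone 110 (optionally an array)
    struct OrientationSensor {}   // gyroscopes, IMUs, accelerometers (112)
    struct PositionSensor {}      // e.g., a GPS receiver (114)
    struct ImageSensor {}         // CCD/CMOS, infrared, depth cameras (116)
    struct BodyTrackingSensor {}  // hand and/or eye tracking (118)

    // Computing system 100: each component may appear more than once, and the
    // components communicate over communication bus 122 (elided here).
    struct ComputingSystem {
        var displays: [Display]
        var speakers: [Speaker]
        var microphones: [Microphone]
        var orientationSensors: [OrientationSensor]
        var positionSensors: [PositionSensor]
        var imageSensors: [ImageSensor]
        var bodyTrackingSensors: [BodyTrackingSensor]
    }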
Processor 102 may be configured to perform the processes described herein (e.g., process 500 and process 600). Processor 102 includes one or more general-purpose processors, one or more graphics processors, and/or one or more digital signal processors. In some examples, memory 104 is a non-transitory computer-readable storage medium (e.g., flash memory, random access memory, or other volatile or non-volatile memory or storage device) that stores computer-readable instructions (e.g., programs) configured to be executed by processor 102 to perform the processes described herein. In some examples, memory 104 may include more than one non-transitory computer-readable storage medium. A non-transitory computer-readable storage medium may be any medium (e.g., excluding signals) that can tangibly contain or store computer-executable instructions for use by or in connection with an instruction execution system, apparatus, or device. In some examples, the storage medium is a transitory computer-readable storage medium. In some examples, the storage medium is a non-transitory computer-readable storage medium. The non-transitory computer-readable storage medium may include, but is not limited to, magnetic, optical, and/or semiconductor memory, such as magnetic disks, optical disks based on CD, DVD, or Blu-ray technology, and persistent solid-state memory such as flash drives, solid-state drives, and the like.
The computing system 100 also includes a display 106 (generally referred to herein as a display generation component). In some examples, the display 106 includes a single display (e.g., a liquid-crystal display (LCD), an organic light-emitting diode (OLED) display, or another type of display). In some examples, the display 106 includes a plurality of displays. In some examples, the display 106 may include a display with touch-sensing capabilities (e.g., a touch screen) or a projector (e.g., a holographic projector, retinal projector, etc.). In some examples, computing system 100 includes microphone 110 or another suitable audio sensor. The computing system 100 uses the microphone 110 to detect sound from the user and/or the user's real-world environment. In some examples, microphone 110 includes a microphone array (a plurality of microphones) that optionally operate together, e.g., to identify ambient sound levels.
The computing system 100 includes an orientation sensor 112 for detecting an orientation and/or movement of the computing system 100 and a position sensor 114 configured to detect a position of the computing system 100 and/or the display 106. For example, the computing system 100 uses the orientation sensor 112 to track changes in the position and orientation of the computing system 100 relative to one or more stationary objects in the real-world environment. Orientation sensor 112 optionally includes one or more gyroscopes, one or more inertial measurement units, and/or one or more accelerometers. The position sensor 114 optionally includes a global positioning satellite (GPS) receiver to determine the absolute position of the computing system in the physical world. In some examples, computing system 100 may use the orientation sensor 112, the image sensor 116, or both to determine its orientation and position. For example, computing system 100 may perform simultaneous localization and mapping (SLAM) techniques, visual odometry (VO) techniques, visual-inertial odometry (VIO) techniques, and so forth.
Computing system 100 optionally includes an image sensor 116, which optionally includes one or more visible-light image sensors, such as charge-coupled device (CCD) sensors and/or complementary metal-oxide-semiconductor (CMOS) sensors operable to obtain images of physical objects in the real-world environment. In some examples, the image sensor 116 further includes one or more infrared sensors, such as passive infrared sensors or active infrared sensors, configured to detect infrared light in the real-world environment. For example, an active infrared sensor includes an emitter configured to emit infrared light into the real-world environment. The image sensor 116 also optionally includes one or more cameras configured to capture movement of physical objects in the real-world environment. The image sensor 116 also optionally includes one or more depth sensors configured to detect the distance of physical objects from the computing system 100. In some examples, information from one or more depth sensors allows the device to identify and distinguish objects in the real-world environment from other objects in the real-world environment. In some examples, one or more depth sensors allow the computing system to determine the texture and/or topography of objects in the real-world environment. In some examples, computing system 100 uses a combination of CCD sensors, infrared sensors, and depth sensors to detect the physical environment surrounding computing system 100. In some examples, the image sensor 116 includes a plurality of image sensors that work in concert and are configured to capture different information about physical objects in the real-world environment. In some examples, computing system 100 uses image sensor 116 to detect the position and orientation of one or more objects, such as a product, in the real-world environment. For example, the computing system 100 uses the image sensor 116 to track the position and orientation of one or more stationary objects in the real-world environment.
The computing system 100 optionally includes a body tracking sensor 118. In some examples, the body tracking sensor 118 optionally includes a hand tracking sensor and/or an eye tracking sensor. The body tracking sensor 118 is configured to track the position of one or more portions of the user's hands or eyes and/or the motion of one or more portions of the user's hands or eyes relative to the real-world environment or the augmented reality environment. In some examples, the body tracking sensor 118 may use an image sensor 116 (e.g., one or more infrared cameras, three-dimensional cameras, depth cameras, etc.) that captures two-dimensional and three-dimensional information from the real world, including information about one or more hands or one or more eyes (e.g., of a human user). In some examples, the hands may be resolved with sufficient resolution to distinguish individual fingers and their corresponding positions. In some examples, one or more image sensors 116 are positioned relative to the user to define a field of view of the image sensor and an interaction space in which finger/hand positions, orientations, and/or movements captured by the image sensor are used as input (e.g., to distinguish them from an idle hand of the user or the hands of other people in the real-world environment). Tracking the fingers and/or hands for input (e.g., gesture input) may be advantageous because it does not require the user to touch, hold, or wear any controllers, sensors, or other active or passive circuitry for tracking. In some examples, a user's hand is able to interact with (e.g., grab, move, touch, point to, etc.) virtual objects in a three-dimensional environment, optionally as if the virtual objects were real physical objects in a physical environment.
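To make the hand-tracking input concrete, the following Swift sketch shows how tracked fingertip positions might be resolved into gesture input; the HandPose fields and distance thresholds are illustrative assumptions, not the actual tracking interface.

    import simd

    // Hypothetical per-frame hand data from body tracking sensor 118.
    struct HandPose {
        var thumbTip: SIMD3<Float>    // positions in meters, world space (assumed)
        var indexTip: SIMD3<Float>
        var palmCenter: SIMD3<Float>
    }

    enum HandGesture { case pinch, point, none }

    // A toy classifier: a pinch when thumb and index tips nearly touch.
    // The centimeter thresholds are illustrative guesses, not tuned values.
    func classify(_ pose: HandPose) -> HandGesture {
        if simd_distance(pose.thumbTip, pose.indexTip) < 0.015 { return .pinch }
        if simd_distance(pose.indexTip, pose.palmCenter) > 0.08 { return .point }
        return .none
    }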
The communication circuitry 120 optionally includes circuitry for communicating with electronic devices, networks (e.g., the internet), intranets, wired and/or wireless networks, cellular networks, wireless local area networks (LANs), and the like. The communication circuitry 120 optionally includes circuitry for communicating using near-field communication (NFC) and/or short-range communication (e.g., Bluetooth).
It should be understood that computing system 100 is not limited to the components and configurations of FIG. 1, but may include fewer, other, or additional components in various configurations. In some examples, computing system 100 may include or may be implemented as a head-mounted system that may enable a person to sense and/or interact with an extended reality (XR) environment by displaying the XR environment, such as using a projection-based system.
As described herein, the computing system 100 enables presentation of annotations for physical objects. In some examples, the physical object may include any physical object or electronic device for which a user is seeking customer service assistance. For example, the physical object may be a consumer electronic product, such as a mobile phone, tablet, laptop, or desktop computer. Additionally or alternatively, the computing system 100 enables a virtual representation (e.g., a virtual object) of a product to be presented to one or more users. These annotations may provide a guided experience for customer support, among other applications. In some examples, the virtual representation of the product is displayed to the customer service representative such that the customer service representative is able to view and interact with the virtual object in their three-dimensional environment (e.g., as if the virtual object were a real physical object in the physical environment of the customer service representative). Additionally or alternatively, the computing system 100 enables a virtual representation (e.g., a virtual object) of a product to be presented to a user (e.g., the product owner) so that annotations (e.g., corresponding to input from a customer service representative) can be presented to the user. In some examples, the virtual representation of the product is presented with the physical product of the user in a side-by-side presentation to enable annotation of the physical product, the virtual representation, or both. In other examples, the virtual representation of the product may not be presented to the user, and the annotation may be presented on the physical product. In some examples, a view of a second virtual representation of the product is presented to the customer service representative. The first virtual representation of the product may represent the state of the user's product, and the second virtual representation of the product may be provided to the customer service representative to provide input for user-side annotations.
Fig. 2A-2B illustrate exemplary environments of a first user (e.g., the owner of a product) and a second user (e.g., a customer service representative) according to examples of the present disclosure. In some examples, environment 200 is presented to the first user using a computing system (e.g., a computing system corresponding to computing system 100) to enable an XR environment (e.g., comprising physical objects and/or virtual objects) to be presented. The environment 200 may be presented in various ways. For example, the environment 200 may be displayed on a handheld device (e.g., phone, tablet, etc.) using images captured from a camera, with virtual content optionally overlaid. Alternatively, the environment 200 may be viewed using pass-through video on a head-mounted display having an opaque display, with virtual content optionally superimposed on the pass-through video. Alternatively, the environment 200 may be viewed through a transparent or semi-transparent display, with physical objects being viewed directly by the user through the display and virtual content overlaid thereon. In some examples, environment 200 is presented to the first user using a different computing system to enable an XR environment to be presented. For example, fig. 2A illustrates an environment 200 of a user of a product according to examples of the present disclosure. As shown in FIG. 2A, the user's environment includes physical objects such as a picture of flowers, a sofa, a table, and a computing system including a computer 202 and a monitor 204.
In some examples, a first user contacts a customer service representative to obtain customer support associated with computer 202 (e.g., referred to as a product or object). It should be appreciated that this is merely an example, and that the first user may contact the customer service representative for customer support associated with one or more electronic devices (such as mobile devices, tablet computers, laptops, desktop computers, displays/monitors, gaming systems, streaming media devices, etc.). In these examples, the first user's environment would include such an electronic device in addition to or in lieu of computer 202. In some examples, the computing system 100 captures the product for use in the customer service session (e.g., using a sensor of the computing system 100, such as the image sensor 116), optionally when the user initiates the customer service session. In some examples, computing system 100 identifies the product and/or a state of the product and sends the identification of the product and/or the state of the product to the customer service representative. It should be appreciated that this is merely an example, and in some examples, if a user is contacting a customer service representative with respect to a different electronic device/product, the computing system 100 may capture and identify that electronic device/product. For example, as shown in fig. 2A, computing system 100 captures one or more images of computer 202 using one or more input devices, such as image sensor 116. The computing system 100 then uses the one or more images to determine an identification of the product. For example, the computing system 100 (e.g., programs or instructions in the processor 102 and/or the memory 104) may determine from the images that the product is the computer 202. The identification may occur in various ways. In some examples, a user may identify the product (e.g., by selecting the product using a gesture, by entering information about the product, etc.). In some examples, the computing system may identify the product without user input. For example, the identification may include performing computer vision techniques that identify a type of product (e.g., display, desktop computer, laptop computer, tablet computer, phone, etc.) or a particular model of product (e.g., desktop computer model X manufactured by company Y) based on the images from the image sensor 116. In some examples, computing system 100 may also identify a product based on a user account associated with the computing system (e.g., computing system 100) that is also associated with another product (e.g., computer 202) that corresponds to the type or model of the identified product. In some examples, the computing system may identify the product from a catalog of products (e.g., from one or more particular suppliers). The computing system 100 then transmits (e.g., using the communication circuit 120) the identification of the product (e.g., computer 202) to a second computing system (e.g., the customer service representative's computing system, which may be similar to computing system 100). In some examples, the identification of the product received at the second computing system enables the second computing system to present a virtual representation of the product (e.g., computer 202) to the customer service representative within the physical environment (or XR environment) of the customer service representative.
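The identification flow above can be summarized in a short Swift sketch. The closures classify and accountLookup, along with the field names, are hypothetical stand-ins for the computer vision, user-account, and catalog mechanisms described; the key property shown is that only the identification, never the images, is transmitted.

    import Foundation

    // Illustrative identification record; fields are assumptions of this sketch.
    struct ProductIdentification: Codable {
        let type: String     // e.g., "desktop computer"
        let model: String?   // e.g., "model X manufactured by company Y", when resolvable
    }

    func identifyAndSend(
        images: [Data],
        classify: ([Data]) -> ProductIdentification?,   // local computer vision
        accountLookup: () -> ProductIdentification?,    // user-account / catalog fallback
        send: (ProductIdentification) -> Void           // to the representative's device
    ) {
        // Identification happens on-device; only the result is transmitted.
        if let id = classify(images) ?? accountLookup() {
            send(id)   // the captured images never leave the device
        }
    }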
For example, FIG. 2B illustrates a customer service representative environment 205 in accordance with an example of the present disclosure. As shown in FIG. 2B, the environment 205 of the customer service representative includes physical objects, such as lights, televisions, and tables, although in some examples, the environment may be an XR environment without physical objects. In some examples, the virtual representation 206 of the computer 202 is presented to the customer service representative within the customer service representative's environment 205 (e.g., on a table). In this way, the customer service representative is presented with a view of the user's product while maintaining the user's privacy (e.g., without sending images of the user's environment's content to the customer service representative) and enabling the customer service representative to interact with the virtual representation 206 (e.g., as described with reference to fig. 3A-4F). As described herein, in some examples, the status of the product may also be sent from the first computing system of the user and received by the second computing system of the customer service representative. In this way, the virtual representation of the product may be presented in a manner consistent with the state of the physical product (e.g., a real view of the physical configuration of the device, a representation of the removal of the housing, shell, or other component, etc.).
Fig. 3A-3D illustrate additional views of the environment of a user of a product or the environment of a customer service representative according to examples of the present disclosure. Fig. 3A-3D correspond to a first exemplary interaction in a customer service session. For example, as shown in FIG. 3A, a customer service representative interacts with the virtual representation 206 to provide instructions for removal of the cover of the product. In some examples, the interaction may be a gesture of a hand of the customer service representative detected using a body tracking sensor 118 of a computing system of the customer service representative. In some examples, the gesture may be a tap gesture (e.g., a tap on the top of the housing of the virtual representation 206), a rotation gesture (e.g., a rotation of the hand at a position corresponding to the handle on the housing of the virtual representation 206), or any other suitable gesture. It should be understood that these gestures are representative, and other gestures or non-gesture inputs may also be used.
In accordance with the second computing system detecting a gesture of the customer service representative, the second computing system may send an indication of the gesture, an annotation associated with the detected gesture, an animation associated with the detected gesture, the component of the virtual representation of the product for which the gesture is intended, and so forth. Subsequently, and as shown in FIG. 3B, the user's environment 200 is updated with an annotation corresponding to the action of the customer service representative. In this way, the user may see a visual representation of the instruction (e.g., of the steps the customer service representative is directing the user to take). As shown in fig. 3B, in some examples, the annotation may be presented directly on the physical product, such as on the computer 202 (e.g., counterclockwise curved arrow 203 on the computer 202). In some examples, the annotation may be an annotation/visualization on or near the physical product, or may be a 2D, 2.5D, or 3D animation on or corresponding to the product. In some examples, the annotation may be based on the input by the customer service representative and may correspond to the detected gesture. For example, if the customer service representative performs a gesture (e.g., a tap or a rotation) at the location of the housing, the corresponding annotation may be a curved arrow displayed at the location of the housing that corresponds to the corresponding gesture input of the customer service representative on the virtual representation of the product. As shown, the counterclockwise curved arrow 203 annotation may identify the location and direction of user input. In some examples, an animation may be presented that shows the lift-off of the cover after rotation (e.g., a virtual cover of the physical product lifting off), optionally including a virtual hand and/or a tool performing the lift-off. In some examples, the presentation of the virtual hand or the tool (or other input device) may correspond to whether the customer service representative used a hand or a tool (or other input device) to provide the input. In some examples, the animation may include a recording or representation of the customer service representative's interactions with the virtual representation. For example, the second computing system may record body gestures of the customer service representative determined using the body tracking sensor 118 and send the recorded gestures to the first computing system of the user. The first computing system may then replay the gestures, for example, by presenting a virtual representation of the hands or tools of a customer service representative interacting with computer 202. In some examples, these annotations and/or animations may be saved for future reference and/or playback or retransmission (e.g., if the user requires additional annotations to complete a given step). These annotations and/or animations may be saved in an archive accessible to a single user or to multiple users and/or user accounts. In some examples, complete guided instructions may be created that allow the user to complete instructions, including animations, step by step without the customer service representative interactively guiding the user. In some examples, these guided instructions may be used for training or other educational purposes.
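As a rough sketch of how a detected gesture might map to the annotation transmitted to the user's device (e.g., the counterclockwise curved arrows 203 and 207), consider the following Swift fragment; the enum cases and message fields are illustrative assumptions, not a defined wire format.

    import Foundation

    enum RepresentativeGesture: String, Codable { case tap, rotate, pull }

    // Illustrative payload the second computing system might send (or render).
    struct AnnotationMessage: Codable {
        let style: String           // "curvedArrow", "straightArrow", "highlight"
        let component: String       // which part of the product the input targeted
        let counterclockwise: Bool? // direction, for rotation annotations
    }

    func annotation(for gesture: RepresentativeGesture,
                    on component: String) -> AnnotationMessage {
        switch gesture {
        case .rotate: return AnnotationMessage(style: "curvedArrow",
                                               component: component,
                                               counterclockwise: true)
        case .pull:   return AnnotationMessage(style: "straightArrow",
                                               component: component,
                                               counterclockwise: nil)
        case .tap:    return AnnotationMessage(style: "highlight",
                                               component: component,
                                               counterclockwise: nil)
        }
    }

    // Annotations can also be recorded in order for later replay as guidance.
    var recordedSession: [AnnotationMessage] = []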
Additionally or alternatively, the user's environment 200 may include a virtual representation 208 of the product. In some examples, as shown in fig. 3B, an annotation or animation may additionally or alternatively be presented on the virtual representation 208 of the product (counterclockwise curved arrow 207). In some examples, presenting the annotation or animation on the virtual product may enable the user to view the annotation or animation without physically interacting with the physical product, or without the annotation or animation interfering with the user's physical interaction. In some examples, the annotation or animation on the physical product (or corresponding to the physical product) and the annotation or animation on the virtual representation 208 (or corresponding to the virtual representation) may be the same. For example, fig. 3B shows a counterclockwise curved arrow annotation (or animation) on both the physical product (arrow 203) and the virtual representation (arrow 207). However, it should be understood that the presentation of annotations or animations may differ between the physical product and the virtual representation and may include virtual hands and/or tools performing tasks. For example, in some examples, an annotation may be presented on or correspond to the physical product, while an animation may be presented on or correspond to the virtual representation. In alternative examples, an annotation may be presented on or correspond to the virtual representation, while an animation may be presented on or correspond to the physical product.
In some examples, the presentation of the annotation or animation may change during user interaction with the physical product. For example, in the example of fig. 3A-3D, the presentation of the annotation and/or animation first includes a curved arrow, and a subsequent annotation and/or animation includes an upward arrow (or an animation of the cover lifting) that guides the user to lift the cover off. In some examples, while presenting the curved-arrow annotation, the computing system may detect that the user has interacted with the physical product and rotated the handle on top of the cover. In some examples, after detecting rotation of the handle of the physical product, the computing system may present a subsequent or modified animation, with an arrow pointing upward to lift the cover off, or present an animation of a virtual cover lifting upward. However, this example is not intended to be limiting, and one skilled in the art will appreciate that various animations/annotations may be presented.
In some examples, and as shown in fig. 3C, the user may remove the cover 210 of the computer 202 as directed by the customer service representative. For example, the cover 210 may be placed on the table by the user. In addition, the view of the physical product in fig. 3C presents the computer 202 without the cover (e.g., the internal contents of the computer 202 are shown, as in fig. 4A). After this user action, in some examples, the annotation or animation may be removed. Alternatively, the annotation or animation may remain until further action is taken. For example, in some examples, the annotation or animation may remain until the user completes the task. In alternative examples, the annotation or animation may remain until the user provides input (e.g., a gesture, button press, verbal command, etc.) for removing the annotation or animation. In alternative examples, the annotation or animation may remain until the customer service representative receives feedback that the task has been completed. In addition, computing system 100 may update the state of the user's product and send the update to the computing system of the customer service representative. The update may occur in response to a user action or some other trigger received by computing system 100. As shown in fig. 3D, the computing system of the customer service representative may update the environment 205 to reflect the user's actions and to show the virtual representation 206 without the cover and a virtual representation 212 of the cover placed on the table.
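The update flow of FIGS. 3C-3D might look like the following Swift sketch, under assumed message types: on detecting the user's modification, the device clears the completed annotation and sends the new state so the representative's view can be re-rendered.

    import Foundation

    // Illustrative state record; field names are assumptions of this sketch.
    struct ProductState: Codable {
        var coverRemoved: Bool
        var removedComponents: [String]   // e.g., ["cover 210"]
    }

    func handleDetectedModification(
        newState: ProductState,
        clearAnnotation: () -> Void,           // e.g., remove curved arrows 203/207
        sendState: (ProductState) -> Void      // representative's environment 205 updates
    ) {
        clearAnnotation()
        sendState(newState)
    }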
In some examples, multiple virtual representations of the product are presented to a customer service representative. In some examples, the first representation of the product may represent a state of a physical product of the user, and the second virtual representation of the product may be provided to a customer service representative to provide input for user-side annotations or animations. In some examples, one of the two virtual representations presented to the customer service representative may be displayed in a picture-in-picture window 216. In some examples, the first virtual representation or the second virtual representation (optionally in a picture-in-picture window) may be hidden or revealed from view as appropriate. For example, both the first virtual representation and the second virtual representation may be displayed when necessary (e.g., to present the state of the user device to the customer service representative when the state of the user product is not synchronized with the representation of the product for input from the customer service representative), and only one virtual representation may be presented when the state of the user product is synchronized with the virtual representation for input from the customer service representative. In some examples, two virtual representations of the product are visible to the customer service representative.
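The visibility rule described in the preceding paragraph could be sketched as follows; the type and flag names are assumptions of this illustration, not an actual interface.

    import Foundation

    // Show both representations (one optionally picture-in-picture, e.g., in
    // window 216) only while the user's product state and the representative's
    // input representation are out of sync.
    struct RepresentativeView {
        var userStateRepresentationVisible = false   // mirrors the user's product
        var inputRepresentationVisible = true        // receives the rep's gestures

        mutating func update(statesSynchronized: Bool) {
            userStateRepresentationVisible = !statesSynchronized
        }
    }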
Although fig. 3A-3D illustrate annotations (or animations) for the user, it should be understood that in some examples, the annotations or animations may also be presented on the customer service representative side along with the virtual representation(s). In some examples, annotations or animations presented to the user are also presented on the virtual representation of the state of the user's product on the customer service side. Additionally or alternatively, the same or different annotations (or animations) may be presented on a virtual representation of the product visible to the customer service representative. In some examples, when presented on the customer service side, the annotation or animation may be cleared when the computing system receives an indication that the user has completed the task corresponding to the annotation/animation. For example, the curved arrow shown in fig. 3B may be presented on one or both virtual representations presented on the customer service side and may be cleared when the user removes the cover. Alternatively, the annotation or animation may be cleared when the customer service representative provides input for removing the annotation and/or in response to another trigger. For example, the user may separately indicate that the cover has been removed or request additional guidance (e.g., subsequent annotations or animations) from the customer service representative.
Fig. 4A-4F illustrate additional views of the environment 200 of a user of a product or the environment 205 of a customer service representative according to examples of the present disclosure. As described herein, in some examples, the customer service representative may use the virtual representation of the product to provide further input, providing further guidance or instructions to the user (e.g., using subsequent annotations or animations). For example, FIG. 4A shows a view of computer 202 in an updated state after the cover 210 is removed. As shown in FIG. 4A, in this updated state, a plurality of internal components of computer 202 are presented, including board 302 and board 304. It should be understood that boards 302 and 304 are exemplary components, and other components may be present. As described above, in some examples, the computing system 100 (e.g., using programs or instructions in the memory 104 executed by the processor 102) may determine the user's product and its corresponding state (e.g., the updated state). The updated state may be sent to the customer service representative. Thus, as shown in FIG. 4B, the customer service representative's view of the virtual representation of the product may be updated to reflect the updated state of the user's product. For example, FIG. 4B shows the virtual representation 206 of the product with an internal view including virtual representations of the corresponding boards 306 and 308.
The customer service representative may provide further user input (e.g., a second input subsequent to the input for removing the cover) to continue providing guided support to the user. For example, as shown in FIG. 4B, the customer service representative interacts with the board 308 to remove it from the product. In some examples, the interaction may be a gesture of a hand of the customer service representative detected using a body tracking sensor 118 of the computing device of the customer service representative. In some examples, the gesture may be a tap gesture (e.g., a tap on the virtual representation of board 308) or a pull gesture (e.g., a grab of the hand at the location of the virtual representation of board 308 and a pull of the hand away from the virtual representation 206). It should be understood that these gestures are representative, as described above, and other gestures or non-gesture inputs may also be used.
Input from the customer service representative may cause annotations or animations to be presented to the user. For example, a tap may cause an annotation on the board 304 that indicates the selection of the virtual representation of board 308. The annotation may include highlighting or outlining the board 304 or otherwise changing the appearance of the board to indicate selection. In some examples, the pull gesture may cause display of an annotation or animation directing the user to remove the board 304. For example, as shown in FIG. 4C, a second annotation is presented in the user's environment 200 to instruct the user to remove the board 304, corresponding to the action of the customer service representative on the virtual representation of board 308. In some examples, a virtual arrow 403 may be presented emanating from the board 304, the arrow showing the pull direction for removing board 304. In some examples, an animation of a virtual board being removed from the location of the physical board may be shown, and the animation may include a virtual hand performing the pull and/or a virtual tool for removing the board. As described herein, in some examples, annotations or animations may be presented with the physical product (e.g., on, emanating from, or near the product) and/or may be presented with a virtual representation that is concurrently presented in environment 200. In some examples, presenting the annotation or animation on the virtual product may enable viewing of the annotation or animation without the user physically interacting with the product or without the annotation or animation interfering with the user's physical interaction. In some examples, the annotation or animation on the physical product (or corresponding to the physical product) and the annotation or animation on the virtual representation (or corresponding to the virtual representation) may be the same. In some examples, the annotation or animation on the physical product (or corresponding to the physical product) and the annotation or animation on the virtual representation (or corresponding to the virtual representation) may be different.
In some examples, the user may continue with the removal of the board 304. For example, and as shown in fig. 4D, the user begins to remove the board 304. In some examples, and as shown in fig. 4D, presentation of the annotation or animation ceases when the user removes the board 304. However, and as described above, in some examples, the annotation or animation may remain until a later trigger. For example, and as described above, the annotation or animation may remain until the user provides input (e.g., a gesture, button press, verbal command, etc.) for removing the annotation or animation. In alternative examples, the annotation or animation may remain until the customer service representative receives feedback that the task has been completed. Additionally, in some examples, the customer service representative may initiate a move to the next step in the procedure using user interface controls presented to the customer service representative.
In some examples, and as shown in fig. 4E, the user removes the board 304 and places the board 304 on a table beside the computer 202. The computing system 100 may determine that the board 304 is removed from the computer 202 (e.g., using the image sensor 116) and/or placed within the user's environment. The updated state of the computer 202 may be sent to the computing system of the customer service representative to update the virtual representation presented in the environment 205 accordingly. For example, and as shown in FIG. 4F, the customer service representative's environment is updated, showing the virtual representation 206 with the board 308 removed and placed next to the virtual representation 206 on the table.
While the examples described herein primarily include users (e.g., customers) interacting with a customer service representative, it should be understood that the same or similar interactions may occur between any two or more users. For example, when authorized by a user, a computing system of a first user may identify a physical object in the environment of the first user and its state, send the identification and state of the object to a computing system of a second user (e.g., a friend or family member of the first user), and receive annotations, animations, gestures, or combinations thereof from the computing system of the second user in response to interactions by the second user with the virtual representation of the physical object. In some examples, the computing system of the first user may also send the identification and state of the object to one or more computing systems of other users, and receive annotations, animations, gestures, or combinations thereof from the one or more computing systems in response to interactions by those other users with the corresponding virtual representations of the physical object. It should be noted that the computing system only identifies physical objects within the user's environment, and does not enable the customer service representative to see the user's environment. In this way, the privacy of the user may be maintained.
In some examples, a second user (e.g., a customer service representative) may record their interactions with the virtual representation of the object as, for example, a set of instructions to be executed. The set of instructions may include an ordered series of gestures, annotations, animations, or combinations thereof, associated with the physical object or the virtual representation, to be presented to a user. This advantageously allows users to be given the appropriate instructions without the customer service representative having to repeat the interactions. In some examples, the recorded set of instructions may be presented to the user without communicating with the customer service representative. In these examples, the user may be presented with a set of instructions corresponding to the identification of their object and its state. For example, the computing system may present a first gesture, animation, or annotation and provide the user with the ability to advance through the instructions at a desired rate or time (e.g., moving forward or backward through the instructions using user interface controls presented in the XR environment). Additionally or alternatively, the user's computing system may automatically advance through the instructions in response to detecting completion of the corresponding task, as described above. In some examples, the computing system may include a built-in guided experience that does not require any interaction with a customer service representative. In other words, these built-in guided experiences do not require a specific customer service representative to pre-record any behavior. Instead, these built-in guided experiences may provide introductory tours or tutorials for general learning and/or refresher experiences.
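A recorded instruction set as described above might be modeled as in the following Swift sketch; the types, fields, and completion hints are illustrative assumptions.

    import Foundation

    struct InstructionStep {
        let annotation: String   // e.g., "counterclockwise arrow on cover handle"
        let completion: String   // what the device watches for, e.g., "cover removed"
    }

    // Ordered steps, advanced manually (UI controls) or automatically when the
    // device detects that the corresponding task is complete.
    struct GuidedInstructions {
        let productID: String
        let steps: [InstructionStep]
        private(set) var index = 0

        var currentStep: InstructionStep? {
            steps.indices.contains(index) ? steps[index] : nil
        }
        mutating func advance() { if index < steps.count - 1 { index += 1 } }
        mutating func goBack()  { if index > 0 { index -= 1 } }
    }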
In some examples, such as when working with a mobile device, tablet, laptop, or desktop (among other products/objects), additional tools may be required to complete a diagnosis or repair. In some examples, in addition to identifying the product, the user's computing system may identify the tools and send information about the tools (e.g., identification, status, etc.) to the customer service representative (without sending an image of the user's physical environment to the customer service representative). As described herein, annotations and/or animations for the product, for a tool, or for both may be presented to the user. For example, a customer service representative may be able to use an animation to help a user identify a tool in the environment and/or to show how to use the tool to perform a given step. Alternatively, the customer service representative may save the instructions to the computing system 100 so that the user may access the instructions and repair the product at a later time (e.g., when the computing system detects the product and the tools needed for the repair). In an example, the computing system may include an archive of instructions so that the user can access and use the instructions without having to contact the customer service representative for assistance. Alternatively, the user may view the archive for educational purposes. In an example, the guided instructions may include a step-by-step procedure that the user may complete step by step. In an example, a user may access certain archives on a membership basis or based on product purchase.
As described above, in some examples, annotations or animations may be shared using one or more virtual representations of a physical product. Fig. 5 illustrates an exemplary process for presenting a first view of a product according to an example of the present disclosure. Process 500 is optionally performed at a first computing device (e.g., a user's electronic device or a computing system corresponding to computing system 100) in communication with one or more input devices and a second computing device (e.g., a customer service representative's electronic device or a system corresponding to computing system 100). In some examples, some operations in process 500 are optionally combined and/or optionally omitted and/or optionally altered. In some examples, process 500 is performed by processor 102 and memory 104. For example, at 502, one or more images are captured using one or more input devices (e.g., image sensor 116). At 504, an identification of the product is determined using one or more images (e.g., using a catalog of products and/or sharing a user account). At 506, an identification of the product is sent to the second computing device (e.g., without sending one or more images of the user's environment). At 508, an indication of a first input (e.g., user input from a customer service representative) is received from a second computing device. In some examples, the indication may be the gesture itself, an annotation determined by the second computing device based on the input, or an animation determined by the second computing device based on the input, a location of the input, or a combination thereof. At 510, a first view of the product is presented (e.g., to a user) that includes a first annotation corresponding to an input received at a second computing device. In some examples, the annotation may also include a representation of the customer service representative (e.g., their hand), a device for use by the customer service representative, and the like.
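Under assumed helper closures standing in for the hardware and networking, process 500 might be sketched in Swift as follows; the closure names and String payloads are placeholders of this sketch, not an actual API.

    import Foundation

    // A compact sketch of process 500 at the user's device.
    func process500(
        captureImages: () -> [Data],               // 502: one or more input devices
        identify: ([Data]) -> String,              // 504: determine identification
        sendIdentification: (String) -> Void,      // 506: transmit (images stay local)
        receiveInputIndication: () -> String,      // 508: indication of remote input
        presentAnnotatedView: (String) -> Void     // 510: first view with annotation
    ) {
        let images = captureImages()
        let productID = identify(images)
        sendIdentification(productID)
        let indication = receiveInputIndication()
        presentAnnotatedView(indication)
    }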
Additionally or alternatively, in some examples, the process further includes determining a first state of the product using the one or more images and transmitting the first state of the product to the second computing device. The first view of the product may include a representation of the product in the first state. In some examples, the first state of the product may be received from the product itself. For example, the product may determine its current state and communicate that state to the first computing device.
Additionally or alternatively, in some examples, the process further comprises: one or more input devices are used to detect a modification of a product corresponding to a first annotation, determine a second state of the product, and send the second state of the product to a second computing device while presenting a first view of the product including the first annotation.
Additionally or alternatively, in some examples, the process further comprises: in accordance with detecting the modification of the product, presenting a second view of the product, the second view including a representation of the product in a second state of the product; and stopping the presentation of the first annotation.
Additionally or alternatively, in some examples, the process further includes receiving, from the second computing device, an indication of a second input received at the second computing device, and presenting a third view of the product, the third view including a second annotation corresponding to the second input received at the second computing device.
Additionally or alternatively, in some examples, the first input includes a rotation input at a respective location corresponding to a virtual representation of the product at the second computing device, and the first annotation includes a virtual arrow having a curved shape that represents a rotation at the respective location corresponding to the first view of the product.
Additionally or alternatively, in some examples, presenting the first view includes displaying the product using one or more images. Additionally or alternatively, in some examples, the first view is presented by a transparent or semi-transparent display.
Additionally or alternatively, in some examples, the first annotation includes an annotation presented on the product or an animation corresponding to the product presented in the first view of the product. Additionally or alternatively, in some examples, the process further includes presenting the virtual representation of the product concurrently with the first view of the product. Additionally or alternatively, in some examples, presenting the first view of the product including the first annotation includes presenting the first annotation on the product, and the process may further include presenting the same first annotation on the virtual representation of the product. Additionally or alternatively, in some examples, presenting the first view of the product including the first annotation includes presenting the first annotation on the product, and the process may further include presenting a second annotation, different from the first annotation, on the virtual representation of the product. In some examples, the first view of the product may include a first animation and the second view of the product may include the first animation. In some examples, the first view of the product may include a first animation and the second view of the product may include a second animation.
Additionally or alternatively, in some examples, determining the identification of the product may include determining the identification based on a user account shared by the product and the first computing device and/or a catalog of products.
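As an illustration of these two identification paths, the lookup below prefers a shared-account match and falls back to a catalog; the integer image signature is a stand-in for a real recognition pipeline and is purely hypothetical:

```swift
// Prefer a product already registered to the shared user account;
// otherwise fall back to matching against the product catalog.
func identifyProduct(signature: Int,
                     accountProducts: [Int: String],
                     catalog: [Int: String]) -> String? {
    accountProducts[signature] ?? catalog[signature]
}

let identified = identifyProduct(signature: 42,
                                 accountProducts: [42: "smart-speaker-01"],
                                 catalog: [42: "smart-speaker"])
```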
Additionally or alternatively, in some examples, the first input includes a gesture input at a respective location corresponding to a virtual representation of the product at the second computing device, and the first annotation corresponds to the gesture input and a respective location corresponding to the first view of the product.
Additionally or alternatively, in some examples, the identification of the product is sent to the second computing device without sending the one or more images.
Some examples of the disclosure may relate to an electronic device including one or more processors; a memory; and one or more programs. The one or more programs may be stored in the memory and may be configured to be executed by the one or more processors. The one or more programs may include instructions for performing any of the processes described above. Some examples of the disclosure may involve a non-transitory computer readable storage medium storing one or more programs. The one or more programs may include instructions, which when executed by one or more processors of the electronic device, cause the electronic device to perform any of the processes described above.
As described above, in some examples, annotations may be shared using a virtual representation of a physical product. Fig. 6 illustrates an exemplary process for sending an indication of, or an annotation corresponding to, a first input according to examples of the present disclosure. Process 600 is optionally performed at a first computing device (e.g., a customer service representative's electronic device or a computing system corresponding to computing system 100) in communication with one or more input devices and a second computing device (e.g., a user's electronic device or a computing system corresponding to computing system 100). In some examples, some operations in process 600 are optionally combined, omitted, and/or altered. In some examples, process 600 is performed by processor 102 and memory 104. At 602, an identification of a product is received from the second computing device. At 604, a first view of the product is presented that includes a virtual representation of the product. At 606, an indication of a first input is detected using the one or more input devices, where the first input includes an interaction (e.g., by the customer service representative) with the virtual representation of the product. At 608, an indication of the first input, or a first annotation corresponding to the first input, is sent to the second computing device.
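Mirroring the earlier sketch of process 500, the representative-side flow at 602-608 might look like the following; the closures stand in for networking, rendering, and input machinery the disclosure leaves open:

```swift
// Representative-side counterpart to process 500; all names are illustrative.
struct DetectedInput { let kind: String; let location: [Double] }

func runProcess600(receiveIdentification: () -> String,
                   presentRepresentation: (String) -> Void,
                   detectInput: () -> DetectedInput,
                   sendIndication: (DetectedInput) -> Void) {
    // 602: receive the product's identification from the user's device.
    let identifier = receiveIdentification()
    // 604: present a first view including a virtual representation of the
    // identified product (rendering elided in this sketch).
    presentRepresentation(identifier)
    // 606: detect the representative's interaction with the representation.
    let input = detectInput()
    // 608: send the indication of the input, or an annotation derived from
    // it, to the user's device.
    sendIndication(input)
}
```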
Additionally or alternatively, in some examples, presenting the first view further includes presenting a first annotation corresponding to the first input.
Additionally or alternatively, in some examples, the process further includes receiving a first state of the product from the second computing device. The presented first view of the product may include the virtual representation of the product in the first state.
Additionally or alternatively, in some examples, the process further comprises: receiving, from the second computing device, a second state of the product, the second state corresponding to a modification of the product, detected by the second computing device, that corresponds to the first annotation; presenting a second view of the product, the second view comprising the virtual representation of the product in the second state of the product; and stopping the presentation of the first annotation. Additionally or alternatively, in some examples, the process further comprises: detecting, using the one or more input devices, an indication of a second input, and sending, to the second computing device, the indication of the second input or a second annotation corresponding to the second input. Additionally or alternatively, in some examples, presenting the second view further includes presenting the second annotation corresponding to the second input.
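A brief sketch of this state synchronization on the representative's side, under the same hypothetical types as the earlier sketches:

```swift
// When the user's device reports the second state (the modification the
// annotation asked for), swap in the second view and retire the annotation.
struct RepresentationView {
    var productState: String
    var annotation: String?
}

func applyReportedState(_ secondState: String,
                        to view: inout RepresentationView) {
    view.productState = secondState  // representation in the second state
    view.annotation = nil            // stop presenting the first annotation
}
```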
Additionally or alternatively, in some examples, the first input includes a rotation input at a respective location corresponding to a virtual representation of the product, and the first annotation includes a virtual arrow having a curved shape that represents a rotation at the respective location corresponding to a view of the product presented at the second computing device. Additionally or alternatively, in some examples, the first annotation comprises an animation on or corresponding to the virtual representation of the product.
Additionally or alternatively, in some examples, the process further includes concurrently presenting, with the first view of the product, a second view of the product that includes a second virtual representation of the product.
Additionally or alternatively, in some examples, the first input is detected as interacting with the virtual representation of the product, and the second virtual representation of the product represents a state of the product detected by the second computing device.
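The two concurrent representations can be pictured as a two-pane view in which only the mirrored pane tracks reported product state; the structure and string-valued states below are illustrative only:

```swift
// One pane the representative manipulates directly; one pane mirroring the
// product state last reported by the user's device.
struct TwoPaneSupportView {
    var interactiveRepresentation: String  // driven by the representative's input
    var mirroredRepresentation: String     // driven by reported product state
}

var panes = TwoPaneSupportView(interactiveRepresentation: "lid-open",
                               mirroredRepresentation: "lid-closed")
// A new state report from the user's device updates only the mirrored pane.
panes.mirroredRepresentation = "lid-open"
```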
Additionally or alternatively, in some examples, the first view of the product includes a first annotation and the second view of the product includes the same first annotation. In some examples, the first view of the product may include a first annotation and the second view of the product may include a different second annotation. In some examples, the first view of the product may include a first animation and the second view of the product may include the same first animation. In some examples, the first view of the product may include a first animation and the second view of the product may include a different second animation.
Additionally or alternatively, in some examples, determining the identification of the product includes determining the identification based on a user account shared by the product and the second computing device and/or a catalog of products.
Additionally or alternatively, in some examples, the first input includes a gesture input at a respective location corresponding to the virtual representation of the product, and the first annotation corresponds to the gesture input and a respective location corresponding to a first view of the product presented at the second computing device.
Some examples of the disclosure may relate to an electronic device including one or more processors; a memory; and one or more programs. The one or more programs may be stored in the memory and may be configured to be executed by the one or more processors. The one or more programs may include instructions for performing any of the processes described above. Some examples of the disclosure may involve a non-transitory computer readable storage medium storing one or more programs. The one or more programs may include instructions, which when executed by one or more processors of the electronic device, cause the electronic device to perform any of the processes described above.
Although examples of the present disclosure have been fully described with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art. It is to be understood that such variations and modifications are to be considered included within the scope of the examples of the present disclosure as defined by the appended claims.

Claims (15)

1. A method, comprising:
at a first computing device in communication with one or more input devices and a second computing device:
receiving an identification of an object from the second computing device;
presenting a first view of the object comprising a first virtual representation of the object;
detecting, using the one or more input devices, an indication of a first input, the first input comprising an interaction with the first virtual representation of the object; and
sending, to the second computing device, an indication of the first input or a first annotation corresponding to the first input.
2. The method of claim 1, wherein presenting the first view further comprises presenting the first annotation corresponding to the first input.
3. The method of any of claims 1-2, further comprising:
receiving a first state of the object from the second computing device;
wherein the first view of the object comprises the first virtual representation of the object in the first state.
4. The method of any of claims 1-3, further comprising:
receiving a second state of the object from the second computing device, the second state corresponding to a modification of the object detected by the second computing device that corresponds to the first annotation;
presenting a second view of the object, the second view comprising the first virtual representation of the object in the second state; and
stopping presentation of the first annotation.
5. The method of any of claims 1-4, further comprising:
detecting an indication of a second input using the one or more input devices; and
sending, to the second computing device, the indication of the second input or a second annotation corresponding to the second input.
6. The method of claim 5, wherein presenting the second view further comprises presenting the second annotation corresponding to the second input.
7. The method of any of claims 1-6, wherein the first input comprises a rotational input at a respective location corresponding to the first virtual representation of the object, and the first annotation comprises a virtual arrow having a curved shape representing the rotational input at the respective location corresponding to a view of the object presented at the second computing device.
8. The method of any of claims 1-7, wherein the first annotation comprises an animation on or corresponding to the first virtual representation of the object.
9. The method of any of claims 1-8, further comprising:
presenting, simultaneously with the first view of the object, a second view of the object comprising a second virtual representation of the object.
10. The method of claim 9, wherein the first input is detected as interacting with the first virtual representation of the object, and wherein the second virtual representation of the object represents a state of the object detected by the second computing device.
11. The method of any of claims 1-10, wherein:
the first view of the object comprises the first annotation and the second view of the object comprises the first annotation;
the first view of the object comprises the first annotation and the second view of the object comprises a second annotation;
the first view of the object comprises a first animation and the second view of the object comprises the first animation; or
the first view of the object comprises a first animation and the second view of the object comprises a second animation.
12. The method of any of claims 1-11, wherein determining the identification of the object comprises determining the identification based on one or more of:
a user account shared by the object and the second computing device; or
a catalog of objects.
13. The method of any of claims 1-12, wherein the first input comprises a gesture input at a respective location corresponding to the first virtual representation of the object, and the first annotation corresponds to the gesture input and the respective location corresponding to a first view of the object presented at the second computing device.
14. An electronic device, comprising:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for performing the method of any of claims 1-13.
15. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of an electronic device, cause the electronic device to perform the method of any of claims 1-13.
CN202310091239.3A 2022-01-28 2023-01-20 System and method for booting instructions or support using virtual objects Pending CN116520975A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410208475.3A CN118092645A (en) 2022-01-28 2023-01-20 System and method for booting instructions or support using virtual objects

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US63/304,458 2022-01-28
US18/156,342 US20230245410A1 (en) 2022-01-28 2023-01-18 Systems and methods of guided instructions or support using a virtual object
US18/156,342 2023-01-18

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202410208475.3A Division CN118092645A (en) 2022-01-28 2023-01-20 System and method for booting instructions or support using virtual objects

Publications (1)

Publication Number Publication Date
CN116520975A true CN116520975A (en) 2023-08-01

Family

ID=87392843

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202410208475.3A Pending CN118092645A (en) 2022-01-28 2023-01-20 System and method for booting instructions or support using virtual objects
CN202310091239.3A Pending CN116520975A (en) 2022-01-28 2023-01-20 System and method for booting instructions or support using virtual objects

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202410208475.3A Pending CN118092645A (en) 2022-01-28 2023-01-20 System and method for booting instructions or support using virtual objects

Country Status (1)

Country Link
CN (2) CN118092645A (en)

Also Published As

Publication number Publication date
CN118092645A (en) 2024-05-28

Similar Documents

Publication Publication Date Title
US11093045B2 (en) Systems and methods to augment user interaction with the environment outside of a vehicle
KR102473259B1 (en) Gaze target application launcher
US20210407203A1 (en) Augmented reality experiences using speech and text captions
US11678004B2 (en) Recording remote expert sessions
US11340707B2 (en) Hand gesture-based emojis
US9483113B1 (en) Providing user input to a computing device with an eye closure
US9288471B1 (en) Rotatable imaging assembly for providing multiple fields of view
CN110476142A (en) Virtual objects user interface is shown
EP3427125A1 (en) Intelligent object sizing and placement in augmented / virtual reality environment
US20170193302A1 (en) Task management system and method using augmented reality devices
CN107810465A (en) For producing the system and method for drawing surface
CN108038726B (en) Article display method and device
US11954268B2 (en) Augmented reality eyewear 3D painting
CN104364753A (en) Approaches for highlighting active interface elements
JP2013141207A (en) Multi-user interaction with handheld projectors
WO2013112603A1 (en) Recognition of image on external display
US9389703B1 (en) Virtual screen bezel
US20160227868A1 (en) Removable face shield for augmented reality device
US20210406542A1 (en) Augmented reality eyewear with mood sharing
CN116520975A (en) System and method for booting instructions or support using virtual objects
US20230245410A1 (en) Systems and methods of guided instructions or support using a virtual object
WO2023028569A1 (en) Product comparison and upgrade in a virtual environment
SE1500055A1 (en) Method and data presenting device for facilitating work at an industrial site assisted by a remote user and a process control system

Legal Events

Date Code Title Description
PB01 Publication