WO2015072195A1 - Display control device, display control method, and program - Google Patents
Display control device, display control method, and program
- Publication number: WO2015072195A1 (Application PCT/JP2014/071387)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- annotation
- user
- display
- display control
- output
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/10—Text processing
- G06F40/166—Editing, e.g. inserting or deleting
- G06F40/169—Annotation, e.g. comment data or footnotes
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/60—Editing figures and text; Combining figures or text
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/15—Conference systems
- the present disclosure relates to a display control device, a display control method, and a program.
- In recent years, a technology called augmented reality (AR), which superimposes additional information on the real world and presents it to the user, has attracted attention.
- Information presented to the user by AR technology is also called an annotation, and can be visualized using virtual objects of various forms, such as text, icons, or animations.
- For example, Patent Document 1 describes a technique for realizing operations on such AR virtual objects without impairing the user's sense of immersion in the AR space.
- The AR technology proposed in Patent Document 1 and elsewhere is still in an early stage of development, and it is hard to say that techniques for utilizing AR in a variety of situations have been sufficiently proposed.
- For example, techniques for facilitating interaction between users by means of AR technology are among those that have not yet been sufficiently proposed.
- The present disclosure therefore proposes a new and improved display control device, display control method, and program capable of facilitating interaction between users by means of AR technology.
- According to the present disclosure, there is provided a display control device including: an image acquisition unit that acquires, in real time, a moving image corresponding to the field of view of a first user; a display control unit that displays the moving image toward a second user different from the first user; and an annotation detection unit that detects that an annotation input by the second user with respect to the moving image has been displayed in the field of view of the first user or output for such display, wherein the display control unit further displays the displayed or output annotation toward the second user.
- According to the present disclosure, there is also provided a display control method including: acquiring, in real time, a moving image corresponding to the field of view of a first user; displaying the moving image toward a second user different from the first user; detecting, by a processor, that an annotation input by the second user with respect to the moving image has been displayed in the field of view of the first user or output for such display; and displaying the displayed or output annotation toward the second user.
- According to the present disclosure, there is further provided a program for causing a computer to realize: a function of acquiring, in real time, a moving image corresponding to the field of view of a first user; a function of displaying the moving image toward a second user different from the first user; a function of detecting that an annotation input by the second user with respect to the moving image has been displayed in the field of view of the first user or output for such display; and a function of displaying the displayed or output annotation toward the second user.
- According to the present disclosure as described above, interaction between users by means of AR technology can be made smoother.
- FIG. 1 is a diagram illustrating a schematic configuration of a system according to a first embodiment of the present disclosure.
- FIG. 2 is a diagram illustrating a schematic configuration of the devices according to the first embodiment of the present disclosure.
- FIG. 3 is a diagram illustrating a schematic functional configuration of the system according to the first embodiment of the present disclosure.
- FIG. 4 is a diagram illustrating a display example on the wearable display in the first embodiment of the present disclosure.
- FIG. 5 is a diagram illustrating a display example on the tablet terminal in the first embodiment of the present disclosure.
- FIG. 6 is a diagram illustrating a schematic functional configuration of a system according to a modification of the first embodiment of the present disclosure.
- FIG. 7 is a diagram illustrating a schematic functional configuration of a system according to a second embodiment of the present disclosure.
- FIG. 8 is a diagram illustrating an example of a sequence in the second embodiment of the present disclosure.
- FIG. 9 is a diagram illustrating a display example on the tablet terminal in the second embodiment of the present disclosure.
- FIG. 10 is a diagram illustrating a schematic functional configuration of a system according to a third embodiment of the present disclosure.
- FIG. 11 is a diagram illustrating a first display example on the tablet terminal when the output rate is temporarily set to 0 in the third embodiment of the present disclosure.
- FIG. 12 is a diagram illustrating a second display example on the tablet terminal when the output rate is temporarily set to 0 in the third embodiment of the present disclosure.
- FIG. 13 is a diagram illustrating a schematic functional configuration of a system according to a fourth embodiment of the present disclosure.
- FIG. 14 is a diagram illustrating a first display example on the tablet terminal in the fourth embodiment of the present disclosure.
- FIG. 15 is a diagram illustrating a second display example on the tablet terminal in the fourth embodiment of the present disclosure.
- FIG. 16 is a block diagram for describing a hardware configuration capable of realizing an information processing device according to an embodiment of the present disclosure.
- FIG. 1 is a diagram illustrating a schematic configuration of a system according to the first embodiment of the present disclosure.
- the system 10 includes a wearable display 100, a smartphone 150, a server 200, and a tablet terminal 300.
- the wearable display 100 and the smartphone 150 are connected by, for example, Bluetooth (registered trademark).
- the server 200 is connected to the smartphone 150 and the tablet terminal 300 via various wired or wireless networks.
- In the system 10, a moving image captured by a camera (head-mounted camera) mounted on the glasses-type wearable display 100 worn by the first user is streamed in real time to the tablet terminal 300 via the smartphone 150 and the server 200.
- The distributed moving image may be processed so that its range, inclination, and the like correspond to the first user's field of view.
- In this way, the second user, viewing the moving image on the tablet terminal 300, can have an experience as if sharing the first user's vision.
- Furthermore, the second user viewing the moving image on the tablet terminal 300 can input annotations to the moving image distributed by streaming.
- Here, an annotation is information presented in addition to the real-space image viewed by the first user, and can take various forms such as text, icons, or animations.
- By inputting an annotation, the second user can, for example, add a comment to something that appears in the first user's field of view, or provide information to the first user. That is, the second user can intervene in the first user's experience through annotations.
- The annotation input by the second user on the tablet terminal 300 is transmitted to the smartphone 150 via the server 200, and is displayed in the first user's field of view on the wearable display 100 under the control of the smartphone 150.
- The annotation may be displayed transparently in the first user's field of view, or may be combined with an image displayed toward the first user. In this way, in the system 10, interaction between users is established via the moving image distributed by streaming.
- FIG. 2 is a diagram illustrating a schematic configuration of an apparatus according to the first embodiment of the present disclosure.
- Note that the components of each device are illustrated only insofar as they relate to the functions of the embodiments described below; each device may further include components that are not illustrated.
- For the detailed configuration of each device, refer to the description of the hardware configuration of the information processing device given later.
- Hereinafter, the configuration of each device included in the system 10 will be described with reference to FIG. 2.
- the wearable display 100 includes a display 110, a camera 120, and a communication unit 130.
- The display 110 is, for example, an LCD or an organic EL display, and presents various kinds of information to the first user wearing the wearable display 100. More specifically, the display 110 may be of a transmissive type or of a closed type. When the display 110 is of the transmissive type, the first user views the surrounding real space directly through the display 110, and the display 110 electronically displays additional information such as annotations superimposed on the real-space image. When the display 110 is of the closed type, the first user views the surrounding real space indirectly, through a real-space image generated by processing a moving image captured by the camera 120 so as to correspond to the first user's field of view, and the display 110 displays an image in which additional information such as annotations is combined with that real-space image. In the following description, the display 110 may be of either the transmissive type or the closed type unless otherwise specified.
- the camera 120 is the above-described head mounted camera.
- A moving image captured by the camera 120 is processed by the processor of one of the devices described later so as to correspond to the first user's field of view, and is then displayed on the tablet terminal 300 toward the second user.
- the communication unit 130 is a communication circuit that performs communication using Bluetooth (registered trademark) with the communication unit 180 a of the smartphone 150.
- the display 110 and the camera 120 included in the wearable display 100 are remotely controlled by the smartphone 150.
- the wearable display 100 may include a processor and a memory for controlling the display 110 and the camera 120 and other information processing.
- Alternatively, a display or a camera included in the smartphone 150 may be used instead of those of the wearable display 100. That is, the functions of the wearable display 100 and the smartphone 150 in the present embodiment may be realized by a single device including a display, a camera, a processor, and a memory, or may be realized distributed across a plurality of devices as in the illustrated example.
- the smartphone 150 includes a processor 160, a memory 170, and a communication unit 180.
- The processor 160 executes various kinds of information processing in the smartphone 150. For example, the processor 160 performs control for displaying, on the display 110 of the wearable display 100, an annotation received from the server 200 via the communication unit 180b. At this time, the processor 160 may notify the server 200 that the annotation has been displayed on the display 110. Further, the processor 160 may process the moving image captured by the camera 120 of the wearable display 100 in order to distribute it to the tablet terminal 300 via the server 200. The notification of the annotation display may be transmitted to the server 200 via the communication unit 180b together with the processed moving image.
- the memory 170 stores various data used in processing by the processor 160.
- the functions realized by the processor 160 of the smartphone 150, the processor 210 of the server 200, and the processor 310 of the tablet terminal 300 are interchangeable.
- functionality described as being implemented by processor 160 may be implemented by processor 210 or processor 310 in other embodiments.
- functionality described as being implemented by processor 210 may be implemented by processor 160 or processor 310 in other embodiments.
- Likewise, functions described as being realized by the processor 310 may be realized by the processor 160 or the processor 210 in other embodiments. Similarly, the data stored in the memory 170 of the smartphone 150, the memory 220 of the server 200, and the memory 320 of the tablet terminal 300 in each embodiment depends on the functions realized by the processor of the respective device.
- The communication unit 180 includes a communication unit 180a, which is a communication circuit that communicates with the communication unit 130 of the wearable display 100 via Bluetooth (registered trademark), and a communication unit 180b, which is a communication circuit that performs network communication with the communication unit 230 of the server 200. The network communication between the communication unit 180b and the communication unit 230 can be performed via various wired or wireless networks such as Wi-Fi, a mobile phone network, or the Internet. The same applies to the network communication between the communication unit 230 and the communication unit 330 of the tablet terminal 300.
- the server 200 includes a processor 210, a memory 220, and a communication unit 230.
- the processor 210 executes various types of information processing in the server 200.
- the processor 210 transfers the annotation received from the tablet terminal 300 via the communication unit 230 to the smartphone 150.
- the server 200 may aggregate annotations input from the respective tablet terminals 300 and transfer them to the smartphone 150.
- the processor 210 distributes the moving image received from the smartphone 150 via the communication unit 230 to one or a plurality of tablet terminals 300.
- the processor 210 may deliver information related to the annotation displayed on the wearable display 100 to the tablet terminal 300 based on the notification from the smartphone 150.
- Alternatively, the processor 210 may distribute, to the tablet terminal 300, information on the annotations that have been output to the smartphone 150.
- the memory 220 stores various data used for processing in the processor 210.
- the communication unit 230 is a communication circuit that performs network communication with the communication unit 180b of the smartphone 150 and the communication unit 330 of the tablet terminal 300.
- the tablet terminal 300 includes a processor 310, a memory 320, a communication unit 330, a display 340, and a touch panel 350.
- The processor 310 executes various kinds of information processing in the tablet terminal 300. For example, the processor 310 performs control for displaying, on the display 340, the moving image received from the server 200 via the communication unit 330. At this time, the processor 310 may cause the display 340 to display, together with the moving image, information indicating the annotation that has been displayed in the first user's field of view on the wearable display 100, or that has been output from the server 200 or the smartphone 150 for such display. Further, for example, the processor 310 transmits an annotation input by the second user via the touch panel 350 to the server 200 via the communication unit 330.
- the memory 320 stores various data used for processing in the processor 310.
- the communication unit 330 is a communication circuit that performs network communication with the communication unit 230 of the server 200.
- The display 340 is, for example, an LCD or an organic EL display, and presents various kinds of information under the control of the processor 310. For example, the display 340 displays the moving image corresponding to the first user's field of view, generated based on the image captured by the camera 120 of the wearable display 100, together with information indicating the annotations displayed in the first user's field of view or output for such display. Further, the display 340 may display a GUI (Graphical User Interface) for the second user to input annotations to the moving image.
- the touch panel 350 is disposed on the surface of the display 340 and detects a user's touch as an input.
- the touch panel 350 detects, for example, text input using a software keyboard, selection input of an image or the like, input of handwritten characters or figures, and the like. Characters and images input via the touch panel 350 are processed as annotations by the processor 310 and transmitted to the server 200 via the communication unit 330.
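As a purely illustrative rendering of this input path, the following minimal sketch shows how an annotation captured on the touch panel might be packaged before being handed to the communication unit. The patent does not specify any wire format; the class name, field names, and endpoint shown here are hypothetical assumptions.

```python
# Hypothetical annotation payload as sent from the tablet terminal to the
# server. All names (Annotation, send_annotation, the JSON fields, the
# "/annotations" endpoint) are illustrative assumptions, not part of the
# patent disclosure.
import json
import time
import uuid
from dataclasses import asdict, dataclass


@dataclass
class Annotation:
    kind: str          # "text" or "stamp"
    body: str          # text content, or a stamp identifier
    author: str        # name of the second user who input the annotation
    created_at: float  # client timestamp (seconds since the epoch)
    annotation_id: str = ""


def send_annotation(kind: str, body: str, author: str) -> Annotation:
    """Package an input detected on the touch panel; printing stands in
    for the network transmission performed by the communication unit."""
    ann = Annotation(kind, body, author, time.time(), uuid.uuid4().hex)
    print("POST /annotations", json.dumps(asdict(ann)))
    return ann


if __name__ == "__main__":
    send_annotation("text", "Nice view!", "user_b")
    send_annotation("stamp", "thumbs_up", "user_b")
```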
- In the present embodiment, the moving image corresponding to the field of view of the first user wearing the wearable display 100 is distributed to the tablet terminal 300; in other embodiments, however, the moving image may be distributed, in addition to or instead of the tablet terminal 300, to various devices including a display and an input device, such as a desktop or notebook personal computer, a television, a smartphone, a media player, or a game machine.
- the moving image may be distributed to a wearable display different from the wearable display 100.
- the input device is not limited to the touch panel exemplified in the present embodiment, and may be a keyboard, a mouse, a hardware button, or the like.
- voice input or gesture input may be used for inputting the annotation.
- FIG. 3 is a diagram illustrating a schematic functional configuration of the system according to the first embodiment of the present disclosure.
- the system 10 includes an image processing unit 251, an image acquisition unit 253, a display control unit 255, an annotation output control unit 257, a display control unit 259, and an annotation detection unit 261 as functional configurations.
- These functional configurations may be realized by, for example, any of the processor 160 of the smartphone 150, the processor 210 of the server 200, or the processor 310 of the tablet terminal 300, or may be realized by being distributed among these processors.
- each functional configuration will be further described.
- the image processing unit 251 processes a moving image captured by the camera 120 mounted on the wearable display 100, and generates a moving image corresponding to the field of view of the first user wearing the wearable display 100. For example, the image processing unit 251 cuts out a region corresponding to the field of view from a moving image obtained by imaging a range wider than the field of view of the first user, according to the result of calibration executed in advance. For example, the image processing unit 251 may correct the inclination of the moving image based on the difference in position between the camera 120 and the viewpoint of the first user.
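A minimal sketch of these two steps, tilt correction followed by a field-of-view crop, is shown below. The use of OpenCV and NumPy and all calibration constants are assumptions for illustration; the patent does not specify the calibration procedure or its parameters.

```python
# Sketch of the processing attributed to the image processing unit 251:
# rotate the captured frame to cancel the tilt between the head-mounted
# camera and the user's viewpoint, then cut out the region corresponding
# to the user's field of view. Constants below are invented examples.
import cv2
import numpy as np

TILT_DEG = 3.5             # assumed camera roll relative to the viewpoint
VIEW_X, VIEW_Y = 80, 60    # assumed top-left corner of the view region
VIEW_W, VIEW_H = 480, 360  # assumed size of the view region


def to_field_of_view(frame: np.ndarray) -> np.ndarray:
    """Return the sub-image of `frame` corresponding to the user's view."""
    h, w = frame.shape[:2]
    # Correct the inclination of the captured image.
    m = cv2.getRotationMatrix2D((w / 2, h / 2), TILT_DEG, 1.0)
    upright = cv2.warpAffine(frame, m, (w, h))
    # Cut out the region corresponding to the field of view.
    return upright[VIEW_Y:VIEW_Y + VIEW_H, VIEW_X:VIEW_X + VIEW_W]


if __name__ == "__main__":
    camera_frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # dummy frame
    print(to_field_of_view(camera_frame).shape)  # -> (360, 480, 3)
```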
- the image acquisition unit 253 acquires the moving image processed by the image processing unit 251, that is, the moving image corresponding to the first user's field of view in real time.
- the display control unit 255 displays the acquired moving image on the display 340 of the tablet terminal 300. As described above, the display 340 is viewed by the second user. This second user is different from the first user wearing the wearable display 100.
- the display control unit 255 further displays the annotation on the display 340 based on information provided from the annotation detection unit 261 described later.
- the annotation displayed here is an annotation output for display in the field of view of the first user.
- the annotation output control unit 257 outputs the annotation input by the second user via the touch panel 350 of the tablet terminal 300 so as to be displayed in the first user's field of view.
- the display control unit 259 displays the output annotation on the display 110 of the wearable display 100. Since the wearable display 100 is worn by the first user, the annotation displayed on the display 110 is displayed in the field of view of the first user.
- the display control unit 259 further causes the display 110 to display an image acquired from the image processing unit 251.
- The annotation detection unit 261 detects that the annotation input by the second user with respect to the moving image displayed on the display 340 of the tablet terminal 300 has been output from the annotation output control unit 257. More specifically, the annotation detection unit 261 acquires information indicating the output annotation from the annotation output control unit 257, and provides this information to the display control unit 255.
- Note that the annotation output control unit 257 is desirably realized by the processor of a device other than the tablet terminal 300, because if the annotation output control unit 257 were realized in the tablet terminal 300, the communication delay until the annotation is actually output for display would not be reflected in the detection.
- FIG. 4 is a diagram illustrating a display example on the wearable display according to the first embodiment of the present disclosure. Referring to FIG. 4, a real space image 1010, text 1021, and stamp 1022 are displayed on a screen 1000 displayed on the display 110 of the wearable display 100.
- Here, the real-space image 1010 may be directly viewed through the transmissive display 110, or may be an image electronically displayed on the closed-type display 110.
- the real space image 1010 is an image corresponding to the field of view of the first user wearing the wearable display 100.
- Both the text 1021 and the stamp 1022 are annotations input by the second user on the tablet terminal 300 with respect to the real-space image 1010 distributed as a moving image.
- the text 1021 and the stamp 1022 are transparently superimposed on the real space image 1010 or synthesized into an image constituting the real space image 1010.
- In the illustrated example, the moving image is distributed to a plurality of tablet terminals 300 and annotations are input from the plurality of tablet terminals 300; accordingly, the text 1021 includes the name of the user who input each annotation.
- FIG. 5 is a diagram illustrating a display example on the tablet terminal according to the first embodiment of the present disclosure. Referring to FIG. 5, a moving image 3010, text 3021, a stamp 3022, a text input box 3040, and a stamp selection box 3050 are displayed on a screen 3000 displayed on the display 340 of the tablet terminal 300.
- the moving image 3010 is an image corresponding to the first user's field of view generated by processing a moving image captured by the camera 120 mounted on the wearable display 100.
- the moving image 3010 is displayed in real time except for the time difference due to communication delay or the like. That is, the moving image 3010 is substantially synchronized with the real space image 1010 displayed on the wearable display 100.
- The text 3021 and the stamp 3022 are annotations that have been output for display in the first user's field of view on the wearable display 100.
- these annotations are synthesized with the moving image 3010.
- the display control unit 255 may execute a process of combining the annotation with the moving image 3010.
- the annotation may be displayed separately from the moving image 3010.
- the text input box 3040 is displayed for inputting the annotation text.
- the second user using the tablet terminal 300 places a cursor in the text input box 3040 using the touch panel 350, for example, and inputs text using the software keyboard.
- a stamp selection box 3050 is displayed for inputting an annotation stamp.
- the second user uses the touch panel 350 to select a stamp indicating evaluation or impression on the moving image 3010.
- The text and stamps input using the text input box 3040 and the stamp selection box 3050 are first transmitted from the tablet terminal 300 to the server 200, and are displayed based on information acquired via the annotation detection unit 261.
- Accordingly, there is a slight time lag before text or a stamp input using the text input box 3040 or the stamp selection box 3050 is displayed as the text 3021 or the stamp 3022, and this lag reflects, at least in part, the difference between the time at which the annotation is transmitted and the time at which it is displayed.
- FIG. 6 is a diagram illustrating a schematic functional configuration of a system according to a modified example of the first embodiment of the present disclosure.
- Referring to FIG. 6, in the system 12 according to this modification, the annotation output control unit 257 outputs the annotation information to the image processing unit 251 in addition to the display control unit 259.
- In the image processing unit 251, in addition to the processing of cutting out a region from the moving image captured by the camera 120 and correcting its inclination, processing of combining the annotation with the generated moving image is executed. That is, in this modification, the annotation has already been combined with the moving image processed by the image processing unit 251, that is, the moving image corresponding to the first user's field of view. Accordingly, the image acquisition unit 253, which acquires the processed moving image, also acquires the annotation information output together with the moving image, and thereby also realizes the function of the annotation detection unit 261.
- In this modification, the display control unit 259 also displays the image acquired from the image processing unit 251 on the display 110, so that the relationship between the moving image and the annotations is the same on the wearable display 100 and on the tablet terminal 300.
- Therefore, the annotations displayed on the display 340 of the tablet terminal 300 are the annotations actually displayed in the first user's field of view.
- FIG. 7 is a diagram illustrating a schematic functional configuration of a system according to the second embodiment of the present disclosure.
- the system 20 includes an image processing unit 251, an image acquisition unit 253, a display control unit 255, an annotation output control unit 257, a display control unit 259, an annotation detection unit 261, and an annotation queue 263 as functional configurations.
- the annotation queue 263 added in the present embodiment can be realized by any of the memory 170 of the smartphone 150, the memory 220 of the server 200, or the memory 320 of the tablet terminal 300, for example.
- these functional configurations will be described with a focus on differences from the first embodiment.
- In the present embodiment, it is assumed that the first user wearing the wearable display 100 is active in the real space.
- The first user's field of view needs to be kept clear to some extent so that the first user can act safely and can see what he or she wants to see. It is therefore desirable to limit the area of the real-space image 1010 displayed on the display 110 over which the text 1021 and the stamp 1022 are superimposed or combined; in the example of FIG. 4, the number of texts 1021 and stamps 1022 displayed simultaneously on the display 110 may be limited to, for example, two each.
- Moreover, since the first user is active in the real space, the first user cannot always devote full attention to viewing the displayed annotations. That is, it can be assumed that the first user often views annotations only in passing, while doing something else. It is therefore desirable for the display of the same annotation to continue longer than in a case where the user can concentrate on viewing it.
- For these reasons, in the present embodiment, the number of annotations displayed simultaneously in the first user's field of view is limited to a predetermined threshold or less, and/or the display of the same annotation is continued for at least a predetermined threshold time. Accordingly, the annotation output control unit 257 outputs the annotations input by one or more second users via the touch panels 350 of the tablet terminals 300 at a predetermined rate determined by the number of simultaneously displayed annotations and/or the display duration. When annotations are input at a rate exceeding the predetermined output rate, the annotation output control unit 257 queues the excess annotations in the annotation queue 263.
- When the annotation output control unit 257 queues annotations as described above, communication between the second user who inputs an annotation and the first user who views it may not go smoothly if the second user cannot tell at what timing the input comment will be displayed in the first user's field of view.
- Therefore, in the present embodiment, the annotation detection unit 261 detects that an annotation has been output from the annotation output control unit 257, and provides information indicating the output annotation to the display control unit 255.
- Based on this information, the display control unit 255 displays the output annotation on the display 340 of the tablet terminal 300. The second user can thus know the timing at which an input comment is output for display in the first user's field of view, and can, for example, input a second comment after the first comment has been output.
- the annotation detection unit 261 may detect that the input annotation is queued for display or output.
- information indicating the queued annotation is provided to the display control unit 255.
- the display control unit 255 displays the queued annotation on the display 340 based on this information.
- the second user can know that the input annotation is in a queued state.
- the second user can also predict the timing at which the input annotation is displayed in the first user's field of view.
- Furthermore, the annotation output control unit 257 may delete at least some of the queued annotations based on an operation of the second user via the touch panel 350, as in the sketch below. This allows the second user, by his or her own operation, to remove from the queue an annotation that has lost its relevance, for example because too much time has passed while it was queued. Also, when there are a plurality of second users, each user can perform this operation, which prevents annotations that have lost their meaning over time from obstructing the first user's view.
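The following minimal sketch illustrates this behavior: annotations are released at a predetermined rate, anything arriving faster waits in a queue, and queued entries can be deleted. The class and method names, and the simple one-per-interval release policy, are assumptions for illustration; the patent leaves the concrete policy (display count and/or display duration) open.

```python
# Sketch of the annotation output control unit 257 together with the
# annotation queue 263. Names and the release policy are illustrative.
import time
from collections import deque
from typing import Optional


class AnnotationOutputControl:
    def __init__(self, output_rate_hz: float):
        self.min_interval = 1.0 / output_rate_hz  # seconds between outputs
        self.queue: deque = deque()               # stands in for queue 263
        self.last_output = float("-inf")

    def submit(self, annotation: str) -> None:
        # Input beyond the predetermined rate simply waits in the queue.
        self.queue.append(annotation)

    def pump(self, now: Optional[float] = None) -> Optional[str]:
        # Release the next annotation if the output rate allows it.
        now = time.monotonic() if now is None else now
        if self.queue and now - self.last_output >= self.min_interval:
            self.last_output = now
            return self.queue.popleft()  # output toward the display 110
        return None

    def delete_queued(self, annotation: str) -> bool:
        # Let the second user remove a queued annotation that has lost
        # its relevance while waiting.
        try:
            self.queue.remove(annotation)
            return True
        except ValueError:
            return False
```

In this sketch, pump() would be driven periodically on the output side, while submit() and delete_queued() would be invoked by messages arriving from the tablet terminal 300.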
- FIG. 8 is a diagram illustrating an example of a sequence according to the second embodiment of the present disclosure.
- FIG. 8 illustrates a case where the display control unit 255, the annotation output control unit 257, and the annotation detection unit 261 are realized by the server 200.
- the input annotation is transmitted from the tablet terminal 300 to the server 200 (S101).
- the annotation output control unit 257 queues the annotation here.
- the annotation detection unit 261 detects that the annotation has been queued, and provides information indicating the annotation to the display control unit 255.
- In accordance with information transmitted from the display control unit 255, the tablet terminal 300 displays the queued annotation on the display 340 (S103).
- Next, the annotation output control unit 257 takes the annotation out of the annotation queue 263 and outputs it for display on the wearable display 100 (S105).
- The annotation detection unit 261 detects that the annotation has been output, and provides information indicating the annotation to the display control unit 255.
- In accordance with information transmitted from the display control unit 255, the tablet terminal 300 displays the output annotation on the display 340 (S107).
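The S101 to S107 exchange can be condensed into the sketch below, with the display control unit 255, annotation output control unit 257, and annotation detection unit 261 realized on the server as in the illustrated example. The function and event names are invented for illustration; transport and timing are omitted.

```python
# Compact sketch of the FIG. 8 sequence. "notify_tablet" and
# "notify_wearable" stand in for the display control units' messages.
from typing import Callable, List

Notify = Callable[[str, str], None]


def handle_annotation(ann: str, queue: List[str],
                      notify_tablet: Notify) -> None:
    queue.append(ann)             # S101: annotation received and queued
    notify_tablet("queued", ann)  # S103: tablet shows the queued state


def output_next(queue: List[str], notify_tablet: Notify,
                notify_wearable: Notify) -> None:
    if queue:
        ann = queue.pop(0)
        notify_wearable("display", ann)  # S105: output for display
        notify_tablet("output", ann)     # S107: tablet shows output state


if __name__ == "__main__":
    q: List[str] = []

    def log(target: str) -> Notify:
        return lambda kind, ann: print(target, kind, ann)

    handle_annotation("Nice view!", q, log("tablet"))
    output_next(q, log("tablet"), log("wearable"))
```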
- FIG. 9 is a diagram illustrating a display example on the tablet terminal according to the second embodiment of the present disclosure. Note that display examples on the wearable display 100 are the same as those in the first embodiment described above, and thus the description thereof is omitted.
- Referring to FIG. 9, a screen 3100 displayed on the display 340 of the tablet terminal 300 includes a moving image 3010, text 3021, a stamp 3022, a text input box 3040, and a stamp selection box 3050 similar to those of the screen 3000 described in the first embodiment with reference to FIG. 5, as well as queued text 3131 and queued stamps 3132.
- The queued text 3131 and stamps 3132 are displayed in a manner different from the text 3021 and stamp 3022 combined with the moving image 3010. More specifically, the text 3131 and the stamps 3132 are arranged below the moving image 3010, and this arrangement corresponds to the state of the annotation queue 263. That is, in the state of the illustrated example, the annotation output control unit 257 will next output the text 3131a and the stamp 3132a, then the text 3131b and the stamp 3132b, and then the text 3131c and the stamp 3132c.
- FIG. 10 is a diagram illustrating a schematic functional configuration of a system according to the third embodiment of the present disclosure.
- an output rate setting unit 265 is further realized as a functional configuration.
- the wearable display 100 is provided with operation buttons / sensors 140.
- the output rate setting unit 265 may set a predetermined rate at which the annotation output control unit 257 outputs an annotation based on, for example, a first user operation via an operation button.
- the output rate setting unit 265 may set a predetermined rate at which the annotation output control unit 257 outputs the annotation based on the first user's sensing information acquired by the sensor.
- For example, the output rate of annotations can be raised by the first user's operation via the operation button or the like.
- the annotation may be displayed using a larger area of the display 110, or the time for which the same annotation display is continued may be shortened.
- the operation buttons of the wearable display 100 may be replaced by an input device such as a touch panel provided in the smartphone 150, voice input, gesture input, or the like.
- Further, based on an operation of the first user, the output rate setting unit 265 may set the output rate of annotations by the annotation output control unit 257 to a maximum value until the amount of annotations queued in the annotation queue 263 falls below a predetermined threshold. When the output rate is set to the maximum value, annotations are displayed using, for example, the entire display 110, and the display duration of each annotation is set to a minimum.
- The output rate setting unit 265 may also estimate the surrounding environment of the first user based on the sensing information of the first user, and set the output rate based on the estimation result. More specifically, for example, the output rate setting unit 265 may lower the output rate when it is estimated, based on the detection result of an acceleration sensor mounted on the wearable display 100, that the first user is in motion (for example, walking or running). Likewise, the output rate setting unit 265 may lower the output rate when it is estimated, based on the detection result of a sound sensor mounted on the wearable display 100, that the first user is in a noisy place.
- the output rate setting unit 265 may temporarily set the output rate to 0 when lowering the output rate.
- the annotation output control unit 257 does not output the annotation, and the annotation is not displayed on the display 110 of the wearable display 100.
- In this case, fewer objects are superimposed on the real-space image on the display 110 (objects other than annotations may continue to be displayed), which makes it easier for the first user to view the real space, improves the safety of the first user's activities in the real space, and makes it easier for the first user to concentrate on what he or she wants to see.
- the output rate setting unit 265 may temporarily set the output rate to 0 in accordance with a first user operation via an operation button or the like.
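The sensing-based rate setting described above might look like the following sketch. All thresholds and rate values are invented for illustration; the patent specifies neither concrete values nor the exact estimation method.

```python
# Sketch of the output rate setting unit 265: lower the rate when the
# accelerometer suggests the user is moving, lower it in noisy
# surroundings, and allow a temporary rate of 0 (display suppressed).
BASE_RATE_HZ = 0.5         # assumed normal annotation output rate
REDUCED_RATE_HZ = 0.2      # assumed rate while moving or in noise
MOTION_THRESHOLD = 1.5     # assumed acceleration variance (m/s^2)
NOISE_THRESHOLD_DB = 70.0  # assumed ambient sound level


def set_output_rate(accel_variance: float, ambient_db: float,
                    user_muted: bool) -> float:
    """Return the predetermined rate for the annotation output control."""
    if user_muted:
        # First user's operation: temporarily 0, so no annotations are
        # displayed on the wearable display.
        return 0.0
    if accel_variance > MOTION_THRESHOLD or ambient_db > NOISE_THRESHOLD_DB:
        # Estimated to be walking/running or in a noisy place: slow down.
        return REDUCED_RATE_HZ
    return BASE_RATE_HZ


if __name__ == "__main__":
    print(set_output_rate(0.3, 45.0, False))  # calm surroundings -> 0.5
    print(set_output_rate(2.2, 45.0, False))  # walking or running -> 0.2
    print(set_output_rate(0.3, 45.0, True))   # muted by the user  -> 0.0
```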
- FIG. 11 is a diagram illustrating a first display example on the tablet terminal when the output rate is temporarily set to 0 in the third embodiment of the present disclosure. Since the wearable display 100 is in a state in which no annotation is displayed, illustration of the display example and detailed description thereof are omitted.
- Referring to FIG. 11, a screen 3200 displayed on the display 340 of the tablet terminal 300 includes a moving image 3010, a text input box 3040, and a stamp selection box 3050 similar to those of the screen 3000 described with reference to FIG. 5 in the first embodiment, as well as an annotation non-display notification 3260.
- The annotation non-display notification 3260 is displayed based on, for example, information provided from the annotation output control unit 257 to the display control unit 255 via the annotation detection unit 261 when the output rate setting unit 265 temporarily sets the annotation output rate to 0.
- Alternatively, the annotation non-display notification 3260 may be displayed based on information provided from the annotation output control unit 257 to the display control unit 255 via the annotation detection unit 261 when the second user inputs a new annotation on the tablet terminal 300 while the annotation output rate is temporarily set to 0.
- the annotation output control unit 257 queues the annotation input while the output rate is 0 in the annotation queue 263.
- FIG. 12 is a diagram illustrating a second display example on the tablet terminal when the output rate is temporarily set to 0 in the third embodiment of the present disclosure.
- Referring to FIG. 12, a screen 3300 displayed on the display 340 of the tablet terminal 300 includes a moving image 3010, a text input box 3040, a stamp selection box 3050, and queued text 3131 and stamps 3132 similar to those of the screen 3100 described with reference to FIG. 9 in the second embodiment, as well as an annotation non-display notification 3260 similar to that of the first example above.
- From this display, the second user using the tablet terminal 300 can grasp that input annotations are temporarily not being displayed in the first user's field of view, and in what order the annotations will be displayed when annotation display resumes. Therefore, as in the example described in the second embodiment, when an annotation has lost its relevance with the passage of time while annotation display is suppressed, the second user who input it can delete that annotation from the queue.
- FIG. 13 is a diagram illustrating a schematic functional configuration of a system according to the fourth embodiment of the present disclosure.
- a gaze detection unit 267 is further realized as a functional configuration.
- the wearable display 100 is provided with an eye camera 145.
- The gaze detection unit 267 detects the line of sight of the first user wearing the wearable display 100 from an image captured by the eye camera 145, and detects that an annotation displayed on the display 110 has been gazed at by the first user. More specifically, for example, the gaze detection unit 267 may detect that an annotation has been gazed at when the first user's line of sight falls on the displayed annotation, or when the direction of the first user's line of sight substantially coincides with the direction in which the annotation is displayed.
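A minimal sketch of such a check follows, treating an annotation as gazed at when the estimated gaze point falls on or near the screen region in which the annotation is drawn. The tolerance value and data shapes are assumptions for illustration.

```python
# Sketch of the gaze test attributed to the gaze detection unit 267.
# Rect and the 20-pixel tolerance are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float


def is_gazed(gaze_x: float, gaze_y: float, area: Rect,
             tolerance: float = 20.0) -> bool:
    """True if the gaze point is on or near the annotation's region."""
    return (area.x - tolerance <= gaze_x <= area.x + area.w + tolerance
            and area.y - tolerance <= gaze_y <= area.y + area.h + tolerance)


if __name__ == "__main__":
    text_1021 = Rect(600, 80, 200, 40)    # where an annotation is drawn
    print(is_gazed(690, 100, text_1021))  # True: looking at the text
    print(is_gazed(100, 400, text_1021))  # False: looking elsewhere
```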
- the result of detection by the gaze detection unit 267 is provided to the annotation detection unit 261.
- In the present embodiment, the annotation detection unit 261 detects that an annotation has been output based on the information provided from the annotation output control unit 257, and also detects that the output annotation has been gazed at based on the information provided from the gaze detection unit 267.
- The display control unit 255 displays, on the display 340 of the tablet terminal 300, the annotations output for display on the wearable display 100, and further displays on the display 340 a notification that these annotations have been gazed at, that is, visually recognized, by the first user.
- FIG. 14 is a diagram illustrating a first display example on the tablet terminal according to the fourth embodiment of the present disclosure. Note that display examples on the wearable display 100 are the same as those in the first embodiment described above, and thus the description thereof is omitted.
- Referring to FIG. 14, a screen 3400 displayed on the display 340 of the tablet terminal 300 includes a moving image 3010, text 3021, a stamp 3022, a text input box 3040, and a stamp selection box 3050 similar to those of the screen 3000 described with reference to FIG. 5 in the first embodiment, as well as a gaze icon 3470.
- the gaze icon 3470 is displayed for the annotation that has been detected by the annotation detection unit 261 as being watched by the first user.
- By the gaze icon 3470, the second user using the tablet terminal 300 can know not only that the input annotation has been displayed on the wearable display 100, but also that it has been gazed at by the first user.
- FIG. 15 is a diagram illustrating a second display example on the tablet terminal according to the fourth embodiment of the present disclosure.
- Referring to FIG. 15, a screen 3500 displayed on the display 340 of the tablet terminal 300 includes a moving image 3010, text 3021, a stamp 3022, a text input box 3040, a stamp selection box 3050, and queued text 3131 and stamps 3132 similar to those of the screen 3100 described with reference to FIG. 9 in the second embodiment, as well as gaze icons 3470 similar to those of the first example. As shown in the figure, the gaze icon 3470 may also be displayed together with the queued text 3131 and stamps 3132.
- FIG. 16 is a block diagram for describing a hardware configuration capable of realizing the information processing apparatus according to the embodiment of the present disclosure.
- the illustrated information processing apparatus 900 can realize, for example, the smartphone 150, the server 200, the tablet terminal 300, and the like in the above-described embodiment.
- the information processing apparatus 900 includes a CPU (Central Processing unit) 901, a ROM (Read Only Memory) 903, and a RAM (Random Access Memory) 905.
- the information processing apparatus 900 may include a host bus 907, a bridge 909, an external bus 911, an interface 913, an input device 915, an output device 917, a storage device 919, a drive 921, a connection port 923, and a communication device 925.
- the information processing apparatus 900 may include an imaging device 933 and a sensor 935 as necessary.
- The information processing apparatus 900 may include a processing circuit such as a DSP (Digital Signal Processor) or an ASIC (Application Specific Integrated Circuit) instead of, or in addition to, the CPU 901.
- the CPU 901 functions as an arithmetic processing device and a control device, and controls all or a part of the operation in the information processing device 900 according to various programs recorded in the ROM 903, the RAM 905, the storage device 919, or the removable recording medium 927.
- the ROM 903 stores programs and calculation parameters used by the CPU 901.
- the RAM 905 primarily stores programs used in the execution of the CPU 901, parameters that change as appropriate during the execution, and the like.
- The CPU 901, the ROM 903, and the RAM 905 are connected to one another by a host bus 907 configured by an internal bus such as a CPU bus. The host bus 907 is further connected to an external bus 911 such as a PCI (Peripheral Component Interconnect/Interface) bus via a bridge 909.
- the input device 915 is a device operated by the user, such as a mouse, a keyboard, a touch panel, a button, a switch, and a lever.
- the input device 915 may be, for example, a remote control device that uses infrared rays or other radio waves, or may be an external connection device 929 such as a mobile phone that supports the operation of the information processing device 900.
- the input device 915 includes an input control circuit that generates an input signal based on information input by the user and outputs the input signal to the CPU 901. The user operates the input device 915 to input various data and instruct processing operations to the information processing device 900.
- the output device 917 is a device that can notify the user of the acquired information visually or audibly.
- the output device 917 can be, for example, a display device such as an LCD (Liquid Crystal Display), a PDP (Plasma Display Panel), an organic EL (Electro-Luminescence) display, an audio output device such as a speaker and headphones, and a printer device.
- the output device 917 outputs the result obtained by the processing of the information processing device 900 as video such as text or an image, or outputs it as audio such as voice or sound.
- the storage device 919 is a data storage device configured as an example of a storage unit of the information processing device 900.
- the storage device 919 includes, for example, a magnetic storage device such as an HDD (Hard Disk Drive), a semiconductor storage device, an optical storage device, or a magneto-optical storage device.
- the storage device 919 stores programs executed by the CPU 901, various data, various data acquired from the outside, and the like.
- the drive 921 is a reader / writer for a removable recording medium 927 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, and is built in or externally attached to the information processing apparatus 900.
- the drive 921 reads information recorded on the attached removable recording medium 927 and outputs the information to the RAM 905.
- The drive 921 also writes records to the attached removable recording medium 927.
- the connection port 923 is a port for directly connecting a device to the information processing apparatus 900.
- the connection port 923 can be, for example, a USB (Universal Serial Bus) port, an IEEE 1394 port, a SCSI (Small Computer System Interface) port, or the like.
- the connection port 923 may be an RS-232C port, an optical audio terminal, an HDMI (High-Definition Multimedia Interface) port, or the like.
- the communication device 925 is a communication interface configured with, for example, a communication device for connecting to the communication network 931.
- the communication device 925 may be, for example, a communication card for wired or wireless LAN (Local Area Network), Bluetooth (registered trademark), or WUSB (Wireless USB).
- the communication device 925 may be a router for optical communication, a router for ADSL (Asymmetric Digital Subscriber Line), or a modem for various communication.
- the communication device 925 transmits and receives signals and the like using a predetermined protocol such as TCP / IP with the Internet and other communication devices, for example.
- the communication network 931 connected to the communication device 925 is a wired or wireless network, such as the Internet, a home LAN, infrared communication, radio wave communication, or satellite communication.
- The imaging device 933 is a device that images the real space and generates a captured image by using an imaging element such as a CCD (Charge Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor) sensor, together with various members such as a lens for controlling the formation of a subject image on the imaging element.
- the imaging device 933 may capture a still image or may capture a moving image.
- the sensor 935 is various sensors such as an acceleration sensor, a gyro sensor, a geomagnetic sensor, an optical sensor, and a sound sensor.
- The sensor 935 acquires information about the state of the information processing apparatus 900 itself, such as the attitude of the information processing apparatus 900, and information about the surrounding environment of the information processing apparatus 900, such as the brightness and noise around the information processing apparatus 900.
- The sensor 935 may also include a GPS (Global Positioning System) sensor that receives a GPS signal and measures the latitude, longitude, and altitude of the apparatus.
- Each component described above may be configured using a general-purpose member, or may be configured by hardware specialized for the function of each component. Such a configuration can be appropriately changed according to the technical level at the time of implementation.
- Embodiments of the present disclosure may include, for example, an information processing device (such as a smartphone, a server, or a tablet terminal) as described above, a system, an information processing method executed by the information processing device or the system, a program for causing the information processing device to function, and a non-transitory tangible medium on which the program is recorded.
- (1) A display control device including: an image acquisition unit that acquires, in real time, a moving image corresponding to the field of view of a first user; a display control unit that displays the moving image toward a second user different from the first user; and an annotation detection unit that detects that an annotation input by the second user with respect to the moving image has been displayed in the field of view of the first user or output for such display, wherein the display control unit further displays the displayed or output annotation toward the second user.
- (2) The display control device according to (1), wherein the annotation detection unit further detects that the annotation is queued for the display or output, and the display control unit displays the queued annotation toward the second user.
- (3) The display control device according to (2), wherein the display control unit displays the displayed or output annotation and the queued annotation in mutually different manners.
- (4) The display control device according to any one of (1) to (3), further including an annotation output control unit that outputs the annotation at a predetermined rate for display in the field of view of the first user, and queues annotations input beyond the predetermined rate.
- (5) The display control device according to (4), further including an output rate setting unit that sets the predetermined rate based on an operation of the first user or sensing information of the first user.
- (6) The display control device according to (5), wherein the output rate setting unit temporarily sets the predetermined rate to 0 based on the sensing information of the first user.
- (7) The display control device according to (6), wherein the display control unit further displays, toward the second user, a notification that the predetermined rate is 0.
- (8) The display control device according to any one of (5) to (7), wherein the output rate setting unit sets the predetermined rate to a maximum value, based on an operation of the first user, until the amount of queued annotations falls below a predetermined threshold.
- (9) The display control device according to any one of (4) to (8), wherein the annotation output control unit deletes at least a part of the queued annotations based on an operation of the second user.
- (10) The display control device according to any one of (1) to (9), wherein the annotation detection unit further detects that the displayed or output annotation has been gazed at by the first user, and the display control unit further displays, toward the second user, a notification that the displayed or output annotation has been gazed at.
- (11) The display control device according to any one of (1) to (10), wherein the display control unit displays, toward the second user, the moving image with which the displayed or output annotation has been combined.
- (12) The display control device according to (11), wherein the display control unit executes processing of combining the displayed or output annotation with the moving image.
- (13) The display control device according to (11), wherein the annotation detection unit is realized by the image acquisition unit acquiring the moving image with which the displayed or output annotation has been combined.
- (14) A display control method including: acquiring, in real time, a moving image corresponding to the field of view of a first user; displaying the moving image toward a second user different from the first user; detecting, by a processor, that an annotation input by the second user with respect to the moving image has been displayed in the field of view of the first user or output for such display; and displaying the displayed or output annotation toward the second user.
- (15) A program for causing a computer to realize: a function of acquiring, in real time, a moving image corresponding to the field of view of a first user; a function of displaying the moving image toward a second user different from the first user; a function of detecting that an annotation input by the second user with respect to the moving image has been displayed in the field of view of the first user or output for such display; and a function of displaying the displayed or output annotation toward the second user.
- Reference signs: 100 wearable display; 110 display; 120 camera; 140 sensor; 145 eye camera; 150 smartphone; 160 processor; 170 memory; 200 server; 210 processor; 220 memory; 251 image processing unit; 253 image acquisition unit; 255 display control unit; 257 annotation output control unit; 261 annotation detection unit; 263 annotation queue; 265 output rate setting unit; 267 gaze detection unit; 300 tablet terminal; 310 processor; 320 memory; 340 display; 350 touch panel
Description (table of contents)
1. First embodiment
1-1. System configuration
1-2. Device configuration
1-3. Functional configuration
1-4. Display examples
1-5. Modification
2. Second embodiment
2-1. Functional configuration
2-2. Sequence example
2-3. Display examples
3. Third embodiment
4. Fourth embodiment
5. Hardware configuration
6. Supplement
図1は、本開示の第1の実施形態に係るシステムの概略的な構成を示す図である。図1を参照すると、システム10は、ウェアラブルディスプレイ100と、スマートフォン150と、サーバ200と、タブレット端末300とを含む。ウェアラブルディスプレイ100とスマートフォン150とは、例えばBluetooth(登録商標)によって接続される。また、サーバ200は、スマートフォン150およびタブレット端末300と、有線または無線の各種ネットワークによって接続される。
図2は、本開示の第1の実施形態に係る装置の概略的な構成を示す図である。なお、各装置の構成要素は、以下で説明する実施形態の機能に関連する部分に限定して図示されており、各装置は図示されていない構成要素をさらに含みうる。各装置のより詳細な構成については、後述する情報処理装置のハードウェア構成の説明を参照されたい。以下、図2を参照しながら、システム10に含まれる各装置の構成について説明する。
ウェアラブルディスプレイ100は、ディスプレイ110と、カメラ120と、通信部130とを有する。
スマートフォン150は、プロセッサ160と、メモリ170と、通信部180とを有する。
サーバ200は、プロセッサ210と、メモリ220と、通信部230とを有する。
タブレット端末300は、プロセッサ310と、メモリ320と、通信部330と、ディスプレイ340と、タッチパネル350とを有する。
図3は、本開示の第1の実施形態に係るシステムの概略的な機能構成を示す図である。図3を参照すると、システム10は、画像加工部251、画像取得部253、表示制御部255、アノテーション出力制御部257、表示制御部259、およびアノテーション検出部261を機能構成として含む。これらの機能構成は、例えばスマートフォン150のプロセッサ160、サーバ200のプロセッサ210、またはタブレット端末300のプロセッサ310のいずれかによって実現されてもよく、またこれらのプロセッサに分散して実現されてもよい。以下、それぞれの機能構成についてさらに説明する。
(ウェアラブルディスプレイでの表示例)
図4は、本開示の第1の実施形態におけるウェアラブルディスプレイでの表示例を示す図である。図4を参照すると、ウェアラブルディスプレイ100のディスプレイ110に表示される画面1000には、実空間の像1010と、テキスト1021と、スタンプ1022とが表示される。
図5は、本開示の第1の実施形態におけるタブレット端末での表示例を示す図である。図5を参照すると、タブレット端末300のディスプレイ340に表示される画面3000には、動画像3010と、テキスト3021と、スタンプ3022と、テキスト入力ボックス3040と、スタンプ選択ボックス3050とが表示される。
図6は、本開示の第1の実施形態の変形例に係るシステムの概略的な機能構成を示す図である。図6を参照すると、システム12では、アノテーション出力制御部257が、表示制御部259に加えて画像加工部251にアノテーションの情報を出力する。画像加工部251では、カメラ120によって撮像された動画像から領域を切り出したり傾きを補正したりする処理に加えて、生成された動画像にアノテーションを合成する処理が実行される。つまり、本変形例では、画像加工部251によって加工された動画像、すなわち第1のユーザの視界に対応する動画像に、既にアノテーションが合成されている。従って、加工された動画像を取得する画像取得部253は、動画像とともに出力されたアノテーションの情報をも取得し、アノテーション検出部261の機能をも実現することになる。
次に、本開示の第2の実施形態について説明する。なお、上記の第1の実施形態と同様の構成(システム構成および装置構成)については、重複した説明を省略する。
図7は、本開示の第2の実施形態に係るシステムの概略的な機能構成を示す図である。図7を参照すると、システム20は、画像加工部251、画像取得部253、表示制御部255、アノテーション出力制御部257、表示制御部259、アノテーション検出部261、およびアノテーションキュー263を機能構成として含む。本実施形態で追加されるアノテーションキュー263は、例えばスマートフォン150のメモリ170、サーバ200のメモリ220、またはタブレット端末300のメモリ320のいずれかによって実現されうる。以下、これらの機能構成について、第1の実施形態とは異なる点を中心に説明する。
図8は、本開示の第2の実施形態におけるシーケンスの例を示す図である。なお、図8では、表示制御部255、アノテーション出力制御部257、およびアノテーション検出部261がサーバ200で実現される場合について例示している。図8を参照すると、まず、タブレット端末300からサーバ200に、入力されたアノテーションが送信される(S101)。図示された例では、ここでアノテーション出力制御部257がアノテーションをキューイングする。アノテーション検出部261は、アノテーションがキューイングされたことを検出し、当該アノテーションを示す情報を表示制御部255に提供する。表示制御部255から送信された情報に従って、タブレット端末300では、ディスプレイ340にキューイングされているアノテーションが表示される(S103)。
図9は、本開示の第2の実施形態におけるタブレット端末での表示例を示す図である。なお、ウェアラブルディスプレイ100での表示例については、上記の第1の実施形態と同様であるため説明を省略する。図9を参照すると、タブレット端末300のディスプレイ340に表示される画面3100には、第1の実施形態で図5を参照して説明した画面3000と同様の動画像3010、テキスト3021、スタンプ3022、テキスト入力ボックス3040、およびスタンプ選択ボックス3050に加えて、キューイングされたテキスト3131と、キューイングされたスタンプ3132とが表示される。
次に、本開示の第3の実施形態について説明する。なお、上記の第1および第2の実施形態と同様の構成(システム構成および装置構成など)については、重複した説明を省略する。
次に、本開示の第4の実施形態について説明する。なお、上記の第1~第3の実施形態と同様の構成(システム構成および装置構成など)については、重複した説明を省略する。
次に、図16を参照して、本開示の実施形態に係る情報処理装置のハードウェア構成について説明する。図16は、本開示の実施形態に係る情報処理装置を実現可能なハードウェア構成について説明するためのブロック図である。図示された情報処理装置900は、例えば、上記の実施形態におけるスマートフォン150、サーバ200、およびタブレット端末300などを実現しうる。
本開示の実施形態は、例えば、上記で説明したような情報処理装置(スマートフォン、サーバまたはタブレット端末など)、システム、情報処理装置またはシステムで実行される情報処理方法、情報処理装置を機能させるためのプログラム、およびプログラムが記録された一時的でない有形の媒体を含みうる。
(1) A display control device including: an image acquisition unit configured to acquire, in real time, a moving image corresponding to a field of view of a first user; a display control unit configured to cause the moving image to be displayed toward a second user different from the first user; and an annotation detection unit configured to detect that an annotation input by the second user with respect to the moving image has been displayed in the field of view of the first user, or has been output for the display, wherein the display control unit further causes the displayed or output annotation to be displayed toward the second user.
(2) The display control device according to (1), wherein the annotation detection unit further detects that the annotation is queued for the display or the output, and the display control unit causes the queued annotation to be displayed toward the second user.
(3) The display control device according to (2), wherein the display control unit causes the displayed or output annotation and the queued annotation to be displayed in mutually different manners.
(4) The display control device according to any one of (1) to (3), further including an annotation output control unit configured to output the annotation at a predetermined rate for display in the field of view of the first user and to queue annotations input in excess of the predetermined rate.
(5) The display control device according to (4), further including an output rate setting unit configured to set the predetermined rate based on an operation of the first user or sensing information of the first user.
(6) The display control device according to (5), wherein the output rate setting unit temporarily sets the predetermined rate to 0 based on the sensing information of the first user.
(7) The display control device according to (6), wherein the display control unit further causes a notification that the predetermined rate is 0 to be displayed toward the second user.
(8) The display control device according to any one of (5) to (7), wherein the output rate setting unit sets the predetermined rate to a maximum value, based on an operation of the first user, until the amount of queued annotations falls below a predetermined threshold.
(9) The display control device according to any one of (4) to (8), wherein the annotation output control unit deletes at least a part of the queued annotations based on an operation of the second user.
(10) The display control device according to any one of (1) to (9), wherein the annotation detection unit further detects that the displayed or output annotation has been gazed at by the first user, and the display control unit further causes a notification that the displayed or output annotation has been gazed at to be displayed toward the second user.
(11) The display control device according to any one of (1) to (10), wherein the display control unit causes the moving image into which the displayed or output annotation has been composited to be displayed toward the second user.
(12) The display control device according to (11), wherein the display control unit executes a process of compositing the displayed or output annotation into the moving image.
(13) The display control device according to (11), wherein the annotation detection unit is realized by the image acquisition unit acquiring the moving image into which the displayed or output annotation has been composited.
(14) A display control method including: acquiring, in real time, a moving image corresponding to a field of view of a first user; causing the moving image to be displayed toward a second user different from the first user; detecting, by a processor, that an annotation input by the second user with respect to the moving image has been displayed in the field of view of the first user, or has been output for the display; and causing the displayed or output annotation to be displayed toward the second user.
(15) A program for causing a computer to realize: a function of acquiring, in real time, a moving image corresponding to a field of view of a first user; a function of causing the moving image to be displayed toward a second user different from the first user; a function of detecting that an annotation input by the second user with respect to the moving image has been displayed in the field of view of the first user, or has been output for the display; and a function of causing the displayed or output annotation to be displayed toward the second user.
100 wearable display
110 display
120 camera
140 sensor
145 eye camera
150 smartphone
160 processor
170 memory
200 server
210 processor
220 memory
251 image processing unit
253 image acquisition unit
255 display control unit
257 annotation output control unit
261 annotation detection unit
263 annotation queue
265 output rate setting unit
267 gaze detection unit
300 tablet terminal
310 processor
320 memory
340 display
350 touch panel
Claims (15)
- A display control device comprising: an image acquisition unit configured to acquire, in real time, a moving image corresponding to a field of view of a first user; a display control unit configured to cause the moving image to be displayed toward a second user different from the first user; and an annotation detection unit configured to detect that an annotation input by the second user with respect to the moving image has been displayed in the field of view of the first user, or has been output for the display, wherein the display control unit further causes the displayed or output annotation to be displayed toward the second user.
- The display control device according to claim 1, wherein the annotation detection unit further detects that the annotation is queued for the display or the output, and the display control unit causes the queued annotation to be displayed toward the second user.
- The display control device according to claim 2, wherein the display control unit causes the displayed or output annotation and the queued annotation to be displayed in mutually different manners.
- The display control device according to claim 1, further comprising an annotation output control unit configured to output the annotation at a predetermined rate for display in the field of view of the first user and to queue annotations input in excess of the predetermined rate.
- The display control device according to claim 4, further comprising an output rate setting unit configured to set the predetermined rate based on an operation of the first user or sensing information of the first user.
- The display control device according to claim 5, wherein the output rate setting unit temporarily sets the predetermined rate to 0 based on the sensing information of the first user.
- The display control device according to claim 6, wherein the display control unit further causes a notification that the predetermined rate is 0 to be displayed toward the second user.
- The display control device according to claim 5, wherein the output rate setting unit sets the predetermined rate to a maximum value, based on an operation of the first user, until the amount of queued annotations falls below a predetermined threshold.
- The display control device according to claim 4, wherein the annotation output control unit deletes at least a part of the queued annotations based on an operation of the second user.
- The display control device according to claim 1, wherein the annotation detection unit further detects that the displayed or output annotation has been gazed at by the first user, and the display control unit further causes a notification that the displayed or output annotation has been gazed at to be displayed toward the second user.
- The display control device according to claim 1, wherein the display control unit causes the moving image into which the displayed or output annotation has been composited to be displayed toward the second user.
- The display control device according to claim 11, wherein the display control unit executes a process of compositing the displayed or output annotation into the moving image.
- The display control device according to claim 11, wherein the annotation detection unit is realized by the image acquisition unit acquiring the moving image into which the displayed or output annotation has been composited.
- A display control method comprising: acquiring, in real time, a moving image corresponding to a field of view of a first user; causing the moving image to be displayed toward a second user different from the first user; detecting, by a processor, that an annotation input by the second user with respect to the moving image has been displayed in the field of view of the first user, or has been output for the display; and causing the displayed or output annotation to be displayed toward the second user.
- A program for causing a computer to realize: a function of acquiring, in real time, a moving image corresponding to a field of view of a first user; a function of causing the moving image to be displayed toward a second user different from the first user; a function of detecting that an annotation input by the second user with respect to the moving image has been displayed in the field of view of the first user, or has been output for the display; and a function of causing the displayed or output annotation to be displayed toward the second user.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201480058333.8A CN105684045B (zh) | 2013-11-13 | 2014-08-13 | Display control device, display control method, and program |
US15/025,753 US10460022B2 (en) | 2013-11-13 | 2014-08-13 | Display control device, display control method, and program for displaying an annotation toward a user |
JP2015547664A JP6459972B2 (ja) | 2013-11-13 | 2014-08-13 | Display control device, display control method, and program |
EP14861485.2A EP3070585A4 (en) | 2013-11-13 | 2014-08-13 | Display control device, display control method and program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013-234933 | 2013-11-13 | ||
JP2013234933 | 2013-11-13 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015072195A1 (ja) | 2015-05-21 |
Family
ID=53057141
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2014/071387 WO2015072195A1 (ja) | 2013-11-13 | 2014-08-13 | Display control device, display control method, and program |
Country Status (5)
Country | Link |
---|---|
US (1) | US10460022B2 (ja) |
EP (1) | EP3070585A4 (ja) |
JP (1) | JP6459972B2 (ja) |
CN (1) | CN105684045B (ja) |
WO (1) | WO2015072195A1 (ja) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2017054185A (ja) * | 2015-09-07 | 2017-03-16 | Toshiba Corp | Information processing device, information processing method, and information processing program |
CN108076307A (zh) * | 2018-01-26 | 2018-05-25 | Nanjing Huajie Imi Software Technology Co Ltd | AR-based video conferencing system and AR-based video conferencing method |
JP2018181143A (ja) * | 2017-04-19 | 2018-11-15 | Kddi Corp | Three-dimensional spatial information display system and three-dimensional spatial information display method |
JP2019179137A (ja) * | 2018-03-30 | 2019-10-17 | Pcphase Corp | System, terminal, and program |
JP2020074150A (ja) * | 2020-01-10 | 2020-05-14 | Toshiba Corp | Information processing device, information processing method, and information processing program |
JP2021007042A (ja) * | 2015-11-04 | 2021-01-21 | Sony Corp | Information processing device and program |
WO2023218740A1 (ja) * | 2022-05-13 | 2023-11-16 | Ntt Docomo Inc | Display control system and wearable device |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10495726B2 (en) | 2014-11-13 | 2019-12-03 | WorldViz, Inc. | Methods and systems for an immersive virtual reality system using multiple active markers |
EP3281403A4 (en) | 2015-04-06 | 2018-03-07 | Scope Technologies US Inc. | Methods and apparatus for augmented reality applications |
US9990689B2 (en) | 2015-12-16 | 2018-06-05 | WorldViz, Inc. | Multi-user virtual reality processing |
US10095928B2 (en) | 2015-12-22 | 2018-10-09 | WorldViz, Inc. | Methods and systems for marker identification |
US10242501B1 (en) | 2016-05-03 | 2019-03-26 | WorldViz, Inc. | Multi-user virtual and augmented reality tracking systems |
CN108108012B (zh) * | 2016-11-25 | 2019-12-06 | Tencent Technology Shenzhen Co Ltd | Information interaction method and device |
JP2018163461A (ja) * | 2017-03-24 | 2018-10-18 | Sony Corp | Information processing device, information processing method, and program |
US10403050B1 (en) | 2017-04-10 | 2019-09-03 | WorldViz, Inc. | Multi-user virtual and augmented reality tracking systems |
JP6343779B1 (ja) * | 2017-04-28 | 2018-06-20 | Konami Digital Entertainment Co Ltd | Server device and computer program used therefor |
US20230177258A1 (en) * | 2021-12-02 | 2023-06-08 | At&T Intellectual Property I, L.P. | Shared annotation of media sub-content |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009251428A * | 2008-04-09 | 2009-10-29 | Konica Minolta Holdings Inc | Information display system |
JP2011192048A * | 2010-03-15 | 2011-09-29 | Nec Corp | Speech content output system, speech content output device, and speech content output method |
JP2012160898A * | 2011-01-31 | 2012-08-23 | Brother Ind Ltd | Image processing device |
JP2012212345A | 2011-03-31 | 2012-11-01 | Sony Corp | Terminal device, object control method, and program |
Family Cites Families (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH10257462A (ja) * | 1997-03-13 | 1998-09-25 | Omron Corp | Drawing method and drawing system |
US6152563A (en) * | 1998-02-20 | 2000-11-28 | Hutchinson; Thomas E. | Eye gaze direction tracker |
JP4374625B2 (ja) * | 1998-05-08 | 2009-12-02 | Sony Corp | Image generation device and method |
JP2001245269A (ja) * | 2000-02-25 | 2001-09-07 | Sony Corp | Communication data creation device and creation method, communication data reproduction device and reproduction method, and program storage medium |
US6603491B2 (en) * | 2000-05-26 | 2003-08-05 | Jerome H. Lemelson | System and methods for controlling automatic scrolling of information on a display or screen |
JP4288843B2 (ja) * | 2000-10-25 | 2009-07-01 | Oki Electric Industry Co Ltd | Remote work support system |
US7116361B2 (en) * | 2001-04-30 | 2006-10-03 | Hewlett-Packard Development Company | Image storage queue adapted to store images according to archival status |
US7119814B2 (en) * | 2001-05-18 | 2006-10-10 | Given Imaging Ltd. | System and method for annotation on a moving image |
US7259785B2 (en) * | 2003-04-28 | 2007-08-21 | Hewlett-Packard Development Company, L.P. | Digital imaging method and apparatus using eye-tracking control |
US7396129B2 (en) * | 2004-11-22 | 2008-07-08 | Carestream Health, Inc. | Diagnostic system having gaze tracking |
JP2007158410A (ja) * | 2005-11-30 | 2007-06-21 | Sony Computer Entertainment Inc | Image encoding device, image decoding device, and image processing system |
JP2008067219A (ja) * | 2006-09-08 | 2008-03-21 | Sony Corp | Imaging device and imaging method |
JP4264663B2 (ja) * | 2006-11-21 | 2009-05-20 | Sony Corp | Imaging device, image processing device, image processing method therein, and program for causing a computer to execute the method |
US8711265B2 (en) * | 2008-04-24 | 2014-04-29 | Canon Kabushiki Kaisha | Image processing apparatus, control method for the same, and storage medium |
JP4596060B2 (ja) * | 2008-08-29 | 2010-12-08 | ソニー株式会社 | 電子機器、動画像データ区間変更方法及びプログラム |
US8682142B1 (en) * | 2010-03-18 | 2014-03-25 | Given Imaging Ltd. | System and method for editing an image stream captured in-vivo |
JP2012085009A (ja) * | 2010-10-07 | 2012-04-26 | Sony Corp | Information processing device and information processing method |
JP5553782B2 (ja) * | 2011-01-27 | 2014-07-16 | Nippon Telegraph & Telephone Corp | Video communication system and operating method thereof |
JP5765019B2 (ja) * | 2011-03-31 | 2015-08-19 | Sony Corp | Display control device, display control method, and program |
US20120299962A1 (en) * | 2011-05-27 | 2012-11-29 | Nokia Corporation | Method and apparatus for collaborative augmented reality displays |
JP2014092941A (ja) * | 2012-11-02 | 2014-05-19 | Sony Corp | Information processing device, information processing method, and computer program |
US9823739B2 (en) * | 2013-04-04 | 2017-11-21 | Sony Corporation | Image processing device, image processing method, and program |
WO2014162825A1 (ja) * | 2013-04-04 | 2014-10-09 | Sony Corp | Display control device, display control method, and program |
EP2983137B1 (en) * | 2013-04-04 | 2019-05-22 | Sony Corporation | Information processing device, information processing method and program |
JP2015095147A (ja) * | 2013-11-13 | 2015-05-18 | Sony Corp | Display control device, display control method, and program |
JP2015095802A (ja) * | 2013-11-13 | 2015-05-18 | Sony Corp | Display control device, display control method, and program |
2014
- 2014-08-13 JP JP2015547664A patent/JP6459972B2/ja not_active Expired - Fee Related
- 2014-08-13 WO PCT/JP2014/071387 patent/WO2015072195A1/ja active Application Filing
- 2014-08-13 EP EP14861485.2A patent/EP3070585A4/en not_active Withdrawn
- 2014-08-13 CN CN201480058333.8A patent/CN105684045B/zh not_active Expired - Fee Related
- 2014-08-13 US US15/025,753 patent/US10460022B2/en not_active Expired - Fee Related
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009251428A * | 2008-04-09 | 2009-10-29 | Konica Minolta Holdings Inc | Information display system |
JP2011192048A * | 2010-03-15 | 2011-09-29 | Nec Corp | Speech content output system, speech content output device, and speech content output method |
JP2012160898A * | 2011-01-31 | 2012-08-23 | Brother Ind Ltd | Image processing device |
JP2012212345A | 2011-03-31 | 2012-11-01 | Sony Corp | Terminal device, object control method, and program |
Non-Patent Citations (1)
Title |
---|
See also references of EP3070585A4 |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2017054185A (ja) * | 2015-09-07 | 2017-03-16 | Toshiba Corp | Information processing device, information processing method, and information processing program |
JP2021007042A (ja) * | 2015-11-04 | 2021-01-21 | Sony Corp | Information processing device and program |
US11237717B2 (en) | 2015-11-04 | 2022-02-01 | Sony Corporation | Information processing device and information processing method |
JP7095722B2 (ja) | 2015-11-04 | 2022-07-05 | Sony Group Corp | Information processing device and program |
JP2018181143A (ja) * | 2017-04-19 | 2018-11-15 | Kddi Corp | Three-dimensional spatial information display system and three-dimensional spatial information display method |
CN108076307A (zh) * | 2018-01-26 | 2018-05-25 | Nanjing Huajie Imi Software Technology Co Ltd | AR-based video conferencing system and AR-based video conferencing method |
JP2019179137A (ja) * | 2018-03-30 | 2019-10-17 | Pcphase Corp | System, terminal, and program |
JP2020074150A (ja) * | 2020-01-10 | 2020-05-14 | Toshiba Corp | Information processing device, information processing method, and information processing program |
WO2023218740A1 (ja) * | 2022-05-13 | 2023-11-16 | Ntt Docomo Inc | Display control system and wearable device |
Also Published As
Publication number | Publication date |
---|---|
EP3070585A1 (en) | 2016-09-21 |
JPWO2015072195A1 (ja) | 2017-03-16 |
CN105684045A (zh) | 2016-06-15 |
JP6459972B2 (ja) | 2019-01-30 |
US20160239472A1 (en) | 2016-08-18 |
US10460022B2 (en) | 2019-10-29 |
EP3070585A4 (en) | 2017-07-05 |
CN105684045B (zh) | 2019-05-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6459972B2 (ja) | Display control device, display control method, and program | |
US10832448B2 (en) | Display control device, display control method, and program | |
US10049497B2 (en) | Display control device and display control method | |
JP6135783B2 (ja) | Information processing device, information processing method, and program | |
US20150070247A1 (en) | Information processing apparatus, information processing method, and program | |
JP6575652B2 (ja) | Image processing device, image processing method, and program | |
US10771707B2 (en) | Information processing device and information processing method | |
WO2014185170A1 (ja) | Image processing device, image processing method, and program | |
TW202324041A (zh) | User interaction with remote devices | |
JP2015005809A (ja) | Information processing device, information processing method, and program | |
JP6733662B2 (ja) | Information processing device, information processing method, and computer program | |
JP2018507432A (ja) | Method for displaying personal content | |
TWI634453B (zh) | System and method for switching screens while browsing in a virtual reality environment, and related computer program product | |
US11372473B2 (en) | Information processing apparatus and information processing method | |
WO2020031795A1 (ja) | 2020-02-13 | Information processing device, information processing method, and program | |
JP7156301B2 (ja) | 2022-10-19 | Information processing device, information processing method, and program | |
JP2015111371A (ja) | Information processing device, information processing method, and program | |
JP2016192136A (ja) | Display control device | |
CN115766981A (zh) | Augmented reality-based image display method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 14861485 Country of ref document: EP Kind code of ref document: A1 |
ENP | Entry into the national phase |
Ref document number: 2015547664 Country of ref document: JP Kind code of ref document: A |
WWE | Wipo information: entry into national phase |
Ref document number: 15025753 Country of ref document: US |
REEP | Request for entry into the european phase |
Ref document number: 2014861485 Country of ref document: EP |
WWE | Wipo information: entry into national phase |
Ref document number: 2014861485 Country of ref document: EP |
NENP | Non-entry into the national phase |
Ref country code: DE |