WO2023184816A1 - 一种云桌面的展示方法、装置、设备及存储介质 - Google Patents

一种云桌面的展示方法、装置、设备及存储介质 Download PDF

Info

Publication number
WO2023184816A1
WO2023184816A1 · PCT/CN2022/111741 · CN2022111741W
Authority
WO
WIPO (PCT)
Prior art keywords
image
real
cloud desktop
scene
virtual
Prior art date
Application number
PCT/CN2022/111741
Other languages
English (en)
French (fr)
Inventor
潘仲光
宛静川
Original Assignee
中数元宇数字科技(上海)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中数元宇数字科技(上海)有限公司 filed Critical 中数元宇数字科技(上海)有限公司
Publication of WO2023184816A1 publication Critical patent/WO2023184816A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/012Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Definitions

  • Embodiments of the present application relate to the field of smart wearable technology, and in particular, to a cloud desktop display method, device, equipment and storage medium.
  • With the rapid development of virtual reality, augmented reality and mixed reality technologies, head-mounted smart devices such as head-mounted virtual reality glasses and head-mounted mixed reality glasses are constantly being introduced, and the user experience is gradually improving.
  • In the prior art, smart glasses can be used to display the cloud desktop video sent by a cloud server, and the user can interact with the cloud desktop through a handle or other controller matched with the smart glasses, which allows the user to telecommute or engage in leisure activities through the cloud desktop in the virtual world.
  • Embodiments of the present application provide a cloud desktop display method, device, equipment and storage medium, so that the user can still perceive real scenes in the real world after putting on a wearable device, thereby improving the user's sense of immersion in the virtual world.
  • Embodiments of the present application provide a method for displaying a cloud desktop, which includes: obtaining a real-scene image of the real environment where the wearable device is located and a cloud desktop image provided by the cloud server; and, in the virtual scene of the wearable device, displaying the real-scene image synchronously with the cloud desktop image.
  • Further optionally, obtaining the real-scene image of the real environment where the wearable device is located and the cloud desktop image provided by the cloud server includes: for any frame of cloud desktop image in the cloud desktop video stream, obtaining the timestamp of that cloud desktop image; and selecting, from the real-scene video stream captured of the real environment where the wearable device is located, a frame with the same timestamp as the cloud desktop image as the real-scene image.
  • Further optionally, displaying the real-scene image synchronously with the cloud desktop image includes: fusing the cloud desktop image and the real-scene image to obtain a fused image; and displaying the fused image in the virtual scene of the wearable device.
  • fusing the cloud desktop image and the real scene image to obtain a fused image includes: superimposing the real scene image on the cloud desktop image to obtain the fused image.
  • fusing the cloud desktop image and the real scene image to obtain a fused image includes: splicing the real scene image and the cloud desktop image to obtain the fused image.
  • Further optionally, the real-scene image includes: a left-view real-scene image and a right-view real-scene image captured by a binocular camera; fusing the cloud desktop image and the real-scene image to obtain a fused image includes: performing binocular rendering on the cloud desktop image to obtain a left-view virtual image and a right-view virtual image; fusing the left-view real-scene image with the left-view virtual image to obtain a left-view fused image; and fusing the right-view real-scene image with the right-view virtual image to obtain a right-view fused image.
  • Further optionally, the wearable device further includes a line-of-sight detection component; the method further includes: detecting the user's line of sight through the line-of-sight detection component to obtain a line-of-sight direction; determining the user's gaze area in the virtual scene according to the line-of-sight direction; and, if the gaze area is located in the area where the real-scene image is located, highlighting the real-scene image.
  • Embodiments of the present application also provide a cloud desktop display device, including: an acquisition module, used to acquire the real-scene image of the real environment where the wearable device is located and the cloud desktop image provided by the cloud server; and a display module, used to display the real-scene image synchronously with the cloud desktop image in the virtual scene of the wearable device.
  • An embodiment of the present application also provides a terminal device, including: a memory and a processor; the memory is used to store one or more computer instructions; the processor is used to execute the one or more computer instructions so as to perform the steps in the cloud desktop display method.
  • Embodiments of the present application also provide a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, it causes the processor to implement the steps in the cloud desktop display method.
  • The embodiments of the present application provide a cloud desktop display method, device, equipment and storage medium that can obtain a real-scene image of the real environment where the wearable device is located and a cloud desktop image provided by the cloud server, and display the real-scene image synchronously with the cloud desktop image in the virtual scene of the wearable device.
  • In this way, the real-scene image is displayed synchronously with the cloud desktop image in the virtual scene, so that the user can still perceive the real scene of the real world after putting on the wearable device, and does not need to take off the wearable device when he wants to use real-world tools, thereby enhancing the user's immersion in the virtual world.
  • Figure 1 is a schematic flowchart of a cloud desktop display method provided by an exemplary embodiment of the present application
  • Figure 2 is a superimposed schematic diagram provided by an exemplary embodiment of the present application.
  • Figure 3 is a splicing schematic diagram provided by an exemplary embodiment of the present application.
  • Figure 4 is a schematic diagram of binocular rendering provided by an exemplary embodiment of the present application.
  • Figure 5 is a schematic diagram of the correction of binocular rendering provided by an exemplary embodiment of the present application.
  • Figure 6 is an architectural diagram of a mobile terminal provided by an exemplary embodiment of the present application.
  • Figure 7 is a schematic diagram of a display terminal provided by an exemplary embodiment of the present application.
  • Figure 8 is a schematic diagram of a cloud desktop display device provided by an exemplary embodiment of the present application.
  • Figure 9 is a schematic diagram of a terminal device provided by an exemplary embodiment of the present application.
  • In the prior art, smart glasses can be used to display the cloud desktop video sent by a cloud server, and the user can interact with the cloud desktop through a handle or other controller matched with the smart glasses, which allows the user to telecommute or engage in leisure activities through the cloud desktop in the virtual world.
  • However, in this approach, since the user cannot perceive the actual scene in the real world after putting on the smart glasses, the user needs to take off the smart glasses when he wants to use real-world tools, which reduces the user's immersion in the virtual world.
  • Figure 1 is a schematic flowchart of a cloud desktop display method provided by an exemplary embodiment of the present application. As shown in Figure 1, the method includes:
  • Step 11: Obtain the real-scene image of the real environment where the wearable device is located and the cloud desktop image provided by the cloud server.
  • Step 12: In the virtual scene of the wearable device, display the real-scene image synchronously with the cloud desktop image.
  • This embodiment can be executed by a terminal device, a wearable device, or a cloud server.
  • the terminal device may include a computer, a tablet or a mobile phone, etc.
  • Wearable devices can include: VR (Virtual Reality) glasses, MR (Mixed Reality) glasses, or VR head-mounted display devices (Head-Mounted Display, HMD), etc.
  • the real environment refers to the environment in the real world, and the real-life image is used to reflect the real environment where the wearable device is located.
  • the real environment where the wearable device is located can be captured through a camera installed in the real environment to obtain a real-life video stream.
  • the real scene image is any frame image in the real scene video stream.
  • The camera used to capture the real environment can be installed on the wearable device, and its installation position should ensure that the camera's field of view lies within the field of view of the human eye, so that when the human eye's line of sight is blocked by the wearable device, the camera can observe the real scene in place of the human eye.
  • Cloud desktop, also known as desktop virtualization or cloud computer, uses virtualization technology to virtualize various physical devices, so that resource utilization is effectively improved, thereby saving costs and improving application quality. After virtual desktops are deployed on the cloud platform of the cloud server, users can access virtual desktops and applications from anywhere.
  • the cloud server can send the cloud desktop video stream to the wearable device for display.
  • the cloud desktop image refers to any frame image in the cloud desktop video stream.
  • the real scene image can be displayed simultaneously with the cloud desktop image in the virtual scene of the wearable device.
  • When this embodiment is executed by a terminal device, taking a tablet computer as an example, the tablet computer can obtain the real-scene image sent by the wearable device and the cloud desktop image sent by the cloud server, and send the real-scene image and the cloud desktop image to the wearable device for synchronous display.
  • When this embodiment is executed by a wearable device, taking VR glasses as an example, the VR glasses can collect real-scene images, obtain cloud desktop images sent by the cloud server, and display the real-scene images and cloud desktop images synchronously; alternatively, the terminal device can receive the cloud desktop image sent by the cloud server and forward it to the VR glasses, and the VR glasses can obtain the cloud desktop image sent by the terminal device, collect the real-scene image, and display the real-scene image and the cloud desktop image synchronously.
  • When this embodiment is executed by the cloud server, the cloud server can obtain the real-scene image sent by the wearable device, and send the real-scene image and the cloud desktop image to the wearable device for synchronous display.
  • Displaying the real-scene image synchronously with the cloud desktop image means displaying the real-scene image while displaying the cloud desktop image in the virtual scene of the wearable device; that is, the user can view the virtual cloud desktop image as well as the real-scene image in the virtual scene.
  • Through this implementation, the real-scene image can be displayed synchronously with the cloud desktop image in the virtual scene, so that the user can still perceive the real scene of the real world after putting on the wearable device, and does not need to take off the wearable device when he wants to use real-world tools, thereby enhancing the user's immersion in the virtual world.
  • Meanwhile, when the camera used to collect real-scene images is installed on the wearable device, interaction with the cloud desktop can be achieved without an external handle or other controller matched with the wearable device, which on the one hand reduces hardware cost and on the other hand helps the wearable device become more integrated and lightweight.
  • In some optional embodiments, the wearable device can obtain, in real time, the cloud desktop video stream sent by the cloud server as well as the real-scene video stream of the current scene, and can display the cloud desktop images in the cloud desktop video stream and the real-scene images in the real-scene video stream synchronously, frame by frame. An exemplary explanation is given below.
  • For any frame of cloud desktop image in the obtained cloud desktop video stream, the wearable device can obtain the timestamp of that cloud desktop image.
  • For example, the timestamp of cloud desktop image P1 is 20:59:01.
  • Furthermore, the wearable device can select, from the real-scene video stream captured of the real environment where the wearable device is located, a frame with the same timestamp as the cloud desktop image, as the real-scene image.
  • For example, the wearable device can select the image P1' with timestamp 20:59:01 from the real-scene video stream as the real-scene image.
  • Thus, the wearable device can synchronously display the two different images that share the same timestamp (a rough illustrative sketch of this pairing follows below).
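For illustration only, the timestamp-based frame pairing described above might be sketched as follows in Python; the frame structure and the display step are placeholders, not part of the original disclosure:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    timestamp: str   # e.g. "20:59:01"
    pixels: object   # decoded image data (placeholder)

def pair_frames(cloud_frames, real_frames):
    """Pair each cloud desktop frame with the real-scene frame
    that carries the same timestamp, as described above."""
    real_by_ts = {f.timestamp: f for f in real_frames}
    for cloud in cloud_frames:
        real = real_by_ts.get(cloud.timestamp)
        if real is not None:
            yield cloud, real   # e.g. P1 (20:59:01) pairs with P1'

# Each returned pair would then be fused and shown together, frame by frame.
```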
  • In some possible practical application scenarios, specific applications can run on the cloud desktop.
  • Since the obtained cloud desktop image and real-scene image have the same timestamp, the user can interact with the application in real time by interacting with specific items in the real-scene image. For example, the user can touch the charger in the real-scene image to cause the corresponding operation to be executed synchronously in the application (closing, opening or restarting the application, etc.), as illustrated in the sketch below.
  • When the running application is a game, the user can click on a tea cup or toothbrush in the real-scene image to cause the virtual character in the game to synchronously perform the corresponding operation (drinking water, brushing teeth, etc.).
  • Through this implementation, the timestamps of the obtained cloud desktop image and real-scene image are the same, which improves the synchronization effect of the cloud desktop image and the real-scene image and reduces the sense of disjointedness when they are displayed together.
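The mapping from a touched real-world item to an application operation is not specified in detail in the text; a minimal, purely illustrative sketch (item names and the dispatch callback are assumptions) might look like:

```python
# Hypothetical mapping from an item recognized in the real-scene image
# to an operation forwarded to the application running on the cloud desktop.
ITEM_ACTIONS = {
    "charger": "restart_application",     # e.g. close/open/restart the app
    "tea_cup": "character_drink_water",   # in-game character drinks water
    "toothbrush": "character_brush_teeth",
}

def on_item_touched(item_name, send_to_cloud_desktop):
    """Dispatch the operation that corresponds to the touched item."""
    action = ITEM_ACTIONS.get(item_name)
    if action is not None:
        send_to_cloud_desktop(action)   # send_to_cloud_desktop is assumed
```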
  • Step 12 of the aforementioned embodiment "In the virtual scene of the wearable device, display the real-life image synchronously with the cloud desktop image" can be implemented based on the following steps:
  • Step 121: Fuse the cloud desktop image and the real-scene image to obtain a fused image.
  • Step 122: Display the fused image in the virtual scene of the wearable device.
  • Optionally, the wearable device can fuse the cloud desktop image and the real-scene image based on either of the following implementations:
  • Embodiment 1: Superimpose the real-scene image on the cloud desktop image to obtain the fused image.
  • Embodiment 2: Splice the real-scene image and the cloud desktop image to obtain the fused image.
  • In Embodiment 1, as shown in part A of Figure 2, the entire real-scene image can be superimposed on the entire cloud desktop image.
  • As shown in part B of Figure 2, part of the real-scene image can be cropped from the real-scene image and superimposed on a specified area of the cloud desktop image; alternatively, instead of cropping, the real-scene image can be scaled down and superimposed within a partial area of the cloud desktop image.
  • Optionally, the user can use physical buttons on the wearable device or virtual buttons in the virtual scene to enlarge, reduce or crop the real-scene image, and can adjust the area of the cloud desktop image on which the real-scene image is superimposed.
  • In Embodiment 2, as shown in Figure 3, the real-scene image and the cloud desktop image can be spliced together.
  • Optionally, the user can enlarge, reduce or crop the real-scene image through physical buttons on the wearable device or virtual buttons in the virtual scene, and can adjust the positions of the real-scene image and the cloud desktop image.
  • It should be noted that the user can switch between the above two implementations, superposition and splicing, through physical buttons on the wearable device or virtual buttons in the virtual scene.
  • Through the above implementations, the cloud desktop image and the real-scene image are fused by superposition or splicing; the user can freely adjust the fusion mode of the cloud desktop image and the real-scene image, and can observe both images more completely at the same time.
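As an illustrative aid only, the two fusion modes (superposition and splicing) can be sketched with NumPy; the image sizes, overlay position and scaling step below are assumptions rather than values from the original:

```python
import numpy as np

def superimpose(cloud_img, real_img, top_left=(0, 0)):
    """Embodiment 1 (sketch): paste the (possibly scaled or cropped)
    real-scene image onto a region of the cloud desktop image."""
    fused = cloud_img.copy()
    y, x = top_left
    h, w = real_img.shape[:2]
    fused[y:y + h, x:x + w] = real_img          # assumes the region fits
    return fused

def splice(cloud_img, real_img):
    """Embodiment 2 (sketch): place the two images side by side."""
    h = min(cloud_img.shape[0], real_img.shape[0])
    return np.hstack([cloud_img[:h], real_img[:h]])

# Example with dummy frames (H x W x 3, uint8):
cloud = np.zeros((720, 1280, 3), dtype=np.uint8)
real = np.full((360, 480, 3), 255, dtype=np.uint8)
overlay = superimpose(cloud, real, top_left=(300, 700))
side_by_side = splice(cloud, real)
```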
  • Optionally, in practical scenarios, the wearable device is usually equipped with a left camera and a right camera, referred to as a binocular camera for short.
  • The wearable device can collect real-scene images with the binocular camera.
  • The real-scene image may include: a left-view real-scene image and a right-view real-scene image captured by the binocular camera.
  • The left-view real-scene image is the real-scene image collected by the left camera of the binocular camera and corresponds to the user's left eye; the right-view real-scene image is the real-scene image collected by the right camera of the binocular camera and corresponds to the user's right eye.
  • Based on this, when the wearable device fuses the cloud desktop image and the real-scene image to obtain the fused image, this can be implemented based on the following steps:
  • Step S1: Perform binocular rendering on the cloud desktop image to obtain a left-view virtual image and a right-view virtual image.
  • The left-view virtual image refers to the virtual image corresponding to the user's left eye, and the right-view virtual image refers to the virtual image corresponding to the user's right eye.
  • Step S2: Fuse the left-view real-scene image with the left-view virtual image to obtain a left-view fused image; and fuse the right-view real-scene image with the right-view virtual image to obtain a right-view fused image.
  • Based on this step, when the user's left eye and right eye see the left-view fused image and the right-view fused image respectively, the user's brain can automatically synthesize the images seen by the two eyes into a 3D image.
  • In the following, the binocular rendering in step S1 will be described in detail with reference to Figure 4.
  • As shown in Figure 4, R is the display distance required by the glasses, FOV (Field of View) is the field-of-view angle, and w is the distance between the user's two eyes.
  • D is the pixel width output to the left and right screens, that is, the maximum width of the picture that the user can see with one eye, and can be calculated based on Formula 1.
  • Taking the distance from a pixel in the cloud desktop image to the central axis as S, the coordinates of that point are (S, R).
  • Based on the above, the proportion B1 of this pixel's x coordinate in the left-view virtual image and the proportion B2 of this pixel's x coordinate in the right-view virtual image can be calculated through Formulas 2 and 3, respectively. After these proportions are obtained, multiplying them by the actual screen pixel width a gives the final x coordinate, while the y coordinate remains unchanged.
  • Through this binocular rendering method, the left-view virtual image and the right-view virtual image corresponding to the user's left eye and right eye can be obtained respectively, allowing the user to view the cloud desktop more realistically through the wearable device and improving the user's sense of immersion when interacting with the cloud desktop.
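Formulas 1-3 appear only as images in the original publication and are not reproduced here. Purely as a hedged sketch of what such a mapping could look like given the stated quantities (R, FOV, w, S), one possible geometric reading is shown below; the tangent-based width and the per-eye offset are assumptions, not the patent's actual formulas:

```python
import math

def single_eye_width(R, fov_deg):
    """Assumed form of Formula 1: width of the picture visible to one eye
    at display distance R with field-of-view angle FOV."""
    return 2.0 * R * math.tan(math.radians(fov_deg) / 2.0)

def x_ratio_for_eye(S, R, fov_deg, w, eye):
    """Assumed form of Formulas 2/3: proportion of the pixel's x coordinate
    (point at (S, R), eye separation w) within the left or right virtual image."""
    D = single_eye_width(R, fov_deg)
    eye_x = -w / 2.0 if eye == "left" else w / 2.0
    return ((S - eye_x) + D / 2.0) / D   # 0..1 across the single-eye picture

# The final x coordinate would then be this ratio times the actual screen
# pixel width a, with the y coordinate unchanged (as stated in the text).
```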
  • After binocular rendering of the cloud desktop image, image fusion can be further performed. However, as shown in Figure 5, since the distance between the two cameras is usually different from the distance between the user's two eyes, a further correction can be made after binocular rendering.
  • As shown in Figure 5, d is the deviation distance between the camera and the eye, and can be calculated as d = (w' − e)/2 (Formula 4), where e is the distance between the two eyes and w' is the distance between the two cameras.
  • Based on this, the offset rx1 of the left x coordinate in the camera's right screen and the offset rx2 of the right x coordinate in the camera's right screen can be calculated through Formulas 5 and 6, where FOV_d is the field-of-view angle of the camera, FOV_e is the field-of-view angle of the eye, and R is the display distance required by the glasses.
  • Through the calculation of the above offsets, the left-view virtual image and the right-view virtual image obtained after binocular rendering are further corrected; based on the more accurate virtual images, the picture quality of the fused image obtained by subsequent fusion can be improved.
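Formula 4 is given explicitly in the description (d = (w' − e)/2); a tiny worked example follows, with the eye spacing and camera spacing chosen purely for illustration. Formulas 5 and 6 for rx1/rx2 appear only as images in the original and are not reproduced:

```python
def camera_eye_offset(w_prime_mm, e_mm):
    """Formula 4 from the description: deviation distance between a camera
    and the corresponding eye, given camera spacing w' and eye spacing e."""
    return (w_prime_mm - e_mm) / 2.0

# Illustrative numbers (assumptions): cameras 70 mm apart, eyes 64 mm apart.
d = camera_eye_offset(70.0, 64.0)   # -> 3.0 mm per side
# d then feeds the rx1/rx2 offset corrections (Formulas 5 and 6).
```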
  • In some embodiments, the wearable device may be equipped with a line-of-sight detection component.
  • The wearable device can detect the user's line of sight through the line-of-sight detection component to obtain the user's line-of-sight direction; furthermore, the wearable device can determine the user's gaze area in the virtual scene based on that direction.
  • If the gaze area is located in the area where the real-scene image is located, the real-scene image is highlighted; if the gaze area is located in the area where the cloud desktop image is located, the cloud desktop image is highlighted.
  • It should be noted that, if the user gazes at the cloud desktop image for a long time, the real-scene image can be hidden and the cloud desktop image displayed in full screen; if the user gazes at the real-scene image for a long time, the cloud desktop image can be hidden and the real-scene image displayed in full screen.
  • Optionally, a virtual button/area can be preset in the virtual scene of the wearable device; when the user gazes at that button/area, the corresponding function can be performed, such as displaying the cloud desktop image in full screen, displaying the real-scene image in full screen, or displaying the cloud desktop image and the real-scene image according to a preset layout style, and so on. A rough sketch of this gaze handling follows below.
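A minimal sketch of the gaze-driven behaviour described above; the region representation, dwell threshold and display helpers are assumptions rather than the patent's implementation:

```python
import time

def handle_gaze(gaze_point, regions, state, dwell_seconds=2.0):
    """regions: dict name -> (x0, y0, x1, y1) in virtual-scene coordinates.
    Highlights the gazed region; after a long dwell, switches it to full screen."""
    for name, (x0, y0, x1, y1) in regions.items():
        if x0 <= gaze_point[0] <= x1 and y0 <= gaze_point[1] <= y1:
            if state.get("region") != name:
                state.update(region=name, since=time.monotonic())
            highlight(name)                       # assumed display helper
            if time.monotonic() - state["since"] >= dwell_seconds:
                show_fullscreen(name)             # hide the other image
            return
    state.clear()

def highlight(name): ...          # placeholder display helpers
def show_fullscreen(name): ...
```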
  • The cloud desktop display method is further described below with reference to Figures 6 and 7 and an actual application scenario. Figure 6 is an architecture diagram of the video stream processing method of the mobile terminal (i.e., the aforementioned terminal device), and Figure 7 is an architecture diagram of the video stream processing method of the display terminal (i.e., the aforementioned wearable device).
  • In a practical scenario, a specific application (hereinafter referred to as Mapp) can be installed on the mobile terminal, and this application can obtain authorized account information.
  • After the user enters the account and password, the application can call the cloud desktop API (Application Programming Interface) for authorization authentication.
  • After the account and password entered by the user pass authentication, Mapp can obtain the IP (Internet Protocol) address and port number of the cloud desktop virtual machine corresponding to the user account.
  • Mapp can attempt to establish a connection with the cloud desktop virtual machine over wireless transmission based on a remote desktop connection protocol. It should be noted that the remote desktop connection protocol includes but is not limited to: the TCP/IP (Transmission Control Protocol/Internet Protocol) protocol, the NetBEUI (NetBIOS Enhanced User Interface) protocol, the IPX/SPX (Internetwork Packet Exchange/Sequenced Packet Exchange) protocol or the RDP (Remote Desktop Protocol) protocol, etc.; this embodiment is not limited in this respect.
  • Among them, the RDP protocol is a remote desktop protocol built on top of the TCP/IP protocol.
  • After Mapp successfully establishes a connection with the cloud desktop virtual machine, data communication based on the remote desktop connection protocol at the ISO layer can begin between the two.
  • When the display terminal is not connected to Mapp, Mapp runs as an ordinary cloud desktop client; when it is detected that the display terminal is connected to Mapp, Mapp no longer directly displays the cloud desktop interface, but instead performs image processing on the cloud desktop video stream returned by the cloud desktop virtual machine and sends it to the display terminal for display.
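Purely as an illustrative outline of the Mapp-side flow described above (the authentication helpers, connection function and channel objects are assumptions, not the patent's or any real library's API):

```python
def run_mapp(account, password, display_terminal=None):
    # 1. Authorization via the cloud desktop API (endpoint is an assumption).
    token = authenticate(account, password)
    ip, port = get_vm_address(token)          # IP and port of the VM

    # 2. Connect to the cloud desktop virtual machine over a remote
    #    desktop connection protocol (e.g. an RDP-style protocol).
    session = connect_remote_desktop(ip, port, token)

    # 3. Consume the cloud desktop video stream frame by frame.
    for frame in session.cloud_desktop_frames():
        if display_terminal is None:
            render_locally(frame)               # ordinary cloud desktop client
        else:
            left, right = binocular_render(frame)
            display_terminal.send(left, right)  # processed and forwarded

# Placeholder helpers, named only for readability of the outline:
def authenticate(account, password): ...
def get_vm_address(token): ...
def connect_remote_desktop(ip, port, token): ...
def render_locally(frame): ...
def binocular_render(frame): ...
```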
  • After all the aforementioned connections based on the remote desktop connection protocol are successfully completed, the Mapp program can identify the image channel, which has a dedicated identifier, from the established virtual channels. Mapp can obtain the image data sent over the image channel to obtain the original bitmap stream data of the cloud desktop.
  • Furthermore, Mapp can obtain the resolution of the bitmap, and compress and optimize the obtained original bitmap stream data according to parameters such as the resolution and frame rate of the display terminal, to improve the efficiency of subsequent image processing.
  • Mapp can determine the input parameters for binocular rendering based on other parameters of the display terminal (pupillary distance (offset), viewing angle (FOV, field of view), rendering picture width and height (renderWidth/renderHeight), maximum/minimum of the view frustum (depthFar/depthNear), lens focal length (Convergence), anti-distortion coefficient (Anti-distortion), etc.). Furthermore, according to these input parameters, a left-view virtual image and a right-view virtual image can be generated for each frame of the cloud desktop image through image computation.
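The parameters listed above lend themselves to a simple configuration structure; a sketch follows, where the field names mirror the terms in the text and the default values are purely illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class BinocularRenderParams:
    offset: float           # pupillary distance
    fov: float              # viewing angle (field of view), degrees
    render_width: int       # renderWidth
    render_height: int      # renderHeight
    depth_far: float        # view-frustum maximum (depthFar)
    depth_near: float       # view-frustum minimum (depthNear)
    convergence: float      # lens focal length (Convergence)
    anti_distortion: float  # anti-distortion coefficient

# Illustrative values only:
params = BinocularRenderParams(
    offset=63.0, fov=95.0, render_width=1832, render_height=1920,
    depth_far=1000.0, depth_near=0.1, convergence=1.0, anti_distortion=0.25,
)
```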
  • Mapp can obtain the left-view real-scene image and the right-view real-scene image captured by the binocular camera on the display terminal, fuse the left-view real-scene image with the left-view virtual image to obtain the left-view fused image, and fuse the right-view real-scene image with the right-view virtual image to obtain the right-view fused image.
  • It should be noted that Mapp can establish a transmission channel with a specific application running on the display terminal (hereinafter referred to as Vapp) through a wired connection (Type-C or Lightning). After further encapsulation of this transmission channel, the left-view fused image and the right-view fused image can be sent to the Vapp on the display terminal through their respective channels. After the display terminal's Vapp recognizes the fused images, it outputs them to the left and right screens of the display terminal for display.
  • Through this implementation, the real-scene image can be displayed synchronously with the cloud desktop image in the virtual scene, so that the user can still perceive the real scene of the real world after putting on the wearable device, and does not need to take off the wearable device when he wants to use real-world tools, thereby enhancing the user's immersion in the virtual world.
  • FIG 8 is a schematic diagram of a cloud desktop display device provided by an exemplary embodiment of the present application.
  • As shown in Figure 8, the display device includes: an acquisition module 801, used to acquire the real-scene image of the real environment where the wearable device is located and the cloud desktop image provided by the cloud server; and a display module 802, used to display the real-scene image synchronously with the cloud desktop image in the virtual scene of the wearable device.
  • Further optionally, when acquiring the real-scene image of the real environment where the wearable device is located and the cloud desktop image provided by the cloud server, the acquisition module 801 is specifically configured to: for any frame of cloud desktop image in the cloud desktop video stream, acquire the timestamp of that cloud desktop image; and select, from the real-scene video stream captured of the real environment where the wearable device is located, a frame with the same timestamp as the cloud desktop image as the real-scene image.
  • Further optionally, when displaying the real-scene image synchronously with the cloud desktop image in the virtual scene of the wearable device, the display module 802 is specifically configured to: fuse the cloud desktop image and the real-scene image to obtain a fused image; and display the fused image in the virtual scene of the wearable device.
  • Further optionally, when fusing the cloud desktop image and the real-scene image to obtain the fused image, the display module 802 is specifically configured to: superimpose the real-scene image on the cloud desktop image to obtain the fused image.
  • Further optionally, when fusing the cloud desktop image and the real-scene image to obtain the fused image, the display module 802 is specifically configured to: splice the real-scene image and the cloud desktop image to obtain the fused image.
  • Further optionally, the real-scene image includes: a left-view real-scene image and a right-view real-scene image captured by a binocular camera. When fusing the cloud desktop image and the real-scene image to obtain the fused image, the display module 802 is specifically configured to: perform binocular rendering on the cloud desktop image to obtain a left-view virtual image and a right-view virtual image; fuse the left-view real-scene image with the left-view virtual image to obtain a left-view fused image; and fuse the right-view real-scene image with the right-view virtual image to obtain a right-view fused image.
  • Further optionally, the wearable device further includes: a line-of-sight detection component. The display module 802 is also configured to: detect the user's line of sight through the line-of-sight detection component to obtain a line-of-sight direction; determine the user's gaze area in the virtual scene according to the line-of-sight direction; and, if the gaze area is located in the area where the real-scene image is located, highlight the real-scene image.
  • In this embodiment, the real-scene image of the real environment where the wearable device is located and the cloud desktop image provided by the cloud server can be obtained, and the real-scene image can be displayed synchronously with the cloud desktop image in the virtual scene of the wearable device.
  • In this way, the real-scene image is displayed synchronously with the cloud desktop image in the virtual scene, so that the user can still perceive the real scene of the real world after putting on the wearable device, and does not need to take off the wearable device when he wants to use real-world tools, thereby enhancing the user's immersion in the virtual world.
  • Figure 9 is a schematic structural diagram of a terminal device provided by an exemplary embodiment of the present application. As shown in Figure 9, the terminal device includes: a memory 901 and a processor 902.
  • Memory 901 is used to store computer programs and can be configured to store various other data to support operations on the terminal device. Examples of such data include instructions for any application or method operating on the terminal device, contact data, phonebook data, messages, pictures, videos, etc.
  • The memory 901 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk.
  • The processor 902 is coupled to the memory 901 and is configured to execute the computer program in the memory 901 to: obtain the real-scene image of the real environment where the wearable device is located and the cloud desktop image provided by the cloud server; and, in the virtual scene of the wearable device, display the real-scene image synchronously with the cloud desktop image.
  • Further optionally, when acquiring the real-scene image of the real environment where the wearable device is located and the cloud desktop image provided by the cloud server, the processor 902 is specifically configured to: for any frame of cloud desktop image in the cloud desktop video stream, acquire the timestamp of that cloud desktop image; and select, from the real-scene video stream captured of the real environment where the wearable device is located, a frame with the same timestamp as the cloud desktop image as the real-scene image.
  • Further optionally, when displaying the real-scene image synchronously with the cloud desktop image in the virtual scene of the wearable device, the processor 902 is specifically configured to: fuse the cloud desktop image and the real-scene image to obtain a fused image; and display the fused image in the virtual scene of the wearable device.
  • Further optionally, when fusing the cloud desktop image and the real-scene image to obtain the fused image, the processor 902 is specifically configured to: superimpose the real-scene image on the cloud desktop image to obtain the fused image.
  • Further optionally, when fusing the cloud desktop image and the real-scene image to obtain the fused image, the processor 902 is specifically configured to: splice the real-scene image and the cloud desktop image to obtain the fused image.
  • Further optionally, the real-scene image includes: a left-view real-scene image and a right-view real-scene image captured by a binocular camera. When fusing the cloud desktop image and the real-scene image to obtain the fused image, the processor 902 is specifically configured to: perform binocular rendering on the cloud desktop image to obtain a left-view virtual image and a right-view virtual image; fuse the left-view real-scene image with the left-view virtual image to obtain a left-view fused image; and fuse the right-view real-scene image with the right-view virtual image to obtain a right-view fused image.
  • Further optionally, the wearable device further includes: a line-of-sight detection component. The processor 902 is also configured to: detect the user's line of sight through the line-of-sight detection component to obtain a line-of-sight direction; determine the user's gaze area in the virtual scene according to the line-of-sight direction; and, if the gaze area is located in the area where the real-scene image is located, highlight the real-scene image.
  • the memory in Figure 9 above can be implemented by any type of volatile or non-volatile storage device or their combination, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), Erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
  • the display 903 in FIG. 9 above includes a screen, and the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide action.
  • Audio component 904 in Figure 9 may be configured to output and/or input audio signals.
  • the audio component includes a microphone (MIC), and when the device where the audio component is located is in an operating mode, such as call mode, recording mode, and voice recognition mode, the microphone is configured to receive an external audio signal.
  • the received audio signal may be further stored in memory or sent via a communications component.
  • the audio component further includes a speaker for outputting audio signals.
  • the terminal device also includes: a communication component 905, a power supply component 906 and other components. Only some components are schematically shown in Figure 9, which does not mean that the terminal device only includes the components shown in Figure 9.
  • the above-mentioned communication component 905 in Figure 9 is configured to facilitate wired or wireless communication between the device where the communication component is located and other devices.
  • the device where the communication component is located can access a wireless network based on communication standards, such as WiFi, 2G, 3G, 4G or 5G, or a combination thereof.
  • the communication component receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
  • In an exemplary embodiment, the communication component may be implemented based on near field communication (NFC) technology, radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • the power supply component 906 provides power for various components of the device where the power supply component is located.
  • a power component may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power to the device in which the power component resides.
  • In this embodiment, the real-scene image of the real environment where the wearable device is located and the cloud desktop image provided by the cloud server can be obtained, and the real-scene image can be displayed synchronously with the cloud desktop image in the virtual scene of the wearable device.
  • In this way, the real-scene image is displayed synchronously with the cloud desktop image in the virtual scene, so that the user can still perceive the real scene of the real world after putting on the wearable device, and does not need to take off the wearable device when he wants to use real-world tools, thereby enhancing the user's immersion in the virtual world.
  • Correspondingly, embodiments of the present application also provide a computer-readable storage medium storing a computer program; when the computer program is executed, it can implement each of the steps that can be executed by the terminal device in the above method embodiments.
  • embodiments of the present invention may be provided as methods, systems, or computer program products.
  • the invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects.
  • the invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device, causing a series of operational steps to be performed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • Memory may include non-permanent storage in computer-readable media, random access memory (RAM) and/or non-volatile memory in the form of read-only memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
  • Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage can be implemented by any method or technology.
  • Information may be computer-readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic tape cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
  • As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the present application provide a cloud desktop display method, device, equipment and storage medium. In the cloud desktop display method, a real-scene image of the real environment where a wearable device is located and a cloud desktop image provided by a cloud server can be obtained, and the real-scene image is displayed synchronously with the cloud desktop image in the virtual scene of the wearable device. In this way, the real-scene image is displayed synchronously with the cloud desktop image in the virtual scene, so that the user can still perceive the real scene of the real world after putting on the wearable device, and does not need to take off the wearable device when he wants to use real-world tools, thereby enhancing the user's immersion in the virtual world.

Description

一种云桌面的展示方法、装置、设备及存储介质
交叉引用
本申请引用于2022年3月31日递交的名称为“一种云桌面的展示方法、装置、设备及存储介质”的第202210329680.6号中国专利申请,其通过引用被全部并入本申请。
技术领域
本申请实施例涉及智能可穿戴技术领域,尤其涉及一种云桌面的展示方法、装置、设备及存储介质。
背景技术
随着虚拟现实、增强现实和混合现实等相关技术的高速发展,头戴式智能设备不断推陈出新且使用体验逐渐提高,如头戴式虚拟现实眼镜、头戴式混合现实眼镜等智能眼镜。
在现有技术中,可利用智能眼镜来展示云端服务器发送的云桌面视频,并通过与智能眼镜配套的手柄或者其他控制器与云桌面进行交互,这使得用户可在虚拟世界中通过云桌面进行远程办公或休闲活动。
但是,在这种方式中,由于用户戴上智能眼镜后无法感知到现实世界的实际场景,用户想要使用现实世界的工具时,需要将智能眼镜摘下,从而降低了用户在虚拟世界中的沉浸感。因此,一种解决方案亟待被提出。
发明内容
本申请实施例提供一种云桌面的展示方法、装置、设备及存储介质,用以使得用户戴上穿戴式装置后仍可感知到现实世界的真实场景,进而提升用户在虚拟世界中的沉浸感。
本申请实施例提供一种云桌面的展示方法,包括:获取穿戴式装置所在的现实环境中的实景图像以及云端服务器提供的云桌面图像;在所述穿戴式装置的虚拟场景中,与所述云桌面图像同步展示所述实景图像。
进一步可选地,获取穿戴式装置所在的现实环境中的实景图像以及云端 服务器提供的云桌面图像,包括:针对云桌面视频流中的任一帧云桌面图像,获取所述云桌面图像的时间戳;从对所述穿戴式装置所在的现实环境进行拍摄到的现实场景视频流中选取与所述云桌面图像的时间戳相同的一帧图像,作为所述实景图像。
进一步可选地,在所述穿戴式装置的虚拟场景中,与所述云桌面图像同步展示所述实景图像,包括:对所述云桌面图像以及所述实景图像进行融合,得到融合图像;在所述穿戴式装置的虚拟场景中,展示所述融合图像。
进一步可选地,对所述云桌面图像以及所述实景图像进行融合,得到融合图像,包括:将所述实景图像叠加在所述云桌面图像上,得到所述融合图像。
进一步可选地,对所述云桌面图像以及所述实景图像进行融合,得到融合图像,包括:将所述实景图像与所述云桌面图像进行拼接,得到所述融合图像。
进一步可选地,所述实景图像包括:双目摄像头拍摄得到的左视实景图像以及右视实景图像;对所述云桌面图像以及所述实景图像进行融合,得到融合图像,包括:对所述云桌面图像进行双目渲染,得到左视虚拟图像和右视虚拟图像;将所述左视实景图像与所述左视虚拟图像进行融合,得到左视融合图像;以及将所述右视实景图像与所述右视虚拟图像进行融合,得到右视融合图像。
进一步可选地,所述穿戴式装置,还包括:视线检测组件;所述方法还包括:通过所述视线检测组件对用户进行视线检测,得到视线方向;根据所述视线方向,确定所述用户在所述虚拟场景中的注视区域;若所述注视区域位于所述实景图像所在的区域,则突出展示所述实景图像。
本申请实施例还提供一种云桌面的展示装置,包括:获取模块,用于:获取穿戴式装置所在的现实环境中的实景图像以及云端服务器提供的云桌面图像;展示模块,用于:在所述穿戴式装置的虚拟场景中,与所述云桌面图像同步展示所述实景图像。
本申请实施例还提供一种终端设备,包括:存储器以及处理器;所述存储器用于:存储一条或多条计算机指令;所述处理器用于执行所述一条或多条计算机指令,以用于:执行云桌面的展示方法中的步骤。
本申请实施例还提供一种存储有计算机程序的计算机可读存储介质,计 算机程序被处理器执行时,致使处理器实现云桌面的展示方法中的步骤。
本申请实施例提供的一种云桌面的展示方法、装置、设备及存储介质,可获取穿戴式装置所在的现实环境中的实景图像以及云端服务器提供的云桌面图像,并在穿戴式装置的虚拟场景中,与云桌面图像同步展示实景图像。通过这种方式,在虚拟场景中与云桌面图像同步展示实景图像,使得用户戴上穿戴式装置后仍可感知到现实世界的真实场景,用户想要使用现实世界的工具时无需摘下穿戴式装置,从而提升了用户在虚拟世界中的沉浸感。
附图说明
为了更清楚地说明本发明实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作一简单地介绍,显而易见地,下面描述中的附图是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为本申请一示例性实施例提供的云桌面的展示方法的流程示意图;
图2为本申请一示例性实施例提供的叠加示意图;
图3为本申请一示例性实施例提供的拼接示意图;
图4为本申请一示例性实施例提供的双目渲染的示意图;
图5为本申请一示例性实施例提供的双目渲染的修正的示意图;
图6为本申请一示例性实施例提供的移动终端的架构图;
图7为本申请一示例性实施例提供的显示终端的示意图;
图8为本申请一示例性实施例提供的云桌面的展示装置的示意图;
图9为本申请一示例性实施例提供的终端设备的示意图。
具体实施方式
为使本发明实施例的目的、技术方案和优点更加清楚,下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
在现有技术中,可利用智能眼镜来展示云端服务器发送的云桌面视频,并通过与智能眼镜配套的手柄或者其他控制器与云桌面进行交互,这使得用 户可在虚拟世界中通过云桌面进行远程办公或休闲活动。但是,在这种方式中,由于用户戴上智能眼镜后无法感知到现实世界的实际场景,用户想要使用现实世界的工具时,需要将智能眼镜摘下,从而降低了用户在虚拟世界中的沉浸感。
针对上述技术问题,在本申请一些实施例中,提供了一种解决方案,以下将结合附图,详细说明本申请各实施例提供的技术方案。
图1为本申请一示例性实施例提供的一种云桌面的展示方法的流程示意图,如图1所示,该方法包括:
步骤11、获取穿戴式装置所在的现实环境中的实景图像以及云端服务器提供的云桌面图像。
步骤12、在穿戴式装置的虚拟场景中,与云桌面图像同步展示实景图像。
本实施例可由终端设备执行,也可由穿戴式装置执行,也可由云端服务器执行。其中,终端设备可包括计算机、平板电脑或手机等等。其中,可穿戴设备,可包括:VR(Virtual Reality,虚拟现实)眼镜、MR(Mixed Reality,混合现实)眼镜或VR头戴显示设备((Head-Mounted Display),HMD)等等。
其中,现实环境指的是真实世界中的环境,实景图像用于反映穿戴式装置所在的现实环境。在本实施例中,可通过现实环境中安装的摄像头拍摄穿戴式装置所在现实环境,得到实景视频流。实景图像为该实景视频流中的任一帧图像。其中,用于对现实环境进行拍摄的摄像头,可安装在穿戴式装置上,且其安装位置应当确保摄像头的视场范围位于人眼的视场范围内,从而在人眼视线被穿戴式装置遮挡时,使得摄像头可代替人眼观察现实场景。
其中,云桌面又称桌面虚拟化、云电脑,可利用虚拟技术,对各种物理设备进行虚拟化处理,从而使资源的利用率得到有效提升,以此节约成本、提高应用质量。在云端服务器中的云平台上部署虚拟桌面后,用户可在任何地方访问虚拟桌面和应用。云端服务器可将云桌面视频流发送至穿戴式装置进行展示。其中,云桌面图像指的是云桌面视频流中的任一帧图像。
获取到实景图像和云桌面图像后,可在穿戴式装置的虚拟场景中,与云桌面图像同步展示实景图像。
当本实施例由终端设备执行时,以平板电脑为例,平板电脑可获取穿戴式装置发送的实景图像和云端服务器发送的云桌面图像,并将实景图像和云桌面图像发送到穿戴式装置进行同步展示。
当本实施例由穿戴式装置执行时,以VR眼镜为例,VR眼镜可采集实景图像,并获取云端服务器发送的云桌面图像,将实景图像和云桌面图像进行同步展示;或者,终端设备可接收云端服务器发送的云桌面图像并转发给VR眼镜;VR眼镜可获取终端设备发送的云桌面图像、采集实景图像,并将实景图像和云桌面图像进行同步展示。
当本实施例由云端服务器执行时,云端服务器可获取穿戴式装置发送的实景图像,并将实景图像和云桌面图像发送到穿戴式装置进行同步展示。
其中,与云桌面图像同步展示实景图像是指,在穿戴式装置的虚拟场景中展示云桌面图像的同时,展示实景图像。即,用户可在虚拟场景中观看到虚拟的云桌面图像,也能够观看到真实的实景图像。
通过这种实施方式,可在虚拟场景中与云桌面图像同步展示实景图像,使得用户戴上穿戴式装置后仍可感知到现实世界的真实场景,用户想要使用现实世界的工具时无需摘下穿戴式装置,从而提升了用户在虚拟世界中的沉浸感。
同时,当用于采集实景图像的摄像头安装在穿戴式装置上时,无需设置外接的与穿戴式装置配套的手柄或者其他控制器即可实现与云桌面进行交互,一方面降低了硬件成本,另一方面,有利于穿戴式装置更加一体化、轻量化。
以下实施例中,将以穿戴式装置为执行主体进行实例性说明。
在一些可选的实施例中,穿戴式装置可实时获取云服务器发送的云桌面视频流以及显示场景的实景视频流,并可对云桌面视频流中的云桌面图像以及实景视频流中的实景图像进行逐帧同步展示。以下将进行示例性说明。
针对获取到的云桌面视频流中的任一帧云桌面图像,穿戴式装置可获取该云桌面图像的时间戳。比如,云桌面图像P1的时间戳为20:59:01。进而,穿戴式装置可从对穿戴式装置所在的现实环境进行拍摄到的现实场景视频流中选取与云桌面图像的时间戳相同的一帧图像,作为实景图像。比如,穿戴式装置可从现实场景视频流中选取时间戳为20:59:01的图像P1',作为实景图像。从而,穿戴式装置可同步展示时间戳相同的不同图像。在一些可能的实际应用场景中,云桌面上可运行一些特定的应用,由于获取的云桌面图像和实景图像的时间戳相同,用户可通过与实景图像中的特定物品进行交互 来与应用进行实时交互。比如,用户可通过触摸实景图像中的充电器,使得应用中同步执行对应的操作(应用关闭、应用打开或应用重启等等)。当运行的应用为游戏时,用户可通过点击实景图像中的茶杯或牙刷等等,使得游戏中的虚拟人物同步执行对应的操作(喝水或刷牙等等)。
通过这种实施方式,可使得获取的云桌面图像和实景图像的时间戳相同,进而,提升了云桌面图像和实景图像的同步效果,降低了云桌面图像和实景图像同步展示时的割裂感。
可选地,前述实施例的步骤12“在穿戴式装置的虚拟场景中,与云桌面图像同步展示实景图像”的操作,可基于以下步骤实现:
步骤121、对云桌面图像以及实景图像进行融合,得到融合图像。
步骤122、在穿戴式装置的虚拟场景中,展示融合图像。
通过这种方式,用户可从融合图像中同时观察到云桌面的相关信息和现实世界的相关信息,当用户想要使用现实世界的工具时无需摘下穿戴式装置,进而,使得用户在使用穿戴式装置时,可更方便的与现实世界的事物进行交互。
可选地,穿戴式装置可基于以下实施方式对云桌面图像以及实景图像进行融合:
实施方式一、将实景图像叠加在云桌面图像上,得到融合图像。
实施方式二、将实景图像与云桌面图像进行拼接,得到融合图像。
在实施方式一中,如图2中A部分所示,可将整个实景图像与整个云桌面图像进行叠加。如图2中B部分所示,可从实景图像中截取部分实景图像并叠加在云桌面图像的指定区域内,或者,不对实景图像进行截取,而是将实景图像缩小,并叠加在云桌面图像的部分区域内。可选地,用户可通过穿戴式装置上的实体按键或是虚拟场景中的虚拟按键,对实景图像进行放大、缩小或截取等操作,并可调整实景图像在云桌面图像中叠加的区域。
在实施方式二中,如图3所示,可将实景图像与云桌面图像进行拼接。可选地,用户可通过穿戴式装置上的实体按键或是虚拟场景中的虚拟按键,对实景图像进行放大、缩小或截取等操作,并可调整实景图像和云桌面图像的位置。
需要说明的是,用户可通过穿戴式装置上的实体按键或是虚拟场景中的虚拟按键,在上述叠加和拼接两种实施方式之间进行切换。
通过上述实施方式,通过叠加或拼接对云桌面图像和实景图像进行融合,用户可自由调整云桌面图像和实景图像的融合方式,并可更完整地同时观察到云桌面图像和实景图像。
可选地,在实际场景中,穿戴式装置上通常可安装有左右两个摄像头,简称为双目摄像头。穿戴式装置可基于双目摄像头采集实景图像。其中,实景图像可包括:双目摄像头拍摄得到的左视实景图像以及右视实景图像。其中,左视实景图像为双目摄像头中的左摄像头采集到的实景图像,且该左视实景图像与用户的左眼对应;右视实景图像为双目摄像头中的右摄像头采集到的实景图像,且该右视实景图像与用户的右眼对应。
基于此,穿戴式装置对云桌面图像以及实景图像进行融合,得到融合图像时,可基于以下步骤实现:
步骤S1、对云桌面图像进行双目渲染,得到左视虚拟图像和右视虚拟图像。其中,左视虚拟图像指的是与用户的左眼对应的虚拟图像,右视虚拟图像指的是与用户的右眼对应的虚拟图像。
步骤S2、将左视实景图像与左视虚拟图像进行融合,得到左视融合图像;以及将右视实景图像与右视虚拟图像进行融合,得到右视融合图像。基于该步骤,当用户的左眼和右眼分别看到左视融合图像和右视融合图像时,用户的大脑可将左右眼看到的图像自动合成3D的图像。
以下,将结合图4,就步骤S1中的双目渲染进行详细说明。
如图4所示,R为眼镜要求的显示距离,FOV(Field of View,视场角),w是用户双眼之间的距离。
D为输出到左右屏幕的像素宽度,即用户单眼能看到的画面的最大宽度,可基于以下公式1计算得出:
Figure PCTCN2022111741-appb-000001
取云桌面图像中的一像素点到中心轴的距离为S,则该点的坐标为(S,R)。
基于上述过程,可分别通过以下公式2和公式3计算得到这个像素点的x坐标值在左视虚拟图像中的比例(用B1表示),以及这个像素点的x坐标值在 右视虚拟图像中的比例(用B2表示)。
Figure PCTCN2022111741-appb-000002
Figure PCTCN2022111741-appb-000003
计算得到这个像素点的x坐标值在左视虚拟图像和右视虚拟图像中的比例后,乘以屏幕实际像素宽度a即可得出最终的x坐标值,y坐标值不变。
通过这种双目渲染的方式,可得到分别与用户左眼和右眼对应的左视虚拟图像和右视虚拟图像,进而使得用户可利用穿戴式装置更加真实地观看云桌面,提升了用户与云桌面交互过程中的沉浸感。
对云桌面图像进行双目渲染后,可进一步进行图像融合。但是,如图5所示,在上述双目渲染过程中,由于双目摄像头之间的距离通常和用户双眼之间的距离不同,所以双目渲染后可根据以下公式进行进一步修正。
如图5所示,d为摄像头与眼睛之间的偏差距离,可基于以下公式计算得到:
d=(w′-e)/2   (公式4)
其中,e为双眼之间的距离,w'为摄像头之间的距离。
基于此,可通过以下公式5和公式6计算摄像头右屏中左边x坐标的偏移量rx1和摄像头右屏中右边x坐标的偏移量rx2:
Figure PCTCN2022111741-appb-000004
Figure PCTCN2022111741-appb-000005
其中,FOV d为摄像头的视场角,FOV e为眼睛的视场角,R为眼镜要求的显示距离。
通过上述偏移量的计算,进一步对双目渲染后得到的左视虚拟图像和右视虚拟图像进行修正,基于较为准确的虚拟图像,可提高后续融合得到的融合图像的画面质量。
在一些实施例中,穿戴式装置可安装有视线检测组件。穿戴式装置可通过视线检测组件对用户进行视线检测,得到用户的视线方向。进而,穿戴式 装置可根据视线方向,确定用户在虚拟场景中的注视区域。
若注视区域位于实景图像所在的区域,则突出展示实景图像;若注视区域位于云桌面图像所在的区域,则突出展示云桌面图像。
需要说明的是,若用户长时间注视云桌面图像,可隐藏实景图像并全屏显示云桌面图像;若用户长时间注视实景图像,可隐藏云桌面图像并全屏显示实景图像。可选地,穿戴式装置的虚拟场景中可预设有一个虚拟按钮/区域,当用户注视该按钮/区域时,可执行对应的功能,比如全屏显示云桌面图像或全屏显示实景图像,或者,按照预设的布局风格展示云桌面图像和实景图像,等等。
以下将结合图6、图7以及实际应用场景,对云桌面的展示方法进行进一步说明。
图6为移动终端(即前述的终端设备)视频流处理方式的架构图,图7为显示终端(即前述的穿戴式装置)视频流处理方式的架构图。
在实际场景中,移动终端上可安装有特定的应用程序(以下简称为Mapp),该应用程序可获取授权的账户信息。用户输入账户密码后,该应用程序可调用云桌面API(Application Programming Interface,应用程序接口)进行授权认证。当用户输入的账户密码通过认证后,Mapp可获取用户账户对应的云桌面虚拟机的IP(Internet Protocol,网络之间互连的协议)地址和端口号。
Mapp可通过无线传输的方式,基于远程桌面连接协议与云桌面虚拟机尝试建立连接。需要说明的是,该远程桌面连接协议包括但不限于:TCP/IP(Transmission Control Protocol/Internet Protocol,传输控制协议/网际协议)协议、NetBEUI(NetBios Enhanced User Interface,通讯协定)协议、IPX/SPX(Internetwork Packet Exchange/Sequences Packet Exchange,分组交换/顺序分组交换)协议或RDP(Remote Display Protocal、远程显示)协议等等,本实施例不做限制。其中,RDP协议是建立在TCP/IP协议之上的远程桌面协议。
在Mapp成功与云桌面虚拟机建立连接后,两者之间可开始ISO层的基于远程桌面连接协议的数据通信。当显示终端未与Mapp连接的时候,Mapp作为普通的云桌面客户端运行使用,当检测到显示终端与Mapp连接的时候,Mapp不再直接显示云桌面界面,而是对云桌面虚拟机返回的云桌面视频流进行图像处理,并发送到显示终端进行显示。
在前述的基于远程桌面连接协议的连接全部成功完成后,Mapp程序可从已建立的虚拟通道中,识别出图像通道。其中,图像通道有专门的标识符。Mapp可获取图像通道发送的图像数据,以得到云桌面的原始位图流数据。
进而,Mapp可获取位图的分辨率,并根据显示终端的分辨率和帧率参数等参数,对得到的原始位图流数据进行压缩优化,以提高后续的图像处理效率。
Mapp可根据显示终端的其他参数(瞳距(offset)、可视角(FOV field of view)、渲染画面宽高(renderWidth/renderHeight)、视椎体的极大值/极小值(depthFar/depthNear)、透镜焦距(Convergence)或反畸变系数(Anti-distortion)等等),确定双目渲染的输入参数。进而,可根据该输入参数,通过图像计算,为每帧云桌面图像生成左视虚拟图像和右视虚拟图像。
Mapp可获取显示终端上的双目摄像头拍摄的左视实景图像以及右视实景图像,并将左视实景图像与左视虚拟图像进行融合,得到左视融合图像,将右视实景图像与右视虚拟图像进行融合,得到右视融合图像。
需要说明的是,Mapp可通过有线连接(type-c或lightning)与显示终端上运行的特定应用程序(以下简称为Vapp)建立传输通道。该传输通道经过进一步的封装后,可分别将左视融合图像和右视融合图像通过各自对应的通道,发送给显示终端的Vapp。显示终端Vapp识别出融合图像后,分别输出到显示终端上的左右两个屏幕,以进行展示。
通过这种实施方式,可在虚拟场景中与云桌面图像同步展示实景图像,使得用户戴上穿戴式装置后仍可感知到现实世界的真实场景,用户想要使用现实世界的工具时无需摘下穿戴式装置,从而提升了用户在虚拟世界中的沉浸感。
图8是本申请一示例性实施例提供的云桌面的展示装置的示意图,如图8所示,该展示装置包括:获取模块801,用于:获取穿戴式装置所在的现实环境中的实景图像以及云端服务器提供的云桌面图像;展示模块802,用于:在所述穿戴式装置的虚拟场景中,与所述云桌面图像同步展示所述实景图像。
进一步可选地,获取模块801在获取穿戴式装置所在的现实环境中的实景图像以及云端服务器提供的云桌面图像时,具体用于:针对云桌面视频流中的任一帧云桌面图像,获取所述云桌面图像的时间戳;从对所述穿戴式装 置所在的现实环境进行拍摄到的现实场景视频流中选取与所述云桌面图像的时间戳相同的一帧图像,作为所述实景图像。
进一步可选地,展示模块802在所述穿戴式装置的虚拟场景中,与所述云桌面图像同步展示所述实景图像时,具体用于:对所述云桌面图像以及所述实景图像进行融合,得到融合图像;在所述穿戴式装置的虚拟场景中,展示所述融合图像。
进一步可选地,展示模块802在对所述云桌面图像以及所述实景图像进行融合,得到融合图像时,具体用于:将所述实景图像叠加在所述云桌面图像上,得到所述融合图像。
进一步可选地,展示模块802在对所述云桌面图像以及所述实景图像进行融合,得到融合图像时,具体用于:将所述实景图像与所述云桌面图像进行拼接,得到所述融合图像。
进一步可选地,所述实景图像包括:双目摄像头拍摄得到的左视实景图像以及右视实景图像。展示模块802在对所述云桌面图像以及所述实景图像进行融合,得到融合图像时,具体用于:对所述云桌面图像进行双目渲染,得到左视虚拟图像和右视虚拟图像;将所述左视实景图像与所述左视虚拟图像进行融合,得到左视融合图像;以及将所述右视实景图像与所述右视虚拟图像进行融合,得到右视融合图像。
进一步可选地,所述穿戴式装置,还包括:视线检测组件。展示模块802还用于:通过所述视线检测组件对用户进行视线检测,得到视线方向;根据所述视线方向,确定所述用户在所述虚拟场景中的注视区域;若所述注视区域位于所述实景图像所在的区域,则突出展示所述实景图像。
在本实施例中,可获取穿戴式装置所在的现实环境中的实景图像以及云端服务器提供的云桌面图像,并在穿戴式装置的虚拟场景中,与云桌面图像同步展示实景图像。通过这种方式,在虚拟场景中与云桌面图像同步展示实景图像,使得用户戴上穿戴式装置后仍可感知到现实世界的真实场景,用户想要使用现实世界的工具时无需摘下穿戴式装置,从而提升了用户在虚拟世界中的沉浸感。
图9是本申请一示例性实施例提供的终端设备的结构示意图,如图9所示,该终端设备包括:存储器901以及处理器902。
存储器901,用于存储计算机程序,并可被配置为存储其它各种数据以支持在终端设备上的操作。这些数据的示例包括用于在终端设备上操作的任何应用程序或方法的指令,联系人数据,电话簿数据,消息,图片,视频等。
其中,存储器901可以由任何类型的易失性或非易失性存储设备或者它们的组合实现,如静态随机存取存储器(SRAM),电可擦除可编程只读存储器(EEPROM),可擦除可编程只读存储器(EPROM),可编程只读存储器(PROM),只读存储器(ROM),磁存储器,快闪存储器,磁盘或光盘。
处理器902,与存储器901耦合,用于执行存储器901中的计算机程序,以用于:获取穿戴式装置所在的现实环境中的实景图像以及云端服务器提供的云桌面图像;在所述穿戴式装置的虚拟场景中,与所述云桌面图像同步展示所述实景图像。
进一步可选地,处理器902在获取穿戴式装置所在的现实环境中的实景图像以及云端服务器提供的云桌面图像时,具体用于:针对云桌面视频流中的任一帧云桌面图像,获取所述云桌面图像的时间戳;从对所述穿戴式装置所在的现实环境进行拍摄到的现实场景视频流中选取与所述云桌面图像的时间戳相同的一帧图像,作为所述实景图像。
进一步可选地,处理器902在所述穿戴式装置的虚拟场景中,与所述云桌面图像同步展示所述实景图像时,具体用于:对所述云桌面图像以及所述实景图像进行融合,得到融合图像;在所述穿戴式装置的虚拟场景中,展示所述融合图像。
进一步可选地,处理器902在对所述云桌面图像以及所述实景图像进行融合,得到融合图像时,具体用于:将所述实景图像叠加在所述云桌面图像上,得到所述融合图像。
进一步可选地,处理器902在对所述云桌面图像以及所述实景图像进行融合,得到融合图像时,具体用于:将所述实景图像与所述云桌面图像进行拼接,得到所述融合图像。
进一步可选地,所述实景图像包括:双目摄像头拍摄得到的左视实景图像以及右视实景图像。处理器902在对所述云桌面图像以及所述实景图像进行融合,得到融合图像时,具体用于:对所述云桌面图像进行双目渲染,得到左视虚拟图像和右视虚拟图像;将所述左视实景图像与所述左视虚拟图像进行融合,得到左视融合图像;以及将所述右视实景图像与所述右视虚拟图 像进行融合,得到右视融合图像。
进一步可选地,所述穿戴式装置,还包括:视线检测组件。处理器902还用于:通过所述视线检测组件对用户进行视线检测,得到视线方向;根据所述视线方向,确定所述用户在所述虚拟场景中的注视区域;若所述注视区域位于所述实景图像所在的区域,则突出展示所述实景图像。
上述图9中的存储器可以由任何类型的易失性或非易失性存储设备或者它们的组合实现,如静态随机存取存储器(SRAM),电可擦除可编程只读存储器(EEPROM),可擦除可编程只读存储器(EPROM),可编程只读存储器(PROM),只读存储器(ROM),磁存储器,快闪存储器,磁盘或光盘。
上述图9中的显示器903包括屏幕,其屏幕可以包括液晶显示器(LCD)和触摸面板(TP)。如果屏幕包括触摸面板,屏幕可以被实现为触摸屏,以接收来自用户的输入信号。触摸面板包括一个或多个触摸传感器以感测触摸、滑动和触摸面板上的手势。所述触摸传感器可以不仅感测触摸或滑动动作的边界,而且还检测与所述触摸或滑动操作相关的持续时间和压力。
图9中的音频组件904,可被配置为输出和/或输入音频信号。例如,音频组件包括一个麦克风(MIC),当音频组件所在设备处于操作模式,如呼叫模式、记录模式和语音识别模式时,麦克风被配置为接收外部音频信号。所接收的音频信号可以被进一步存储在存储器或经由通信组件发送。在一些实施例中,音频组件还包括一个扬声器,用于输出音频信号。
进一步,如图9所示,该终端设备还包括:通信组件905、电源组件906等其它组件。图9中仅示意性给出部分组件,并不意味着终端设备只包括图9所示组件。
上述图9中的通信组件905被配置为便于通信组件所在设备和其他设备之间有线或无线方式的通信。通信组件所在设备可以接入基于通信标准的无线网络,如WiFi,2G、3G、4G或5G,或它们的组合。在一个示例性实施例中,通信组件经由广播信道接收来自外部广播管理***的广播信号或广播相关信息。在一个示例性实施例中,通信组件可基于近场通信(NFC)技术、射频识别(RFID)技术、红外数据协会(IrDA)技术、超宽带(UWB)技术、蓝牙(BT)技术和其他技术来实现。
其中,电源组件906,为电源组件所在设备的各种组件提供电力。电源组件可以包括电源管理***,一个或多个电源,及其他与为电源组件所在设备 生成、管理和分配电力相关联的组件。
在本实施例中,可获取穿戴式装置所在的现实环境中的实景图像以及云端服务器提供的云桌面图像,并在穿戴式装置的虚拟场景中,与云桌面图像同步展示实景图像。通过这种方式,在虚拟场景中与云桌面图像同步展示实景图像,使得用户戴上穿戴式装置后仍可感知到现实世界的真实场景,用户想要使用现实世界的工具时无需摘下穿戴式装置,从而提升了用户在虚拟世界中的沉浸感。
相应地,本申请实施例还提供一种存储有计算机程序的计算机可读存储介质,计算机程序被执行时能够实现上述方法实施例中可由终端设备执行的各步骤。
本领域内的技术人员应明白,本发明的实施例可提供为方法、***、或计算机程序产品。因此,本发明可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本发明可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。
本发明是参照根据本发明实施例的方法、设备(***)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图 一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
在一个典型的配置中,计算设备包括一个或多个处理器(CPU)、输入/输出接口、网络接口和内存。
内存可能包括计算机可读介质中的非永久性存储器,随机存取存储器(RAM)和/或非易失性内存等形式,如只读存储器(ROM)或闪存(flash RAM)。内存是计算机可读介质的示例。
计算机可读介质包括永久性和非永久性、可移动和非可移动媒体可以由任何方法或技术来实现信息存储。信息可以是计算机可读指令、数据结构、程序的模块或其他数据。计算机的存储介质的例子包括,但不限于相变内存(PRAM)、静态随机存取存储器(SRAM)、动态随机存取存储器(DRAM)、其他类型的随机存取存储器(RAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、快闪记忆体或其他内存技术、只读光盘只读存储器(CD-ROM)、数字多功能光盘(DVD)或其他光学存储、磁盒式磁带,磁盘存储或其他磁性存储设备或任何其他非传输介质,可用于存储可以被计算设备访问的信息。按照本文中的界定,计算机可读介质不包括暂存电脑可读媒体(transitory media),如调制的数据信号和载波。
还需要说明的是,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、商品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、商品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、商品或者设备中还存在另外的相同要素。
以上所述仅为本申请的实施例而已,并不用于限制本申请。对于本领域技术人员来说,本申请可以有各种更改和变化。凡在本申请的精神和原理之内所作的任何修改、等同替换、改进等,均应包含在本申请的权利要求范围之内。

Claims (10)

  1. A cloud desktop display method, characterized by comprising:
    obtaining a real-scene image of the real environment where a wearable device is located and a cloud desktop image provided by a cloud server;
    in the virtual scene of the wearable device, displaying the real-scene image synchronously with the cloud desktop image.
  2. The method according to claim 1, characterized in that obtaining the real-scene image of the real environment where the wearable device is located and the cloud desktop image provided by the cloud server comprises:
    for any frame of cloud desktop image in a cloud desktop video stream, obtaining the timestamp of the cloud desktop image;
    selecting, from a real-scene video stream captured of the real environment where the wearable device is located, a frame with the same timestamp as the cloud desktop image as the real-scene image.
  3. The method according to claim 1, characterized in that, in the virtual scene of the wearable device, displaying the real-scene image synchronously with the cloud desktop image comprises:
    fusing the cloud desktop image and the real-scene image to obtain a fused image;
    displaying the fused image in the virtual scene of the wearable device.
  4. The method according to claim 3, characterized in that fusing the cloud desktop image and the real-scene image to obtain a fused image comprises:
    superimposing the real-scene image on the cloud desktop image to obtain the fused image.
  5. The method according to claim 3, characterized in that fusing the cloud desktop image and the real-scene image to obtain a fused image comprises:
    splicing the real-scene image and the cloud desktop image to obtain the fused image.
  6. The method according to any one of claims 3-5, characterized in that the real-scene image comprises: a left-view real-scene image and a right-view real-scene image captured by a binocular camera;
    fusing the cloud desktop image and the real-scene image to obtain a fused image comprises:
    performing binocular rendering on the cloud desktop image to obtain a left-view virtual image and a right-view virtual image;
    fusing the left-view real-scene image with the left-view virtual image to obtain a left-view fused image; and fusing the right-view real-scene image with the right-view virtual image to obtain a right-view fused image.
  7. The method according to any one of claims 1-5, characterized in that the wearable device further comprises: a line-of-sight detection component; the method further comprises:
    detecting the user's line of sight through the line-of-sight detection component to obtain a line-of-sight direction;
    determining the user's gaze area in the virtual scene according to the line-of-sight direction;
    if the gaze area is located in the area where the real-scene image is located, highlighting the real-scene image.
  8. A cloud desktop display device, characterized by comprising:
    an acquisition module, configured to: acquire a real-scene image of the real environment where a wearable device is located and a cloud desktop image provided by a cloud server;
    a display module, configured to: display the real-scene image synchronously with the cloud desktop image in the virtual scene of the wearable device.
  9. A terminal device, characterized by comprising: a memory and a processor;
    wherein the memory is configured to: store one or more computer instructions;
    the processor is configured to execute the one or more computer instructions, so as to: perform the steps in the method according to any one of claims 1-7.
  10. A computer-readable storage medium storing a computer program, characterized in that, when the computer program is executed by a processor, the processor is caused to implement the steps in the method according to any one of claims 1-7.
PCT/CN2022/111741 2022-03-31 2022-08-11 一种云桌面的展示方法、装置、设备及存储介质 WO2023184816A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210329680.6A CN114442814B (zh) 2022-03-31 2022-03-31 一种云桌面的展示方法、装置、设备及存储介质
CN202210329680.6 2022-03-31

Publications (1)

Publication Number Publication Date
WO2023184816A1 true WO2023184816A1 (zh) 2023-10-05

Family

ID=81360275

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/111741 WO2023184816A1 (zh) 2022-03-31 2022-08-11 一种云桌面的展示方法、装置、设备及存储介质

Country Status (2)

Country Link
CN (1) CN114442814B (zh)
WO (1) WO2023184816A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114442814B (zh) * 2022-03-31 2022-09-16 灯影科技有限公司 一种云桌面的展示方法、装置、设备及存储介质
CN115661418A (zh) * 2022-12-22 2023-01-31 灯影科技有限公司 混合现实显示装置、方法、设备及存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105955456A (zh) * 2016-04-15 2016-09-21 深圳超多维光电子有限公司 虚拟现实与增强现实融合的方法、装置及智能穿戴设备
CN106774869A (zh) * 2016-12-08 2017-05-31 广州大西洲科技有限公司 一种实现虚拟现实的方法、装置及虚拟现实头盔
CN110412765A (zh) * 2019-07-11 2019-11-05 Oppo广东移动通信有限公司 增强现实图像拍摄方法、装置、存储介质及增强现实设备
CN114442814A (zh) * 2022-03-31 2022-05-06 灯影科技有限公司 一种云桌面的展示方法、装置、设备及存储介质

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108957742B (zh) * 2017-05-19 2021-09-03 深圳市易瞳科技有限公司 一种实现画面虚拟透明动态调节的增强现实头盔及方法
CN107105204A (zh) * 2017-05-19 2017-08-29 王朋华 一种实现实时实景视频桌面背景的方法和装置

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105955456A (zh) * 2016-04-15 2016-09-21 深圳超多维光电子有限公司 虚拟现实与增强现实融合的方法、装置及智能穿戴设备
CN106774869A (zh) * 2016-12-08 2017-05-31 广州大西洲科技有限公司 一种实现虚拟现实的方法、装置及虚拟现实头盔
CN110412765A (zh) * 2019-07-11 2019-11-05 Oppo广东移动通信有限公司 增强现实图像拍摄方法、装置、存储介质及增强现实设备
CN114442814A (zh) * 2022-03-31 2022-05-06 灯影科技有限公司 一种云桌面的展示方法、装置、设备及存储介质

Also Published As

Publication number Publication date
CN114442814B (zh) 2022-09-16
CN114442814A (zh) 2022-05-06

Similar Documents

Publication Publication Date Title
US11838518B2 (en) Reprojecting holographic video to enhance streaming bandwidth/quality
WO2023184816A1 (zh) 一种云桌面的展示方法、装置、设备及存储介质
US10171792B2 (en) Device and method for three-dimensional video communication
US9934573B2 (en) Technologies for adjusting a perspective of a captured image for display
WO2015157862A1 (en) Augmented reality communications
US20180068489A1 (en) Server, user terminal device, and control method therefor
US10762688B2 (en) Information processing apparatus, information processing system, and information processing method
JP6560740B2 (ja) バーチャルリアリティヘッドマウントディスプレイ機器ソフトウェアをテストする方法、装置、プログラム、及び記録媒体
JP2018533232A (ja) ステレオレンダリングシステム
WO2020140758A1 (zh) 图像显示方法、图像处理方法和相关设备
US11218669B1 (en) System and method for extracting and transplanting live video avatar images
JP2017525024A (ja) 入力データを管理するためのアーキテクチャ
US11720996B2 (en) Camera-based transparent display
US10957063B2 (en) Dynamically modifying virtual and augmented reality content to reduce depth conflict between user interface elements and video content
CN109496293B (zh) 扩展内容显示方法、装置、***及存储介质
US20170185147A1 (en) A method and apparatus for displaying a virtual object in three-dimensional (3d) space
JP2021526693A (ja) ポーズ補正
CN110278432B (zh) 一种裸眼3d显示屏3d参数手动校准方法及电子设备
US10764535B1 (en) Facial tracking during video calls using remote control input
JP2016146044A (ja) 映像処理システム、映像処理装置及びその制御方法、並びにプログラム及び記憶媒体
US11521297B2 (en) Method and device for presenting AR information based on video communication technology
JP2015149654A (ja) ヘッドマウントディスプレイ、視聴システム
WO2021134575A1 (zh) 显示控制方法和设备
US11983822B2 (en) Shared viewing of video with prevention of cyclical following among users
CN117478931A (zh) 信息显示方法、装置、电子设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22934636

Country of ref document: EP

Kind code of ref document: A1