WO2019028855A1 - Virtual display device, intelligent interaction method, and cloud server - Google Patents

Virtual display device, intelligent interaction method, and cloud server

Info

Publication number
WO2019028855A1
WO2019028855A1 · PCT/CN2017/097157 · CN2017097157W
Authority
WO
WIPO (PCT)
Prior art keywords
display device
virtual
virtual display
interaction
module
Prior art date
Application number
PCT/CN2017/097157
Other languages
French (fr)
Chinese (zh)
Inventor
廉士国
王恺
林义闽
刘兆祥
Original Assignee
深圳前海达闼云端智能科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳前海达闼云端智能科技有限公司 filed Critical 深圳前海达闼云端智能科技有限公司
Priority to PCT/CN2017/097157 priority Critical patent/WO2019028855A1/en
Priority to CN201780003281.8A priority patent/CN108401463A/en
Publication of WO2019028855A1 publication Critical patent/WO2019028855A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/011Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns

Definitions

  • the present application relates to the field of mixed reality (MR), and in particular to a virtual display device, an intelligent interaction method, and a cloud server.
  • At the same time, with the rapid development of smart terminals and Internet of Things technologies, more and more smart devices are in use, especially smart terminal devices with display screens, such as computers, smart phones, and smart TVs.
  • How current smart wearable display devices, such as virtual reality (VR) and augmented reality (AR) devices, can integrate existing smart terminal devices with display screens has become an urgent problem. Augmented reality (AR) is a technology that calculates the position and angle of an image in real time and overlays corresponding images, video, and 3D models; the goal of AR technology is to add the virtual world to the real world and allow interaction on the screen.
  • For example, in the augmented reality scene presented in demonstrations of Microsoft HoloLens smart glasses, a person wearing the glasses can see a virtual person standing on the ground and a virtual picture on the wall. In such scenes, smart glasses can display arbitrary digital pictures, but it is still difficult to integrate smart terminals with different functions.
  • On the one hand, the displayed position must be realistic and appropriate; on the other hand, natural interaction must be supported.
  • The screens of traditional smart terminals come in different sizes and are positioned according to people's usage habits; they cannot simply appear at a fixed size in the center of the user's line of sight. Moreover, current virtual reality (VR) and augmented reality (AR) devices are generally operated with spatial gestures, which are unsuitable for operating smart terminal content that requires high precision, and arm fatigue occurs easily in the absence of force feedback.
  • Chinese Patent Application No. 201611160770.8 discloses a display system applied to video AR, in which a model video generated from a user-uploaded original video is combined with a preset three-dimensional scene model and presented through an AR display; the three-dimensional virtual-real display scene is combined with user photos, and the user can customize an AR display scene, producing a video of the user's own customized AR scene.
  • However, current virtual reality (VR) and augmented reality (AR) devices have not yet been integrated with traditional smart terminals with display screens, and cannot provide a more diverse AR experience; therefore, existing smart wearable display technology still needs to be improved.
  • The present application provides a virtual display device, an intelligent interaction method, and a device that integrate the functions of various traditional smart terminals by combining a wearable smart display device with traditional interactive tools, unifying the screens of traditional smart terminals so that they are adaptively displayed and interacted with on the virtual display device, or synchronously and adaptively displayed and interacted with on different virtual display devices.
  • In a first aspect, described from the perspective of the virtual display terminal, an embodiment of the present application provides a virtual display device that can communicate with a second virtual display device and an interaction terminal; the virtual display device includes a receiving module, a wireless connection module, and a display module.
  • the receiving module is configured to receive a three-dimensional display position specified by a user and to obtain on-screen display content of the second virtual display device;
  • the display module is configured to determine a virtual window according to the three-dimensional display position, and display the screen display content of the second virtual display device in the virtual window;
  • the receiving module is further configured to acquire an interactive operation instruction issued by the interaction terminal for the on-screen display content;
  • the display module is further configured to display, in the virtual window, the on-screen content that is updated by the second virtual display device in response to the interactive operation instruction.
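To make the module split above concrete, the sketch below outlines the first-aspect modules in Python. It is a minimal illustration under stated assumptions: the class and method names (ReceivingModule, DisplayModule, and so on) are paraphrases introduced here, not identifiers from the application, and the method bodies are placeholders.

```python
from dataclasses import dataclass

@dataclass
class DisplayPosition:
    x: float
    y: float
    z: float  # user-specified three-dimensional display position

class ReceivingModule:
    def receive_display_position(self) -> DisplayPosition:
        # Receive the 3D display position specified by the user (placeholder value).
        return DisplayPosition(0.0, 1.5, 2.0)

    def fetch_screen_content(self) -> str:
        # Obtain the on-screen display content of the second virtual display device.
        return "desktop of the second virtual display device"

class WirelessConnectionModule:
    def acquire_interaction_instruction(self):
        # Acquire an interactive operation instruction from the interaction terminal
        # (keyboard/mouse, remote controller, touchpad); None means no input yet.
        return None

class TransmittingModule:
    def send(self, instruction) -> None:
        # Forward the instruction toward the cloud-side second virtual display device.
        print("sent:", instruction)

class DisplayModule:
    def determine_virtual_window(self, position: DisplayPosition):
        # Determine a virtual window anchored at the given 3D position.
        return {"anchor": (position.x, position.y, position.z)}

    def show(self, window, content) -> None:
        # Render the (possibly updated) on-screen content inside the virtual window.
        print(f"window at {window['anchor']}: {content}")
```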
  • In a second aspect, described in terms of the processing flow of the virtual display terminal, an embodiment of the present application provides an intelligent interaction method.
  • the virtual display device receives the three-dimensional display position specified by the user, and acquires the on-screen display content of the second virtual display device;
  • the virtual display device determines a virtual window according to the three-dimensional display position, and displays the screen display content of the second virtual display device in the virtual window;
  • the virtual display device acquires an interactive operation instruction of the interactive terminal to the screen display content; and displays, in the virtual window, the screen display content that is updated by the second virtual display device in response to the interaction operation instruction.
  • In a third aspect, described from the cloud side of the intelligent interaction, an embodiment of the present application provides a cloud server that can communicate with a virtual display device, the virtual display device communicating with an interaction terminal; the cloud server includes a second virtual display device, a sending module, and a receiving module.
  • the second virtual display device is configured to output on-screen display content
  • the sending module is configured to send the screen display content to the virtual display device, where the virtual display device determines a virtual window at a user-specified three-dimensional display position, and displays the screen display content in the virtual window;
  • the receiving module is configured to receive an interaction operation instruction acquired by the virtual display device from the interaction end;
  • the second virtual display device is further configured to output updated on-screen content to the virtual display device in response to the interactive operation instruction.
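A matching sketch of the third-aspect cloud server is given below, again with illustrative names only; the second virtual display device stands in for a terminal system (PC, phone, or TV) running in the cloud, and print() stands in for the wireless link back to the glasses.

```python
class SecondVirtualDisplayDevice:
    def __init__(self):
        self.screen = "home screen"

    def output_screen_content(self) -> str:
        # Output the current on-screen display content.
        return self.screen

    def apply_instruction(self, instruction: str) -> str:
        # Respond to an interactive operation instruction and output updated content.
        self.screen = f"screen after '{instruction}'"
        return self.screen

class CloudServer:
    def __init__(self, device: SecondVirtualDisplayDevice, push_to_display):
        self.device = device
        self.push = push_to_display  # sending module: pushes content to the virtual display device

    def start(self):
        self.push(self.device.output_screen_content())

    def on_instruction(self, instruction: str):
        # Receiving module: handle an instruction relayed by the virtual display device.
        self.push(self.device.apply_instruction(instruction))

# Example wiring: print() stands in for the wireless link to the glasses.
server = CloudServer(SecondVirtualDisplayDevice(), push_to_display=print)
server.start()
server.on_instruction("mouse click on browser icon")
```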
  • The beneficial effect of the embodiments of the present application is that, by running the second virtual display device on the cloud server system while the virtual display device is wirelessly connected to the cloud server and to the interaction terminal, the virtual display device serves as the display medium for the second virtual display device and the interaction terminal; combining the wearable smart display device with traditional interactive tools integrates the functions of various traditional smart terminals into the virtual display device, unifies the screens of traditional smart terminals, adaptively displays and interacts on the virtual display device, or synchronously and adaptively displays and interacts on different virtual display devices, providing a richer and more diverse interactive experience.
  • FIG. 1 is a system architecture diagram of an intelligent interaction system provided by an embodiment of the present application;
  • FIG. 2 is a multi-person interaction system architecture diagram of an intelligent interaction system provided by an embodiment of the present application;
  • FIG. 3 is a block diagram of an intelligent interaction system provided by an embodiment of the present application;
  • FIG. 4 is a multi-person interaction flowchart of an intelligent interaction method provided by an embodiment of the present application;
  • FIG. 5 is a main flowchart of an intelligent interaction method provided by an embodiment of the present application;
  • FIG. 6 is a detailed flowchart of an intelligent interaction method provided by an embodiment of the present application;
  • FIG. 7 is a multi-person interaction flowchart of an intelligent interaction method provided by an embodiment of the present application;
  • FIG. 8 is an interaction simulation diagram of a mixed reality display device in an intelligent interaction method provided by an embodiment of the present application;
  • FIG. 9 is a hardware framework diagram for implementing an intelligent interaction method provided by an embodiment of the present application;
  • FIG. 10 is a schematic structural diagram of another interaction terminal embodiment of an intelligent interaction system according to an embodiment of the present application;
  • FIG. 11 is a schematic structural diagram of still another interaction terminal embodiment of an intelligent interaction system according to an embodiment of the present application.
  • The virtual display device, intelligent interaction method, and device provided by the embodiments of the present application combine a wearable smart display device with traditional interactive tools, such as a keyboard and mouse, a remote controller, or a touchpad, to integrate the functions of various traditional smart terminals such as computers, smart phones, and smart TVs. The screens of these traditional smart terminals are unified and adaptively displayed with stereoscopic perspective on the virtual display device, where interaction is also completed, or adaptive display and interaction are synchronized across different virtual display devices.
  • the intelligent interaction method of the present application relies on a smart interaction system composed of a plurality of devices.
  • The system includes the wearable virtual display device 10 (e.g., augmented reality glasses), the interaction terminal, and the interactive cloud 90.
  • The wearable virtual display device 10 of the present application is an augmented reality (AR) device that includes a transparent spectacle-lens display screen, through which the user can see the real world.
  • the virtual display device 10 displays the three-dimensional virtual content screen at a suitable location and exchanges data with the interactive terminal and the interactive cloud 90 by wireless communication.
  • the interactive terminal can be a keyboard 40, a mouse 50, a wireless remote controller 60, a touchpad 70, and the like.
  • the user can manipulate the virtual content displayed in the three-dimensional virtual space by operating the interaction terminal.
  • the interactive cloud 90 is formed by networking a plurality of cloud servers; a network of three cloud servers 91, 92, and 93 is shown in the figure.
  • the second virtual display device is operated in the cloud server of the interactive cloud 90.
  • the second virtual display device is a traditional smart terminal system, and may be a computer system, a smart phone system, a smart television system, or the like.
  • The interactive cloud 90 is responsible for transmitting display data related to the second virtual display device to the wearable virtual display device 10, and for operating the digital content of the second virtual display device according to the interactive operation instructions that it acquires from the virtual display device 10, which the user triggers at the interaction terminal.
  • The virtual display device 10 is provided with one or more CPUs, with a GPU added when necessary to meet image recognition requirements, to complete functions such as three-dimensional modeling, data transmission and reception, and multi-window image display.
  • The virtual display device 10 can run an Android operating system, or an operating system such as iOS or Windows Phone.
  • This embodiment is a conceptual embodiment of the intelligent interaction method and apparatus, described from the perspective of the virtual display device.
  • the virtual display device may be a virtual reality (VR) device, an augmented reality (AR) device, or a mixed reality display device.
  • the virtual display device is wirelessly connected to the cloud server and wirelessly connected to the interaction terminal, and the virtual display device communicates with the second virtual display device running on the cloud server system.
  • The virtual display device serves as the display medium for the second virtual display device and the interaction terminal; by combining the wearable smart display device with traditional interactive tools, the functions of various traditional smart terminals are integrated into the three-dimensional virtual display system of the virtual display device.
  • The virtual display device 10 includes a transmitting module 14 and a receiving module 12 for communicating with the interactive cloud 90, a position determining module 16 and a window module 18 for determining the virtual window, a display module 20, an adaptive display module 40 with a visual computing module 42 connected to the window module 18, a wireless connection module 30 for communicating with the interaction terminal, and a multi-person interaction module 35.
  • The window module 18 may also be integrated into the display module 20.
  • The interaction terminal in each embodiment includes an interactive communication module 60.
  • the interactive communication module 60 establishes a wireless connection and data communication with the wireless connection module 30 of the virtual display device 10 when the interactive terminal is started, thereby transmitting the interactive operation instruction of the user at the interactive terminal to the virtual display device 10.
  • the virtual display device 10 is wirelessly connected and communicated with the second virtual display device 50 via a wireless communication module, wherein the wireless communication module includes the transmitting module 14 and the receiving module 12.
  • the second virtual display device 50 is a software system or software module running on the interactive cloud 90.
  • the receiving module 12 receives a three-dimensional display position specified by the user.
  • the three-dimensional display position is a spatial three-dimensional coordinate.
  • the window module 18 of the display module 20 determines a virtual window based on the three-dimensional display position.
  • the receiving module 12 acquires the on-screen content of the second virtual display device 50, and displays the on-screen content from the second virtual display device in the virtual window.
  • the wireless connection module 30 of the virtual display device 10 acquires an interactive operation instruction of the interactive terminal to the screen display content.
  • the transmitting module 14 of the virtual display device 10 transmits the interactive operation command to the second virtual display device 50.
  • the display module 20 displays, in the virtual window, the on-screen display content fed back by the second virtual display device 50 and updated in response to the interactive operation instruction.
  • the adaptive display module of the virtual display device 10 is configured to adaptively display the received on-screen content in the virtual window.
  • the adaptive display module 40 includes a visual computing module 42.
  • the visual computing module 42 detects the amount of relative positional change produced by the user wearing the virtual display device 10.
  • The adaptive display module 40 adjusts the virtual window according to the relative position change and the stereoscopic perspective relationship, and continues to display the on-screen content of the second virtual display device in the adjusted virtual window. Therefore, when the user wearing the virtual display device 10 walks or moves, the head moves as well; the virtual window of the virtual display device 10 changes its stereoscopic appearance with the head movement, and the content continues to be displayed in the transformed window, so that as the user moves in reality, the position of the second virtual display device 50 seen in the virtual three-dimensional space, such as a virtual computer screen, remains physically in place, and the user visually perceives a real, stereoscopic computer screen.
  • the multi-person interaction module 35 of the virtual display device 10 is used to support multi-person viewing/control.
  • The multi-person interaction module 35 is used to establish a multi-person interaction group and to complete interactive sharing: multiple people can simultaneously watch a computer, a television, or a mobile phone/tablet, and all of them can control it through a connected interaction terminal, where each user wears a virtual display device 10.
  • The multi-person interaction module 35 is further configured to determine whether the local virtual display device is the primary virtual display device or a secondary virtual display device; when it is a secondary virtual display device, the multi-person interaction module 35 acquires the three-dimensional display position and interactive operation instructions of the primary virtual display device from the cloud server, and simultaneously acquires the on-screen display content sent to the primary virtual display device for synchronous display.
  • The interaction terminal is connected to both the primary virtual display device and the secondary virtual display device; its interactive operation instructions are shared, and the content displayed synchronously on the primary and secondary virtual display devices is controlled through the interaction terminal.
  • the interactive terminal uses a wireless keyboard 40 and a wireless mouse 50.
  • the interactive terminal uses a wireless remote controller 60.
  • the interactive end uses an interactive touch panel 70.
  • the virtual display device that is first activated is used as the primary virtual display device and the virtual display device that is activated later as the secondary virtual display device.
  • The primary virtual display device is activated first, displays at the display position specified by the user, connects to the cloud server, and controls the display content through the interaction terminal, such as a remote controller.
  • The secondary virtual display device connects to the cloud server, obtains initial setting information such as the display position specified by the user of the primary virtual display device, and manipulates the virtual content through the wireless remote controller.
  • The interaction terminal communicates with the primary/secondary virtual display devices to deliver interactive operation instructions.
  • the cloud server communicates with the primary/secondary virtual display device, receives the interactive operation instruction, operates the virtual content of the second virtual display device, and feeds back the updated on-screen display content to the primary/secondary virtual display device.
  • the virtual display device may be a virtual reality (VR) device, an augmented reality (AR) device, or a mixed reality display device.
  • the second virtual display device is a module running on the cloud server.
  • the multi-person interaction mode (including login, input, etc.) of the device system is similar to the multi-person interaction mode of the traditional real device, and will not be described here.
  • Step 101 The virtual display device receives a three-dimensional display position specified by the user
  • Step 102 Acquire on-screen content of the second virtual display device.
  • Step 103 The virtual display device determines a virtual window according to the three-dimensional display position, and displays the screen display content of the second virtual display device in the virtual window.
  • Step 104 The virtual display device acquires an interactive operation instruction of the interactive terminal to the screen display content.
  • Step 105 Display, in the virtual window, the screen display content that is updated by the second virtual display device in response to the interaction operation instruction.
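The five steps above can be read as a single device-side routine. The sketch below is a toy, self-contained rendering of that flow in Python; plain functions stand in for the receiving, display, and transmitting modules, and the hard-coded values are illustrative only.

```python
def run_steps(receive_position, fetch_content, determine_window, show, send, instructions):
    position = receive_position()          # Step 101: user-specified 3D display position
    window = determine_window(position)    # Step 103: determine the virtual window
    show(window, fetch_content())          # Steps 102-103: fetch and display the content
    for instruction in instructions:       # Step 104: interactive operation instructions
        send(instruction)                  # forward to the second virtual display device
        show(window, fetch_content())      # Step 105: display the updated content

# Toy wiring with plain functions standing in for the modules:
run_steps(
    receive_position=lambda: (0.0, 1.5, 2.0),
    fetch_content=lambda: "virtual PC desktop",
    determine_window=lambda p: {"anchor": p},
    show=lambda w, c: print(f"window at {w['anchor']}: {c}"),
    send=lambda i: print("sent:", i),
    instructions=["keyboard: type 'hello'", "mouse: click OK"],
)
```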
  • The following describes the intelligent interaction method in more detail from the perspective of the virtual display device; it includes:
  • Steps 201 to 204, which are the same as steps 101 to 104 above;
  • Step 205 The virtual display device sends the interaction operation instruction to the second virtual display device, where the second virtual display device runs on the cloud server;
  • Step 206 The second virtual display device operates the virtual content of the system in response to the interactive operation instruction, and feeds back the updated on-screen content;
  • Step 207 Display, in the virtual window, the screen display content that is updated by the second virtual display device in response to the interaction operation instruction;
  • the method also includes the step of adaptively displaying the received on-screen content in the virtual window:
  • Step 208 Detect a relative position change amount of the virtual display device.
  • Methods for detecting the relative position change include, but are not limited to: calculating the relative position change using a physical template with a special pattern, that is, comparing in real time the change in position of the physical template between two adjacent images captured by the camera; or calculating it from environmental information, that is, comparing in real time the change in position of the environment between two adjacent images captured by the camera, for example using PTAM/SLAM; or combining the visual calculation of the relative position change with sensor information, such as an IMU sensor, to improve the accuracy of the position-change calculation when the virtual display device, such as the glasses of a mixed reality display device, moves quickly.
  • Step 209: Adjust the virtual window according to the relative position change and the stereoscopic perspective relationship, and display the on-screen content of the second virtual display device in the adjusted virtual window.
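One way to picture the window adjustment is as a change of reference frame: if the glasses move by some transform between two frames, the world-anchored window must move by the inverse transform in the device's view. The sketch below assumes the relative position change from step 208 is available as a 4x4 homogeneous transform and uses NumPy only for the matrix algebra; it is an illustration, not the application's algorithm.

```python
import numpy as np

def adjust_virtual_window(window_pose_in_device, T_delta):
    """Keep the virtual window fixed in the real world (sketch of the adjustment step).

    window_pose_in_device: 4x4 pose of the virtual window in the device frame
                           before the head moved.
    T_delta:               4x4 relative motion of the device between two frames.
    """
    # If the device moved by T_delta, the world-anchored window appears to move
    # by the inverse transform in the new device frame.
    return np.linalg.inv(T_delta) @ window_pose_in_device

# Example: the user steps 10 cm to the right, so the window shifts left in view.
T_delta = np.eye(4)
T_delta[0, 3] = 0.10          # device translation along x, in metres
window_pose = np.eye(4)
window_pose[2, 3] = 2.0       # window anchored 2 m in front of the user
print(adjust_virtual_window(window_pose, T_delta))
```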
  • FIG. 7 shows a multi-person interaction flowchart of the virtual display device.
  • Step 301 The interaction end is connected to multiple virtual display devices, and the interaction operation instruction of the interaction terminal is shared;
  • Step 302 When the virtual display device requests multi-person interaction, the virtual display device that is first activated is the primary virtual display device, and the virtual display device that is activated later is the secondary virtual display device.
  • Step 303 The primary virtual display device sets an initial display position to receive an interactive operation instruction of the interaction end, and sends the display position and the interaction operation instruction to the cloud server.
  • Step 304 The cloud server stores the display location and forwards the interactive operation instruction to the second virtual display device, and the second virtual display device feeds back the corresponding screen display content to the virtual display device that initiates the request according to the interaction operation instruction;
  • Step 305 The secondary virtual display device is connected to the cloud server.
  • Step 306 The secondary virtual display device acquires the display position of the primary virtual display device from the cloud server, and acquires the screen display content sent to the primary virtual display device for synchronous display.
  • Step 307 The secondary virtual display device determines the virtual window according to the display position and displays the synchronized on-screen content in the virtual window, and receives the interactive operation instruction of the interaction end, and sends the received interaction operation instruction to the cloud server.
  • each virtual display device also initiates an adaptive display step.
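The multi-person flow in steps 301-307 amounts to a simple coordination protocol: the first device to start becomes the primary and registers a display position, later devices become secondary and read that position, and the cloud relays instructions and updated content to everyone. The sketch below is a hedged, in-memory rendering of that protocol; the class names and the string "screen content" are stand-ins, not components defined by the application.

```python
class MultiPersonCloud:
    def __init__(self):
        self.display_position = None        # set by the primary device (steps 303/304)
        self.latest_content = "initial screen"

    def handle_instruction(self, instruction):
        # Steps 304/307: forward to the second virtual display device and keep
        # the content it feeds back; a string update stands in for real rendering.
        self.latest_content = f"screen after '{instruction}'"
        return self.latest_content

class GlassesDevice:
    def __init__(self, name, cloud):
        self.name, self.cloud = name, cloud

    def start(self, position=None):
        if self.cloud.display_position is None:     # step 302: first to start is primary
            self.role = "primary"
            self.cloud.display_position = position  # step 303: register display position
        else:
            self.role = "secondary"                  # steps 305/306: fetch primary's position
            position = self.cloud.display_position
        print(f"{self.name}: {self.role}, window at {position}, "
              f"showing '{self.cloud.latest_content}'")

cloud = MultiPersonCloud()
GlassesDevice("glasses-A", cloud).start(position=(0.0, 1.5, 2.0))
GlassesDevice("glasses-B", cloud).start()
print(cloud.handle_instruction("remote controller: next channel"))
```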
  • This embodiment is a conceptual embodiment of the intelligent interaction method and apparatus, described from the perspective of the cloud server.
  • the cloud server can communicate with a virtual display device, and the virtual display device communicates with an interactive terminal, and the cloud server runs a second virtual display device, a sending module, and a receiving module.
  • the second virtual display device is configured to output on-screen content.
  • the sending module is configured to send the screen display content of the second virtual display device to the virtual display device, where the virtual display device determines a virtual window at a user-specified three-dimensional display position, and displays the screen display content in the virtual window.
  • the receiving module is configured to receive an interaction operation instruction for the screen display content acquired by the virtual display device from the interaction terminal.
  • the second virtual display device is further configured to output updated on-screen content to the virtual display device in response to the interactive operation instruction.
  • the cloud server also sets a multi-person interaction module for implementing multi-person interaction.
  • the virtual display device that is first activated is the primary virtual display device, and the virtual display device that is activated later is the secondary virtual display device.
  • For multi-person interaction, the multi-person interaction module stores the three-dimensional display position and the interactive operation instructions of the primary virtual display device.
  • The sending module of the cloud server is configured to send the three-dimensional display position and the interactive operation instructions of the primary virtual display device, and is further configured to send the on-screen display content synchronized with the primary virtual display device.
  • The interaction terminal is connected to both the primary virtual display device and the secondary virtual display device; its interactive operation instructions are shared, and the content displayed synchronously on the primary and secondary virtual display devices is controlled through the interaction terminal.
  • The interaction terminal 300 uses the wireless keyboard 40 and the wireless mouse 50, allowing the user to move over a wide range.
  • the second virtual display device is a personal computer
  • the virtual display device of the present embodiment is a mixed reality (MR) display device 200.
  • This kind of mixed reality PC interaction, including login, input, and other operations, follows the corresponding operations on a real personal computer.
  • This embodiment implements a personal computer system incorporating a mixed reality display device.
  • The lens screen of the mixed reality display device 200 is semi-transparent; the virtual computer screen can be adaptively displayed on it, and the real world can be seen through it.
  • The mixed reality display device establishes a wireless connection with the wireless keyboard 40 and the wireless mouse 50 in order to acquire the interactive operation instructions of the interaction terminal.
  • The mixed reality display device and the interactive cloud 90 are wirelessly interconnected by means of WiFi, a 4G communication protocol, or the like; the display content of the virtual computer screen running in the interactive cloud is acquired from the interactive cloud 90, and the interactive operation instructions of the interaction terminal are transmitted to the interactive cloud 90.
  • The interactive cloud 90 responds to the request of the augmented reality device, such as the augmented reality glasses, receives the interactive operation instructions, operates the content of the virtual computer, and then sends the updated virtual computer screen display content to the requesting mixed reality display device.
  • the mixed reality display device adaptively displays the on-screen display content from the virtual computer by visual calculation.
  • the user specifies a three-dimensional display position in the three-dimensional space of the mixed reality display device, that is, a position to be displayed on the virtual computer screen.
  • This three-dimensional display position includes three dimensions: x, y, and z.
  • Ways of specifying the three-dimensional display position include, but are not limited to: 1) using an image with a special pattern as a physical template attached to a specified position on the wall 100, such as the wall surface facing a person sitting at a desk; 2) the user selecting the position through an interactive mode such as head-pose or gesture interaction, a mouse, or a keyboard.
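As an illustration of the first option (a special-pattern physical template on the wall), the sketch below locates the template in a camera frame with OpenCV's classic template matching and treats its centre as the anchor of the virtual window. This is one possible realization assumed here for concreteness; the matching threshold and the placeholder depth are not values from the application.

```python
import cv2
import numpy as np

def locate_template(frame_gray: np.ndarray, template_gray: np.ndarray):
    # Slide the template over the frame and take the best-matching location.
    result = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)
    h, w = template_gray.shape
    centre = (top_left[0] + w / 2.0, top_left[1] + h / 2.0)
    return centre, score

def anchor_from_template(frame_gray, template_gray, assumed_depth_m=2.0):
    centre, score = locate_template(frame_gray, template_gray)
    if score < 0.6:  # template not visible with enough confidence
        return None
    # Image coordinates plus an assumed depth give a rough 3D anchor; a real
    # system would recover depth from the template's known physical size.
    return (centre[0], centre[1], assumed_depth_m)
```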
  • the relative position change amount T of the mixed reality display device worn by the user is calculated by the visual calculation module 42.
  • Methods for determining the relative position change include, but are not limited to: 1) calculating the relative position change using a physical template with a special pattern, that is, comparing in real time the change in position of the physical template between two adjacent images captured by the camera; 2) calculating it from environmental information, that is, comparing in real time the change in position of the environment between two adjacent images captured by the camera, for example using PTAM/SLAM; 3) combining the visual calculation of the relative position change with sensor information, such as an IMU sensor, to improve the accuracy of the position-change calculation for a fast-moving virtual display device such as a mixed reality display device.
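For the third option, a very small example of blending the visual estimate with an IMU-derived one is a complementary weighting of the two position deltas, so that fast head motions (where vision may blur or lag) still update the estimate. The weighting constant below is illustrative, not a value from the application.

```python
def fuse_position_delta(visual_delta, imu_delta, visual_weight=0.7):
    """Return a fused (dx, dy, dz) relative position change."""
    return tuple(
        visual_weight * v + (1.0 - visual_weight) * i
        for v, i in zip(visual_delta, imu_delta)
    )

# Example: vision says the glasses moved 9 cm forward, the IMU says 12 cm.
print(fuse_position_delta((0.0, 0.0, 0.09), (0.0, 0.0, 0.12)))
```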
  • the adaptive display module 40 adjusts the position of the virtual window 210 that serves as the virtual computer screen according to the relative position change and the stereoscopic perspective relationship in the three-dimensional display system, and displays the on-screen content again in the adjusted virtual window of the mixed reality display device.
  • the virtual computer screen position presented in the mixed reality display device remains physically in place, allowing the user to visually perceive the view as a real computer screen.
  • the interactive terminal 300 adopts a wireless remote controller 60
  • the second virtual display device is a home television.
  • The virtual display device of this embodiment is a mixed reality display device 200.
  • The following describes in detail an embodiment in which the virtual display device integrates the display of home television screen content; this kind of mixed reality home TV interaction, including login, input, and other interactive operations, follows the operation of a real home TV.
  • This embodiment implements a home television incorporating a mixed reality display device.
  • The screen of the mixed reality display device, such as the augmented reality glasses, is translucent; the virtual television screen is adaptively displayed on it, and the real world can be seen through it.
  • The mixed reality display device establishes a wireless connection with the television wireless remote controller 60, so that the interactive operation instructions of the wireless remote controller 60 can be obtained. An infrared remote controller sends interactive operation instructions through infrared signals to an infrared receiving module of the augmented reality glasses, corresponding to the wireless connection module 30; a Bluetooth/WiFi air-mouse remote controller sends control instructions to the wireless connection module of the augmented reality glasses through a Bluetooth/WiFi wireless network, allowing the user to move over a wide range. The augmented reality glasses 200 and the interactive cloud 90 establish a wireless connection through WiFi, a 4G communication protocol, or the like; the on-screen display content of the virtual TV is obtained from the interactive cloud 90, and the interactive operation instructions of the interaction terminal 300 are transmitted to the interactive cloud 90. The interactive cloud 90 responds to the request of the augmented reality glasses, receives the interactive operation instructions, operates the content of the virtual television, and sends the on-screen display content of the virtual television, updated in response to the interactive operation instructions, to the augmented reality glasses.
  • the augmented reality glasses adaptively display the on-screen display content from the virtual television by visual calculation.
  • Step 1 The user specifies the position to be displayed on the virtual television screen in the virtual three-dimensional space of the augmented reality glasses.
  • the three-dimensional display position includes three dimensions: x, y, and z.
  • Ways of specifying the three-dimensional display position include, but are not limited to: 1) using an image with a special pattern as a physical template attached to a specified position on the wall 100, such as a wall surface suitable for placing a television; 2) the user selecting a three-dimensional display position in the three-dimensional virtual display space through head-pose interaction, the wireless remote controller 60, or the like, for example by dragging the virtual window to the desired position with the wireless remote controller 60 or the wireless mouse 50, while the camera of the augmented reality glasses records the environmental information at that moment.
  • the relative position change amount T of the mixed reality display device worn by the user is calculated by the visual calculation module 42.
  • the adaptive display module 40 adjusts the position of the virtual window 210 that serves as the virtual television screen in the three-dimensional display space according to the relative position change T and the stereoscopic perspective relationship, and displays the on-screen content again in the adjusted virtual window of the augmented reality glasses.
  • the virtual television screen position presented in the augmented reality glasses remains physically in place, allowing the user to visually perceive a real TV screen.
  • the interaction end uses an interactive touch panel 70
  • the second virtual display device is a touch mobile phone or a tablet terminal.
  • the virtual display device of this embodiment is a mixed reality display device, such as augmented reality glasses.
  • the following describes an embodiment in which the virtual display device integrates the display of the screen content of the touch mobile phone or the tablet terminal.
  • This kind of mixed reality interaction with a touch mobile phone or tablet terminal, including login, input, and the like, follows the interaction with a real touch mobile phone or tablet terminal.
  • This embodiment implements a touch mobile phone or a touch tablet terminal that combines a mixed reality display device.
  • The screen of the augmented reality glasses is semi-transparent; the screen of the touch mobile phone or touch tablet can be adaptively displayed on it, and the real world can be seen through it.
  • The interactive touch panel 70 is a touch sensing device without a display function; it can also be a transparent touch film attached to a plastic touch panel.
  • The interactive touch panel 70 includes a communication module 60 for transmitting touch information. A wireless connection, such as Bluetooth or WiFi, is established between the augmented reality glasses and the interactive touch panel 70, allowing the user to move over a wide range, so that the augmented reality glasses can obtain the interactive operation instructions of the interactive touch panel 70. A wireless connection between the augmented reality glasses and the interactive cloud 90 is established through WiFi, a 4G communication protocol, or the like; the on-screen display content of the touch mobile phone or touch tablet is obtained from the interactive cloud 90, and the interactive operation instructions of the interactive touch panel 70 are transmitted to the interactive cloud 90.
  • The interactive cloud 90 responds to the request of the augmented reality glasses, receives the interactive operation instructions, operates the content of the touch mobile phone or touch tablet, and sends the on-screen display content updated in response to the interactive operation instructions to the augmented reality glasses.
  • the adaptive display of the augmented reality glasses is realized by visual calculation, and the steps are similar to those of the third embodiment.
  • Step 1: In the virtual three-dimensional space of the augmented reality glasses, the user specifies the position at which the screen of the touch mobile phone or touch tablet is to be displayed.
  • the three-dimensional display position includes three dimensions: x, y, and z.
  • the method for specifying the position of the three-dimensional display is similar to that of Embodiment 4, and details are not described herein again.
  • the interactive terminal can also adopt a virtual touch screen of the mobile phone.
  • a paper image of a special pattern is used as the physical image template 76, which is attached to a designated position of the interactive touch panel 74, such as the front half of the rectangular interactive touch panel.
  • the image template 76 is placed in the middle interlayer of the touch film 72 and the touch panel 74.
  • the relative position change amount T of the augmented reality glasses worn by the user is calculated by the visual calculation module 42 again, similar to the fourth embodiment.
  • The adaptive display module 40 adjusts the position of the virtual window 210 in the three-dimensional display space according to the relative position change T and the stereoscopic perspective relationship, and displays the on-screen content again in the adjusted virtual window 210 of the augmented reality glasses. In this way, as the user's head moves, the screen position of the touch mobile phone or touch tablet presented in the augmented reality glasses remains physically in place, so that the user visually perceives the screen of a real touch mobile phone or touch tablet.
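Since the interactive touch panel has no display of its own, each touch must be mapped onto the virtual phone or tablet screen shown in the glasses. The sketch below shows one straightforward normalised mapping; the pad and screen dimensions are assumptions for illustration, as the application does not specify this mapping.

```python
def map_touch_to_virtual_screen(touch_xy_mm, pad_size_mm=(150.0, 90.0),
                                screen_px=(1080, 1920)):
    """touch_xy_mm: touch position on the pad, measured from its top-left corner."""
    u = min(max(touch_xy_mm[0] / pad_size_mm[0], 0.0), 1.0)   # normalise to 0..1
    v = min(max(touch_xy_mm[1] / pad_size_mm[1], 0.0), 1.0)
    return int(u * (screen_px[0] - 1)), int(v * (screen_px[1] - 1))

# A touch in the middle of the pad lands in the middle of the virtual screen.
print(map_touch_to_virtual_screen((75.0, 45.0)))
```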
  • the interaction terminal 300 adopts an interactive keypad
  • the second virtual display device is a button mobile phone
  • the buttons are hardware buttons; software button implementations are not included.
  • the virtual display device of this embodiment is a mixed reality display device such as augmented reality glasses.
  • the following describes in detail an embodiment in which the virtual display device integrates the display of the contents of the button mobile phone screen.
  • This kind of mixed reality button-phone interaction, including login, input, and other operations, follows the interaction with a real button mobile phone.
  • The difference from Embodiment 5 is that the second virtual display device changes from touch mode to button mode; as a result, the key module changes from an interactive touchpad to an interactive keypad, and the other modules remain the same.
  • the interactive terminal can also be an interactive keypad 80.
  • the interactive keypad 80 is a button sensing device without a display function, and can be a key keyboard 86 attached to the plastic touch panel 84 .
  • the interactive keypad 80 includes a touch film 82 that includes a communication module 60 that transmits key information, and exchanges interactive operation commands with the wireless connection module 30 of the augmented reality glasses through the communication module 60.
  • the communication module 60 and the wireless connection module 30 are wirelessly connected, for example via Bluetooth or WiFi, allowing the user to move over a wide range.
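As a concrete picture of the keypad's communication module, the sketch below packages a hardware key press as a small JSON message and sends it to a listening endpoint on the glasses' wireless connection module. The transport (a plain TCP socket), the address, and the message format are assumptions made here for illustration; the application only requires that key information be exchanged wirelessly, for example over Bluetooth or WiFi.

```python
import json
import socket

def send_key_event(key_code: int, host: str = "192.168.1.50", port: int = 9000):
    # Package the key press and push it to the glasses-side endpoint.
    event = {"type": "key", "code": key_code}
    with socket.create_connection((host, port), timeout=1.0) as conn:
        conn.sendall((json.dumps(event) + "\n").encode("utf-8"))

# Example (requires a listening glasses-side endpoint): send_key_event(13)
```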
  • the application can be used for other interactive scenarios such as a computer, a television, a mobile phone, and the like.
  • the interaction end uses an interactive touch panel 70
  • the second virtual display device is an electronic whiteboard.
  • the interactive end adopts an interactive touch panel 70 including an in-vehicle display function
  • the second virtual display device is an in-vehicle system.
  • FIG. 9 is a schematic diagram of the hardware structure of an electronic device 600 for the intelligent interaction method according to an embodiment of the present application. As shown in FIG. 9, the electronic device 600 includes:
  • one or more processors 610, a memory 620, one or more graphics processing units (GPUs) 630, and a communication component 650; one processor 610 and one graphics processor 630 are taken as an example in the figure.
  • The memory 620 stores instructions executable by the at least one processor 610; when the instructions are executed by the at least one processor, a data channel is established through the communication component 650, enabling the at least one processor to perform the intelligent interaction method.
  • The processor 610, the memory 620, the graphics processor 630, and the communication component 650 may be connected by a bus or in another manner; connection by a bus is taken as an example in the figure.
  • The memory 620, as a non-volatile computer readable storage medium, can be used to store non-volatile software programs, non-volatile computer executable programs, and modules, such as the program instructions/modules corresponding to the intelligent interaction method in the embodiments of the present application (for example, the receiving module 12, the transmitting module 14, the position determining module 16, the window module 18, and the display module 20 shown in FIG. 4, and the multi-person interaction module 56 and the second virtual display device 58).
  • the processor 610 executes various functional applications and data processing of the server by running non-volatile software programs, instructions, and modules stored in the memory 620, that is, implementing the intelligent interaction method in the foregoing method embodiments.
  • the memory 620 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application required for at least one function, and the data storage area may store data created according to use of the virtual display device, and the like.
  • memory 620 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device.
  • memory 620 can optionally include memory located remotely from processor 610, which can be connected to the electronic device via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • The one or more modules are stored in the memory 620 and, when executed by the one or more processors 610, perform the intelligent interaction method in any of the above method embodiments, for example, performing method steps 101 to 105, method steps 201 to 209, and method steps 301 to 307 described above, and implementing the functions of the receiving module 12, the transmitting module 14, the position determining module 16, the window module 18, the display module 20, the multi-person interaction module 56, and the second virtual display device 58 described above.
  • Embodiments of the present application also provide a non-transitory computer readable storage medium. The storage medium stores computer executable instructions that are executed by one or more processors, for example to perform method steps 101 to 105, method steps 201 to 209, and method steps 301 to 307 described above, and to implement the functions of the receiving module 12, the transmitting module 14, the position determining module 16, the window module 18, and the display module 20 shown in FIG. 3, and of the multi-person interaction module 56 and the second virtual display device 58.
  • The device embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • the storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A virtual display device may communicate with a second virtual display device and an interactive terminal. The virtual display device comprises a receiving module, a wireless connection module, and a display module. The receiving module is used for receiving a three-dimensional display position specified by a user and obtaining screen display content of the second virtual display device. The display module is used for determining a virtual window according to the three-dimensional display position, and displaying the screen display content of the second virtual display device in the virtual window. The wireless connection module is used for obtaining an interactive operation instruction for the screen display content from the interactive terminal. The display module is further used for displaying, in the virtual window, updated screen display content that is fed back by the second virtual display device in response to the interactive operation instruction.

Description

Virtual display device, intelligent interaction method, and cloud server

Technical Field

The present application relates to the field of mixed reality (MR), and in particular to a virtual display device, an intelligent interaction method, and a cloud server.

Background Art

With the development of network technology, image processing technology, and big data processing capabilities, wearable virtual reality devices have become more and more popular among users.

At the same time, with the rapid development of smart terminals and Internet of Things technologies, more and more smart devices are in use, especially smart terminal devices with display screens, such as computers, smart phones, and smart TVs.

How current smart wearable display devices, such as virtual reality (VR) devices and augmented reality (AR) devices, can integrate existing smart terminal devices with display screens has become an urgent problem. Augmented reality (AR) is a technology that calculates the position and angle of an image in real time and overlays corresponding images, video, and 3D models; the goal of AR technology is to add the virtual world to the real world and allow interaction on the screen. For example, in the augmented reality scene presented in demonstrations of Microsoft HoloLens smart glasses, a person wearing the glasses can see a virtual person standing on the ground and a virtual picture on the wall. In such scenes, smart glasses can display arbitrary digital pictures, but it is still difficult to integrate smart terminals with different functions: on the one hand, the displayed position must be realistic and appropriate; on the other hand, natural interaction must be supported. The screens of traditional smart terminals come in different sizes and are positioned according to people's usage habits; they cannot simply appear at a fixed size in the center of the user's line of sight. Moreover, current virtual reality (VR) and augmented reality (AR) devices are generally operated with spatial gestures, which are unsuitable for operating smart terminal content that requires high precision, and arm fatigue occurs easily in the absence of force feedback.

Chinese Patent Application No. 201611160770.8 discloses a display system applied to video AR, in which a model video generated from a user-uploaded original video is combined with a preset three-dimensional scene model and presented through an AR display; the three-dimensional virtual-real display scene is combined with user photos, and the user can customize an AR display scene, producing a video of the user's own customized AR scene.

However, current virtual reality (VR) devices and augmented reality (AR) devices have not yet been integrated with traditional smart terminals with display screens, and cannot provide a more diverse AR experience.

Therefore, the existing smart wearable display technology still needs to be improved.
发明内容Summary of the invention
本申请提供一种结合可穿戴智能显示设备和传统交互工具整合各种传统智能终端功能的虚拟显示装置、智能交互方法和设备,把传统智能终端的屏幕统一起来,在虚拟显示装置上自适应显示和交互,或者在不同虚拟显示装置上同步自适应显示和交互。The present application provides a virtual display device, an intelligent interaction method and a device for integrating various traditional smart terminal functions in combination with a wearable smart display device and a traditional interactive tool, and unifies the screen of the traditional smart terminal and adaptively displays on the virtual display device. And interact, or synchronize adaptive display and interaction on different virtual display devices.
第一方面,从虚拟显示终端角度阐述,本申请实施例提供了一种虚拟显示装置,可与第二虚拟显示装置以及交互端通信,该虚拟显示装置包括接收模块、无线连接模块以及显示模块,In a first aspect, from the perspective of a virtual display terminal, the embodiment of the present application provides a virtual display device, which can communicate with a second virtual display device and an interaction terminal, where the virtual display device includes a receiving module, a wireless connection module, and a display module.
该接收模块用于接收用户指定的三维显示位置以及用于获取第二虚拟显示装置的屏显内容;The receiving module is configured to receive a three-dimensional display position specified by a user and to obtain on-screen display content of the second virtual display device;
该显示模块用于根据该三维显示位置确定虚拟窗口,并在该虚拟窗口中显示该第二虚拟显示装置的屏显内容;The display module is configured to determine a virtual window according to the three-dimensional display position, and display the screen display content of the second virtual display device in the virtual window;
该接收模块还用于获取该交互端对该屏显内容的交互操作指令;The receiving module is further configured to acquire an interactive operation instruction of the interactive terminal to the screen display content;
该显示模块还用于在该虚拟窗口显示该第二虚拟显示装置反馈的响应该交互操作指令更新的屏显内容。The display module is further configured to display, in the virtual window, the on-screen content that is updated by the second virtual display device in response to the interactive operation instruction.
第二方面,从虚拟显示终端处理流程阐述,本申请实施例提供了一种智能交互方法,The second aspect is illustrated by the virtual display terminal processing flow, and the embodiment of the present application provides an intelligent interaction method.
虚拟显示装置接收用户指定的三维显示位置,获取第二虚拟显示装置的屏显内容;The virtual display device receives the three-dimensional display position specified by the user, and acquires the on-screen display content of the second virtual display device;
该虚拟显示装置根据该三维显示位置确定虚拟窗口,并在该虚拟窗口中显示该第二虚拟显示装置的屏显内容;The virtual display device determines a virtual window according to the three-dimensional display position, and displays the screen display content of the second virtual display device in the virtual window;
该虚拟显示装置获取该交互端对该屏显内容的交互操作指令;并在该虚拟窗口显示该第二虚拟显示装置反馈的响应该交互操作指令更新的屏显内容。The virtual display device acquires an interactive operation instruction of the interactive terminal to the screen display content; and displays, in the virtual window, the screen display content that is updated by the second virtual display device in response to the interaction operation instruction.
第三方面,从智能交互的云端阐述,本申请实施例提供了一种云端服务器,可与虚拟显示装置通信,该虚拟显示装置与交互端通信,该云端服务器包括第二虚拟显示装置、发送模块以及接收模块,In a third aspect, from the cloud of the smart interaction, the embodiment of the present application provides a cloud server, which can communicate with a virtual display device, where the virtual display device communicates with an interaction terminal, where the cloud server includes a second virtual display device and a sending module. And a receiving module,
该第二虚拟显示装置用于输出屏显内容; The second virtual display device is configured to output on-screen display content;
该发送模块用于将该屏显内容发送至该虚拟显示装置,该虚拟显示装置在用户指定的三维显示位置确定虚拟窗口,并在该虚拟窗口中显示该屏显内容;The sending module is configured to send the screen display content to the virtual display device, where the virtual display device determines a virtual window at a user-specified three-dimensional display position, and displays the screen display content in the virtual window;
该接收模块用于接收该虚拟显示装置从交互端获取的交互操作指令;The receiving module is configured to receive an interaction operation instruction acquired by the virtual display device from the interaction end;
该第二虚拟显示装置还用于响应该交互操作指令向该虚拟显示装置输出更新的屏显内容。The second virtual display device is further configured to output updated on-screen content to the virtual display device in response to the interactive operation instruction.
The beneficial effects of the present application are as follows. In the virtual display device, intelligent interaction method, and device provided by the embodiments of the present application, a second virtual display device runs in the system of a cloud server, while the virtual display device is wirelessly connected to both the cloud server and the interaction terminal. The virtual display device serves as the display medium for the second virtual display device and the interaction terminal; by combining a wearable smart display device with conventional interaction tools, the functions of various conventional smart terminals are integrated into the virtual display device. The screens of conventional smart terminals are thus unified, with adaptive display and interaction on the virtual display device, or synchronized adaptive display and interaction across different virtual display devices, providing a richer and more varied interactive experience.
BRIEF DESCRIPTION OF THE DRAWINGS
One or more embodiments are illustrated by way of example with reference to the corresponding figures of the accompanying drawings; these illustrations do not limit the embodiments. Elements with the same reference numerals in the drawings denote similar elements, and unless otherwise stated, the figures are not drawn to scale.
FIG. 1 is a system architecture diagram of an intelligent interaction system according to an embodiment of the present application;
FIG. 2 is an architecture diagram of multi-person interaction in the intelligent interaction system according to an embodiment of the present application;
FIG. 3 is a block diagram of the intelligent interaction system according to an embodiment of the present application;
FIG. 4 is a multi-person interaction flowchart of the intelligent interaction method according to an embodiment of the present application;
FIG. 5 is a main flowchart of the intelligent interaction method according to an embodiment of the present application;
FIG. 6 is a detailed flowchart of the intelligent interaction method according to an embodiment of the present application;
FIG. 7 is a multi-person interaction flowchart of the intelligent interaction method according to an embodiment of the present application;
FIG. 8 is an interaction simulation diagram of a mixed reality display device used in the intelligent interaction method according to an embodiment of the present application;
FIG. 9 is a hardware framework diagram for implementing the intelligent interaction method according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of another interaction terminal embodiment of the intelligent interaction system according to an embodiment of the present application; and
FIG. 11 is a schematic structural diagram of still another interaction terminal embodiment of the intelligent interaction system according to an embodiment of the present application.
DETAILED DESCRIPTION
To make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein merely explain the present application and are not intended to limit it.
The virtual display device, intelligent interaction method, and device provided by the embodiments of the present application combine a wearable smart display device with conventional interaction tools, such as a keyboard and mouse, a remote controller, or a touchpad, to integrate the functions of various conventional smart terminals, for example computers, smartphones, and smart televisions. The screens of the conventional smart terminals are unified and displayed adaptively on the virtual display device with a stereoscopic perspective visual effect while interaction is carried out, or adaptive display and interaction are synchronized across different virtual display devices.
The intelligent interaction method of the present application relies on an intelligent interaction system composed of several devices. As shown in FIG. 1, the system includes a wearable virtual display device 10 (for example, augmented reality glasses), an interaction terminal, and an interaction cloud 90.
The wearable virtual display device 10 of the present application is an augmented reality (AR) device that includes a transparent spectacle-lens display screen, through which the user can see the real world. At the same time, the virtual display device 10 displays a three-dimensional virtual content picture at a suitable position and exchanges data with the interaction terminal and the interaction cloud 90 via wireless communication. The interaction terminal may be a keyboard 40 and a mouse 50, a wireless remote controller 60, a touchpad 70, and so on. By operating the interaction terminal, the user can manipulate the virtual content displayed in the three-dimensional virtual space. The interaction cloud 90 is formed by networking several cloud servers; in FIG. 1 it is formed by three cloud servers 91, 92, and 93. A second virtual display device runs in the cloud servers of the interaction cloud 90; this second virtual display device is a conventional smart terminal system, and may be a computer system, a smartphone system, a smart television system, or the like. The interaction cloud 90 is responsible for sending display data related to the second virtual display device to the wearable virtual display device 10, and for operating the digital content of the second virtual display device in the interaction cloud 90 according to the interactive operation instructions, triggered by the user at the interaction terminal, that it obtains from the virtual display device 10.
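Purely as an illustration of the data exchanged over these wireless links, the following Python sketch names the three kinds of payloads implied by the description: the user-specified display position, the screen content produced in the cloud, and the interaction commands from the terminal. The class and field names are hypothetical; the application does not prescribe any particular encoding or transport.

```python
from dataclasses import dataclass

@dataclass
class DisplayPosition:
    """User-specified 3D display position (x, y, z) of the virtual window."""
    x: float
    y: float
    z: float

@dataclass
class ScreenFrame:
    """One frame of on-screen content from the cloud-side second virtual display device."""
    device_id: str   # which virtual terminal (virtual PC, TV, phone) produced the frame
    width: int
    height: int
    pixels: bytes    # encoded image data; the codec is not specified here

@dataclass
class InteractionCommand:
    """An interactive operation instruction forwarded from the interaction terminal."""
    source: str      # e.g. "keyboard", "mouse", "remote", "touchpad"
    payload: dict    # key code, pointer delta, touch coordinates, and similar data
```

In this architecture, DisplayPosition is used on the glasses themselves to place the virtual window, ScreenFrame flows from the interaction cloud to the glasses, and InteractionCommand flows from the interaction terminal through the glasses to the cloud.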
In terms of main hardware, the virtual display device 10 is provided with one or more CPUs; where necessary to meet image recognition requirements, a GPU is added to perform functions such as three-dimensional modeling, data transmission and reception, and multi-window image display.
In terms of software, the virtual display device 10 may run the Android operating system, the iOS operating system, Windows Phone, or another operating system.
Embodiment 1
With further reference to FIG. 1, this embodiment relates to a conceptual embodiment of the intelligent interaction method and device, described from the side of the virtual display device.
This embodiment provides a virtual display device worn on the user's body. The virtual display device may be a virtual reality (VR) device, an augmented reality (AR) device, or a mixed reality display device.
The virtual display device is wirelessly connected to the cloud server and to the interaction terminal, and communicates with a second virtual display device running in the system of the cloud server. Serving as the display medium for the second virtual display device and the interaction terminal, the virtual display device combines a wearable smart display device with conventional interaction tools to integrate the functions of various conventional smart terminals into its three-dimensional virtual display system.
Referring also to FIG. 3, the virtual display device 10 includes a sending module 14 and a receiving module 12 for communicating with the interaction cloud 90, a position determining module 16 and a window module 18 for determining the virtual window, a display module 20, an adaptive display module 40 and a visual computing module 42 connected to the window module 18, a wireless connection module 30 for communicating with the interaction terminal, and a multi-person interaction module 35. The window module 18 may also be integrated into the display module 20.
To establish a wireless connection between the interaction terminal and the virtual display device 10, the interaction terminal in each embodiment includes an interaction terminal communication module 60. When the interaction terminal starts, the communication module 60 establishes a wireless connection and data communication with the wireless connection module 30 of the virtual display device 10, thereby sending the user's interactive operation instructions from the interaction terminal to the virtual display device 10.
The virtual display device 10 is wirelessly connected to and communicates with the second virtual display device 50 through a wireless communication module, which comprises the sending module 14 and the receiving module 12. The second virtual display device 50 is a software system or software module running in the interaction cloud 90.
The receiving module 12 receives a three-dimensional display position specified by the user; the three-dimensional display position is a set of spatial three-dimensional coordinates. The window module 18 of the display module 20 determines the virtual window according to the three-dimensional display position. The receiving module 12 acquires the on-screen content of the second virtual display device 50, and the content from the second virtual display device is displayed in the virtual window.
After the virtual display device 10 virtually displays the on-screen content of the second virtual display device 50, the wireless connection module 30 of the virtual display device 10 acquires the interaction terminal's interactive operation instruction directed at the on-screen content. The sending module 14 of the virtual display device 10 sends the interactive operation instruction to the second virtual display device 50. The display module 20 then displays, in the virtual window, the on-screen content fed back by the second virtual display device 50 and updated in response to the interactive operation instruction.
To provide a display effect closer to reality in the display system of the virtual display device, the adaptive display module of the virtual display device 10 is configured to adaptively display the received on-screen content in the virtual window.
The adaptive display module 40 includes a visual computing module 42. The visual computing module 42 detects the relative position change produced as the user moves while wearing the virtual display device 10. The adaptive display module 40 adjusts the virtual window according to this relative position change and the stereoscopic perspective relationship, and continues to display the on-screen content of the second virtual display device in the adjusted virtual window. Thus, when the user wearing the virtual display device 10 walks or moves and the head moves accordingly, the virtual window of the virtual display device 10 changes its stereoscopic visual effect with the head movement and keeps displaying the on-screen content in the transformed window. As a result, when the user moves in the real world, the second virtual display device 50 seen in the virtual three-dimensional space, for example a virtual computer screen, stays at the same physical position, and the user visually perceives a real, solid computer screen.
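The adjustment performed by the adaptive display module can be illustrated with a minimal numpy sketch: given the head-pose change T detected by the visual computing module, the pose of the virtual window relative to the glasses is updated by the inverse of T, so the window appears to stay at the same physical position. The 4x4 homogeneous-matrix representation and the function names below are illustrative assumptions, not the actual implementation.

```python
import numpy as np

def adjust_virtual_window(window_in_view: np.ndarray, head_motion: np.ndarray) -> np.ndarray:
    """
    window_in_view: 4x4 pose of the virtual window in the glasses' (viewer) frame.
    head_motion:    4x4 relative pose change T of the glasses since the previous frame,
                    expressed in the previous viewer frame.
    Returns the new window pose that keeps the window world-locked.
    """
    # If the glasses moved by T, the world-anchored window moves by T^-1 as seen from the glasses.
    return np.linalg.inv(head_motion) @ window_in_view

def project_corners(window_in_view: np.ndarray, half_w: float, half_h: float, f: float):
    """Perspective-project the window's four corners to obtain its on-screen outline."""
    corners = np.array([[-half_w, -half_h, 0, 1], [ half_w, -half_h, 0, 1],
                        [ half_w,  half_h, 0, 1], [-half_w,  half_h, 0, 1]]).T
    cam = window_in_view @ corners           # corner positions in the viewer frame
    return (f * cam[:2] / cam[2]).T          # simple pinhole projection (x/z, y/z scaled by f)
```

Repeating this adjustment every frame is what produces the stereoscopic perspective effect described above: the projected outline shrinks, grows, and shears exactly as a physical screen at that position would.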
The multi-person interaction module 35 of the virtual display device 10 is configured to support viewing and control by multiple users.
The multi-person interaction module 35 is configured to establish a multi-person interaction group and to carry out interactive sharing, so that several users can watch the same computer, television, or mobile phone/tablet at the same time and each can control it through a connected interaction terminal, where every user wears a virtual display device 10. The multi-person interaction module 35 is further configured to determine whether the local device is the primary virtual display device or a secondary virtual display device; when the local device is determined to be a secondary virtual display device, the multi-person interaction module 35 acquires the three-dimensional display position and the interactive operation instructions of the primary virtual display device from the cloud server, and at the same time acquires the on-screen content sent to the primary virtual display device for synchronized display.
The interaction terminal is connected to the primary virtual display device and the secondary virtual display device, the interactive operation instructions of the interaction terminal are shared, and the on-screen content displayed simultaneously on the primary and secondary virtual display devices is controlled through the interaction terminal.
In this embodiment, the interaction terminal is a wireless keyboard 40 and a wireless mouse 50; alternatively, the interaction terminal is a wireless remote controller 60 or an interactive touchpad 70. The specific implementations are described in detail in the subsequent embodiments.
Referring to FIG. 4, during use, the virtual display device that starts first serves as the primary virtual display device, and a virtual display device that starts later serves as a secondary virtual display device. The primary virtual display device starts first, sets and displays the display position specified by the user, connects to the cloud server, and controls the on-screen content through the interaction terminal, for example a remote controller. A secondary virtual display device starts later, connects to the cloud server, requests the initial setting information such as the user-specified display position of the primary virtual display device, and controls the virtual content through the wireless remote controller. The interaction terminal communicates with the primary/secondary virtual display devices and passes on the interactive operation instructions. The cloud server communicates with the primary/secondary virtual display devices, receives the interactive operation instructions, operates the virtual content of the second virtual display device, and feeds back the updated on-screen content to the primary/secondary virtual display devices.
Referring to FIG. 5, in the multi-person interaction embodiment, the virtual display device may be a virtual reality (VR) device, an augmented reality (AR) device, or a mixed reality display device. The second virtual display device is a module running on the cloud server. The multi-person interaction of this device system (including login, input, and so on) is similar to multi-person interaction with a conventional real device and is not described again here.
This embodiment also provides the intelligent interaction method on the side of the virtual display device; the steps are listed below, and an illustrative code sketch of the same flow follows the steps:
Step 101: the virtual display device receives a three-dimensional display position specified by the user;
Step 102: the on-screen content of the second virtual display device is acquired;
Step 103: the virtual display device determines a virtual window according to the three-dimensional display position and displays the on-screen content of the second virtual display device in the virtual window;
Step 104: the virtual display device acquires the interaction terminal's interactive operation instruction directed at the on-screen content;
Step 105: the on-screen content fed back by the second virtual display device and updated in response to the interactive operation instruction is displayed in the virtual window.
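Read together, Steps 101 to 105 amount to a simple device-side loop. The Python sketch below only makes the control flow concrete; the objects user, cloud, terminal, and renderer and their methods are placeholders standing in for the modules described above, not the claimed implementation.

```python
def run_virtual_display(user, cloud, terminal, renderer):
    # Step 101: receive the three-dimensional display position specified by the user.
    position = user.specify_display_position()

    # Steps 102-103: acquire the current on-screen content of the second virtual
    # display device and show it in a virtual window placed at that position.
    window = renderer.create_window(position)
    renderer.show(window, cloud.fetch_screen_content())

    # Steps 104-105: relay each interactive operation instruction to the cloud and
    # display the updated content fed back by the second virtual display device.
    while True:
        command = terminal.next_command()   # blocks until the user operates the terminal
        if command is None:                 # terminal disconnected or session ended
            break
        updated = cloud.send_command(command)
        renderer.show(window, updated)
```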
Referring to FIG. 6, the intelligent interaction method on the side of the virtual display device is described in full below. It includes:
Steps 201 to 204, which are the same as Steps 101 to 104 above;
Step 205: the virtual display device sends the interactive operation instruction to the second virtual display device, where the second virtual display device runs on the cloud server;
Step 206: in response to the interactive operation instruction, the second virtual display device operates the virtual content of its own system and feeds back the updated on-screen content;
Step 207: the on-screen content fed back by the second virtual display device and updated in response to the interactive operation instruction is displayed in the virtual window.
The method further includes the following steps for adaptively displaying the received on-screen content in the virtual window:
Step 208: the relative position change of the virtual display device is detected;
Methods for detecting the relative position change include, but are not limited to: computing the relative position change by means of a physical template with a special pattern, that is, comparing in real time the relative position change of the physical template between two adjacent frames captured by the camera; computing it from environment information, that is, comparing in real time the relative position change of the environment as a whole between two adjacent frames captured by the camera, for example using techniques such as PTAM/SLAM; or combining the visually computed relative position change with sensor information, for example from an IMU sensor, to improve the accuracy of the position-change computation when the virtual display device, for example mixed reality display glasses, moves quickly (an illustrative sketch of the template-based option is given after Step 209 below).
Step 209: the virtual window is adjusted according to the relative position change and the stereoscopic perspective relationship, and the on-screen content of the second virtual display device is displayed in the adjusted virtual window.
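As an illustration of the first detection option in Step 208 (a physical template with a special pattern), the sketch below estimates the camera's pose relative to the template in each frame and composes two consecutive poses into the relative position change T. It assumes OpenCV's solvePnP and Rodrigues functions and a hypothetical detect_template_corners helper returning the template's four corner pixels; the actual detector, the PTAM/SLAM variant, and the IMU fusion are left open by the description.

```python
import numpy as np
import cv2

# 3D corners of the printed template in its own frame (a known 0.2 m square).
TEMPLATE_CORNERS_3D = np.array(
    [[0, 0, 0], [0.2, 0, 0], [0.2, 0.2, 0], [0, 0.2, 0]], dtype=np.float32)

def camera_pose_from_template(image, camera_matrix, dist_coeffs, detect_template_corners):
    """Return the 4x4 pose of the template in the camera frame for one image, or None."""
    corners_2d = detect_template_corners(image)      # hypothetical detector, 4x2 float array
    if corners_2d is None:
        return None
    ok, rvec, tvec = cv2.solvePnP(TEMPLATE_CORNERS_3D, corners_2d, camera_matrix, dist_coeffs)
    if not ok:
        return None
    rotation, _ = cv2.Rodrigues(rvec)                # rotation vector -> 3x3 rotation matrix
    pose = np.eye(4)
    pose[:3, :3], pose[:3, 3] = rotation, tvec.ravel()
    return pose

def relative_motion(pose_prev, pose_curr):
    """Relative position change T of the camera between two adjacent frames."""
    return pose_curr @ np.linalg.inv(pose_prev)
```

The resulting T is exactly the quantity consumed by Step 209 (and by the window adjustment sketch given in connection with the adaptive display module above).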
In addition, FIG. 7 shows the multi-person interaction flowchart of the virtual display device.
Step 301: the interaction terminal is connected to multiple virtual display devices, and the interactive operation instructions of the interaction terminal are shared;
Step 302: when virtual display devices request multi-person interaction, the virtual display device that starts first is the primary virtual display device and virtual display devices that start later are secondary virtual display devices;
Step 303: the primary virtual display device sets the initial display position, receives the interactive operation instructions of the interaction terminal, and sends the display position and the interactive operation instructions to the cloud server;
Step 304: the cloud server stores the display position and forwards the interactive operation instructions to the second virtual display device, and the second virtual display device feeds back the corresponding on-screen content to the requesting virtual display device according to the interactive operation instructions;
Step 305: a secondary virtual display device connects to the cloud server;
Step 306: the secondary virtual display device acquires the display position of the primary virtual display device from the cloud server, and at the same time acquires the on-screen content sent to the primary virtual display device for synchronized display;
Step 307: the secondary virtual display device determines a virtual window according to the display position and displays the synchronized on-screen content in the virtual window; at the same time, it receives interactive operation instructions from the interaction terminal and sends the received instructions to the cloud server.
It can be understood that, in multi-person interaction, each virtual display device likewise performs the adaptive display steps.
Embodiment 2
Referring to FIG. 1 and FIG. 2 together, this embodiment relates to a conceptual embodiment of the intelligent interaction method and device, described from the side of the cloud server.
As disclosed in Embodiment 1, the cloud server can communicate with a virtual display device, which in turn communicates with an interaction terminal. A second virtual display device, a sending module, and a receiving module run in the cloud server. The second virtual display device is configured to output on-screen content.
The sending module is configured to send the on-screen content of the second virtual display device to the virtual display device, which determines a virtual window at the three-dimensional display position specified by the user and displays the on-screen content in that virtual window.
The receiving module is configured to receive the interactive operation instruction, directed at the on-screen content, that the virtual display device acquires from the interaction terminal.
The second virtual display device is further configured to output updated on-screen content to the virtual display device in response to the interactive operation instruction.
Likewise, the cloud server is provided with a multi-person interaction module to support multi-person interaction. In multi-person interaction, the virtual display device that starts first is the primary virtual display device and virtual display devices that start later are secondary virtual display devices; during multi-person interaction, the multi-person interaction module stores the three-dimensional display position and the interactive operation instructions of the primary virtual display device.
The sending module of the cloud server is configured to send the three-dimensional display position and the interactive operation instructions of the primary virtual display device, and also to send the on-screen content synchronized with the primary virtual display device.
The interaction terminal is connected to the primary and secondary virtual display devices, the interactive operation instructions of the interaction terminal are shared, and the on-screen content displayed simultaneously on the primary and secondary virtual display devices is controlled through the interaction terminal.
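A minimal sketch of the cloud-side bookkeeping implied by this multi-person arrangement is given below (Python; the class and method names are assumptions). The session stores the primary device's display position, forwards every interaction instruction to the second virtual display device, and pushes the same updated on-screen content to the primary device and to every secondary device that joins, which is what keeps their displays synchronized.

```python
class MultiPersonSession:
    def __init__(self, second_virtual_display):
        self.second_display = second_virtual_display   # cloud-side virtual PC/TV/phone system
        self.display_position = None                   # set by the primary device
        self.members = []                              # connected virtual display devices

    def join(self, device, is_primary, position=None):
        """Register a device; secondaries receive the stored primary display position."""
        self.members.append(device)
        if is_primary:
            self.display_position = position
        return self.display_position

    def handle_command(self, command):
        """Forward an interaction instruction and broadcast the updated content."""
        updated = self.second_display.apply(command)
        for device in self.members:
            device.push_screen_content(updated)
```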
Embodiment 3
Referring to FIG. 1 to FIG. 3 and FIG. 8 together, in the intelligent interaction method and device provided by this embodiment, the interaction terminal 300 is a wireless keyboard 40 and a wireless mouse 50, which support the user moving over a wide range. The second virtual display device is a personal computer, and the virtual display device of this embodiment is a mixed reality (MR) display device 200. An embodiment in which the virtual display device integrates and displays the screen content of a personal computer is described in detail below. Interaction with this mixed reality personal computer, including login, input, and so on, follows the operation of a real personal computer.
This embodiment implements a personal computer system combined with a mixed reality display device.
The screen of the glasses of the mixed reality display device 200 is semi-transparent, so it can adaptively display the virtual computer screen while allowing the real world to be seen through the glasses. A wireless connection is established between the mixed reality display device and the wireless keyboard 40 and wireless mouse 50, and is used to acquire the interactive operation instructions of the interaction terminal. In this mixed reality display device embodiment of the virtual display device, the mixed reality display device and the interaction cloud 90 are wirelessly interconnected via WiFi, a 4G communication protocol, or the like; the device obtains from the interaction cloud 90 the screen content of the virtual computer running in the interaction cloud, and transmits the interactive operation instructions of the interaction terminal to the interaction cloud 90. In response to a request from the mixed reality device, for example augmented reality glasses, the interaction cloud 90 receives the interactive operation instructions, operates the content of the virtual computer running on it, and then delivers the updated virtual computer screen content to the requesting mixed reality display device.
The adaptive display by the mixed reality display device of the on-screen content from the virtual computer is achieved through visual computation.
First, the user specifies a three-dimensional display position in the three-dimensional space of the mixed reality display device, that is, the position at which the virtual computer screen is to be displayed. This three-dimensional display position has three dimensions, x, y, and z. Ways of specifying the three-dimensional display position include, but are not limited to, the following: 1) an image with a special pattern is used as a physical template and attached at a designated position on the wall 100, for example on the wall in front of the desk at which the person sits; 2) the user selects a three-dimensional display position in the three-dimensional virtual display space through head-pose interaction, the mouse, the keyboard, or another interaction mode, for example by dragging the virtual window to the desired position with the wireless mouse 50, while the camera of the mixed reality display device, for example augmented reality glasses, records the environment information at that moment. The visual computing module 42 then computes the relative position change T of the mixed reality display device worn by the user.
Methods for determining the relative position change include, but are not limited to: 1) computing it by means of a physical template with a special pattern, that is, comparing in real time the relative position change of the physical template between two adjacent frames captured by the camera; 2) computing it from environment information, that is, comparing in real time the relative position change of the environment as a whole between two adjacent frames captured by the camera, for example using techniques such as PTAM/SLAM; 3) combining the visually computed relative position change with sensor information, for example from an IMU sensor, to improve the accuracy of the position-change computation when the virtual display device, for example a mixed reality display device, moves quickly. The adaptive display module 40 adjusts the position in the three-dimensional display system of the virtual window 210 that serves as the virtual computer screen, according to the relative position change and the stereoscopic perspective relationship, and displays the on-screen content again in the adjusted virtual window of the mixed reality display device. In this way, as the user's head moves, the position of the virtual computer screen presented in the mixed reality display device stays at the same physical location, and the user visually perceives a real computer screen.
Embodiment 4
Referring to FIG. 1 to FIG. 3 and FIG. 8 together, in the intelligent interaction method and device provided by this embodiment, the interaction terminal 300 is a wireless remote controller 60, the second virtual display device is a home television, and the virtual display device of this embodiment is a mixed reality display device 200. An embodiment in which the virtual display device integrates and displays the screen content of a home television is described in detail below. Interaction with this mixed reality home television, including interactive operations such as login and input, follows the operation of a real home television.
This embodiment implements a home television combined with a mixed reality display device.
The screen of the mixed reality display device, for example augmented reality glasses, is semi-transparent, so it can adaptively display the virtual television screen while allowing the real world to be seen through it. A wireless connection is established between the mixed reality display device and the television wireless remote controller 60, through which the interactive operation instructions of the wireless remote controller 60 can be obtained; the wireless connection includes, but is not limited to, an infrared remote controller or a Bluetooth/WiFi (wireless fidelity) air-mouse remote controller. An infrared remote controller sends the interactive operation instructions via an infrared signal to the infrared receiving module of the augmented reality glasses, corresponding to the wireless connection module 30; a Bluetooth/WiFi air-mouse remote controller sends control instructions to the wireless connection module of the augmented reality glasses over a Bluetooth/WiFi wireless network, supporting movement of the user over a wide range. The augmented reality glasses 200 and the interaction cloud 90 establish a wireless connection via WiFi, a 4G communication protocol, or the like; the glasses obtain the screen content of the virtual television from the interaction cloud 90 and transmit the interactive operation instructions of the interaction terminal 300 to the interaction cloud 90. In response to a request from the augmented reality glasses, the interaction cloud 90 receives the interactive operation instructions, operates the content of the virtual television, and delivers to the augmented reality glasses the screen content of the virtual television after responding to the interactive operation instructions.
The adaptive display by the augmented reality glasses of the on-screen content from the virtual television is achieved through visual computation.
The adaptive display of the augmented reality glasses is achieved through visual computation, with steps similar to those of Embodiment 3. In step one, the user specifies, in the virtual three-dimensional space of the augmented reality glasses, the position at which the virtual television screen is to be displayed; this three-dimensional display position has three dimensions, x, y, and z. Ways of specifying the three-dimensional display position include, but are not limited to, the following: 1) an image with a special pattern is used as a physical template and attached at a designated position on the wall 100, on a wall surface suitable for placing a television; 2) the user selects a three-dimensional display position in the three-dimensional virtual display space through head-pose interaction, the wireless remote controller 60, or another interaction mode, for example by dragging the virtual window to the desired position with the wireless remote controller 60 or the wireless mouse 50, while the camera of the augmented reality glasses records the environment information at that moment. The visual computing module 42 then computes the relative position change T of the mixed reality display device worn by the user.
The adaptive display module 40 adjusts the position in the three-dimensional display space of the virtual window 210 that serves as the virtual television screen, according to the relative position change T and the stereoscopic perspective relationship, and displays the on-screen content again in the adjusted virtual window of the augmented reality glasses. In this way, as the user's head moves, the position of the virtual television screen presented in the augmented reality glasses stays at the same physical location, and the user visually perceives a real television screen.
Embodiment 5
Referring to FIG. 1 to FIG. 3 and FIG. 8 together, in the intelligent interaction method and device provided by this embodiment, the interaction terminal is an interactive touchpad 70, the second virtual display device is a touch-screen mobile phone or tablet terminal, and the virtual display device of this embodiment is a mixed reality display device, for example augmented reality glasses. An embodiment in which the virtual display device integrates and displays the screen content of a touch-screen mobile phone or tablet terminal is described in detail below. Interaction with this mixed reality touch-screen mobile phone or tablet terminal, including operations such as login and input, follows real touch-screen mobile phone or tablet interaction.
This embodiment implements a touch-screen mobile phone or touch tablet terminal combined with a mixed reality display device.
The screen of the augmented reality glasses is semi-transparent, so it can adaptively display the touch phone or touch tablet screen while allowing the real world to be seen through it. The interactive touchpad 70 is a touch-sensing device without a display function, and may also be a transparent touch film bonded onto a plastic touch panel. The interactive touchpad 70 includes a communication module 60 that sends out the touch information; a wireless connection, for example Bluetooth or WiFi, is established between the augmented reality glasses and the interactive touchpad 70 to support movement of the user over a wide range, so that the augmented reality glasses can acquire the interactive operation instructions of the interactive touchpad 70. The augmented reality glasses and the interaction cloud 90 establish a wireless connection via WiFi, a 4G communication protocol, or the like; the glasses obtain the screen content of the touch phone or touch tablet from the interaction cloud 90 and transmit the interactive operation instructions of the interactive touchpad 70 to the interaction cloud 90. In response to a request from the augmented reality glasses, the interaction cloud 90 receives the interactive operation instructions, operates the content of the touch phone or touch tablet, and delivers the screen content updated in response to the interactive operation instructions to the augmented reality glasses.
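Because the interactive touchpad has no display of its own, its touch coordinates have to be mapped onto the virtual phone or tablet screen shown in the glasses before being relayed to the interaction cloud. The sketch below shows one straightforward mapping from normalized touchpad coordinates to virtual-screen pixel coordinates; the function names, the normalization convention, and the command format are illustrative assumptions rather than the claimed mechanism.

```python
def touchpad_to_virtual_screen(u: float, v: float, screen_w: int, screen_h: int):
    """
    u, v: touch position reported by the touchpad, normalized to [0, 1] across its surface.
    screen_w, screen_h: pixel size of the virtual phone/tablet screen running in the cloud.
    Returns the touch location in virtual-screen pixel coordinates.
    """
    x = min(max(u, 0.0), 1.0) * (screen_w - 1)
    y = min(max(v, 0.0), 1.0) * (screen_h - 1)
    return int(round(x)), int(round(y))

def make_touch_command(u: float, v: float, action: str, screen_w: int, screen_h: int) -> dict:
    """Package a touch event as an interaction command to be relayed to the interaction cloud."""
    x, y = touchpad_to_virtual_screen(u, v, screen_w, screen_h)
    return {"source": "touchpad", "action": action, "x": x, "y": y}   # action: "down", "move", "up"
```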
The adaptive display of the augmented reality glasses is achieved through visual computation, with steps similar to those of Embodiment 3. In step one, the user specifies, in the virtual three-dimensional space of the augmented reality glasses, the position at which the touch phone or touch tablet screen is to be displayed; this three-dimensional display position has three dimensions, x, y, and z. The method of specifying the three-dimensional display position is similar to that of Embodiment 4 and is not described again here.
Referring to FIG. 10, the interaction terminal may also take the form of a virtual touch screen for a mobile phone. To carry out the interactive operation, a paper image with a special pattern is used as the physical image template 76 and attached at a designated position on the interactive touchpad 74, for example the front half of a rectangular interactive touchpad, with the image template 76 placed in the interlayer between the touch film 72 and the touch panel 74. The visual computing module 42 then computes the relative position change T of the augmented reality glasses worn by the user, similarly to Embodiment 4. The adaptive display module 40 adjusts the position in the three-dimensional display space of the virtual window 210 that serves as the virtual phone or tablet screen, according to the relative position change T and the stereoscopic perspective relationship, and displays the on-screen content again in the adjusted virtual window 210 of the augmented reality glasses. In this way, as the user's head moves, the screen position of the touch phone or touch tablet presented in the augmented reality glasses stays at the same physical location, and the user visually perceives a real touch phone or touch tablet screen.
Embodiment 6
Referring to FIG. 1 to FIG. 3, FIG. 8, and FIG. 11 together, in the intelligent interaction method and device provided by this embodiment, the interaction terminal 300 is an interactive keypad and the second virtual display device is a keypad mobile phone, where the keys are hardware keys and software-key implementations are not included. The virtual display device of this embodiment is a mixed reality display device, for example augmented reality glasses. An embodiment in which the virtual display device integrates and displays the screen content of a keypad mobile phone is described in detail below. Interaction with this mixed reality keypad phone, including operations such as login and input, follows real keypad phone interaction.
Compared with Embodiment 5, the difference is that the second virtual display device changes from touch mode to keypad mode. Accordingly, the key module changes from the interactive touchpad to the interactive keypad, and the other modules remain unchanged.
Referring to FIG. 11, the interaction terminal may also be an interactive keypad 80. The interactive keypad 80 is a key-sensing device without a display function and may be a key keyboard 86 bonded onto a plastic touch panel 84. Likewise, the interactive keypad 80 includes a touch film 82, which includes a communication module 60 that sends out the key information and exchanges interactive operation instructions with the wireless connection module 30 of the augmented reality glasses; the connection between the communication module 60 and the wireless connection module 30 is wireless, for example Bluetooth/WiFi, to support movement of the user over a wide range.
Embodiment 7
In addition to interaction scenarios such as computers, televisions, and mobile phones, the present application can also be used for other screen-based interaction scenarios. For example, the interaction terminal may be an interactive touchpad 70 and the second virtual display device an electronic whiteboard; or the interaction terminal may be an interactive touchpad 70 with an in-vehicle display function and the second virtual display device an in-vehicle system.
Embodiment 8
FIG. 8 is a schematic diagram of the hardware structure of an electronic device 600 for the intelligent interaction method provided by an embodiment of the present application. As shown in FIG. 8, the electronic device 600 includes:
one or more processors 610, a memory 620, one or more graphics processing units (GPUs) 630, and a communication component 650; in FIG. 8, one processor 610 and one graphics processor 630 are taken as an example. The memory 620 stores instructions executable by the at least one processor 610; when the instructions are executed by the at least one processor, a data channel is established through the communication component 650, enabling the at least one processor to perform the intelligent interaction method.
The processor 610, the memory 620, the graphics processor 630, and the communication component 650 may be connected by a bus or in another manner; connection by a bus is taken as an example in FIG. 8.
As a non-volatile computer-readable storage medium, the memory 620 can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the intelligent interaction method in the embodiments of the present application (for example, the receiving module 12, sending module 14, position determining module 16, window module 18, and display module 20 shown in FIG. 4, and the multi-person interaction module 56 and second virtual display device 58 shown in FIG. 3). By running the non-volatile software programs, instructions, and modules stored in the memory 620, the processor 610 executes the various functional applications and data processing of the server, that is, implements the intelligent interaction method of the above method embodiments.
The memory 620 may include a program storage area and a data storage area, where the program storage area can store the operating system and the applications required for at least one function, and the data storage area can store data created according to the use of the virtual display device, and so on. In addition, the memory 620 may include high-speed random access memory and may also include non-volatile memory, for example at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 620 may optionally include memory located remotely from the processor 610, and such remote memory may be connected to the interactive electronic device through a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
The one or more modules are stored in the memory 620 and, when executed by the one or more processors 610, perform the intelligent interaction method of any of the above method embodiments, for example performing method steps 101 to 105 in FIG. 4 described above, method steps 201 to 209 in FIG. 5 described above, and method steps 301 to 307 in FIG. 6 described above, thereby implementing the functions of the receiving module 12, sending module 14, position determining module 16, window module 18, and display module 20 shown in FIG. 3, and of the multi-person interaction module 56, the second virtual display device 58, and so on shown in FIG. 3.
The above products can perform the method provided by the embodiments of the present application and have the corresponding functional modules and beneficial effects for performing the method. For technical details not described in detail in this embodiment, reference may be made to the method provided by the embodiments of the present application.
An embodiment of the present application provides a non-volatile computer-readable storage medium storing computer-executable instructions that are executed by one or more processors, for example performing method steps 101 to 105 in FIG. 4 described above, method steps 201 to 209 in FIG. 5 described above, and method steps 301 to 307 in FIG. 6 described above, thereby implementing the functions of the receiving module 12, sending module 14, position determining module 16, window module 18, and display module 20 shown in FIG. 3, and of the multi-person interaction module 56, the second virtual display device 58, and so on shown in FIG. 3.
The device embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
From the description of the above implementations, a person of ordinary skill in the art can clearly understand that the implementations can be realized by software plus a general-purpose hardware platform, or of course by hardware. A person of ordinary skill in the art can understand that all or part of the processes of the methods of the above embodiments can be completed by a computer program instructing the relevant hardware; the program can be stored in a computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Within the idea of the present application, the technical features of the above embodiments or of different embodiments may also be combined, the steps may be carried out in any order, and many other variations of the different aspects of the present application as described above exist, which are not detailed here for the sake of brevity. Although the present application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions recited in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents, and that such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (17)

  1. A virtual display device, characterized in that it is capable of communicating with a second virtual display device and with an interaction terminal, the virtual display device comprising a receiving module, a wireless connection module, and a display module, wherein
    the receiving module is configured to receive a three-dimensional display position specified by a user and to acquire on-screen content of the second virtual display device;
    the display module is configured to determine a virtual window according to the three-dimensional display position and to display the on-screen content of the second virtual display device in the virtual window;
    the wireless connection module is configured to acquire, from the interaction terminal, an interactive operation instruction directed at the on-screen content; and
    the display module is further configured to display, in the virtual window, the on-screen content fed back by the second virtual display device and updated in response to the interactive operation instruction.
  2. The virtual display device according to claim 1, characterized by further comprising a sending module configured to send the interactive operation instruction to the second virtual display device.
  3. The virtual display device according to claim 1, characterized by further comprising an adaptive display module configured to adaptively display the received on-screen content in the virtual window.
  4. The virtual display device according to claim 3, characterized in that the adaptive display module comprises a visual computing module, wherein
    the visual computing module is configured to detect a relative position change of the virtual display device; and
    the adaptive display module is configured to adjust the virtual window according to the relative position change and a stereoscopic perspective relationship and to display the on-screen content of the second virtual display device in the adjusted virtual window.
  5. The virtual display device according to claim 1, further comprising a multi-person interaction module, wherein the multi-person interaction module is configured to determine a primary virtual display device and a secondary virtual display device, and, when the virtual display device is determined to be the secondary virtual display device, to acquire from a cloud server the three-dimensional display position and the interactive operation instructions of the primary virtual display device, and simultaneously to acquire the on-screen content sent to the primary virtual display device for synchronous display.
  6. The virtual display device according to claim 5, wherein the interaction terminal is connected to the primary virtual display device and the secondary virtual display device, the interactive operation instructions of the interaction terminal are shared, and the on-screen content displayed simultaneously on the primary virtual display device and the secondary virtual display device is manipulated through the interaction terminal.
  7. The virtual display device according to any one of claims 1 to 6, wherein the interaction terminal is a wireless keyboard and a wireless mouse, a wireless remote controller, an interactive touchpad, a virtual touch screen of a mobile phone, or an interactive keypad.
  8. The virtual display device according to claim 7, wherein the virtual display device is a mixed reality display device and the second virtual display device is a module running on a cloud server.
  9. An intelligent interaction method, comprising:
    receiving, by a virtual display device, a three-dimensional display position specified by a user, and acquiring on-screen content of a second virtual display device;
    determining, by the virtual display device, a virtual window according to the three-dimensional display position, and displaying the on-screen content of the second virtual display device in the virtual window; and
    acquiring, by the virtual display device, an interactive operation instruction applied by an interaction terminal to the on-screen content, and displaying, in the virtual window, the updated on-screen content fed back by the second virtual display device in response to the interactive operation instruction.
  10. The intelligent interaction method according to claim 9, further comprising sending, by the virtual display device, the interactive operation instruction to the second virtual display device, wherein the second virtual display device runs on a cloud server.
  11. The intelligent interaction method according to claim 9, further comprising adaptively displaying the received on-screen content in the virtual window.
  12. The intelligent interaction method according to claim 11, wherein adaptively displaying the received on-screen content in the virtual window comprises:
    detecting a relative position change of the virtual display device; and
    adjusting the virtual window according to the relative position change and a stereoscopic perspective relationship, and displaying the on-screen content of the second virtual display device in the adjusted virtual window.
  13. The intelligent interaction method according to any one of claims 9 to 12, wherein, when virtual display devices request multi-person interaction, the virtual display device started first is a primary virtual display device and a virtual display device started later is a secondary virtual display device;
    the cloud server stores the three-dimensional display position and the interactive operation instructions of the primary virtual display device; and
    the secondary virtual display device acquires, from the cloud server, the three-dimensional display position and the interactive operation instructions of the primary virtual display device, and simultaneously acquires the on-screen content sent to the primary virtual display device for synchronous display.
  14. The intelligent interaction method according to claim 13, wherein the interaction terminal is connected to the primary virtual display device and the secondary virtual display device, the interactive operation instructions of the interaction terminal are shared, and the on-screen content displayed simultaneously on the primary virtual display device and the secondary virtual display device is manipulated through the interaction terminal.
  15. A cloud server, capable of communicating with a virtual display device, the virtual display device communicating with an interaction terminal, the cloud server comprising a second virtual display device, a sending module, and a receiving module, wherein:
    the second virtual display device is configured to output on-screen content;
    the sending module is configured to send the on-screen content to the virtual display device, the virtual display device determining a virtual window at a three-dimensional display position specified by a user and displaying the on-screen content in the virtual window;
    the receiving module is configured to receive the interactive operation instruction acquired by the virtual display device from the interaction terminal; and
    the second virtual display device is further configured to output updated on-screen content to the virtual display device in response to the interactive operation instruction.
  16. The cloud server according to claim 15, wherein the cloud server comprises a multi-person interaction module, the virtual display device started first is a primary virtual display device, a virtual display device started later is a secondary virtual display device, and the multi-person interaction module is configured to store, during multi-person interaction, the three-dimensional display position and the interactive operation instructions of the primary virtual display device; and
    the cloud server further comprises a sending module configured to send the three-dimensional display position and the interactive operation instructions of the primary virtual display device, and further configured to send the on-screen content of the primary virtual display device for synchronization.
  17. The cloud server according to claim 16, wherein the interaction terminal is connected to the primary virtual display device and the secondary virtual display device, the interactive operation instructions of the interaction terminal are shared, and the on-screen content displayed simultaneously on the primary virtual display device and the secondary virtual display device is manipulated through the interaction terminal.
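
The claims above describe a split architecture: a head-worn virtual display device renders, at a user-chosen three-dimensional position, the screen of a second virtual display device hosted elsewhere, relays interaction commands, and shows the updated output. The sketches that follow are editorial illustrations only and are not part of the application; every class, method, and parameter name in them (for example VirtualDisplayDevice, fetch_screen_content, send_interaction) is an assumption invented for illustration. This first sketch, in Python, mirrors the receiving, display, and wireless-connection modules of claims 1-2 and the method flow of claims 9-10.

```python
from dataclasses import dataclass
from typing import Tuple


@dataclass
class VirtualWindow:
    """Virtual screen anchored at a user-specified 3D position (illustrative structure)."""
    position: Tuple[float, float, float]       # x, y, z in the device's world frame
    size: Tuple[float, float] = (1.6, 0.9)     # width, height in metres
    content: bytes = b""                       # last frame received from the remote device


class VirtualDisplayDevice:
    """Sketch of the claimed receiving, wireless-connection and display modules."""

    def __init__(self, cloud_client):
        self.cloud = cloud_client              # assumed transport to the cloud server
        self.window = None

    # Receiving module: accept the 3D display position the user points at.
    def set_display_position(self, position):
        self.window = VirtualWindow(position=position)

    # Display module: fetch the remote screen content and draw it in the window.
    def refresh(self):
        self.window.content = self.cloud.fetch_screen_content()
        self._render()

    # Wireless-connection module: forward one interaction command, then show the
    # updated content fed back by the second virtual display device.
    def handle_interaction(self, command):
        self.cloud.send_interaction(command)
        self.refresh()

    def _render(self):
        # Stand-in for the MR renderer; just report what would be drawn.
        print(f"draw {len(self.window.content)} bytes at {self.window.position}")


# Minimal stand-in for the cloud side so the sketch runs end to end.
class _FakeCloud:
    def fetch_screen_content(self):
        return b"desktop frame"

    def send_interaction(self, command):
        print(f"sent {command!r} to the cloud-hosted display")


if __name__ == "__main__":
    device = VirtualDisplayDevice(_FakeCloud())
    device.set_display_position((0.0, 1.2, 2.0))   # user pins the screen 2 m ahead
    device.refresh()
    device.handle_interaction("click:open-browser")
```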
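Claims 3-4 and 11-12 add adaptive display: the visual computing module detects how much the device has moved, and the virtual window is redrawn from the stereoscopic perspective relationship so that it appears fixed in space. A minimal sketch follows, assuming 4x4 device-to-world pose matrices and a simple inverse-depth scaling; this is one plausible reading of the claims, not the patented algorithm.

```python
import numpy as np


def adjust_virtual_window(window_pos_world, device_pose_prev, device_pose_curr):
    """Re-anchor a world-fixed virtual window after the headset moves.

    window_pos_world        : (3,) window anchor in world coordinates
    device_pose_prev / curr : 4x4 device-to-world transforms before and after the motion

    Returns the anchor expressed in the current device frame plus a perspective
    scale factor (apparent size grows as the window gets closer). Illustrative only.
    """
    p = np.append(np.asarray(window_pos_world, dtype=float), 1.0)

    prev_local = np.linalg.inv(device_pose_prev) @ p   # where the window was seen before
    curr_local = np.linalg.inv(device_pose_curr) @ p   # where it should be seen now

    # Simple stereoscopic-perspective relationship: scale by the inverse-depth ratio.
    scale = prev_local[2] / curr_local[2] if curr_local[2] != 0 else 1.0
    return curr_local[:3], scale


# Example: the wearer steps 0.5 m toward a window pinned 2 m ahead.
prev_pose = np.eye(4)
curr_pose = np.eye(4)
curr_pose[2, 3] = 0.5                                  # device moved +0.5 m along world z
pos, scale = adjust_virtual_window([0.0, 0.0, 2.0], prev_pose, curr_pose)
print(pos, scale)                                      # ~[0, 0, 1.5] and ~1.33x larger
```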
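Claims 5-6 and 13-14 cover multi-person interaction: the device that starts first becomes the primary, later devices become secondaries, and each secondary pulls the primary's three-dimensional display position, interaction commands, and on-screen content from the cloud server so that every participant sees the same virtual screen. The sketch below shows one possible role assignment and synchronization loop; SessionStore and its fields are hypothetical stand-ins for the cloud server's session state.

```python
class SessionStore:
    """In-memory stand-in for the cloud server's multi-person session state."""

    def __init__(self):
        self.primary_id = None
        self.display_position = None     # 3D display position chosen on the primary
        self.commands = []               # interaction commands in arrival order
        self.screen_content = b""        # latest frame sent to the primary

    def join(self, device_id):
        # The device that starts the session first becomes the primary;
        # every device that joins afterwards is a secondary.
        if self.primary_id is None:
            self.primary_id = device_id
            return "primary"
        return "secondary"


def sync_secondary(store, frames=3):
    """Secondary device: mirror the primary's window position, commands and content."""
    seen = 0
    for _ in range(frames):
        position = store.display_position
        new_commands = store.commands[seen:]
        seen = len(store.commands)
        # A real device would re-render here; the sketch just reports the synced state.
        print(f"secondary shows {store.screen_content!r} at {position}, new cmds {new_commands}")


store = SessionStore()
print(store.join("headset-A"))                      # 'primary'
print(store.join("headset-B"))                      # 'secondary'
store.display_position = (0.0, 1.2, 2.0)
store.commands.append("scroll:down")
store.screen_content = b"frame-42"
sync_secondary(store)
```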
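Claims 15-17 describe the server side: the second virtual display device runs on the cloud server and produces the on-screen content, the sending module pushes that content to the headset, and the receiving module accepts interaction commands and triggers updated output. The sketch below models this as two plain Python classes; it deliberately exposes the same fetch_screen_content and send_interaction calls assumed in the device sketch above so the two pieces can be composed, but the interface itself is an assumption, not something specified in the application.

```python
class SecondVirtualDisplayDevice:
    """Cloud-hosted virtual desktop that produces the on-screen content."""

    def __init__(self):
        self.state = "home-screen"

    def current_frame(self) -> bytes:
        # Stand-in for capturing a real framebuffer.
        return f"frame[{self.state}]".encode()

    def apply(self, command: str) -> None:
        # Update the virtual desktop in response to an interaction command.
        self.state = f"after:{command}"


class CloudServer:
    """Sending and receiving modules wrapped around the second virtual display device."""

    def __init__(self):
        self.display = SecondVirtualDisplayDevice()

    # Sending module: deliver the current on-screen content to the headset.
    def fetch_screen_content(self) -> bytes:
        return self.display.current_frame()

    # Receiving module: accept an interaction command and return the updated frame.
    def send_interaction(self, command: str) -> bytes:
        self.display.apply(command)
        return self.display.current_frame()


# Example round trip: one click on the virtual screen yields an updated frame.
server = CloudServer()
print(server.fetch_screen_content())            # b'frame[home-screen]'
print(server.send_interaction("click:open"))    # b'frame[after:click:open]'
```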

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2017/097157 WO2019028855A1 (en) 2017-08-11 2017-08-11 Virtual display device, intelligent interaction method, and cloud server
CN201780003281.8A CN108401463A (en) 2017-08-11 2017-08-11 Virtual display device, intelligent interaction method and cloud server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/097157 WO2019028855A1 (en) 2017-08-11 2017-08-11 Virtual display device, intelligent interaction method, and cloud server

Publications (1)

Publication Number Publication Date
WO2019028855A1 true WO2019028855A1 (en) 2019-02-14

Family

ID=63095110

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/097157 WO2019028855A1 (en) 2017-08-11 2017-08-11 Virtual display device, intelligent interaction method, and cloud server

Country Status (2)

Country Link
CN (1) CN108401463A (en)
WO (1) WO2019028855A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109671118B (en) * 2018-11-02 2021-05-28 北京盈迪曼德科技有限公司 Virtual reality multi-person interaction method, device and system
CN111399631B (en) * 2019-01-03 2021-11-05 广东虚拟现实科技有限公司 Virtual content display method and device, terminal equipment and storage medium
WO2020140905A1 (en) * 2019-01-03 2020-07-09 广东虚拟现实科技有限公司 Virtual content interaction system and method
CN110515580B (en) * 2019-09-02 2022-08-19 联想(北京)有限公司 Display control method, device and terminal
CN110459048B (en) * 2019-09-03 2020-06-23 亳州职业技术学院 Intelligent tour guide explanation system based on VR technology
CN110673738B (en) * 2019-09-29 2022-02-18 联想(北京)有限公司 Interaction method and electronic equipment
CN115052030A (en) * 2022-06-27 2022-09-13 北京蔚领时代科技有限公司 Virtual digital person control method and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100138780A1 (en) * 2008-05-20 2010-06-03 Adam Marano Methods and systems for using external display devices with a mobile computing device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102141858A (en) * 2010-02-25 2011-08-03 Microsoft Corporation Multi-screen synchronous slide gesture
US20140125698A1 (en) * 2012-11-05 2014-05-08 Stephen Latta Mixed-reality arena
CN106716306A (en) * 2014-09-30 2017-05-24 Sony Interactive Entertainment Inc. Synchronizing multiple head-mounted displays to a unified space and correlating movement of objects in the unified space
CN105824416A (en) * 2016-03-16 2016-08-03 成都电锯互动科技有限公司 Method for combining virtual reality technique with cloud service technique
CN106257543A (en) * 2016-09-23 2016-12-28 珠海市杰理科技股份有限公司 Vehicle-running recording system based on virtual reality visual angle

Also Published As

Publication number Publication date
CN108401463A (en) 2018-08-14

Similar Documents

Publication Publication Date Title
US11790871B2 (en) Detection and display of mixed 2D/3D content
US11838518B2 (en) Reprojecting holographic video to enhance streaming bandwidth/quality
US11557102B2 (en) Methods for manipulating objects in an environment
US10567449B2 (en) Apparatuses, methods and systems for sharing virtual elements
WO2019028855A1 (en) Virtual display device, intelligent interaction method, and cloud server
US9852546B2 (en) Method and system for receiving gesture input via virtual control objects
US20140347262A1 (en) Object display with visual verisimilitude
JP7081052B2 (en) Displaying device sharing and interactivity in simulated reality (SR)
US10984607B1 (en) Displaying 3D content shared from other devices
US11430198B1 (en) Method and device for orientation-based view switching

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 17921225

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN EP: Public notification in the EP Bulletin as the address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 26/06/2020)

122 EP: PCT application non-entry into the European phase

Ref document number: 17921225

Country of ref document: EP

Kind code of ref document: A1