WO2022033596A1 - An interaction method and system based on a virtual reality all-in-one machine - Google Patents

An interaction method and system based on a virtual reality all-in-one machine

Info

Publication number
WO2022033596A1
WO2022033596A1 PCT/CN2021/112624 CN2021112624W WO2022033596A1 WO 2022033596 A1 WO2022033596 A1 WO 2022033596A1 CN 2021112624 W CN2021112624 W CN 2021112624W WO 2022033596 A1 WO2022033596 A1 WO 2022033596A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
virtual reality
data
machine
pose
Prior art date
Application number
PCT/CN2021/112624
Other languages
English (en)
French (fr)
Inventor
吴涛 (Wu Tao)
Original Assignee
青岛小鸟看看科技有限公司 (Qingdao Pico Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 青岛小鸟看看科技有限公司 filed Critical 青岛小鸟看看科技有限公司
Publication of WO2022033596A1
Priority to US17/881,920 (US11720169B2)

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012Head tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/01Social networking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/203D [Three Dimensional] animation
    • G06T13/403D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/024Multi-user, collaborative environment

Definitions

  • The present application relates to the technical field of virtual reality, and in particular to an interaction method and system based on a virtual reality all-in-one machine.
  • Internet-based life and entertainment are undergoing a major change that is making people's lives increasingly social, collaborative, and shared. VR-environment interaction on all-in-one headsets based on virtual reality (VR) technology, however, is a notable exception: users of VR all-in-one machines often find that, while VR is fun, it is very isolating because it lacks a social dimension.
  • the embodiments of the present application provide an interaction method and system based on a virtual reality all-in-one machine, which are used to solve or partially solve the above problems.
  • In one aspect, an embodiment of the present application provides an interaction method based on a virtual reality all-in-one machine, including: drawing, according to user data of multiple virtual reality all-in-one machines, the user avatar of each machine in the virtual reality scene; acquiring in real time the pose tracking data of multiple users collected by the machines, the pose tracking data carrying user IDs; fusing the pose tracking data of the multiple users based on the carried user IDs to obtain each user's pose fusion data, the pose fusion data carrying the user ID; and using the pose fusion data and the user IDs it carries to update the pose states of the user avatars having the same user IDs in each machine's virtual reality scene, thereby realizing interaction between different user avatars in the scene.
  • In another aspect, an embodiment of the present application provides an interaction system based on a virtual reality all-in-one machine, including a data processing server and a plurality of virtual reality all-in-one machines connected to the data processing server via a network;
  • The virtual reality all-in-one machine is used to draw, according to the user data of each all-in-one machine, the user avatar of each machine in its own virtual reality scene, to collect the pose tracking data of its own user, and to send the collected pose tracking data to the data processing server, wherein the pose tracking data carries the user ID;
  • The data processing server is used to receive in real time the pose tracking data of multiple users sent by the multiple all-in-one machines, to fuse the pose tracking data based on the user IDs it carries, and to send the resulting per-user pose fusion data to each machine, where the pose fusion data carries the user ID;
  • The virtual reality all-in-one machine is further used to update, with the pose fusion data and the user IDs it carries, the pose states of the user avatars having the same user IDs in its own virtual reality scene, so as to realize interaction between different user avatars in the virtual reality scene.
  • Embodiments of the present application further provide an electronic device, including: a processor; and a memory arranged to store computer-executable instructions which, when executed, cause the processor to perform the above-described interaction method based on a virtual reality all-in-one machine.
  • Embodiments of the present application further provide a computer-readable storage medium storing one or more programs which, when executed by an electronic device that includes multiple application programs, cause the electronic device to perform the above-described interaction method based on a virtual reality all-in-one machine.
  • The users of multiple VR all-in-one machines are virtualized into the same VR scene, so that users in the same physical area or in different physical areas can experience the same VR environment. The pose fusion data and user IDs obtained by data fusion are used to update the pose state of the avatar with the corresponding user ID in the VR scene displayed by each machine, realizing interaction between different user avatars in the scene; each user can observe, from a third-person perspective and in real time on the headset, the status of the other users in the VR scene, which enhances the VR experience and makes it possible for content creators to bring social and multi-user dimensions into the VR world.
  • FIG. 1 is a flowchart of an interaction method based on a VR all-in-one machine shown in an embodiment of the present application
  • FIG. 2 is a schematic diagram of multi-user performing VR remote interaction according to an embodiment of the present application
  • FIG. 3 is a schematic structural diagram of an interaction system based on VR all-in-one machine shown in an embodiment of the present application;
  • FIG. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • Although the terms first, second, third, etc. may be used in this application to describe various information, such information should not be limited by these terms; these terms are only used to distinguish information of the same type from each other.
  • For example, without departing from the scope of the present application, first information may also be referred to as second information and, similarly, second information may also be referred to as first information.
  • Depending on the context, the word "if" as used herein can be interpreted as "at the time of", "when", or "in response to determining".
  • A VR all-in-one machine is usually set up so that each user wears a single headset for virtual reality interaction. Because of its head-mounted nature, an all-in-one machine can provide VR to only one user at a time. Therefore, when users of two or more standalone VR headsets are having a VR experience in the real world at the same time, they may be immersed in different VR environments, completely unaware of each other's experiences. For example, even when two headset users are physically close to each other, their VR experiences may be entirely different VR environments.
  • Based on the above, the embodiments of the present application provide a new interaction solution for users of VR all-in-one machines that makes the experience less isolated and more social and entertaining as a whole.
  • The embodiments of the present application not only enable users of VR all-in-one machines in the same physical location or in different physical environments to see the same VR scene, but also allow different users to interact through their avatars in the VR environment.
  • FIG. 1 is a flowchart of an interaction method based on a VR all-in-one machine according to an embodiment of the present application. As shown in FIG. 1, the interaction method of this embodiment includes the following steps:
  • Step S110: according to the user data of the multiple VR all-in-one machines, draw the user avatar of each machine in the VR scene.
  • Drawing the user avatar of each VR all-in-one machine in the VR scene can be understood as drawing, in the VR scene displayed by each machine, the avatar of that machine's own user together with the avatars of the users of the other machines.
  • The user data includes, but is not limited to, body-surface feature data such as the user's gender, height, and skin color; personalized drawing of the user avatar is implemented based on this data.
  • Step S120: acquire in real time the pose tracking data of the multiple users collected by the multiple VR all-in-one machines; the pose tracking data carries a user ID.
  • In this embodiment, each VR all-in-one machine collects the pose tracking data of its own user, so acquiring the pose tracking data of multiple users collected by multiple machines can be understood as acquiring, from each machine, the pose tracking data of that machine's own user. Suppose there are N machines: the first machine collects the pose tracking data of its own user (denoted user 1), the second machine collects the pose tracking data of its own user (denoted user 2), and so on until the Nth machine collects the pose tracking data of its own user (denoted user N); the pose tracking data collected by the N machines is acquired in real time.
  • The pose tracking data is 6DoF (six degrees of freedom) data, including user position data and user attitude data. For example, the user's pose tracking data may be collected using the sensor components built into the VR all-in-one machine and/or using the sensors in the handle controllers externally connected to the machine.
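  • As a rough illustration of the kind of record involved, the following minimal Python sketch models a single 6DoF sample that carries a user ID; the field names (user_id, position, orientation, timestamp_ms) are illustrative assumptions and are not specified by the patent:

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass
    class PoseSample:
        """One 6DoF pose sample collected by a headset or handle controller."""
        user_id: str                                    # identifies the headset's user
        position: Tuple[float, float, float]            # x, y, z in the device's own frame
        orientation: Tuple[float, float, float, float]  # attitude quaternion (w, x, y, z)
        timestamp_ms: int                               # collection time, useful for timeout logic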
  • Step S130: fuse the pose tracking data of the multiple users based on the user IDs carried by the data to obtain each user's pose fusion data; the pose fusion data carries the user ID.
  • Step S140: use the pose fusion data and the user IDs it carries to update the pose states of the user avatars having the same user IDs in each VR all-in-one machine's VR scene, so as to realize interaction between different user avatars in the VR scene.
  • This embodiment virtualizes the users of multiple VR all-in-one machines into the same VR scene, so that users in the same physical area or in different physical areas can experience the same VR environment.
  • The pose tracking data is fused, and the pose fusion data and user IDs obtained by data fusion are used to update the pose state of the avatar with the corresponding user ID in the VR scene displayed by each machine, realizing interaction between different user avatars in the scene; each user can observe, from a third-person perspective and in real time on the headset, the status of the other users in the VR scene, which enhances the VR experience and makes it possible for content creators to bring social and multi-user dimensions into the VR world.
  • FIG. 2 describes the multi-user remote interaction method based on the VR all-in-one machine using multi-user remote social interaction as an example; the method can also be extended to other practical applications, such as VR-based multi-user remote office work and VR-based multi-user remote conferencing, and the embodiments of the present application do not limit the specific application scenario.
  • As shown in FIG. 2, a data processing server needs to be constructed. The server's hardware configuration and specifications, such as its processing capability and application-scene rendering capability, can be determined according to the number of clients in the actual application (that is, the VR all-in-one machines shown in FIG. 2) and the rendering complexity of the VR content. Each client connects to the network through a wireless network device, such as a wireless router, and in turn connects to the data processing server to establish network communication.
  • This embodiment also constructs a main controller, which assumes the role of an administrator in the system and is used to manage the VR all-in-one machines in the system; the main controller has the same physical structure as a VR all-in-one machine.
  • The VR all-in-one machine shown in FIG. 2 is a head-mounted all-in-one machine whose head-mounted end has built-in components such as a CPU, a GPU, and a wireless network module; all of the machine's computing and processing takes place on the head-mounted end.
  • In one example, the head-mounted end of the VR all-in-one machine shown in FIG. 2 has a built-in 6DoF positioning module that includes two or more camera sensors and a 9-axis IMU (inertial measurement unit) sensor; computer vision algorithms combine the camera data with the 9-axis IMU data to obtain, in real time, the machine's 6DoF pose tracking data, that is, the user's position and attitude relative to the real physical environment.
  • the VR all-in-one machine may also be in other forms, which is not specifically limited in this application.
  • The VR all-in-one machine shown in FIG. 2 is externally connected to two handle controllers, which the user holds in the left and right hands respectively; using the handle controllers, the user can interact with the content in the VR scene in real time. Each handle controller has a built-in optical sensor, 3-axis electromagnetic sensor, or ultrasonic sensor; combining its data with the data of the 9-axis IMU sensor built into the controller, the 6DoF data of the handle is computed in real time, yielding the user's pose tracking data in real time.
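  • The patent does not specify the fusion algorithm, but one generic ingredient of this kind of IMU-based tracking is integrating the gyroscope's angular velocity into an orientation estimate. A minimal sketch, purely for illustration:

    import numpy as np

    def integrate_gyro(q, omega, dt):
        # Integrate body-frame angular velocity omega (rad/s) over dt seconds
        # into the unit quaternion q = (w, x, y, z): q_dot = 0.5 * q * (0, omega).
        w, x, y, z = q
        ox, oy, oz = omega
        dq = 0.5 * np.array([
            -x * ox - y * oy - z * oz,
             w * ox + y * oz - z * oy,
             w * oy - x * oz + z * ox,
             w * oz + x * oy - y * ox,
        ])
        q = np.asarray(q, dtype=float) + dq * dt
        return q / np.linalg.norm(q)  # renormalize so q stays a valid rotation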
  • It can be understood that the pose tracking data of multiple users acquired in real time in this embodiment may include data obtained through different sensor modules, for example the head pose collected by the 6DoF positioning module built into the headset and the hand pose collected by the sensor modules in the external handle controllers; both the headset's 6DoF data and the handle controllers' 6DoF data are computed by the CPU of the head-mounted end.
  • With reference to the interaction scenario of FIG. 2, acquiring in real time the pose tracking data of multiple users collected by multiple VR all-in-one machines in step S120 includes: sending data requests to the multiple machines at a set frequency, and receiving, within a preset time, the pose tracking data returned by the corresponding machines.
  • Correspondingly, fusing the pose tracking data of the multiple users based on the carried user IDs in step S130 includes fusing only the pose tracking data returned by the corresponding machines and received within the preset time.
  • For example, the data processing server sends a data request to the N VR all-in-one machines every 1 ms; on receiving the request, each machine sends the currently collected pose tracking data of its own user to the server.
  • While sending the request to the N machines, the server starts a timer and, within the timer's preset time, for example 20 ms, receives the pose tracking data returned by each machine.
  • If, within the preset time, the server receives pose tracking data from only some of the N machines, it fuses just the data received from those machines.
  • By having the data processing server send data requests at a set frequency and fuse only the pose tracking data received within the preset time, this embodiment avoids the delay that would result from waiting for every machine to return its pose tracking data before fusing.
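  • A minimal sketch of this request/timeout pattern on the server side, assuming a hypothetical per-headset request_pose() call (the 20 ms window comes from the example above; the actual transport is not specified by the patent):

    from concurrent.futures import ThreadPoolExecutor, wait

    RECEIVE_WINDOW_S = 0.020  # fuse only replies received within 20 ms

    def poll_once(pool: ThreadPoolExecutor, headsets):
        # Request pose data from every headset in parallel.
        futures = [pool.submit(h.request_pose) for h in headsets]  # request_pose() is hypothetical
        done, not_done = wait(futures, timeout=RECEIVE_WINDOW_S)
        for f in not_done:
            f.cancel()  # late replies are skipped this round, so fusion never blocks on a slow headset
        return [f.result() for f in done if f.exception() is None]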
  • In some embodiments, data fusion can be performed as follows: coordinate transformation is applied to the pose tracking data of the multiple users to obtain each user's pose fusion data in a single common coordinate system.
  • Because the pose tracking data returned by each VR all-in-one machine is expressed in that machine's own coordinate system, the data processing server of this embodiment, after receiving the pose tracking data, applies a coordinate transformation that maps the data into the coordinate system of the VR scene, so that each machine can subsequently update the user avatars in the VR scene based on the pose fusion data.
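  • A sketch of the transform step, under the assumption that each headset's frame is related to the scene frame by a known rigid transform (the per-headset rotation and translation are hypothetical calibration inputs; the patent does not say how they are obtained):

    import numpy as np

    def to_scene_frame(position, rotation, translation):
        # Map a 3D position from a headset's own frame into the shared scene frame.
        return rotation @ np.asarray(position) + np.asarray(translation)

    def fuse(samples, extrinsics):
        # samples: PoseSample records as sketched earlier; extrinsics maps
        # user_id -> (3x3 rotation matrix, 3-vector translation) for that headset.
        fused = {}
        for s in samples:
            rotation, translation = extrinsics[s.user_id]
            fused[s.user_id] = to_scene_frame(s.position, rotation, translation)
        return fused  # per-user positions, now in one common coordinate system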
  • In some embodiments, updating, with the pose fusion data and the user IDs it carries, the pose states of the user avatars having the same user IDs in each VR all-in-one machine's VR scene in step S140 includes: using a first thread started by each machine to obtain the pose fusion data and to render and update the avatars with matching user IDs in the VR scene; and using a second thread, started by each machine in parallel with the first, to obtain the pose tracking data of the machine's own user and update the pose state of the machine's own avatar in the VR scene.
  • Because the frequency at which a machine collects its own user's pose tracking data differs from the frequency at which the data processing server produces pose fusion data, each VR all-in-one machine in this embodiment starts two parallel processing threads to update, respectively, the pose state of its own avatar and the pose states of the other users' avatars, avoiding the update delays that different computation frequencies would otherwise cause.
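  • A minimal sketch of the two-thread arrangement on the headset side, with hypothetical callables standing in for the real fetch-and-update modules:

    import threading

    stop = threading.Event()

    def remote_avatar_loop(fetch_fused_poses, update_avatar):
        # First thread: pull fused poses (keyed by user ID) from the server
        # and update the avatars of the *other* users at the server's rate.
        while not stop.is_set():
            for user_id, pose in fetch_fused_poses().items():
                update_avatar(user_id, pose)

    def own_avatar_loop(read_local_pose, update_own_avatar):
        # Second thread: use locally collected tracking data directly, so the
        # user's own avatar is never delayed by the server round trip.
        while not stop.is_set():
            update_own_avatar(read_local_pose())

    # threading.Thread(target=remote_avatar_loop, args=(fetch, draw)).start()
    # threading.Thread(target=own_avatar_loop, args=(read, draw_self)).start()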
  • In some embodiments, the above step S110 further includes: receiving an interaction request sent by a VR all-in-one machine, the interaction request carrying verification information and user data; confirming the interaction request according to the verification information; if the verification passes, sending a response allowing interaction to the machine and drawing, based on the user data, the user avatar of the verified machine in the VR scene; and if the verification fails, sending a response rejecting interaction to the machine.
  • With reference to the interaction scenario of FIG. 2, when a VR all-in-one machine initiates multi-user remote interaction, it sends an interaction request carrying verification information and user data to the data processing server; the server parses and saves the request and forwards it to the main controller.
  • The main controller makes a judgment based on the verification information carried by the request. If the verification passes, it sends a response allowing interaction to the data processing server; the server forwards that response to the requesting machine and sends the saved user data of that machine to all the other machines in the system, so that every machine draws the verified machine's user avatar in the VR scene. If the verification fails, a response rejecting interaction is sent back to the requesting machine via the data processing server.
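  • A sketch of how the server's side of this admission flow could be organized; the message fields and the verify() call on the main controller are illustrative assumptions:

    def handle_interaction_request(request, main_controller, headsets, saved_user_data):
        # Admit or reject a headset that asks to join the shared scene.
        requester = request["headset"]
        saved_user_data[request["user_id"]] = request["user_data"]  # parse and save
        if main_controller.verify(request["credentials"]):  # forwarded for verification
            requester.send({"type": "allow_interaction"})
            for h in headsets:  # every other headset draws the new avatar
                if h is not requester:
                    h.send({"type": "draw_avatar",
                            "user_id": request["user_id"],
                            "user_data": request["user_data"]})
        else:
            requester.send({"type": "reject_interaction"})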
  • In some embodiments, the interaction method shown in FIG. 1 further includes: receiving a removal command that carries the user ID of the user avatar to be removed, and removing, according to that user ID, the avatar with that user ID from the VR scene of each VR all-in-one machine.
  • Still with reference to the interaction scenario of FIG. 2, when the main controller receives a removal command, for example one sent by the administrator, it sends the command to the data processing server, which forwards it to each VR all-in-one machine; on receiving the command, each machine removes the corresponding user avatar from its VR scene.
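  • The removal path is symmetrical; a minimal sketch (the message names and broadcast helper are hypothetical):

    def handle_remove_command(command, headsets, saved_user_data):
        # Forward an administrator's removal command to every headset; each
        # headset then deletes the avatar with the carried user ID.
        user_id = command["user_id"]
        saved_user_data.pop(user_id, None)  # also stop tracking this user's data
        for h in headsets:
            h.send({"type": "remove_avatar", "user_id": user_id})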
  • this embodiment can implement multi-user VR scene interaction.
  • FIG. 3 is a schematic structural diagram of an interaction system based on the VR all-in-one machine according to an embodiment of the present application.
  • As shown in FIG. 3, the system 300 of this embodiment includes a data processing server 310 and a plurality of VR all-in-one machines 320 connected to the data processing server 310 via a network;
  • the VR all-in-one machine 320 is used to draw, according to the user data of each machine, the user avatar of each machine in its own VR scene, to collect the pose tracking data of its own user, and to send the collected pose tracking data to the data processing server 310, wherein the pose tracking data carries the user ID;
  • the data processing server 310 is configured to receive in real time the pose tracking data of multiple users sent by the multiple VR all-in-one machines, to fuse the data based on the carried user IDs to obtain each user's pose fusion data, and to send the pose fusion data to each machine, where the pose fusion data carries the user ID;
  • the VR all-in-one machine 320 is further configured to use the pose fusion data and the user IDs it carries to update the pose states of the user avatars having the same user IDs in its own VR scene, so as to realize interaction between different user avatars in the VR scene.
  • the data processing server 310 is further configured to send data requests to the multiple VR all-in-one machines at a set frequency, and to receive, within a preset time, the pose tracking data returned by the corresponding machines;
  • the VR all-in-one machine 320 is further configured to return its pose tracking data to the data processing server 310 in response to each received data request;
  • the data processing server 310 is further configured to fuse the pose tracking data returned by the corresponding machines and received within the preset time.
  • the data processing server 310 is specifically configured to apply coordinate transformation to the pose tracking data of the multiple users to obtain each user's pose fusion data in a single common coordinate system.
  • the VR all-in-one machine 320 is further configured to start a first thread that obtains the pose fusion data and uses it, together with the user IDs it carries, to render and update the user avatars having the same user IDs in the VR scene, and to start a second thread, parallel to the first, that obtains the pose tracking data of the machine's own user and uses it to update the pose state of the machine's own avatar in the VR scene.
  • the interactive system further includes a main controller networked with the data processing server 310;
  • the data processing server 310 is further configured to receive an interaction request sent by the VR all-in-one machine and send the received interaction request to the main controller, where the interaction request carries verification information and user data;
  • the main controller is used to confirm the interactive request sent by the VR all-in-one machine according to the verification information, and send the verification result to the data processing server;
  • the data processing server 310 is further configured to respond to the interaction request according to the verification result: if the verification passes, it sends a response allowing interaction to the VR all-in-one machine, and if the verification fails, it sends a response rejecting interaction to the machine;
  • the VR all-in-one machine 320 is further configured to receive the interaction response: if the response allows interaction, the user avatar of the verified machine is drawn in the VR scene based on the user data; if the response rejects interaction, the current interaction ends.
  • In summary, the users of multiple VR all-in-one machines are virtualized into the same VR scene, so that users in the same physical area or in different physical areas can experience the same VR environment.
  • The pose tracking data is fused, and the pose fusion data and user IDs obtained by data fusion are used to update the pose state of the avatar with the corresponding user ID in the VR scene displayed by each machine, realizing interaction between different user avatars in the scene; each user can observe, from a third-person perspective and in real time on the headset, the status of the other users in the VR scene, which enhances the VR experience and makes it possible for content creators to bring social and multi-user dimensions into the VR world.
  • FIG. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • the electronic device includes a processor, and optionally an internal bus, a network interface, and a memory.
  • the memory may include volatile memory, such as high-speed random-access memory (RAM), and may also include non-volatile memory, such as at least one disk memory.
  • the electronic equipment may also include hardware required for other services.
  • the processor, network interface and memory can be interconnected through an internal bus, which can be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like.
  • the bus can be divided into an address bus, a data bus, a control bus, and the like. For ease of presentation, only one bidirectional arrow is used in FIG. 4, but it does not mean that there is only one bus or one type of bus.
  • The memory is used to store a program; specifically, the program may include program code comprising computer operation instructions.
  • The memory may include volatile and non-volatile memory and provides instructions and data to the processor.
  • the processor reads the corresponding computer program from the non-volatile memory into the memory and runs it, forming an interactive system based on the virtual reality all-in-one machine at the logical level.
  • the processor executes the program stored in the memory, and is specifically used to perform the following operations:
  • drawing, according to user data of multiple virtual reality all-in-one machines, the user avatar of each machine in the virtual reality scene; acquiring in real time the pose tracking data of multiple users collected by the machines, the pose tracking data carrying user IDs; fusing the pose tracking data of the multiple users based on the carried user IDs to obtain each user's pose fusion data, the pose fusion data carrying the user ID; and using the pose fusion data and the user IDs it carries to update the pose states of the user avatars having the same user IDs in each machine's virtual reality scene, realizing interaction between different user avatars in the scene.
  • the above-mentioned method executed by the interactive system based on the virtual reality all-in-one machine disclosed in the embodiment shown in FIG. 3 of the present application may be applied to a processor, or implemented by a processor.
  • a processor may be an integrated circuit chip with signal processing capabilities.
  • each step of the above-mentioned method can be completed by an integrated logic circuit of hardware in the processor or an instruction in the form of software.
  • the above-mentioned processor can be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it can also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the steps of the method disclosed in conjunction with the embodiments of the present application may be directly embodied as executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor.
  • the software modules may be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other storage media mature in the art.
  • the storage medium is located in the memory, and the processor reads the information in the memory, and completes the steps of the above method in combination with its hardware.
  • the electronic device can also execute the interaction method based on the integrated virtual reality machine in FIG. 1 , and realize the functions of the interactive system based on the integrated virtual reality machine in the embodiment shown in FIG. 3 .
  • Those skilled in the art will understand that the modules in the devices of an embodiment can be adaptively changed and arranged in one or more devices different from that embodiment.
  • The modules, units, or components in the embodiments may be combined into one module, unit, or component, and they may furthermore be divided into multiple sub-modules, sub-units, or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed may be combined in any combination.
  • Unless expressly stated otherwise, each feature disclosed in this specification may be replaced by an alternative feature serving the same, an equivalent, or a similar purpose.
  • Various component embodiments of the present application may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof.
  • Those skilled in the art should understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the interaction system according to the embodiments of the present application.
  • the present application can also be implemented as a device or apparatus program (e.g., a computer program or computer program product) for performing part or all of the methods described herein.
  • Such a program implementing the present application may be stored on a computer-readable medium, or may be in the form of one or more signals. Such signals may be downloaded from Internet sites, or provided on carrier signals, or in any other form.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Business, Economics & Management (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • Computer Graphics (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • Software Systems (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • Primary Health Care (AREA)
  • Marketing (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

An interaction method and system based on a virtual reality all-in-one machine. The method includes: drawing, according to user data of multiple virtual reality all-in-one machines, the user avatar of each machine in a virtual reality scene (S110); acquiring in real time the pose tracking data of multiple users collected by the machines, the pose tracking data carrying user IDs (S120); fusing the pose tracking data of the multiple users based on the carried user IDs to obtain each user's pose fusion data, the pose fusion data carrying the user ID (S130); and using the pose fusion data and the user IDs it carries to update the pose states of the user avatars having the same user IDs in each machine's virtual reality scene, realizing interaction between different user avatars in the scene (S140). The method and system enable interaction between different user avatars in a VR scene and enhance the VR experience.

Description

An interaction method and system based on a virtual reality all-in-one machine

Technical Field

The present application relates to the technical field of virtual reality, and in particular to an interaction method and system based on a virtual reality all-in-one machine.
Background

Internet-based life and entertainment are undergoing a major change that is making people's lives and entertainment increasingly social, collaborative, and shared. However, VR-environment interaction on all-in-one headsets based on virtual reality (VR) technology is a notable exception: users of VR all-in-one machines often find that, while VR is fun, it is very isolating because it lacks a social dimension.
Summary

The embodiments of the present application provide an interaction method and system based on a virtual reality all-in-one machine, which are used to solve, or partially solve, the above problems.
In one aspect, an embodiment of the present application provides an interaction method based on a virtual reality all-in-one machine, including:
drawing, according to user data of multiple virtual reality all-in-one machines, the user avatar of each machine in a virtual reality scene;
acquiring in real time the pose tracking data of multiple users collected by the machines, the pose tracking data carrying user IDs;
fusing the pose tracking data of the multiple users based on the user IDs carried by the data to obtain each user's pose fusion data, the pose fusion data carrying the user ID;
using the pose fusion data and the user IDs it carries to update the pose states of the user avatars having the same user IDs in each machine's virtual reality scene, realizing interaction between different user avatars in the scene.
In another aspect, an embodiment of the present application provides an interaction system based on a virtual reality all-in-one machine, including a data processing server and multiple virtual reality all-in-one machines connected to the data processing server via a network;
the virtual reality all-in-one machine is used to draw, according to the user data of each machine, the user avatar of each machine in its own virtual reality scene, to collect the pose tracking data of its own user, and to send the collected pose tracking data to the data processing server, wherein the pose tracking data carries the user ID;
the data processing server is used to receive in real time the pose tracking data of multiple users sent by the multiple machines, to fuse the pose tracking data based on the user IDs it carries, and to send the resulting per-user pose fusion data to each machine, wherein the pose fusion data carries the user ID;
the virtual reality all-in-one machine is further used to update, with the pose fusion data and the user IDs it carries, the pose states of the user avatars having the same user IDs in its own virtual reality scene, realizing interaction between different user avatars in the scene.
In a third aspect, an embodiment of the present application further provides an electronic device, including: a processor; and a memory arranged to store computer-executable instructions which, when executed, cause the processor to perform the interaction method based on a virtual reality all-in-one machine described above.
In a fourth aspect, an embodiment of the present application further provides a computer-readable storage medium storing one or more programs which, when executed by an electronic device that includes multiple application programs, cause the electronic device to perform the interaction method based on a virtual reality all-in-one machine described above.
At least one of the technical solutions adopted by the embodiments of the present application can achieve the following beneficial effects:
The embodiments of the present application virtualize the users of multiple VR all-in-one machines into the same VR scene, so that users in the same physical area or in different physical areas can experience the same VR environment. By fusing the pose tracking data of the multiple users and using the resulting pose fusion data and user IDs to update the pose state of the avatar with the corresponding user ID in the VR scene displayed by each machine, interaction between different user avatars in the VR scene is realized; each user can observe, from a third-person perspective and in real time on the headset, the status of the other users in the VR scene, which enhances the VR experience and makes it possible for content creators to bring social and multi-user dimensions into the VR world.
Brief Description of the Drawings

FIG. 1 is a flowchart of an interaction method based on a VR all-in-one machine according to an embodiment of the present application;
FIG. 2 is a schematic diagram of multiple users performing remote VR interaction according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of an interaction system based on a VR all-in-one machine according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description

Exemplary embodiments will be described in detail here, examples of which are shown in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of devices and methods consistent with some aspects of the present application as detailed in the appended claims.
The terminology used in this application is for the purpose of describing particular embodiments only and is not intended to limit the application. The singular forms "a", "the", and "said" used in this application and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and includes any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in this application to describe various information, such information should not be limited by these terms; these terms are only used to distinguish information of the same type from each other. For example, without departing from the scope of the present application, first information may also be referred to as second information and, similarly, second information may be referred to as first information. Depending on the context, the word "if" as used herein can be interpreted as "at the time of", "when", or "in response to determining".
A VR all-in-one machine is usually set up so that each user wears a single headset for virtual reality interaction. This is mainly because the head-mounted nature of the all-in-one machine means that it can provide VR to only one user at a time. Therefore, when the users of two or more VR all-in-one machines are having a virtual reality experience in the real world at the same time, they may be immersed in different virtual reality environments, completely unaware of each other's experiences. For example, even when the users of two machines are physically close to each other, their VR experiences may be entirely different VR environments.
Based on the above, the embodiments of the present application provide a new interaction solution for users of VR all-in-one machines that makes the experience less isolated and more social and entertaining as a whole. The embodiments not only enable users of VR all-in-one machines in the same physical location or in different physical environments to see the same VR scene, but also allow different users to interact through their avatars in the VR environment.
FIG. 1 is a flowchart of an interaction method based on a VR all-in-one machine according to an embodiment of the present application. As shown in FIG. 1, the interaction method of this embodiment includes the following steps:
Step S110: according to the user data of multiple VR all-in-one machines, draw the user avatar of each machine in the VR scene.
In this embodiment, drawing the user avatar of each VR all-in-one machine in the VR scene can be understood as drawing, in the VR scene displayed by each machine, the avatar of that machine's own user together with the avatars of the users of the other machines.
The user data includes, but is not limited to, body-surface feature data such as the user's gender, height, and skin color; personalized drawing of the user avatar is implemented based on the user data.
Step S120: acquire in real time the pose tracking data of multiple users collected by the multiple VR all-in-one machines; the pose tracking data carries a user ID.
In this embodiment, each VR all-in-one machine collects the pose tracking data of its own user; acquiring the pose tracking data of multiple users collected by multiple machines can be understood as acquiring, from each machine, the pose tracking data of that machine's own user. Suppose there are N machines: the first machine collects the pose tracking data of its own user (denoted user 1), the second machine collects the pose tracking data of its own user (denoted user 2), and so on until the Nth machine collects the pose tracking data of its own user (denoted user N); the pose tracking data collected by the N machines is acquired in real time.
The pose tracking data is 6DoF (six degrees of freedom) data, including user position data and user attitude data; for example, the user's pose tracking data is collected using the sensor components built into the VR all-in-one machine and/or using the sensors in the handle controllers externally connected to the machine.
Step S130: fuse the pose tracking data of the multiple users based on the user IDs carried by the data to obtain each user's pose fusion data; the pose fusion data carries the user ID.
Step S140: use the pose fusion data and the user IDs it carries to update the pose states of the user avatars having the same user IDs in each VR all-in-one machine's VR scene, realizing interaction between different user avatars in the scene.
As can be seen from FIG. 1, this embodiment virtualizes the users of multiple VR all-in-one machines into the same VR scene, so that users in the same physical area or in different physical areas can experience the same VR environment. The users' pose tracking data is fused, and the resulting pose fusion data and user IDs are used to update the pose state of the avatar with the corresponding user ID in the VR scene displayed by each machine, realizing interaction between different user avatars in the scene; each user can observe, from a third-person perspective and in real time on the headset, the status of the other users in the VR scene, which enhances the VR experience and makes it possible for content creators to bring social and multi-user dimensions into the VR world.
Taking far-field interaction among the users of N VR all-in-one machines as an example, the implementation of the interaction method of FIG. 1 is described below in detail with reference to FIG. 2.
It should be noted that the embodiment shown in FIG. 2 uses multi-user remote social interaction as an example to describe the multi-user remote interaction method based on the VR all-in-one machine; the method can also be extended to other practical applications, such as VR-based multi-user remote office work and VR-based multi-user remote conferencing, and the embodiments of the present application do not limit the specific application scenario.
As shown in FIG. 2, this embodiment requires the construction of a data processing server. The server's hardware configuration and specifications, such as its processing capability and application-scene rendering capability, can be determined according to the number of clients in the actual application (that is, the VR all-in-one machines shown in FIG. 2) and the rendering complexity of the VR content. Each client connects to the network through a wireless network device, such as a wireless router, and in turn connects to the data processing server to establish network communication. This embodiment also constructs a main controller, which assumes the role of an administrator in the system and is used to manage the VR all-in-one machines in the system; in this embodiment, the main controller has the same physical structure as a VR all-in-one machine.
The VR all-in-one machine shown in FIG. 2 is a head-mounted all-in-one machine whose head-mounted end has built-in components such as a CPU, a GPU, and a wireless network module; all of the machine's computing and processing takes place on the head-mounted end. In one example, the head-mounted end of the machine shown in FIG. 2 has a built-in 6DoF positioning module that includes two or more camera sensors and a 9-axis IMU (inertial measurement unit) sensor; computer vision algorithms combine the camera data with the 9-axis IMU data to obtain, in real time, the machine's 6DoF pose tracking data, that is, the user's position and attitude relative to the real physical environment.
It should be noted that, in some embodiments, the VR all-in-one machine may also take other forms, which is not specifically limited in this application.
The VR all-in-one machine shown in FIG. 2 is externally connected to two handle controllers, which the user operates with the left and right hands respectively; using the handle controllers, the user can interact with the content in the VR scene in real time. In this embodiment, each handle controller has a built-in optical sensor, 3-axis electromagnetic sensor, or ultrasonic sensor; combining its data with the data of the 9-axis IMU sensor built into the controller, the 6DoF data of the handle is computed in real time, yielding the user's pose tracking data in real time.
It can be understood that the pose tracking data of multiple users acquired in real time in this embodiment may include data obtained through different sensor modules, for example the pose tracking data of the user's head collected by the 6DoF positioning module built into the head-mounted end shown in FIG. 2, and the pose tracking data of the user's hands collected by the sensor modules in the externally connected handle controllers. Both the head-mounted end's 6DoF data and the handle controllers' 6DoF data are computed by the CPU of the head-mounted end.
With reference to the interaction scenario of FIG. 2, acquiring in real time the pose tracking data of multiple users collected by multiple VR all-in-one machines in step S120 includes: sending data requests to the multiple machines at a set frequency, and receiving, within a preset time, the pose tracking data returned by the corresponding machines.
Correspondingly, fusing the pose tracking data of the multiple users based on the carried user IDs in step S130 includes fusing the pose tracking data returned by the corresponding machines and received within the preset time.
For example, the data processing server sends a data request to the N VR all-in-one machines every 1 ms; on receiving the request, each machine sends the currently collected pose tracking data of its own user to the server. While sending the request to the N machines, the server starts a timer and, within the timer's preset time, for example 20 ms, receives the pose tracking data returned by each machine. If, within the preset time, the server receives pose tracking data from only some of the N machines, it fuses just the data received from those machines.
In this embodiment, the data processing server sends data requests at a set frequency and, by setting a preset time for receiving data, fuses only the pose tracking data received within that time, avoiding the delay that would result from waiting for every machine to return its pose tracking data before fusing.
In some embodiments, data fusion can be performed as follows:
coordinate transformation is applied to the pose tracking data of the multiple users to obtain each user's pose fusion data in a single common coordinate system.
Because the pose tracking data returned by each VR all-in-one machine is described in that machine's own coordinate system, the data processing server of this embodiment, after receiving the pose tracking data, applies a coordinate transformation that maps the data into the coordinate system of the VR scene, so that each machine can subsequently update the user avatars in the VR scene based on the pose fusion data.
In some embodiments, updating, with the pose fusion data and the user IDs it carries, the pose states of the user avatars having the same user IDs in each VR all-in-one machine's VR scene in the above step S140 includes:
using a first thread started by each VR all-in-one machine to obtain the pose fusion data, and rendering and updating, with the pose fusion data and the user IDs it carries, the user avatars having the same user IDs in the VR scene; and
using a second thread, started by each VR all-in-one machine in parallel with the first thread, to obtain the pose tracking data of the machine's own user, and updating the pose state of the machine's own avatar in the VR scene with the collected data.
Because the frequency at which a VR all-in-one machine collects its own user's pose tracking data differs from the frequency at which the data processing server produces the pose fusion data, each machine in this embodiment starts two parallel processing threads to update, respectively, the pose state of its own avatar and the pose states of the other users' avatars, avoiding the update delays that different computation frequencies would otherwise cause.
In some embodiments, the above step S110 further includes: receiving an interaction request sent by a VR all-in-one machine, the interaction request carrying verification information and user data; confirming the interaction request sent by the machine according to the verification information; if the verification passes, sending a response allowing interaction to the machine and drawing, based on the user data, the user avatar of the verified machine in the VR scene; and if the verification fails, sending a response rejecting interaction to the machine.
With reference to the interaction scenario of FIG. 2, when a VR all-in-one machine initiates multi-user remote interaction, it sends an interaction request carrying verification information and user data to the data processing server; the server parses and saves the request and forwards it to the main controller, which makes a judgment based on the verification information carried by the request. If the verification passes, the main controller sends a response allowing interaction to the data processing server; on receiving that response, the server forwards it to the requesting machine and sends the saved user data of that machine to all the other machines in the system, so that the machines in the system draw the verified machine's user avatar in the VR scene. If the verification fails, the main controller sends a response rejecting interaction to the data processing server, which forwards it to the requesting machine.
In some embodiments, the interaction method shown in FIG. 1 further includes: receiving a removal command, the removal command carrying the user ID of the user avatar to be removed; and removing, according to that user ID, the user avatar having that user ID from the VR scene of each VR all-in-one machine.
Still with reference to the interaction scenario of FIG. 2, when the main controller receives a removal command, for example one sent by the administrator, it sends the command to the data processing server, which forwards it to each VR all-in-one machine; on receiving the command, each machine removes the corresponding user avatar from its VR scene.
Based on the above description, this embodiment can realize multi-user VR scene interaction.
Those of ordinary skill in the art will understand that all or part of the steps of the methods of the above embodiments can be completed by instructing the relevant hardware through a program; the program, which can perform the steps described in FIG. 1, can be stored in a computer-readable storage medium such as a ROM/RAM, a magnetic disk, or an optical disc.
Corresponding to the foregoing method, the present application further provides an interaction system based on a virtual reality all-in-one machine. FIG. 3 is a schematic structural diagram of an interaction system based on a VR all-in-one machine according to an embodiment of the present application. As shown in FIG. 3, the system 300 of this embodiment includes a data processing server 310 and multiple VR all-in-one machines 320 connected to the data processing server 310 via a network;
the VR all-in-one machine 320 is used to draw, according to the user data of each machine, the user avatar of each machine in its own VR scene, to collect the pose tracking data of its own user, and to send the collected pose tracking data to the data processing server 310, wherein the pose tracking data carries the user ID;
the data processing server 310 is used to receive in real time the pose tracking data of multiple users sent by the multiple machines, to fuse the pose tracking data based on the user IDs it carries, and to send the resulting per-user pose fusion data to each machine, wherein the pose fusion data carries the user ID;
the VR all-in-one machine 320 is further used to update, with the pose fusion data and the user IDs it carries, the pose states of the user avatars having the same user IDs in its own VR scene, realizing interaction between different user avatars in the scene.
In some embodiments, the data processing server 310 is further used to send data requests to the multiple VR all-in-one machines at a set frequency, and to receive, within a preset time, the pose tracking data returned by the corresponding machines;
the VR all-in-one machine 320 is further used to return its pose tracking data to the data processing server 310 in response to each received data request;
the data processing server 310 is further used to fuse the pose tracking data returned by the corresponding machines and received within the preset time.
In some embodiments, the data processing server 310 is specifically used to apply coordinate transformation to the pose tracking data of the multiple users to obtain each user's pose fusion data in a single common coordinate system.
In some embodiments, the VR all-in-one machine 320 is further used to start a first thread that obtains the pose fusion data and uses it, together with the user IDs it carries, to render and update the user avatars having the same user IDs in the VR scene; and to start a second thread, parallel to the first thread, that obtains the pose tracking data of the machine's own user and uses it to update the pose state of the machine's own avatar in the VR scene.
In some embodiments, the interaction system further includes a main controller connected to the data processing server 310 via the network;
the data processing server 310 is further used to receive an interaction request sent by a VR all-in-one machine and to send the received request to the main controller, the interaction request carrying verification information and user data;
the main controller is used to confirm the interaction request sent by the machine according to the verification information, and to send the verification result to the data processing server;
the data processing server 310 is further used to respond to the interaction request according to the verification result: if the verification passes, it sends a response allowing interaction to the machine; if the verification fails, it sends a response rejecting interaction to the machine;
the VR all-in-one machine 320 is further used to receive the interaction response: if the response allows interaction, the user avatar of the verified machine is drawn in the VR scene based on the user data; if the response rejects interaction, the current interaction ends.
In summary, the embodiments of the present application virtualize the users of multiple VR all-in-one machines into the same VR scene, so that users in the same physical area or in different physical areas can experience the same VR environment. The users' pose tracking data is fused, and the resulting pose fusion data and user IDs are used to update the pose state of the avatar with the corresponding user ID in the VR scene displayed by each machine, realizing interaction between different user avatars in the scene; each user can observe, from a third-person perspective and in real time on the headset, the status of the other users in the VR scene, which enhances the VR experience and makes it possible for content creators to bring social and multi-user dimensions into the VR world.
FIG. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application. Referring to FIG. 4, at the hardware level the electronic device includes a processor and, optionally, an internal bus, a network interface, and a memory. The memory may include volatile memory, such as high-speed random-access memory (RAM), and may also include non-volatile memory, such as at least one disk memory. Of course, the electronic device may also include hardware required by other services.
The processor, the network interface, and the memory can be interconnected through an internal bus, which can be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus can be divided into an address bus, a data bus, a control bus, and so on. For ease of presentation, only one bidirectional arrow is used in FIG. 4, but this does not mean that there is only one bus or one type of bus.
The memory is used to store a program. Specifically, the program may include program code comprising computer operation instructions. The memory may include volatile and non-volatile memory and provides instructions and data to the processor.
The processor reads the corresponding computer program from the non-volatile memory into the volatile memory and runs it, forming, at the logical level, the interaction system based on the virtual reality all-in-one machine. The processor executes the program stored in the memory and is specifically used to perform the following operations:
drawing, according to user data of multiple virtual reality all-in-one machines, the user avatar of each machine in a virtual reality scene;
acquiring in real time the pose tracking data of multiple users collected by the machines, the pose tracking data carrying user IDs;
fusing the pose tracking data of the multiple users based on the user IDs carried by the data to obtain each user's pose fusion data, the pose fusion data carrying the user ID;
using the pose fusion data and the user IDs it carries to update the pose states of the user avatars having the same user IDs in each machine's virtual reality scene, realizing interaction between different user avatars in the scene.
The method performed by the interaction system based on the virtual reality all-in-one machine disclosed in the embodiment shown in FIG. 3 of the present application can be applied in a processor, or implemented by a processor. The processor may be an integrated circuit chip with signal processing capability. In implementation, each step of the above method can be completed by an integrated hardware logic circuit in the processor or by instructions in the form of software. The above processor can be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it can also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and it can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like. The steps of the methods disclosed in conjunction with the embodiments of the present application may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random-access memory, a flash memory, a read-only memory, a programmable read-only memory or electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
The electronic device can also perform the interaction method based on the virtual reality all-in-one machine of FIG. 1 and realize the functions of the interaction system based on the virtual reality all-in-one machine in the embodiment shown in FIG. 3, which are not repeated here.
It should be noted that:
The algorithms and displays provided here are not inherently related to any particular computer, virtual device, or other apparatus. Various general-purpose devices can also be used together with the teachings herein, and from the above description the structure required to construct such devices is apparent. Moreover, the present application is not directed to any particular programming language; it should be understood that the content of the application described herein can be implemented using a variety of programming languages, and the above description of a specific language is intended to disclose the best mode of the application.
The specification provided here sets out a large number of specific details. It will be understood, however, that the embodiments of the present application can be practiced without these specific details. In some instances, well-known methods, structures, and techniques have not been shown in detail so as not to obscure the understanding of this specification.
Similarly, it should be understood that, in order to streamline the present application and aid the understanding of one or more of the various inventive aspects, in the above description of exemplary embodiments of the application the various features of the application are sometimes grouped together into a single embodiment, figure, or description thereof. However, the disclosed method should not be interpreted as reflecting an intention that the claimed application requires more features than are expressly recited in each claim. Rather, as the following claims reflect, the inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into that detailed description, with each claim standing on its own as a separate embodiment of the present application.
Those skilled in the art will understand that the modules in the devices of an embodiment can be adaptively changed and arranged in one or more devices different from that embodiment. The modules, units, or components in the embodiments may be combined into one module, unit, or component, and they may furthermore be divided into multiple sub-modules, sub-units, or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, an equivalent, or a similar purpose.
Furthermore, those skilled in the art will appreciate that although some embodiments described herein include certain features that are included in other embodiments but not others, combinations of features of different embodiments are meant to be within the scope of the present application and to form different embodiments. For example, in the following claims, any one of the claimed embodiments can be used in any combination.
The various component embodiments of the present application may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art should understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the interaction system according to the embodiments of the present application. The present application can also be implemented as a device or apparatus program (for example, a computer program or computer program product) for performing part or all of the methods described herein. Such a program implementing the present application may be stored on a computer-readable medium, or may take the form of one or more signals; such signals may be downloaded from an Internet site, provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the present application, and that those skilled in the art can design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The present application can be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices may be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any order; these words may be interpreted as names.

Claims (12)

  1. An interaction method based on a virtual reality all-in-one machine, comprising:
    drawing, according to user data of multiple virtual reality all-in-one machines, the user avatar of each virtual reality all-in-one machine in a virtual reality scene;
    acquiring in real time the pose tracking data of multiple users collected by the multiple virtual reality all-in-one machines, the pose tracking data carrying user IDs;
    fusing the pose tracking data of the multiple users based on the user IDs carried by the pose tracking data to obtain each user's pose fusion data, the pose fusion data carrying the user ID;
    using the pose fusion data and the user IDs it carries to update the pose states of the user avatars having the same user IDs in the virtual reality scene of each virtual reality all-in-one machine, realizing interaction between different user avatars in the virtual reality scene.
  2. The method according to claim 1, wherein acquiring in real time the pose tracking data of multiple users collected by multiple virtual reality all-in-one machines comprises:
    sending data requests to the multiple virtual reality all-in-one machines at a set frequency, and receiving, within a preset time, the pose tracking data returned by the corresponding machines.
  3. The method according to claim 2, wherein fusing the pose tracking data of the multiple users based on the user IDs carried by the pose tracking data comprises:
    fusing the pose tracking data returned by the corresponding virtual reality all-in-one machines and received within the preset time.
  4. The method according to claim 1, wherein fusing the pose tracking data of the multiple users based on the user IDs carried by the pose tracking data to obtain each user's pose fusion data comprises:
    applying coordinate transformation to the pose tracking data of the multiple users to obtain each user's pose fusion data in a single common coordinate system.
  5. The method according to claim 1, wherein using the pose fusion data and the user IDs it carries to update the pose states of the user avatars having the same user IDs in the virtual reality scene of each virtual reality all-in-one machine comprises:
    using a first thread started by each virtual reality all-in-one machine to obtain the pose fusion data, and rendering and updating, with the pose fusion data and the user IDs it carries, the user avatars having the same user IDs in the virtual reality scene; and
    using a second thread, started by each virtual reality all-in-one machine in parallel with the first thread, to obtain the pose tracking data of the machine's own user, and updating the pose state of the machine's own avatar in the virtual reality scene with the collected data.
  6. The method according to claim 1, wherein drawing, according to the user data of multiple virtual reality all-in-one machines, the user avatar of each machine in the virtual reality scene of each machine comprises:
    receiving an interaction request sent by a virtual reality all-in-one machine, the interaction request carrying verification information and user data;
    confirming the interaction request sent by the machine according to the verification information; if the verification passes, sending a response allowing interaction to the machine, and drawing, based on the user data, the user avatar of the verified machine in the virtual reality scene; if the verification fails, sending a response rejecting interaction to the machine.
  7. The method according to claim 6, further comprising:
    receiving a removal command, the removal command carrying the user ID of the user avatar to be removed;
    removing, according to the user ID of the user avatar to be removed, the user avatar having that user ID from the virtual reality scene of each virtual reality all-in-one machine.
  8. An interaction system based on a virtual reality all-in-one machine, comprising: a data processing server, and multiple virtual reality all-in-one machines connected to the data processing server via a network;
    the virtual reality all-in-one machine is used to draw, according to the user data of each machine, the user avatar of each machine in its own virtual reality scene, to collect the pose tracking data of its own user, and to send the collected pose tracking data to the data processing server, wherein the pose tracking data carries the user ID;
    the data processing server is used to receive in real time the pose tracking data of multiple users sent by the multiple machines, to fuse the pose tracking data based on the user IDs it carries, and to send the resulting per-user pose fusion data to each machine, wherein the pose fusion data carries the user ID;
    the virtual reality all-in-one machine is further used to update, with the pose fusion data and the user IDs it carries, the pose states of the user avatars having the same user IDs in its own virtual reality scene, realizing interaction between different user avatars in the scene.
  9. The interaction system according to claim 8, wherein
    the data processing server is further used to send data requests to the multiple virtual reality all-in-one machines at a set frequency, and to receive, within a preset time, the pose tracking data returned by the corresponding machines;
    the virtual reality all-in-one machine is further used to return its pose tracking data to the data processing server in response to each received data request;
    the data processing server is further used to fuse the pose tracking data returned by the corresponding machines and received within the preset time.
  10. The interaction system according to claim 9, further comprising a main controller connected to the data processing server via the network;
    the data processing server is further used to receive an interaction request sent by a virtual reality all-in-one machine and to send the received request to the main controller, the interaction request carrying verification information and user data;
    the main controller is used to confirm the interaction request sent by the machine according to the verification information, and to send the verification result to the data processing server;
    the data processing server is further used to respond to the interaction request according to the verification result: if the verification passes, it sends a response allowing interaction to the machine; if the verification fails, it sends a response rejecting interaction to the machine;
    the virtual reality all-in-one machine is further used to receive the interaction response: if the response allows interaction, the user avatar of the verified machine is drawn in the virtual reality scene based on the user data; if the response rejects interaction, the current interaction ends.
  11. An electronic device, comprising: a processor; and a memory arranged to store computer-executable instructions which, when executed, cause the processor to perform the interaction method based on a virtual reality all-in-one machine according to any one of claims 1-7.
  12. A computer-readable storage medium storing one or more programs which, when executed by an electronic device that includes multiple application programs, cause the electronic device to perform the interaction method based on a virtual reality all-in-one machine according to any one of claims 1-7.
PCT/CN2021/112624 2020-08-14 2021-08-13 An interaction method and system based on a virtual reality all-in-one machine WO2022033596A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/881,920 US11720169B2 (en) 2020-08-14 2022-08-05 Interaction method and system based on virtual reality equipment

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010818597.6A CN112130660B (zh) 2020-08-14 An interaction method and system based on a virtual reality all-in-one machine
CN202010818597.6 2020-08-14

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/881,920 Continuation US11720169B2 (en) 2020-08-14 2022-08-05 Interaction method and system based on virtual reality equipment

Publications (1)

Publication Number Publication Date
WO2022033596A1 (zh)

Family

ID=73851564

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/112624 WO2022033596A1 (zh) 2021-08-13 An interaction method and system based on a virtual reality all-in-one machine

Country Status (3)

Country Link
US (1) US11720169B2 (zh)
CN (1) CN112130660B (zh)
WO (1) WO2022033596A1 (zh)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112130660B (zh) 2020-08-14 2024-03-15 青岛小鸟看看科技有限公司 An interaction method and system based on a virtual reality all-in-one machine
CN113262465A (zh) * 2021-04-27 2021-08-17 青岛小鸟看看科技有限公司 A virtual reality interaction method, device, and system
CN117115400A (zh) * 2023-09-15 2023-11-24 深圳市红箭头科技有限公司 Method, apparatus, computer device, and storage medium for displaying full-body human motion in real time

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170243324A1 (en) * 2016-02-22 2017-08-24 Google Inc. Separate time-warping for a scene and an object for display of virtual reality content
US20180268589A1 (en) * 2017-03-16 2018-09-20 Linden Research, Inc. Virtual reality presentation of body postures of avatars
CN109671118A (zh) * 2018-11-02 2019-04-23 北京盈迪曼德科技有限公司 A virtual reality multi-user interaction method, apparatus, and system
CN110928414A (zh) * 2019-11-22 2020-03-27 上海交通大学 A three-dimensional virtual-real fusion experiment system
CN111307146A (zh) * 2020-03-02 2020-06-19 北京航空航天大学青岛研究院 A positioning system for a virtual reality head-mounted display device based on binocular cameras and an IMU
CN112130660A (zh) * 2020-08-14 2020-12-25 青岛小鸟看看科技有限公司 An interaction method and system based on a virtual reality all-in-one machine

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120306850A1 (en) * 2011-06-02 2012-12-06 Microsoft Corporation Distributed asynchronous localization and mapping for augmented reality
US20160012465A1 (en) * 2014-02-08 2016-01-14 Jeffrey A. Sharp System and method for distributing, receiving, and using funds or credits and apparatus thereof
US10062208B2 (en) * 2015-04-09 2018-08-28 Cinemoi North America, LLC Systems and methods to provide interactive virtual environments
US9898864B2 (en) * 2015-05-28 2018-02-20 Microsoft Technology Licensing, Llc Shared tactile interaction and user safety in shared space multi-person immersive virtual reality
US20170249785A1 (en) * 2016-02-29 2017-08-31 Vreal Inc Virtual reality session capture and replay systems and methods
US20180255285A1 (en) * 2017-03-06 2018-09-06 Universal City Studios Llc Systems and methods for layered virtual features in an amusement park environment
US20180359448A1 (en) * 2017-06-07 2018-12-13 Digital Myths Studio, Inc. Multiparty collaborative interaction in a virtual reality environment
CN107263473A (zh) * 2017-06-19 2017-10-20 中国人民解放军国防科学技术大学 A human-computer interaction method based on virtual reality
CN107272454A (zh) * 2017-06-19 2017-10-20 中国人民解放军国防科学技术大学 A real-time human-computer interaction method based on virtual reality
US10445941B2 (en) * 2017-06-21 2019-10-15 Number 9, LLC Interactive mixed reality system for a real-world event
CN109313484B (zh) * 2017-08-25 2022-02-01 深圳市瑞立视多媒体科技有限公司 Virtual reality interaction system, method, and computer storage medium
US10672190B2 (en) * 2017-10-05 2020-06-02 Microsoft Technology Licensing, Llc Customizing appearance in mixed reality
CN108020223B (zh) * 2017-11-29 2021-05-18 北京众绘虚拟现实技术研究院有限公司 An attitude measurement method for the handle of a force-feedback device based on an inertial measurement unit
CN109445573A (zh) * 2018-09-14 2019-03-08 重庆爱奇艺智能科技有限公司 A method and apparatus for virtual avatar image interaction
CN110379014A (zh) * 2019-07-30 2019-10-25 招商局重庆交通科研设计院有限公司 An interactive road simulation method and platform based on BIM+VR technology
CN111199561B (zh) * 2020-01-14 2021-05-18 上海曼恒数字技术股份有限公司 A multi-user collaborative positioning method and system for virtual reality devices


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115828836A (zh) * Electrical circuit arrangement method and system, computer, and readable storage medium
CN115828836B (zh) * 2023-02-16 2023-05-05 江西格如灵科技有限公司 Electrical circuit arrangement method and system, computer, and readable storage medium

Also Published As

Publication number Publication date
CN112130660A (zh) 2020-12-25
US11720169B2 (en) 2023-08-08
CN112130660B (zh) 2024-03-15
US20220374073A1 (en) 2022-11-24

Similar Documents

Publication Publication Date Title
WO2022033596A1 (zh) An interaction method and system based on a virtual reality all-in-one machine
CN105050671B (zh) Client rendering system and method for latency-sensitive game features
JP6244593B1 (ja) Information processing method, apparatus, and program for causing a computer to execute the information processing method
US10438394B2 Information processing method, virtual space delivering system and apparatus therefor
US9842164B2 Avatar service system and method for animating avatar on a terminal on a network
JP6462059B1 (ja) Information processing method, information processing program, information processing system, and information processing device
EP2527019A2 Sprite strip renderer
US20110197201A1 Network based real-time virtual reality input/output system and method for heterogeneous environment
US11278810B1 Menu placement dictated by user ability and modes of feedback
CN109271021B (zh) Control method and apparatus for a head-mounted device, head-mounted device, and storage medium
WO2017071385A1 (zh) Method and apparatus for controlling a target object in a virtual reality scene
JP2022121451A (ja) Program, information processing device, information processing system, and information processing method
JP6113897B1 (ja) Method for providing a virtual space, method for providing a virtual experience, program, and recording medium
US10688399B2 Group gameplay with users in proximity using a gaming platform
WO2022174575A1 (zh) Method and system for switching the interaction mode of a head-mounted device
JP6999538B2 (ja) Information processing method, information processing program, information processing system, and information processing device
JP7041484B2 (ja) Program, information processing device, information processing system, and information processing method
WO2007029568A1 (ja) Image processing program and image processing system using the same
JP2020042593A (ja) Program, information processing device, and method
JPWO2019102676A1 (ja) Information processing device, information processing method, and program
JP7438786B2 (ja) Program, information processing method, and information processing device
JP2018011193A (ja) Information processing method and program for causing a computer to execute the information processing method
JP6189495B1 (ja) Method for providing a virtual space, method for providing a virtual experience, program, and recording medium
EP4384290A1 Aligning scanned environments for multi-user communication sessions
JP2024069926A (ja) Program, computer system, and image processing method for recording

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21855646

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 12/06/2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21855646

Country of ref document: EP

Kind code of ref document: A1