CN106843507B - Virtual reality multi-person interaction method and system - Google Patents

Virtual reality multi-person interaction method and system

Info

Publication number
CN106843507B
CN106843507B (application CN201710185939.3A / CN201710185939A)
Authority
CN
China
Prior art keywords
data
inertial
optical positioning
output
module
Prior art date
Legal status
Active
Application number
CN201710185939.3A
Other languages
Chinese (zh)
Other versions
CN106843507A (en)
Inventor
徐志
邱春麟
Current Assignee
Suzhou Multispace Media & Exhibition Co ltd
Original Assignee
Suzhou Multispace Media & Exhibition Co ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Multispace Media & Exhibition Co ltd
Priority to CN201710185939.3A
Publication of CN106843507A
Application granted
Publication of CN106843507B
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2215/00Indexing scheme for image rendering
    • G06T2215/16Using real world measurements to influence rendering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20Indexing scheme for editing of 3D models
    • G06T2219/2004Aligning objects, relative positioning of parts

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a virtual reality multi-person interaction system, comprising: a head-mounted visual device (1), an image rendering computer (2), a hybrid motion capture spatial positioning system (3) and a central server (4). The head-mounted visual device (1) is connected to a corresponding image rendering computer (2), and the hybrid motion capture spatial positioning system (3) comprises: a plurality of optical positioning modules (31) and inertial motion capture modules (32), and a hybrid motion capture server (33). The optical positioning module (31) comprises a first output (311); the inertial motion capture module (32) comprises a second output (321); the hybrid motion capture server (33) comprises a first input (331), a second input (332) and a third output (333). The third output (333) is connected to the image rendering computer (2), and the image rendering computer (2) provides the display image output by the head-mounted visual device (1). The invention is convenient to operate, simple in structure and of great commercial value.

Description

Virtual reality multi-person interaction method and system
Technical Field
The invention relates to the field of virtual reality, in particular to a method and a system for virtual reality multi-person interaction.
Background
Virtual reality technology is a computer simulation technique for creating and experiencing virtual worlds. A computer generates a simulated environment that fuses multi-source information into interactive three-dimensional dynamic views and entity behaviours, immersing the user in that environment.
Virtual reality technology is developing and innovating continuously, yet using it to let multiple people interact within one virtual environment is still a goal that is being explored. For example, how several people can give a real-time performance together from their homes, or how several people can play a virtual basketball game without a basketball court, are problems that remain to be solved.
At present, no method or system for virtual reality multi-person interaction is available.
Disclosure of Invention
In view of the technical defects of the prior art, the invention aims to provide a virtual reality multi-person interaction system, comprising: a head-mounted visual device 1, an image rendering computer 2, a hybrid motion capture spatial positioning system 3 and a central server 4. The head-mounted visual device 1 is connected to a corresponding image rendering computer 2, and the hybrid motion capture spatial positioning system 3 comprises: a plurality of optical positioning modules 31 and inertial motion capture modules 32 arranged at multiple points on the subject, and a hybrid motion capture server 33. The optical positioning module 31 comprises a first output 311; the inertial motion capture module 32 comprises a second output 321; the hybrid motion capture server 33 comprises a first input 331, a second input 332 and a third output 333. The first output 311 is connected to the first input 331, the second output 321 is connected to the second input 332, and the third output 333 is connected to the image rendering computer 2, which provides the display image output by the head-mounted visual device 1.
Preferably, the head-mounted visual device 1 comprises a display lens 11 and a display image input 12; the display lens 11 is connected to the display image input 12, and the display image input 12 is connected to the image rendering computer 2.
Preferably, the image rendering computer 2 comprises a hybrid data input 21, an image production module 22, an image rendering module 23 and a display image output 24 connected in sequence; the hybrid data input 21 is connected to the third output 333, and the display image output 24 is connected to the head-mounted visual device 1.
Preferably, the first output 311 is an optical positioning data output, the second output 321 is an inertial motion capture data output, and the third output 333 is a hybrid data output.
Preferably, the optical positioning module 31 comprises optical positioning points 312, an infrared camera 313 and a positioning processor 314. The optical positioning points 312 are arranged at a plurality of first joint points of the subject; the infrared camera 313 captures an infrared image of the optical positioning points 312 and transmits it to the positioning processor 314; the first output 311 is the output of the positioning processor 314.
Preferably, the inertial motion capture module 32 comprises sensors 322 and an inertial motion capture processor 323. The sensors 322 are arranged at a plurality of second joint points of the subject and collect the acceleration of the second joint points and the angular velocity of the links between them. The inertial motion capture processor 323 comprises an acquisition input and an orientation inertial positioning data output; the acquisition input is connected to the sensors 322, and the second output 321 is the orientation inertial positioning data output.
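For illustration, the two data streams described above could be modelled as simple records like the ones sketched below; the type and field names are assumptions made for this sketch, not terms from the patent.

```python
from dataclasses import dataclass
from typing import Tuple

# Hypothetical record types for the two data streams produced by the modules above.
# Field names (marker_id, joint_id, ...) are illustrative assumptions.

@dataclass
class OpticalSample:
    marker_id: int                           # index of the optical positioning point (first joint point)
    position: Tuple[float, float, float]     # 3D position recovered from the infrared images

@dataclass
class InertialSample:
    joint_id: int                                  # index of the second joint point carrying the sensor
    acceleration: Tuple[float, float, float]       # linear acceleration at the joint
    angular_velocity: float                        # angular velocity of the link to the next joint
    position: Tuple[float, float, float]           # position estimated by the inertial motion capture processor

print(OpticalSample(0, (0.1, 1.5, 0.0)))
print(InertialSample(0, (0.0, -9.8, 0.0), 0.2, (0.12, 1.48, 0.01)))
```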
Preferably, the hybrid motion capture server 33 further comprises a calibration module 334, which compares the data from the first input 331 and the second input 332 and outputs the calibrated data at the third output 333.
According to another aspect of the present invention, there is provided a method of virtual reality multi-person interaction, comprising:
collecting optical positioning data and inertial motion capture data from the optical positioning module and the inertial motion capture module;
processing and calibrating the optical positioning data and the inertial motion capture data to form hybrid data;
and generating a moving image based on the hybrid data and performing image rendering to form a display image.
Preferably, the optical positioning data is P1′(x1′, y1′, z1′) and the inertial motion capture data is P2′(x2′, y2′, z2′), and processing and calibrating the optical positioning data and the inertial motion capture data to form hybrid data comprises the following steps:
capturing, in a standard human body posture, corresponding standard optical positioning data P1(x1, y1, z1) and standard inertial motion capture data P2(x2, y2, z2);
performing a matching step on the optical positioning data and the inertial motion capture data based on the standard optical positioning data and the standard inertial motion capture data, specifically: |x1′ − x2′| ≤ α(|x1 − x2| + |y1 − y2| + |z1 − z2|), |y1′ − y2′| ≤ α(|x1 − x2| + |y1 − y2| + |z1 − z2|), |z1′ − z2′| ≤ α(|x1 − x2| + |y1 − y2| + |z1 − z2|);
and calculating the average coordinate values of all successfully matched optical positioning data and inertial motion capture data as the hybrid data.
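A minimal numeric sketch of this matching-and-averaging step is given below. It assumes the garbled relation in the criterion reads as a tolerance comparison (≤) scaled by α; the value of α, the function name and the example numbers are illustrative assumptions rather than values from the patent.

```python
import numpy as np

def match_and_fuse(optical, inertial, std_optical, std_inertial, alpha=1.5):
    """Match optical and inertial samples per module pair, then average the matched pairs.

    optical, inertial        : (N, 3) arrays of current measurements P1', P2'
    std_optical, std_inertial: (N, 3) arrays of standard-pose references P1, P2
    Returns the hybrid coordinates (mean of each matched pair) and a boolean match mask.
    """
    optical, inertial = np.asarray(optical, float), np.asarray(inertial, float)
    std_optical, std_inertial = np.asarray(std_optical, float), np.asarray(std_inertial, float)

    # Reference spread between the standard optical and inertial coordinates:
    # |x1 - x2| + |y1 - y2| + |z1 - z2| for each module pair.
    ref = np.abs(std_optical - std_inertial).sum(axis=1, keepdims=True)

    # A pair matches when every axis difference of the current data stays within
    # alpha times the reference spread (assumed reading of the patent's criterion).
    matched = np.all(np.abs(optical - inertial) <= alpha * ref, axis=1)

    # Hybrid data: average coordinate values of all successfully matched pairs.
    hybrid = 0.5 * (optical[matched] + inertial[matched])
    return hybrid, matched

# Example: two module pairs; the second inertial sample drifted too far to match.
opt = [[0.10, 1.50, 0.00], [0.5, 1.0, 0.2]]
ine = [[0.12, 1.48, 0.01], [2.5, 3.0, 0.2]]
std_opt = [[0.00, 1.50, 0.00], [0.50, 1.00, 0.00]]
std_ine = [[0.05, 1.45, 0.02], [0.45, 1.05, 0.05]]
print(match_and_fuse(opt, ine, std_opt, std_ine))
```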
Preferably, if part of the optical positioning data and part of the inertial motion capture data take no part in the matching step but the standard optical positioning data and standard inertial motion capture data are completely matched, there is no missing data in the optical positioning data and the inertial motion capture data;
if part of the optical positioning data and part of the inertial motion capture data take no part in the matching step and the standard optical positioning data and standard inertial motion capture data are not completely matched, there is missing data in the optical positioning data and the inertial motion capture data.
Preferably, if there is missing data in the optical positioning data and the inertial motion capture data, multiple interpolation is used to repair them, and the matching step is performed on the repaired optical positioning data and inertial motion capture data.
Preferably, the method further comprises:
providing the head-mounted visual device and outputting the display image through the head-mounted visual device.
Preferably, the head-mounted visual device further comprises a human body support, and the optical positioning module and the inertial motion capture module are arranged on the human body support.
The invention has the following beneficial effects: a person's motion is captured and analysed by the hybrid motion capture spatial positioning system; the analysis result is sent to the image rendering computer for rendering; the rendering result is sent to the central server, which integrates the rendering results from all users and determines the final rendering result; the final result is then sent to each person's head-mounted visual device, where the user sees it. The invention is convenient to operate, simple in structure and of great commercial value.
Drawings
Other features, objects and advantages of the present invention will become more apparent upon reading of the detailed description of non-limiting embodiments, given with reference to the accompanying drawings in which:
FIG. 1 is a schematic diagram of the module connection of a virtual reality multi-person interaction system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the module connection of the optical positioning module according to a first embodiment of the present invention;
FIG. 3 is a schematic diagram of the module connection of the inertial motion capture module according to a second embodiment of the present invention;
FIG. 4 is a schematic diagram of the module connection of the hybrid motion capture server according to a third embodiment of the present invention; and
FIG. 5 is a schematic flow chart of a method of virtual reality multi-person interaction according to another embodiment of the present invention.
Detailed Description
In order to present the technical solution of the invention more clearly, the invention is further described below with reference to the accompanying drawings.
Fig. 1 shows a schematic diagram of the module connection of a virtual reality multi-person interaction system according to an embodiment of the present invention. As those skilled in the art will understand, the system comprises a plurality of positioning modules arranged on each human body whose data are reported to a central server; the central server integrates the real-time motions of the multiple users, renders them, and feeds the result back to each person's head-mounted visual device, so that multi-person interaction in a virtual environment is achieved simply by wearing the visual device. Specifically, the virtual reality multi-person interaction system comprises: a head-mounted visual device 1, an image rendering computer 2, a hybrid motion capture spatial positioning system 3 and a central server 4. As those skilled in the art will understand, the head-mounted visual device 1 may be a virtual reality helmet or virtual reality glasses and mainly serves to let the user see the virtual reality scene. More specifically, the head-mounted visual device further comprises a human body support fitted to the shape of the human body; the optical positioning module and the inertial motion capture module are arranged on this support and are used to acquire the optical positioning data and the inertial motion capture data, as further described in the following embodiments and not repeated here.
The image rendering computer 2 may be a processing unit arranged on the head-mounted visual device 1 or a fixed computing device located away from it. It mainly analyses the captured human motions it receives and, based on that analysis, renders the motions within the virtual scene. The hybrid motion capture spatial positioning system 3 mainly captures the motions of the people taking part in the multi-person interaction and derives their trajectories and directions, providing the conditions and basis for rendering by the image rendering computer 2. The central server integrates the motions captured by the hybrid motion capture spatial positioning system 3, renders different scenes for the different head-mounted visual devices 1, and sends them to each head-mounted visual device.
Preferably, the head-mounted visual device 1 is connected to a corresponding image rendering computer 2, and the hybrid motion capture spatial positioning system 3 comprises: a plurality of optical positioning modules 31 and inertial motion capture modules 32 arranged at multiple points on the subject, and a hybrid motion capture server 33. The optical positioning module 31 comprises a first output 311; the inertial motion capture module 32 comprises a second output 321; the hybrid motion capture server 33 comprises a first input 331, a second input 332 and a third output 333, and the image rendering computer 2 is in turn connected to the corresponding head-mounted visual device 1. In this embodiment, the head-mounted visual device 1 is connected to one corresponding image rendering computer 2; in other embodiments, each head-mounted visual device 1 is connected to its own image rendering computer 2. In a preferred embodiment, the optical positioning modules 31 and inertial motion capture modules 32 are arranged at the head and hands of the subject, while in other embodiments they may also be arranged at the feet, waist and other positions.
The optical positioning module 31 achieves positioning mainly by infrared imaging, while the inertial motion capture module 32 captures motion trajectory and position by means of velocity sensors. The hybrid motion capture server 33 integrates and computes the data from the optical positioning module 31 and the inertial motion capture module 32 to obtain the most preferred motion direction and trajectory. The first input 331 and the second input 332 of the hybrid motion capture server 33 acquire the data of the optical positioning module 31 and the inertial motion capture module 32, and the third output 333 transmits the integrated data.
Preferably, the first output 311 is connected to the first input 331, the second output 321 is connected to the second input 332, and the third output 333 is connected to the image rendering computer 2, which provides the display image output by the head-mounted visual device 1. The first output 311 is an optical positioning data output; connecting the first output 311 to the first input 331 means that the optical positioning module 31 transmits its positioning result to the hybrid motion capture server 33, and connecting the second output 321 to the second input 332 means that the captured result is likewise transmitted to the hybrid motion capture server 33. The hybrid motion capture server 33 then transmits the integrated data through the third output 333 to the image rendering computer 2 for rendering, and the result is transmitted as a display image to the head-mounted visual device 1.
Preferably, the head-mounted visual device 1 comprises a display lens 11 connected to a display image input 12, and the display image input 12 is connected to the image rendering computer 2. In such an embodiment, the display lens fits the eyeglass region of the head and displays the display image coming from the image rendering computer 2, while the display image input 12 receives that image from the image rendering computer 2.
Preferably, the image rendering computer 2 comprises a hybrid data input 21, an image production module 22, an image rendering module 23 and a display image output 24 connected in sequence; the hybrid data input 21 is connected to the third output 333, and the display image output 24 is connected to the head-mounted visual device 1. As those skilled in the art will understand, the third output 333 is a hybrid data output, and the hybrid data input 21 receives the motion data from the third output 333 of the hybrid motion capture server 33. After acquiring the motion data, the image rendering computer 2 preferably builds the virtual environment and constructs the virtual scene through the image production module 22, renders the scene in combination with the motion data through the image rendering module 23, and integrates the rendering result into the virtual environment. The display image output 24 is connected to the display image input 12 of the head-mounted visual device 1 and transmits the final rendering result to the head-mounted visual device 1 through the display image input 12.
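A minimal in-memory sketch of the per-frame flow just described (hybrid data input → image production module → image rendering module → display image output) is shown below. The stage functions, the toy projection and the panel resolution are illustrative stand-ins, not the patent's implementation.

```python
import numpy as np

def image_production(hybrid_frame):
    """Image production module (stand-in): place the avatar joints into a virtual scene."""
    return {"avatar_joints": np.asarray(hybrid_frame, dtype=float), "scene": "virtual court"}

def image_rendering(scene):
    """Image rendering module (stand-in): draw the avatar joints into a display buffer
    using a trivial orthographic projection."""
    height, width = 1080, 1200                 # one eye of a typical HMD panel (assumed)
    image = np.zeros((height, width, 3), dtype=np.uint8)
    for x, y, _z in scene["avatar_joints"]:
        u = int(width / 2 + 100 * x)           # toy projection: metres -> pixels
        v = int(height / 2 - 100 * y)
        if 0 <= u < width and 0 <= v < height:
            image[v, u] = (255, 255, 255)
    return image

def per_frame(hybrid_frame):
    # hybrid data input -> image production -> image rendering -> display image output
    scene = image_production(hybrid_frame)
    return image_rendering(scene)

frame = per_frame([[0.0, 1.6, 0.0], [0.3, 1.2, 0.1]])
print(frame.shape)
```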
Fig. 2 shows a schematic diagram of the module connection of the optical positioning module as a first embodiment of the invention; the optical positioning module is the part of the hybrid motion capture spatial positioning system that provides visual positioning.
Further, the optical positioning module 31 comprises optical positioning points 312, an infrared camera 313 and a positioning processor 314. The optical positioning points 312 are arranged at a plurality of first joint points of the subject; the infrared camera 313 captures an infrared image of the optical positioning points 312 and transmits it to the positioning processor 314; the first output 311 is the output of the positioning processor 314.
In such an embodiment, the optical positioning points 312 are preferably placed at key parts of the human body, preferably at the head and hands, and may also be placed at the feet and similar locations, in order to locate the orientation of the body. The infrared camera 313 photographs the optical positioning points 312 in the infrared, obtaining infrared images that reflect the body's displacement, its change of orientation and the left-right, up-down and back-and-forth movement of the positioning points, and transmits them to the positioning processor. The positioning processor pre-processes the infrared data and transmits it to the first input 331 through the first output 311.
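As one illustration of the kind of pre-processing the positioning processor could perform on a single infrared frame, the sketch below thresholds the bright positioning points and extracts their pixel centroids with OpenCV. The threshold value and the synthetic test frame are assumptions, and recovering 3D positions from several cameras is a further step not shown here.

```python
import cv2              # OpenCV 4.x assumed
import numpy as np

def marker_centroids(ir_image, threshold=200):
    """Return (u, v) pixel centroids of bright optical positioning points in one IR frame."""
    _, mask = cv2.threshold(ir_image, threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    points = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:
            points.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return points

# Synthetic 8-bit infrared frame with two bright markers for demonstration.
frame = np.zeros((480, 640), dtype=np.uint8)
cv2.circle(frame, (100, 120), 4, 255, -1)
cv2.circle(frame, (400, 300), 4, 255, -1)
print(marker_centroids(frame))
```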
Fig. 3 shows a schematic diagram of the module connection of the inertial motion capture module as a second embodiment of the present invention; the inertial motion capture module is the other part of the hybrid motion capture spatial positioning system and provides positioning during motion.
Further, the inertial motion capture module 32 includes sensors 322 and an inertial motion capture processor 323. The sensors 322 sense the motion of the human body and are preferably acceleration sensors and angular velocity sensors; in other embodiments they may also include displacement sensors, height sensors and the like. The inertial motion capture processor 323 processes the data acquired by the sensors 322.
Further, the sensors 322 are arranged at a plurality of second joint points of the subject and collect the acceleration of the second joint points and the angular velocity of the links between them. As those skilled in the art will understand, the second joint points may coincide with the first joint points or may be placed at other joints, for example the hand joints, ankles, knees or shoulders. The inertial motion capture processor 323 comprises an acquisition input and an orientation inertial positioning data output; the acquisition input is connected to the sensors 322, the second output 321 is the orientation inertial positioning data output, and the second output 321 is connected to the second input 332.
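To illustrate how the inertial motion capture processor could turn the acceleration readings described above into position estimates, the sketch below double-integrates acceleration over time for one joint. Gravity compensation, drift correction and orientation tracking are omitted, and the sampling rate and names are assumptions for illustration only.

```python
import numpy as np

def integrate_joint(accelerations, dt=0.01, v0=None, p0=None):
    """accelerations: (T, 3) array of linear acceleration samples for one joint (m/s^2)."""
    acc = np.asarray(accelerations, dtype=float)
    vel = np.zeros(3) if v0 is None else np.asarray(v0, float)
    pos = np.zeros(3) if p0 is None else np.asarray(p0, float)
    trajectory = []
    for a in acc:
        vel = vel + a * dt      # first integration: acceleration -> velocity
        pos = pos + vel * dt    # second integration: velocity -> position
        trajectory.append(pos.copy())
    return np.array(trajectory)

# 100 samples of constant forward acceleration at an assumed 100 Hz sampling rate.
print(integrate_joint(np.tile([1.0, 0.0, 0.0], (100, 1)))[-1])
```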
Fig. 4 shows a schematic diagram of the module connection of the hybrid motion capture server according to a third embodiment of the present invention; the hybrid motion capture server combines the visual positioning and the motion positioning obtained in the first and second embodiments to obtain the most preferred moving image display.
Specifically, the hybrid motion capture server 33 further includes a calibration module 334, which compares the data from the first input 331 and the second input 332 and outputs the calibrated data at the third output 333.
In this third embodiment, those skilled in the art will understand that in real interaction within a virtual environment people often cannot, for objective reasons, fully express their intentions through their limbs; the calibration module therefore performs analysis so that a person's intentions are expressed completely. In other embodiments, the calibration module may also correct irregular postures and smooth out jerky movements, so that other users see complete, fluent and aesthetically pleasing motions through their head-mounted visual devices.
For example, in one specific embodiment, suppose several people are playing a basketball game with the virtual reality multi-person interaction system and one of them jumps up to dunk. In practice the person reaches the target threshold height and their position matches the position of the basket, but because of objective conditions the captured action contains errors and the dunk stops halfway. In this case the calibration module can calibrate and repair the action, so that what is seen is a complete dunk, including the landing. In a variation of this embodiment, if the person does not jump high enough to reach the target height specified by the system but their posture looks like a dunk, the calibration module judges, from the motion trajectory of the hand joints captured by the inertial motion capture module 32, whether the action is a dunk or a shot. Finally, the calibrated data is transmitted to the image rendering computer as the person's final data.
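As a toy illustration of the calibration rule in this dunk example, the sketch below reports the completed action when the captured jump comes close enough to the target height and otherwise keeps the raw measurement; the threshold, the target height and the labels are invented for illustration and are not values from the patent.

```python
def calibrate_jump(peak_height, target_height=3.05, tolerance=0.15):
    """Toy calibration rule: if the captured jump is within tolerance of the target
    height, repair it to a completed dunk; otherwise keep the raw measurement and
    let the hand-joint trajectory decide between a dunk and a shot."""
    if abs(peak_height - target_height) <= tolerance:
        return target_height, "dunk"           # repair the half-finished dunk
    return peak_height, "shot_or_layup"        # fall back to what the motion trail suggests

print(calibrate_jump(2.95))   # close enough -> calibrated to a full dunk
print(calibrate_jump(2.40))   # too low -> left for the trajectory-based judgement
```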
Fig. 5 is a schematic flow chart of a method of virtual reality multi-person interaction according to another embodiment of the present invention; the method is further described below with reference to the embodiments shown in Figs. 1 to 4 and to specific examples.
First, the method proceeds to step S101: collecting the optical positioning data and the inertial motion capture data of the optical positioning module and the inertial motion capture module. As those skilled in the art will understand, this step obtains the motion data of the person: the optical positioning data is obtained through the optical positioning module and the inertial motion capture data through the inertial motion capture module. Further, the optical positioning module works together with the optical positioning points and the infrared camera: the infrared camera captures an infrared image of the optical positioning points and transmits it to the positioning processor. The inertial motion capture module works together with the sensors: the sensors are arranged at a plurality of second joint points of the subject and collect the acceleration of the second joint points and the angular velocity of the links between them; the inertial motion capture processor comprises an acquisition input and an orientation inertial positioning data output, and the sensors transmit the acquired data to the inertial motion capture processor.
Then, the method proceeds to step S102: processing and calibrating the optical positioning data and the inertial motion capture data to form hybrid data. In such an embodiment, the hybrid data is the data used for image rendering: the person's basic actions can be determined by processing the optical positioning data and the inertial motion capture data, and the calibration performed by the calibration module then makes the hybrid data better suited for image display.
As a preferred implementation of step S102, the optical positioning data is P1′(x1′, y1′, z1′) and the inertial motion capture data is P2′(x2′, y2′, z2′), and step S102 is implemented as follows:
First, the corresponding standard optical positioning data P1(x1, y1, z1) and standard inertial motion capture data P2(x2, y2, z2) are captured in a standard human body posture. Specifically, the standard optical positioning data and the standard inertial motion capture data can be captured by the optical positioning module and the inertial motion capture module while the body stands upright with both arms stretched out. As those skilled in the art will understand, the human body can be placed in a three-dimensional coordinate system, so this data actually consists of a series of three-dimensional coordinate values, where x1, y1, z1 and x2, y2, z2 represent the values on the x-axis, y-axis and z-axis, respectively.
Next, the matching step is performed on the optical positioning data and the inertial motion capture data based on the standard optical positioning data and the standard inertial motion capture data. The optical positioning data and the inertial motion capture data are series of three-dimensional coordinates, and they are screened according to the criterion given above. It can be seen that the optical positioning data and the inertial motion capture data form groups: each group corresponds to one pair of an optical positioning module and an inertial motion capture module, and the standard optical positioning data and standard inertial motion capture data also correspond to that pair. In other words, in practical application each pair of optical positioning module and inertial motion capture module serves as the reference for matching its corresponding standard optical positioning data, standard inertial motion capture data, optical positioning data and inertial motion capture data. The matching uses only addition and subtraction operations, which improves computation speed and reduces the error accumulated over repeated operations.
More preferably, in this step, if part of the optical positioning data and part of the inertial motion capture data take no part in the matching but the standard optical positioning data and standard inertial motion capture data are all matched, then there is no missing data in the optical positioning data and the inertial motion capture data. If part of the optical positioning data and part of the inertial motion capture data take no part in the matching and the standard optical positioning data and standard inertial motion capture data are not completely matched, then there is missing data in the optical positioning data and the inertial motion capture data.
Further, if there is missing data in the optical positioning data and the inertial motion capture data, they are repaired using multiple interpolation, and the matching step is then performed on the repaired data. Specifically, multiple interpolation is widely applicable; Lagrange interpolation, Newton interpolation or Hermite interpolation may be used, and a person skilled in the art can implement them in combination with the prior art. More specifically, this preferred mode aims to improve the continuity and completeness of the optical positioning data and the inertial motion capture data, and thus to obtain more accurate hybrid data.
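Lagrange interpolation is one of the options named above; the sketch below repairs a missing frame of one marker track with it, per coordinate axis. The NaN-marked frame layout, the number of interpolation nodes and the function names are assumptions made for this sketch.

```python
import numpy as np

def lagrange_value(ts, ys, t):
    """Evaluate the Lagrange polynomial through points (ts, ys) at time t."""
    total = 0.0
    for i, (ti, yi) in enumerate(zip(ts, ys)):
        weight = 1.0
        for j, tj in enumerate(ts):
            if j != i:
                weight *= (t - tj) / (ti - tj)
        total += yi * weight
    return total

def repair_track(track):
    """track: (T, 3) array of one marker's positions; NaN rows mark missing frames."""
    track = np.asarray(track, dtype=float).copy()
    missing = np.isnan(track).any(axis=1)
    known = np.flatnonzero(~missing)
    for t in np.flatnonzero(missing):
        # use up to four nearest known frames as interpolation nodes
        nodes = known[np.argsort(np.abs(known - t))[:4]]
        for axis in range(3):
            track[t, axis] = lagrange_value(nodes, track[nodes, axis], t)
    return track

demo = np.array([[0.0, 1.0, 0.0], [0.1, 1.1, 0.0], [np.nan] * 3, [0.3, 1.3, 0.0]])
print(repair_track(demo)[2])   # the missing frame, filled in
```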
Finally, the average coordinate values of all successfully matched optical positioning data and inertial motion capture data are calculated as the hybrid data. Specifically, the averages of the x-axis, y-axis and z-axis values are calculated separately, giving hybrid data that is likewise three-dimensional coordinate data. The method then proceeds to step S103: generating a moving image based on the hybrid data and performing image rendering to form a display image. As those skilled in the art will understand, in such an embodiment the hybrid data enters the image rendering computer through the hybrid data input; the image rendering computer contains an image production module and an image rendering module, the image production module generates the moving image and the image rendering module renders it, and the resulting display image is output to the head-mounted visual device through the display image output. More specifically, the head-mounted visual device outputs the display image, which is the rendered image obtained after the processing of steps S101 to S103.
The foregoing describes specific embodiments of the present invention. It is to be understood that the invention is not limited to the particular embodiments described above, and that various changes and modifications may be made by one skilled in the art within the scope of the claims without affecting the spirit of the invention.

Claims (10)

1. A virtual reality multi-person interaction system, comprising: a head-mounted visual device (1), an image rendering computer (2), a hybrid motion capture spatial positioning system (3) and a central server (4); the head-mounted visual device (1) is connected to a corresponding image rendering computer (2), and the hybrid motion capture spatial positioning system (3) comprises: a plurality of optical positioning modules (31) and inertial motion capture modules (32) arranged at multiple points on the subject, and a hybrid motion capture server (33); the optical positioning module (31) comprises a first output (311); the inertial motion capture module (32) comprises a second output (321); the hybrid motion capture server (33) comprises a first input (331), a second input (332) and a third output (333); the first output (311) is connected to the first input (331), the second output (321) is connected to the second input (332), the third output (333) is connected to the image rendering computer (2), and the image rendering computer (2) provides the display image output by the head-mounted visual device (1); the optical positioning module (31) further comprises: optical positioning points (312), an infrared camera (313) and a positioning processor (314), wherein the optical positioning points (312) are arranged at a plurality of first joint points of the subject, the infrared camera (313) captures an infrared image of the optical positioning points (312) and transmits the infrared image to the positioning processor (314), and the first output (311) is the output of the positioning processor (314); the inertial motion capture module (32) further comprises: sensors (322) and an inertial motion capture processor (323), wherein the sensors (322) are arranged at a plurality of second joint points of the subject and collect the acceleration of the second joint points and the angular velocity of the links between the second joint points, the inertial motion capture processor (323) comprises an acquisition input and an orientation inertial positioning data output, the acquisition input is connected to the sensors (322), and the second output (321) is the orientation inertial positioning data output.
2. The virtual reality multi-person interaction system of claim 1, characterized in that the head-mounted visual device (1) comprises a display lens (11) connected to a display image input (12), and the display image input (12) is connected to the image rendering computer (2).
3. The virtual reality multi-person interaction system of claim 1, characterized in that the image rendering computer (2) comprises a hybrid data input (21), an image production module (22), an image rendering module (23) and a display image output (24) connected in sequence, wherein the hybrid data input (21) is connected to the third output (333) and the display image output (24) is connected to the head-mounted visual device (1).
4. The virtual reality multi-person interaction system of claim 3, characterized in that the first output (311) is an optical positioning data output, the second output (321) is an inertial motion capture data output, and the third output (333) is a hybrid data output.
5. The virtual reality multi-person interaction system of claim 1, characterized in that the hybrid motion capture server (33) further comprises a calibration module (334), which compares the data of the first input (331) and the second input (332) and outputs the calibrated data at the third output (333).
6. A method of virtual reality multi-person interaction based on the system of any one of claims 1 to 5, comprising:
collecting optical positioning data and inertial motion capture data from the optical positioning module and the inertial motion capture module;
processing and calibrating the optical positioning data and the inertial motion capture data to form hybrid data;
generating a moving image based on the hybrid data and performing image rendering to form a display image;
wherein the optical positioning data is P1′(x1′, y1′, z1′) and the inertial motion capture data is P2′(x2′, y2′, z2′), and processing and calibrating the optical positioning data and the inertial motion capture data to form hybrid data comprises the steps of:
capturing, in a standard human body posture, corresponding standard optical positioning data P1(x1, y1, z1) and standard inertial motion capture data P2(x2, y2, z2);
performing a matching step on the optical positioning data and the inertial motion capture data based on the standard optical positioning data and the standard inertial motion capture data, specifically: |x1′ − x2′| ≤ α(|x1 − x2| + |y1 − y2| + |z1 − z2|), |y1′ − y2′| ≤ α(|x1 − x2| + |y1 − y2| + |z1 − z2|), |z1′ − z2′| ≤ α(|x1 − x2| + |y1 − y2| + |z1 − z2|); and
and calculating the average coordinate values of all the optical positioning data and the inertial motion capturing data which are successfully matched as the mixed data.
7. The method of claim 6, wherein if part of the optical positioning data and part of the inertial motion capture data take no part in the matching step but the standard optical positioning data and standard inertial motion capture data are all matched, there is no missing data in the optical positioning data and the inertial motion capture data;
if part of the optical positioning data and part of the inertial motion capture data take no part in the matching step and the standard optical positioning data and standard inertial motion capture data are not completely matched, there is missing data in the optical positioning data and the inertial motion capture data.
8. The method of claim 7, wherein if there is missing data in the optical positioning data and the inertial motion capture data, the optical positioning data and the inertial motion capture data are repaired using multiple interpolation, and the matching step is performed on the repaired optical positioning data and inertial motion capture data.
9. The method of virtual reality multi-person interaction of any one of claims 6 to 8, further comprising:
providing the head-mounted visual device and outputting the display image through the head-mounted visual device.
10. The method of virtual reality multi-person interaction of any one of claims 6 to 8, wherein the head-mounted visual device further comprises a human body support, the optical positioning module and the inertial motion capture module being arranged on the human body support.
CN201710185939.3A 2017-03-24 2017-03-24 Virtual reality multi-person interaction method and system Active CN106843507B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710185939.3A CN106843507B (en) 2017-03-24 2017-03-24 Virtual reality multi-person interaction method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710185939.3A CN106843507B (en) 2017-03-24 2017-03-24 Virtual reality multi-person interaction method and system

Publications (2)

Publication Number Publication Date
CN106843507A CN106843507A (en) 2017-06-13
CN106843507B true CN106843507B (en) 2024-01-05

Family

ID=59131093

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710185939.3A Active CN106843507B (en) 2017-03-24 2017-03-24 Virtual reality multi-person interaction method and system

Country Status (1)

Country Link
CN (1) CN106843507B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109298776B (en) * 2017-07-25 2021-02-19 阿里巴巴(中国)有限公司 Augmented reality interaction system, method and device
CN109313484B (en) * 2017-08-25 2022-02-01 深圳市瑞立视多媒体科技有限公司 Virtual reality interaction system, method and computer storage medium
CN107773254A (en) * 2017-12-05 2018-03-09 苏州创捷传媒展览股份有限公司 A kind of method and device for testing Consumer's Experience
CN110433486A (en) * 2018-05-04 2019-11-12 武汉金运激光股份有限公司 A kind of starting, response method and device realized more people and carry out somatic sensation television game
CN111796670A (en) * 2020-05-19 2020-10-20 北京北建大科技有限公司 Large-space multi-person virtual reality interaction system and method
CN111947650A (en) * 2020-07-14 2020-11-17 杭州瑞声海洋仪器有限公司 Fusion positioning system and method based on optical tracking and inertial tracking
CN113633962A (en) * 2021-07-15 2021-11-12 北京易智时代数字科技有限公司 Large-space multi-person interactive integrated system
CN114900678B (en) * 2022-07-15 2022-09-30 北京蔚领时代科技有限公司 VR end-cloud combined virtual concert rendering method and system
CN115624384B (en) * 2022-10-18 2024-03-22 方田医创(成都)科技有限公司 Operation auxiliary navigation system, method and storage medium based on mixed reality technology
CN115778544B (en) * 2022-12-05 2024-02-27 方田医创(成都)科技有限公司 Surgical navigation precision indicating system, method and storage medium based on mixed reality

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103279186A (en) * 2013-05-07 2013-09-04 兰州交通大学 Multiple-target motion capturing system integrating optical localization and inertia sensing
CN104658012A (en) * 2015-03-05 2015-05-27 第二炮兵工程设计研究院 Motion capture method based on inertia and optical measurement fusion
CN104834917A (en) * 2015-05-20 2015-08-12 北京诺亦腾科技有限公司 Mixed motion capturing system and mixed motion capturing method
CN105551059A (en) * 2015-12-08 2016-05-04 国网山西省电力公司技能培训中心 Power transformation simulation human body motion capturing method based on optical and inertial body feeling data fusion
CN205581785U (en) * 2016-04-15 2016-09-14 向京晶 Indoor virtual reality interactive system of many people
CN206819290U (en) * 2017-03-24 2017-12-29 苏州创捷传媒展览股份有限公司 A kind of system of virtual reality multi-person interactive

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10073516B2 (en) * 2014-12-29 2018-09-11 Sony Interactive Entertainment Inc. Methods and systems for user interaction within virtual reality scene using head mounted display


Also Published As

Publication number Publication date
CN106843507A (en) 2017-06-13

Similar Documents

Publication Publication Date Title
CN106843507B (en) Virtual reality multi-person interaction method and system
WO2020054442A1 (en) Articulation position acquisition method and device, and motion acquisition method and device
KR101323966B1 (en) A system and method for 3D space-dimension based image processing
D’Antonio et al. Validation of a 3D markerless system for gait analysis based on OpenPose and two RGB webcams
US9041775B2 (en) Apparatus and system for interfacing with computers and other electronic devices through gestures by using depth sensing and methods of use
KR101768958B1 (en) Hybird motion capture system for manufacturing high quality contents
CN110544301A (en) Three-dimensional human body action reconstruction system, method and action training system
CN112069933A (en) Skeletal muscle stress estimation method based on posture recognition and human body biomechanics
US10445930B1 (en) Markerless motion capture using machine learning and training with biomechanical data
CN105252532A (en) Method of cooperative flexible attitude control for motion capture robot
CN109804220A (en) System and method for tracking the fortune dynamic posture on head and eyes
CN104699247A (en) Virtual reality interactive system and method based on machine vision
CN111353355B (en) Motion tracking system and method
CN109799900A (en) The wireless wrist connected for three-dimensional imaging, mapping, networking and interface calculates and controls device and method
CN110544302A (en) Human body action reconstruction system and method based on multi-view vision and action training system
CN110633005A (en) Optical unmarked three-dimensional human body motion capture method
CN206819290U (en) A kind of system of virtual reality multi-person interactive
Yang et al. 3-D markerless tracking of human gait by geometric trilateration of multiple Kinects
CN111489392B (en) Single target human motion posture capturing method and system in multi-person environment
WO2024094227A1 (en) Gesture pose estimation method based on kalman filtering and deep learning
Chen et al. Camera networks for healthcare, teleimmersion, and surveillance
CN109859237B (en) Human skeleton motion analysis method based on infrared scanning
Jung et al. Realization of a hybrid human motion capture system
CN116485953A (en) Data processing method, device, equipment and readable storage medium
CN109543517A (en) A kind of computer vision artificial intelligence application method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant