CN106843507A - Method and system for virtual reality multi-person interaction - Google Patents

Method and system for virtual reality multi-person interaction

Info

Publication number
CN106843507A
Authority
CN
China
Prior art keywords
data
dynamic
inertia
catch
output end
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710185939.3A
Other languages
Chinese (zh)
Other versions
CN106843507B (en)
Inventor
徐志
邱春麟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Multispace Media & Exhibition Co Ltd
Original Assignee
Suzhou Multispace Media & Exhibition Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Multispace Media & Exhibition Co Ltd filed Critical Suzhou Multispace Media & Exhibition Co Ltd
Priority to CN201710185939.3A priority Critical patent/CN106843507B/en
Publication of CN106843507A publication Critical patent/CN106843507A/en
Application granted granted Critical
Publication of CN106843507B publication Critical patent/CN106843507B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2215/00Indexing scheme for image rendering
    • G06T2215/16Using real world measurements to influence rendering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20Indexing scheme for editing of 3D models
    • G06T2219/2004Aligning objects, relative positioning of parts

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention provides a virtual reality multi-person interaction system, including: a head-mounted visual device (1), an image rendering computer (2), a hybrid motion-capture spatial positioning system (3), and a central server (4). The head-mounted visual device (1) is correspondingly connected to the image rendering computer (2). The hybrid motion-capture spatial positioning system (3) includes: a plurality of optical positioning modules (31) and inertial motion-capture modules (32), and a hybrid motion-capture server (33). The optical positioning module (31) includes a first output end (311); the inertial motion-capture module (32) includes a second output end (321); the hybrid motion-capture server (33) includes a first input end (331), a second input end (332), and a third output end (333). The third output end (333) is connected to the image rendering computer (2), which provides the display image output by the head-mounted visual device (1). The invention is easy to operate, simple in structure, and has high commercial value.

Description

Method and system for virtual reality multi-person interaction
Technical field
The present invention relates to the field of virtual reality, and more particularly to a method and system for virtual reality multi-person interaction.
Background art
Virtual reality technology is a computer simulation technique that can create an experienceable virtual world. It uses a computer to generate a simulated environment and is an interactive system that fuses multi-source information with three-dimensional dynamic scenes and simulated entity behavior, immersing the user in that environment.
Virtual reality technology is continuously developing and innovating, and enabling many people to interact within a virtual environment has become a goal under constant exploration: for example, how to hold a real-time multi-person concert at home, or how to stage a multi-player virtual basketball match without a basketball court. These problems all remain to be solved.
At present, however, no method or system for virtual reality multi-person interaction exists.
Summary of the invention
In view of the deficiencies of the prior art, it is an object of the present invention to provide a virtual reality multi-person interaction system, including: a head-mounted visual device 1, an image rendering computer 2, a hybrid motion-capture spatial positioning system 3, and a central server 4. The head-mounted visual device 1 is correspondingly connected to the image rendering computer 2. The hybrid motion-capture spatial positioning system 3 includes: a plurality of optical positioning modules 31 and inertial motion-capture modules 32 arranged at multiple points on an object, and a hybrid motion-capture server 33. The optical positioning module 31 includes: a first output end 311; the inertial motion-capture module 32 includes: a second output end 321; the hybrid motion-capture server 33 includes: a first input end 331, a second input end 332, and a third output end 333. The first output end 311 is connected to the first input end 331, the second output end 321 is connected to the second input end 332, the third output end 333 is connected to the image rendering computer 2, and the image rendering computer 2 provides the display image output by the head-mounted visual device 1.
Preferably, the head-mounted visual device 1 includes: a display lens 11 and a display image input end 12, the display image input end 12 being connected to the image rendering computer 2.
Preferably, the image rendering computer 2 includes: a hybrid data input end 21 and, connected in sequence, an image generation module 22, an image rendering module 23, and a display image output end 24. The hybrid data input end 21 is connected to the third output end 333, and the display image output end 24 is connected to the head-mounted visual device 1.
Preferably, the first output end 311 is an optical positioning data output end, the second output end 321 is an inertial motion data output end, and the third output end 333 is a hybrid data output end.
Preferably, the optical positioning module 31 includes: optical positioning points 312, an infrared camera 313, and a positioning processor 314. The optical positioning points 312 are located at a plurality of first joint points of the object; the infrared camera 313 captures infrared images of the optical positioning points 312 and transmits the infrared images to the positioning processor 314; the first output end 311 is the output end of the positioning processor 314.
Preferably, the inertial motion-capture module 32 includes: sensors 322 and an inertial motion-capture processor 323. The sensors 322 are located at a plurality of second joint points of the object and collect the acceleration of the second joint points and the angular velocity of the connecting lines between second joint points. The inertial motion-capture processor 323 includes an acquisition input end and an orientation inertial positioning data output end; the acquisition input end is connected to the sensors, and the second output end 321 is the orientation inertial positioning data output end.
Preferably, the hybrid motion-capture server 33 further includes: a calibration module 334, which compares the data of the first input end 331 and the second input end 332 and outputs the calibrated data through the third output end 333.
According to another aspect of the present invention, a virtual reality multi-person interaction method is provided, including:
collecting the optical positioning data of the optical positioning modules and the inertial motion-capture data of the inertial motion-capture modules;
processing and calibrating the optical positioning data and the inertial motion-capture data to form hybrid data;
generating a moving image based on the hybrid data and performing image rendering to form a display image.
Preferably, the optical positioning data are P1′(x1′, y1′, z1′) and the inertial motion-capture data are P2′(x2′, y2′, z2′), and processing and calibrating the optical positioning data and the inertial motion-capture data to form the hybrid data includes the following steps:
capturing the corresponding standard optical positioning data P1(x1, y1, z1) and standard inertial motion-capture data P2(x2, y2, z2) under a standard human posture;
performing a matching step on the optical positioning data and the inertial motion-capture data based on the standard optical positioning data and the standard inertial motion-capture data, specifically: |x1′ − x2′| ≤ |x1 − x2| + |y1 − y2| + |z1 − z2|, |y1′ − y2′| ≤ |x1 − x2| + |y1 − y2| + |z1 − z2|, and |z1′ − z2′| ≤ |x1 − x2| + |y1 − y2| + |z1 − z2|;
calculating the average coordinate values of all successfully matched optical positioning data and inertial motion-capture data as the hybrid data.
Preferably, if some of the optical positioning data and some of the inertial motion-capture data have not gone through the matching step while the standard optical positioning data and the standard inertial motion-capture data have all been matched, then the optical positioning data and the inertial motion-capture data have no missing data;
if some of the optical positioning data and some of the inertial motion-capture data have not gone through the matching step and the standard optical positioning data and the standard inertial motion-capture data have not all been matched, then the optical positioning data and the inertial motion-capture data have missing data.
Preferably, if the optical positioning data and the inertial motion-capture data have missing data, the optical positioning data and the inertial motion-capture data are repaired using a polynomial interpolation method, and the matching step is performed based on the repaired optical positioning data and the repaired inertial motion-capture data.
Preferably, the method further includes:
providing the head-mounted visual device and outputting the display image through the head-mounted visual device.
Preferably, the head-mounted visual device further includes a human body harness, and the optical positioning modules and the inertial motion-capture modules are arranged on the human body harness.
Beneficial effects of the present invention: the hybrid motion-capture spatial positioning system captures and analyzes a person's actions, sends the analysis result to the image rendering computer for rendering, and sends the rendering result to the central server; the central server integrates the multiple rendering results, determines the final rendering result, and sends it to each person's head-mounted visual device, where it is finally seen by the human eye. The invention is easy to operate, simple in structure, and has high commercial value.
Brief description of the drawings
Other features, objects, and advantages of the present invention will become more apparent from the following detailed description of non-limiting embodiments, read with reference to the accompanying drawings:
Fig. 1 shows a schematic diagram of the module connections of a virtual reality multi-person interaction system according to a specific embodiment of the present invention;
Fig. 2 shows a module connection diagram of the optical positioning module according to the first embodiment of the present invention;
Fig. 3 shows a module connection diagram of the inertial motion-capture module according to the second embodiment of the present invention;
Fig. 4 shows a module connection diagram of the hybrid motion-capture server according to the third embodiment of the present invention; and
Fig. 5 shows a specific flow diagram of a virtual reality multi-person interaction method according to another embodiment of the present invention.
Specific embodiments
In order to present the technical solution of the present invention more clearly, the present invention is further described below with reference to the accompanying drawings.
Fig. 1 shows a schematic diagram of the module connections of a virtual reality multi-person interaction system according to a specific embodiment of the present invention. Those skilled in the art will understand that the system equips each human body with multiple positioning modules and relays their data to the central server; the central server integrates the real-time motion of the many participants, renders it, and finally reflects it to each person's head-mounted visual device, thereby achieving multi-person interaction in a virtual reality environment. Specifically, the virtual reality multi-person interaction system includes: a head-mounted visual device 1, an image rendering computer 2, a hybrid motion-capture spatial positioning system 3, and a central server 4. Those skilled in the art will understand that the head-mounted visual device 1 may be a virtual reality helmet or virtual reality glasses, and mainly serves to let the wearer see the virtual reality scene. Specifically, the head-mounted visual device further includes a human body harness adapted to the human form; further, the optical positioning modules and the inertial motion-capture modules are arranged on the human body harness and are used to obtain the optical positioning data and the inertial motion-capture data, as described further in the specific embodiments below and not repeated here.
The image rendering computer 2 may be a processing unit arranged inside the head-mounted visual device 1, or a fixed computing device located away from the head-mounted visual device 1. The image rendering computer mainly receives and analyzes the captured human actions and renders the virtual scene based on that analysis. The hybrid motion-capture spatial positioning system 3 mainly captures the actions of the people in the virtual reality multi-person interaction and generates the trajectories, directions, and so on of those actions, providing the conditions and basis for the rendering by the image rendering computer 2. The central server integrates the many people's actions captured by the multiple hybrid motion-capture spatial positioning systems 3, renders different scenes for different head-mounted visual devices 1, and sends them to each head-mounted visual device; a hedged sketch of this integration follows.
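As an illustration of the central server's role just described — integrating the captures from multiple hybrid motion-capture spatial positioning systems and rendering a different scene per head-mounted visual device — consider the minimal Python sketch below. The function names (central_server, render_scene_for) and the dictionary shapes are hypothetical stand-ins, not the patent's implementation.

    def render_scene_for(hmd_id: str, integrated: list[dict]) -> bytes:
        # Hypothetical per-device renderer: each head-mounted visual device
        # receives its own view of everyone's integrated motion.
        return f"{hmd_id}: {len(integrated)} tracked actions".encode()

    def central_server(captures: dict[str, list[dict]], hmd_ids: list[str]) -> dict[str, bytes]:
        # Integrate the actions captured by the multiple hybrid motion-capture
        # spatial positioning systems 3 ...
        integrated = [action for actions in captures.values() for action in actions]
        # ... then render a different scene for each head-mounted visual device 1.
        return {hmd: render_scene_for(hmd, integrated) for hmd in hmd_ids}

    frames = central_server({"system-1": [{"joint": "head"}], "system-2": [{"joint": "hand"}]},
                            ["hmd-1", "hmd-2"])
    print(frames["hmd-1"])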
Preferably, the head-mounted visual device 1 is correspondingly connected to the image rendering computer 2, and the hybrid motion-capture spatial positioning system 3 includes: a plurality of optical positioning modules 31 and inertial motion-capture modules 32 arranged at multiple points on the object, and a hybrid motion-capture server 33. The optical positioning module 31 includes: a first output end 311; the inertial motion-capture module 32 includes: a second output end 321; the hybrid motion-capture server 33 includes: a first input end 331, a second input end 332, and a third output end 333. Further, regarding the corresponding connection of the head-mounted visual device 1 to the image rendering computer 2: in such embodiments the head-mounted visual devices 1 are connected to one common image rendering computer 2, while in other embodiments each head-mounted visual device 1 is connected to its own image rendering computer 2. In a preferred embodiment, multiple optical positioning modules 31 and inertial motion-capture modules 32 are arranged at the head and hand positions of the object; in other embodiments they may also be arranged at positions such as the feet and waist.
The optical positioning module 31 achieves positioning mainly by infrared imaging, and the inertial motion-capture module 32 captures the trajectory and position of the motion through velocity sensors. The hybrid motion-capture server 33 integrates and computes over the data of the optical positioning modules 31 and inertial motion-capture modules 32 to derive an optimal motion orientation and motion trajectory. The first input end 331 and second input end 332 of the hybrid motion-capture server 33 obtain the data of the optical positioning modules 31 and inertial motion-capture modules 32, and the third output end 333 transmits the integrated data.
Preferably, the first output end 311 is connected to the first input end 331, the second output end 321 is connected to the second input end 332, the third output end 333 is connected to the image rendering computer 2, and the image rendering computer 2 provides the display image output by the head-mounted visual device 1. The first output end 311 is an optical positioning data output end; its connection to the first input end 331 transfers the positioning result of the optical positioning module 31 to the hybrid motion-capture server 33, and the connection of the second output end 321 to the second input end 332 likewise transfers the capture result to the hybrid motion-capture server 33. Further, the hybrid motion-capture server 33 transfers the integrated data result through the third output end 333 to the image rendering computer 2 for rendering, and the rendering result is transferred to the head-mounted visual device 1 as the display image.
Preferably, the head-mounted visual device 1 includes: a display lens 11 and a display image input end 12, the display image input end 12 being connected to the image rendering computer 2. In such embodiments, the display lens sits at the eyeglass position of the human head and displays the display image from the image rendering computer 2, and the display image input end 12 receives the display image from the image rendering computer 2.
Preferably, the image rendering computer 2 includes: a hybrid data input end 21 and, connected in sequence, an image generation module 22, an image rendering module 23, and a display image output end 24. The hybrid data input end 21 is connected to the third output end 333, and the display image output end 24 is connected to the head-mounted visual device 1. Those skilled in the art will understand that the third output end 333 is a hybrid data output end, and the hybrid data input end 21 receives the motion data of the third output end 333 from the hybrid motion-capture server 33. After the motion data are obtained, the image rendering computer 2 preferably establishes a virtual environment and builds a virtual scene through the image generation module 22, renders with the motion data through the image rendering module 23, and merges the rendering result into the virtual environment. Further, the display image output end 24 connects to the display image input end 12 of the head-mounted visual device 1, and the final rendering result is transferred through the display image input end 12 to the head-mounted visual device 1; a minimal sketch of this data flow follows.
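The data flow just described — hybrid data in through input end 21, scene generation by module 22, rendering by module 23, display image out through end 24 — can be summarized in a short sketch. This is a hedged illustration only: generate_scene and render_frame are hypothetical stand-ins for the image generation and image rendering modules, and the byte string is a placeholder for a real rendered frame.

    from dataclasses import dataclass

    @dataclass
    class HybridSample:
        """One calibrated motion sample received from the third output end 333."""
        joint: str
        x: float
        y: float
        z: float

    def generate_scene(samples: list[HybridSample]) -> dict:
        # Image generation module 22 (hypothetical): place each tracked joint
        # of the virtual scene at its fused position.
        return {s.joint: (s.x, s.y, s.z) for s in samples}

    def render_frame(scene: dict) -> bytes:
        # Image rendering module 23 (hypothetical): render the scene into a
        # display image; serialized text stands in for pixels here.
        return repr(sorted(scene.items())).encode()

    def rendering_computer(hybrid_data: list[HybridSample]) -> bytes:
        # Hybrid data input end 21 -> generation -> rendering -> display image
        # output end 24 (onward to the head-mounted visual device 1).
        return render_frame(generate_scene(hybrid_data))

    frame = rendering_computer([HybridSample("head", 0.0, 1.7, 0.0)])
    print(len(frame), "bytes ready for the head-mounted visual device")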
Fig. 2 shows a module connection diagram of the optical positioning module according to the first embodiment of the present invention. As the first embodiment, the optical positioning module is the part of the hybrid motion-capture spatial positioning system that provides visual positioning.
Further, the optical positioning module 31 includes: optical positioning points 312, an infrared camera 313, and a positioning processor 314. The optical positioning points 312 are located at a plurality of first joint points of the object; the infrared camera 313 captures infrared images of the optical positioning points 312 and transmits the infrared images to the positioning processor 314; the first output end 311 is the output end of the positioning processor 314.
In such embodiments, the optical positioning points 312 are preferably arranged at key positions of the human body, preferably the head and hand positions, and may also be arranged at positions such as the feet, to locate the orientation of the human body. The infrared camera 313 performs infrared shooting of the optical positioning points 312, obtaining infrared images according to the visually apparent displacement and orientation changes of the positioning points as they move left and right, up and down, and forward and backward, and transfers the images to the positioning processor. The positioning processor pre-processes the infrared data and transfers them through the first output end 311 to the first input end 331; a sketch of such a record appears below.
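To make the data flow concrete, here is a minimal sketch of the kind of pre-processed optical fix the positioning processor 314 might forward through the first output end 311. The record fields and the trivial preprocess wrapper are assumptions for illustration; the patent does not specify the record format.

    from dataclasses import dataclass

    @dataclass
    class OpticalSample:
        """One pre-processed optical fix for a marker at a first joint point."""
        joint: str
        x: float
        y: float
        z: float

    def preprocess(detections: list[tuple[str, float, float, float]]) -> list[OpticalSample]:
        # The patent only says the positioning processor pre-processes the
        # infrared data; here we merely wrap detected marker coordinates into
        # typed samples bound for the hybrid motion-capture server.
        return [OpticalSample(j, x, y, z) for j, x, y, z in detections]

    print(preprocess([("head", 0.0, 1.70, 0.05)])[0])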
Fig. 3 shows a module connection diagram of the inertial motion-capture module according to the second embodiment of the present invention. As the second embodiment, the inertial motion-capture module is the other part of the hybrid motion-capture spatial positioning system and provides positioning of the actions.
Further, the inertial motion-capture module 32 includes sensors 322 and an inertial motion-capture processor 323. The sensors 322 perceive the actions of the human body and are preferably acceleration sensors and angular velocity sensors; in other embodiments the sensors also include displacement sensors, height sensors, and so on. The inertial motion-capture processor 323 processes the data acquired by the sensors 322.
Further, the sensors 322 are located at a plurality of second joint points of the object and collect the acceleration of the second joint points and the angular velocity of the connecting lines between second joint points. Those skilled in the art will understand that the second joint points may cover the positions of the first joint points or may be additional joint points, for example the wrists, ankles, knees, and shoulders. The inertial motion-capture processor 323 includes an acquisition input end and an orientation inertial positioning data output end; the acquisition input end is connected to the sensors, and the orientation inertial positioning data output end is the second output end 321, which is connected to the second input end 332. A minimal sketch of such a sensor record appears below.
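As an illustration of the kind of record the inertial motion-capture processor 323 might receive from a joint-mounted sensor, here is a minimal sketch; the field names and the magnitude helper are assumptions for clarity, not part of the patent.

    import math
    from dataclasses import dataclass

    @dataclass
    class InertialSample:
        """One reading from a sensor 322 at a second joint point."""
        joint: str                           # e.g. "wrist", "ankle", "knee", "shoulder"
        accel: tuple[float, float, float]    # acceleration of the joint point (m/s^2)
        link_angular_rate: float             # angular velocity of the connecting line
                                             # to the neighboring joint point (rad/s)

    def accel_magnitude(s: InertialSample) -> float:
        # Assumed convenience helper: overall acceleration magnitude,
        # useful e.g. for detecting a jump take-off.
        return math.sqrt(sum(a * a for a in s.accel))

    sample = InertialSample("wrist", (0.1, 9.9, -0.2), 1.8)
    print(f"{sample.joint}: |a| = {accel_magnitude(sample):.2f} m/s^2")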
Fig. 4 shows a module connection diagram of the hybrid motion-capture server according to the third embodiment of the present invention. The hybrid motion-capture server combines the visual positioning obtained in embodiment one with the action positioning obtained in embodiment two to derive a highly preferable motion-image display.
Specifically, the hybrid motion-capture server 33 further includes a calibration module 334, which compares the data of the first input end 331 and the second input end 332 and outputs the calibrated data through the third output end 333.
As the third embodiment of the present invention, those skilled in the art will understand that in actual virtual environment interaction, objective circumstances often prevent a person from fully conveying their intention through their limbs; what the calibration module does is reconstruct, through analysis, the complete declaration of intent the person had in mind. In a further embodiment, the calibration module can also calibrate a person's non-standard postures and complete slack actions, so that others see, through their head-mounted visual devices, actions that are complete, smooth, and natural.
For example, in a specific embodiment, suppose people play a basketball match using the virtual reality multi-person interaction system, and someone dunks with a jumping posture. Under the actual conditions, this person has reached the target threshold height and their position agrees with the dunk position, but the dunk posture has erred due to objective conditions and the dunk stopped halfway. The calibration module can then calibrate and repair the dunk posture, so that others see a complete dunk action, including this person's action when landing. In another variation of this embodiment, if the person does not jump and does not reach the target height defined by the system, but their posture resembles a dunk, then the calibration module judges whether the action is a dunk or a shot according to the wrist-joint trajectory captured by the inertial motion-capture module 32. Finally, the calibrated data are transferred to the image rendering computer as this person's final data; a hedged sketch of such a decision appears below.
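The dunk-versus-shot decision just described can be sketched as a simple rule: if the threshold height is reached at the dunk position, the half-finished posture is calibrated into a full dunk; otherwise the wrist trajectory captured by the inertial motion-capture module decides. The threshold handling and the trajectory test below are hypothetical placeholders, since the patent does not specify them.

    def classify_action(jump_height: float, target_height: float,
                        at_dunk_position: bool,
                        wrist_trajectory: list[tuple[float, float, float]]) -> str:
        # Case 1 of the embodiment: threshold height reached at the dunk
        # position -> the calibration module repairs the posture into a dunk.
        if jump_height >= target_height and at_dunk_position:
            return "dunk (posture repaired by calibration module)"
        # Case 2: threshold not reached -> judge dunk vs. shot from the
        # wrist-joint trajectory captured by the inertial motion-capture
        # module 32 (assumed rule: wrist moving downward at the end).
        if len(wrist_trajectory) >= 2 and wrist_trajectory[-1][1] < wrist_trajectory[-2][1]:
            return "dunk"
        return "shot"

    print(classify_action(0.4, 0.6, True, [(0.0, 2.4, 0.0), (0.1, 2.2, 0.0)]))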
Fig. 5 shows a specific flow diagram of a virtual reality multi-person interaction method according to another embodiment of the present invention. The method will be further described in combination with the specific embodiments shown in Fig. 1 to Fig. 4 to explain the implementation of the present invention.
First, in step S101, the optical positioning data of the optical positioning modules and the inertial motion-capture data of the inertial motion-capture modules are collected. Those skilled in the art will understand that this step obtains the motion data of the people: the optical positioning data are obtained through the optical positioning modules, and the inertial motion-capture data through the inertial motion-capture modules. Further, the optical positioning module coordinates the optical positioning points and the infrared camera, and the infrared camera captures infrared images of the optical positioning points and transmits the infrared images to the positioning processor. The inertial motion-capture module coordinates the sensors, which are located at a plurality of second joint points of the object and collect the acceleration of the second joint points and the angular velocity of the connecting lines between them; the inertial motion-capture processor includes an acquisition input end and an orientation inertial positioning data output end, and the sensors transfer the acquired data to the inertial motion-capture processor.
Subsequently, in step S102, the optical positioning data and the inertial motion-capture data are processed and calibrated to form the hybrid data. In such embodiments, the hybrid data are the data used for image rendering: processing the optical positioning data and the inertial motion-capture data determines a person's elemental actions, and the calibration processing of the calibration module then makes the hybrid data better suited for image display.
As a preferred implementation of step S102, the optical positioning data are P1′(x1′, y1′, z1′) and the inertial motion-capture data are P2′(x2′, y2′, z2′), and step S102 is realized by the following steps:
First, the corresponding standard optical positioning data P1(x1, y1, z1) and standard inertial motion-capture data P2(x2, y2, z2) under a standard human posture are captured. Specifically, when the human body is standing upright with both arms stretched out, the standard optical positioning data and standard inertial motion-capture data can be captured through the optical positioning module and the inertial motion-capture module. Those skilled in the art will understand that the human body can be placed in a three-dimensional coordinate system, so the standard optical positioning data and standard inertial motion-capture data actually consist of a series of three-dimensional coordinate values, where x1, y1, z1 and x2, y2, z2 represent the values on the x-axis, y-axis, and z-axis, respectively.
Secondly, the matching step is performed on the optical positioning data and the inertial motion-capture data based on the standard optical positioning data and standard inertial motion-capture data. Specifically, the optical positioning data and the inertial motion-capture data are likewise series of three-dimensional coordinate data, and they are screened according to the following criterion: |x1′ − x2′| ≤ |x1 − x2| + |y1 − y2| + |z1 − z2|, |y1′ − y2′| ≤ |x1 − x2| + |y1 − y2| + |z1 − z2|, and |z1′ − z2′| ≤ |x1 − x2| + |y1 − y2| + |z1 − z2|. As can be seen, the optical positioning data and the inertial motion-capture data form a group, and each group of optical positioning data and inertial motion-capture data corresponds to one group of optical positioning module and inertial motion-capture module; the standard optical positioning data and standard inertial motion-capture data likewise correspond to that group of modules. That is, in actual application, each group of optical positioning module and inertial motion-capture module serves as the reference against which the standard optical positioning data and standard inertial motion-capture data are matched with the optical positioning data and the inertial motion-capture data. Such matching is absolute-value arithmetic, which improves computation speed and reduces the error of repeated operations; a minimal sketch of this criterion, together with the averaging of the final step below, is given next.
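As a sketch of the matching criterion just stated, the live optical/inertial coordinate pair is tested against the tolerance derived from the standard-posture pair, and matched pairs are then averaged into a hybrid coordinate (the averaging is the final step described below). The variable names are mine; the tolerance is the shared right-hand side of the three inequalities above, and Python is used purely for illustration.

    Point = tuple[float, float, float]

    def matches(p1_live: Point, p2_live: Point, p1_std: Point, p2_std: Point) -> bool:
        # Shared right-hand side of the three inequalities:
        # |x1-x2| + |y1-y2| + |z1-z2| from the standard-posture pair.
        tol = sum(abs(a - b) for a, b in zip(p1_std, p2_std))
        # Each axis of the live pair must differ by no more than the tolerance.
        return all(abs(a - b) <= tol for a, b in zip(p1_live, p2_live))

    def blend(p1_live: Point, p2_live: Point) -> Point:
        # Average coordinate of a successfully matched pair -> hybrid data.
        return tuple((a + b) / 2 for a, b in zip(p1_live, p2_live))

    P1, P2 = (0.00, 1.60, 0.00), (0.02, 1.58, 0.01)            # standard posture
    P1_live, P2_live = (0.30, 1.20, 0.10), (0.31, 1.22, 0.09)  # live capture
    if matches(P1_live, P2_live, P1, P2):
        print("hybrid coordinate:", blend(P1_live, P2_live))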
Further preferably, in this step, if some of the optical positioning data and some of the inertial motion-capture data have not gone through the matching step while the standard optical positioning data and standard inertial motion-capture data have all been matched, then the optical positioning data and the inertial motion-capture data have no missing data. If some of the optical positioning data and some of the inertial motion-capture data have not gone through the matching step and the standard optical positioning data and standard inertial motion-capture data have not all been matched, then the optical positioning data and the inertial motion-capture data have missing data.
Further, if the optical positioning data and the inertial motion-capture data have missing data, the optical positioning data and the inertial motion-capture data are repaired using a polynomial interpolation method, and the matching step is performed based on the repaired optical positioning data and repaired inertial motion-capture data. Specifically, polynomial interpolation applies in many fields: Lagrange interpolation, Newton interpolation, and Hermite interpolation may be used, and those skilled in the art can realize them with reference to the prior art. More specifically, the purpose of this preferred embodiment is to improve the continuity and integrity of the optical positioning data and the inertial motion-capture data, and in turn to derive more accurate hybrid data; a small Lagrange-interpolation sketch follows.
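To illustrate one of the named interpolation families, here is a minimal Lagrange interpolation sketch that repairs a missing coordinate sample from its neighbors. Treating each axis as a function of time and interpolating per axis is an assumption for the example; the patent only names Lagrange, Newton, and Hermite interpolation as options.

    def lagrange(ts: list[float], vs: list[float], t: float) -> float:
        # Evaluate the Lagrange polynomial through the known samples
        # (ts[i], vs[i]) at the missing instant t.
        total = 0.0
        for i, (ti, vi) in enumerate(zip(ts, vs)):
            weight = 1.0
            for j, tj in enumerate(ts):
                if j != i:
                    weight *= (t - tj) / (ti - tj)
            total += vi * weight
        return total

    # A joint's x-coordinate is known at t = 0, 1, 3 but missing at t = 2:
    ts, xs = [0.0, 1.0, 3.0], [0.00, 0.10, 0.90]
    print(f"repaired x(2) ~= {lagrange(ts, xs, 2.0):.3f}")  # then re-run the matching step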
Finally, the average coordinate values of all successfully matched optical positioning data and inertial motion-capture data are calculated as the hybrid data. Specifically, the average values on the x-axis, y-axis, and z-axis are calculated respectively, finally giving the hybrid data, which are also three-dimensional coordinate data. Then, in step S103, a moving image is generated based on the hybrid data and image rendering is performed to form the display image. Those skilled in the art will understand that in such embodiments the hybrid data enter the image rendering computer through the hybrid data input end; the image rendering computer is provided with the image generation module and the image rendering module, the image generation module generates the moving image, image rendering is performed by the image rendering module, and the display image is further formed and output through the display image output end to the head-mounted visual device. More specifically, the display image output by the head-mounted visual device is the rendered image derived after the processing of steps S101 to S103.
The specific embodiments of the present invention have been described above. It is to be understood that the invention is not limited to the particular implementations above; those skilled in the art can make various variations or modifications within the scope of the claims, and this does not affect the substance of the invention.

Claims (13)

1. A virtual reality multi-person interaction system, characterized by including: a head-mounted visual device (1), an image rendering computer (2), a hybrid motion-capture spatial positioning system (3), and a central server (4); the head-mounted visual device (1) is correspondingly connected to the image rendering computer (2); the hybrid motion-capture spatial positioning system (3) includes: a plurality of optical positioning modules (31) and inertial motion-capture modules (32) arranged at multiple points on an object, and a hybrid motion-capture server (33); the optical positioning module (31) includes: a first output end (311); the inertial motion-capture module (32) includes: a second output end (321); the hybrid motion-capture server (33) includes: a first input end (331), a second input end (332), and a third output end (333); the first output end (311) is connected to the first input end (331), the second output end (321) is connected to the second input end (332), the third output end (333) is connected to the image rendering computer (2), and the image rendering computer (2) provides the display image output by the head-mounted visual device (1).
2. The virtual reality multi-person interaction system of claim 1, characterized in that the head-mounted visual device (1) includes: a display lens (11) and a display image input end (12), the display image input end (12) being connected to the image rendering computer (2).
3. The virtual reality multi-person interaction system of claim 1, characterized in that the image rendering computer (2) includes: a hybrid data input end (21) and, connected in sequence, an image generation module (22), an image rendering module (23), and a display image output end (24); the hybrid data input end (21) is connected to the third output end (333), and the display image output end (24) is connected to the head-mounted visual device (1).
4. The virtual reality multi-person interaction system of claim 3, characterized in that the first output end (311) is an optical positioning data output end, the second output end (321) is an inertial motion data output end, and the third output end (333) is a hybrid data output end.
5. The virtual reality multi-person interaction system of claim 1, characterized in that the optical positioning module (31) includes: optical positioning points (312), an infrared camera (313), and a positioning processor (314); the optical positioning points (312) are located at a plurality of first joint points of the object; the infrared camera (313) captures infrared images of the optical positioning points (312) and transmits the infrared images to the positioning processor (314); the first output end (311) is the output end of the positioning processor (314).
6. The virtual reality multi-person interaction system of claim 1, characterized in that the inertial motion-capture module (32) includes: sensors (322) and an inertial motion-capture processor (323); the sensors (322) are located at a plurality of second joint points of the object and collect the acceleration of the second joint points and the angular velocity of the connecting lines between second joint points; the inertial motion-capture processor (323) includes an acquisition input end and an orientation inertial positioning data output end, the acquisition input end being connected to the sensors, and the second output end (321) being the orientation inertial positioning data output end.
7. The virtual reality multi-person interaction system of claim 1, characterized in that the hybrid motion-capture server (33) further includes: a calibration module (334), which compares the data of the first input end (331) and the second input end (332) and outputs the calibrated data through the third output end (333).
8. A virtual reality multi-person interaction method based on the system of any one of claims 1 to 7, characterized by including:
collecting the optical positioning data of the optical positioning modules and the inertial motion-capture data of the inertial motion-capture modules;
processing and calibrating the optical positioning data and the inertial motion-capture data to form hybrid data;
generating a moving image based on the hybrid data and performing image rendering to form a display image.
9. the method for virtual reality multi-person interactive as claimed in claim 8, it is characterised in that the optical alignment data are P1′ (x1', y1', z1'), the dynamic data of catching of the inertia are for P2′(x2', y2', z2'), it is described dynamic to the optical alignment data and inertia Catch data and processed and calibrated and comprised the following steps with forming blended data:
Catch the corresponding normalized optical location data P under human body standard gestures1(x1, y1, z1) and standard inertia is dynamic catches data P2(x2, y2, z2);
Catch that data are dynamic to the optical alignment data and the inertia to catch number based on normalized optical location data and standard inertia are dynamic According to matching step is performed, concrete mode is as follows:
The calculating whole optical alignment data that the match is successful and the inertia move the average coordinates value for catching data as described Blended data.
10. the method for virtual reality multi-person interactive as claimed in claim 9, it is characterised in that if the part optics Location data and the part inertia is dynamic to catch data and is not carried out matching step, and the normalized optical location data and the standard Inertia is dynamic to catch data all matching is finished, then the optical alignment data and the dynamic data of catching of the inertia are in the absence of missing number According to;
Catch data and be not carried out matching step, and the standard if the part optical alignment data and the part inertia is dynamic Optical alignment data and the standard inertia is dynamic to catch data all matching is not finished, then the optical alignment data and the inertia It is dynamic to catch data and there is missing data.
11. The virtual reality multi-person interaction method of claim 10, characterized in that if the optical positioning data and the inertial motion-capture data have missing data, the optical positioning data and the inertial motion-capture data are repaired using a polynomial interpolation method, and the matching step is performed based on the repaired optical positioning data and repaired inertial motion-capture data.
12. The virtual reality multi-person interaction method of any one of claims 8 to 11, characterized by further including:
providing the head-mounted visual device and outputting the display image through the head-mounted visual device.
13. The virtual reality multi-person interaction method of any one of claims 8 to 11, characterized in that the head-mounted visual device further includes a human body harness, and the optical positioning modules and the inertial motion-capture modules are arranged on the human body harness.
CN201710185939.3A 2017-03-24 2017-03-24 Virtual reality multi-person interaction method and system Active CN106843507B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710185939.3A CN106843507B (en) 2017-03-24 2017-03-24 Virtual reality multi-person interaction method and system

Publications (2)

Publication Number Publication Date
CN106843507A 2017-06-13
CN106843507B CN106843507B (en) 2024-01-05

Family

ID=59131093

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710185939.3A Active CN106843507B (en) 2017-03-24 2017-03-24 Virtual reality multi-person interaction method and system

Country Status (1)

Country Link
CN (1) CN106843507B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103279186A (en) * 2013-05-07 2013-09-04 兰州交通大学 Multiple-target motion capturing system integrating optical localization and inertia sensing
US20160187969A1 (en) * 2014-12-29 2016-06-30 Sony Computer Entertainment America Llc Methods and Systems for User Interaction within Virtual Reality Scene using Head Mounted Display
CN104658012A (en) * 2015-03-05 2015-05-27 第二炮兵工程设计研究院 Motion capture method based on inertia and optical measurement fusion
CN104834917A (en) * 2015-05-20 2015-08-12 北京诺亦腾科技有限公司 Mixed motion capturing system and mixed motion capturing method
CN105551059A (en) * 2015-12-08 2016-05-04 国网山西省电力公司技能培训中心 Power transformation simulation human body motion capturing method based on optical and inertial body feeling data fusion
CN205581785U (en) * 2016-04-15 2016-09-14 向京晶 Indoor virtual reality interactive system of many people
CN206819290U (en) * 2017-03-24 2017-12-29 苏州创捷传媒展览股份有限公司 A kind of system of virtual reality multi-person interactive

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019019974A1 (en) * 2017-07-25 2019-01-31 广州市动景计算机科技有限公司 Augmented reality interaction system, method and device
CN109313484B (en) * 2017-08-25 2022-02-01 深圳市瑞立视多媒体科技有限公司 Virtual reality interaction system, method and computer storage medium
CN109313484A (en) * 2017-08-25 2019-02-05 深圳市瑞立视多媒体科技有限公司 Virtual reality interactive system, method and computer storage medium
CN107773254A (en) * 2017-12-05 2018-03-09 苏州创捷传媒展览股份有限公司 A kind of method and device for testing Consumer's Experience
CN110433486A (en) * 2018-05-04 2019-11-12 武汉金运激光股份有限公司 A kind of starting, response method and device realized more people and carry out somatic sensation television game
CN111796670A (en) * 2020-05-19 2020-10-20 北京北建大科技有限公司 Large-space multi-person virtual reality interaction system and method
CN111947650A (en) * 2020-07-14 2020-11-17 杭州瑞声海洋仪器有限公司 Fusion positioning system and method based on optical tracking and inertial tracking
CN113633962A (en) * 2021-07-15 2021-11-12 北京易智时代数字科技有限公司 Large-space multi-person interactive integrated system
CN114900678A (en) * 2022-07-15 2022-08-12 北京蔚领时代科技有限公司 VR end-cloud combined virtual concert rendering method and system
CN114900678B (en) * 2022-07-15 2022-09-30 北京蔚领时代科技有限公司 VR end-cloud combined virtual concert rendering method and system
CN115624384A (en) * 2022-10-18 2023-01-20 方田医创(成都)科技有限公司 Operation auxiliary navigation system, method and storage medium based on mixed reality technology
CN115624384B (en) * 2022-10-18 2024-03-22 方田医创(成都)科技有限公司 Operation auxiliary navigation system, method and storage medium based on mixed reality technology
CN115778544A (en) * 2022-12-05 2023-03-14 方田医创(成都)科技有限公司 Operation navigation precision indicating system, method and storage medium based on mixed reality
CN115778544B (en) * 2022-12-05 2024-02-27 方田医创(成都)科技有限公司 Surgical navigation precision indicating system, method and storage medium based on mixed reality

Also Published As

Publication number Publication date
CN106843507B (en) 2024-01-05

Similar Documents

Publication Publication Date Title
CN106843507A (en) A kind of method and system of virtual reality multi-person interactive
US11468612B2 (en) Controlling display of a model based on captured images and determined information
CN107820593B (en) Virtual reality interaction method, device and system
CN105320271B (en) It is calibrated using the head-mounted display of direct Geometric Modeling
KR101295471B1 (en) A system and method for 3D space-dimension based image processing
Canessa et al. Calibrated depth and color cameras for accurate 3D interaction in a stereoscopic augmented reality environment
US11199711B2 (en) Enhanced reality systems
US20200097732A1 (en) Markerless Human Movement Tracking in Virtual Simulation
JP6369811B2 (en) Gait analysis system and gait analysis program
CN104699247A (en) Virtual reality interactive system and method based on machine vision
CN110633005A (en) Optical unmarked three-dimensional human body motion capture method
CN106256394A (en) The training devices of mixing motion capture and system
CN206819290U (en) A kind of system of virtual reality multi-person interactive
KR20170044318A (en) Method for collaboration using head mounted display
Schönauer et al. Wide area motion tracking using consumer hardware
CN112040209B (en) VR scene projection method and device, projection system and server
McGuirk A multi-view video based deep learning approach for human movement analysis
JP2021099666A (en) Method for generating learning model
CN111860275A (en) Gesture recognition data acquisition system and method
CN112698725B (en) Method for realizing penetrating screen system based on eye tracker tracking
Sheng et al. Marker-less Motion Capture Technology Based on Binocular Stereo Vision and Deep Learning
CN114283447B (en) Motion capturing system and method
TWI811108B (en) Mixed reality processing system and mixed reality processing method
AU2020436767B2 (en) Markerless motion capture of hands with multiple pose estimation engines
JP2022050776A (en) Human body portion tracking method and human body portion tracking system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant