CN203405772U - Immersion type virtual reality system based on movement capture - Google Patents

Immersion type virtual reality system based on movement capture

Info

Publication number
CN203405772U
CN203405772U CN201320558125.7U
Authority
CN
China
Prior art keywords
motion
captured
bundled
buttocks
virtual environment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
CN201320558125.7U
Other languages
Chinese (zh)
Inventor
刘昊扬 (Liu Haoyang)
戴若犁 (Dai Ruoli)
李龙威 (Li Longwei)
陈金舟 (Chen Jinzhou)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Nuo Yiteng Science And Technology Ltd
Original Assignee
Beijing Nuo Yiteng Science And Technology Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Nuo Yiteng Science And Technology Ltd filed Critical Beijing Nuo Yiteng Science And Technology Ltd
Priority to CN201320558125.7U priority Critical patent/CN203405772U/en
Application granted granted Critical
Publication of CN203405772U publication Critical patent/CN203405772U/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Links

Images

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

An immersion-type virtual reality system based on motion capture comprises a motion capture device, an environment feedback device and a 3D virtual environment emulator. The motion capture device comprises a plurality of motion capture modules bound to different parts of the body and a central processing chip. Each motion capture module comprises a triaxial MEMS acceleration sensor for measuring an acceleration signal, a triaxial MEMS angular velocity sensor for measuring an angular velocity signal, and a triaxial MEMS magnetometer for measuring a geomagnetic signal. The central processing chip is connected to the motion capture modules, receives the acceleration, angular velocity and geomagnetic signals, and sends the posture and position information of the body parts generated from those signals to the 3D virtual environment emulator through a first interface. The environment feedback device comprises a plurality of different feedback devices, each connected to the 3D virtual environment emulator, which respectively feed video, audio, force and touch control signals back to different parts of the human body.

Description

An immersive virtual reality system based on motion capture
Technical field
The utility model relates to motion capture technology and virtual reality technology, and in particular to an immersive virtual reality system based on motion capture.
Background technology
Motion capture technology records the motion of an object in digital form. Current motion capture technology mainly comprises the following approaches:
Mechanical motion capture: motion is measured by a mechanical apparatus composed of several joints and rigid links, with angle sensors mounted at the joints to measure changes in joint angle; the rigid links can also be replaced by telescopic rods of adjustable length fitted with displacement sensors to measure changes in length. The object to be captured is attached to the apparatus, and its motion drives the mechanism, so the sensors on the apparatus record the motion of the object. Mechanical motion capture is low-cost, simple to calibrate and fairly accurate, captures data in real time and is not limited by the venue. However, it is difficult to apply to joints with many degrees of freedom, and the size and weight of the apparatus seriously obstruct and interfere with the motion of the object, particularly strenuous motion.
Electromagnetic motion capture: generally consists of an emission source, receivers and a data processing unit. The emission source produces an alternating electromagnetic field with a certain spatio-temporal distribution; receivers mounted at key positions of the object move with it and pass the received signals to the data processing unit over wires. This approach captures not only spatial position but also orientation, with good real-time performance. However, it is demanding about the environment: no metal may be nearby, the cables restrict the object's movement, and the sampling frequency is low.
Acoustic motion capture: similar to the electromagnetic approach, it consists of ultrasonic transmitters, receivers and a processing unit. Several transmitters are fixed at different positions on the object and emit ultrasound continuously; each receiver derives its distance to a transmitter from the sound's time of flight, and three receivers arranged in a triangle determine the transmitter's spatial position. Acoustic capture is relatively cheap, but its precision is low and it requires an unobstructed path between transmitter and receiver.
Optical motion capture: typically uses 10 to 20 cameras arranged around the object so that its range of motion lies in the cameras' overlapping field of view. Special reflective or luminous markers are stuck to the key positions of the object as signs for visual identification and processing. After the system is calibrated, the cameras continuously film the object's motion and store the image sequences for analysis, computing the spatial position of each marker at each instant and hence its accurate trajectory. Optical capture has the advantage of imposing no mechanical apparatus or cabling, allowing a large range of motion, and its high sampling frequency meets the needs of most sports measurement. But the system is expensive, its calibration is tedious, it can only capture motion inside the cameras' overlapping region, and when the motion is complex, markers are easily confused or occluded, producing erroneous results.
Motion capture based on inertial sensors: traditional mechanical inertial sensors have long been applied to the navigation of aircraft and ships. With the rapid development of micro-electro-mechanical systems (MEMS) and the maturing of miniature inertial sensors, people have in recent years begun to attempt motion capture based on miniature inertial sensors. The basic method is to attach an inertial measurement unit (IMU) to the object so that it moves with the object. An IMU generally includes a micro accelerometer (measuring the acceleration signal) and a gyroscope (measuring the angular-rate signal); double integration of the acceleration signal and integration of the gyroscope signal yield the position and orientation of the object. Thanks to MEMS technology the IMU can be made very small and light, so it hardly affects the object's motion, makes few demands on the venue and permits a large range of motion, and the system is comparatively cheap. The drawbacks are that the sensor integrals drift easily and the sensors themselves are easily disturbed, placing higher demands on the system.
Virtual reality technology: involves computer graphics, human-computer interaction, sensing technology, artificial intelligence and other fields. A computer generates lifelike three-dimensional vision, hearing, touch and even smell, so that a person, as a participant, can experience and interact with the virtual world naturally through appropriate devices. When the user moves, the computer immediately performs the complex calculations needed to return an accurate 3D image of the world and produce a sense of presence. The technology integrates the latest achievements of computer graphics, computer simulation, artificial intelligence, sensing, display and parallel network processing into a computer-assisted high-tech simulation system. Immersion is a key characteristic of a virtual reality system: the degree to which the user feels present, as the protagonist, in the simulated environment. An ideal simulated environment would make the user unable to tell true from false, devoting himself completely to the computer-created three-dimensional virtual environment where everything looks real, sounds real and moves as in the real world. Interactivity is another key characteristic: the degree to which the user can operate on objects in the simulated environment and the naturalness (including real-time behavior) of the feedback obtained from the environment. For example, the user may grab a virtual object directly with his hand: the hand should feel it is holding something and sense its weight, and the grabbed object in the field of view should move at once with the hand.
US Patent No. 6839041 discloses a virtual reality browsing system and method. Optical encoders are mounted on each rotation axis of the head to measure its rotation. According to the measured head orientation, an image corresponding to the head's viewing angle is generated and shown on a head-mounted display. Because the displayed image corresponds to the viewing angle without delay, the user has the sensation of being immersed in the specified virtual environment. The system predicts the position the head will reach from the measured speed and acceleration of head motion, so the image for the corresponding viewing angle can be generated in advance and the delay eliminated. The browsing system can also create images with a remote camera: the head orientation is computed from the position, speed and acceleration measured by the optical encoders, and the camera is moved to the position matching the viewing angle, with camera motion and image-transfer delays compensated in advance from the head speed and acceleration.
This scheme captures motion with optical encoders, which are bulky (measuring 3 axes at one location requires 3 independent sensors) and troublesome to mount, so it cannot perform whole-body capture of a moving person; the encoders also hinder and limit the motion of the body. Because only the head pose at a fixed point is captured, and only rotation angles can be measured without the translational changes of head position, the scheme can only change the viewing angle and browse; there is no way to carry the whole body into the virtual environment, so the immersion and interactivity of the whole virtual reality system are limited.
US Patent No. 8217995 discloses a cooperative immersive virtual environment combining a spherical camera with motion capture. The system comprises a virtual environment emulator, an optical motion capture system, a spherical camera and a head-mounted display. The emulator produces, from computer-aided design (CAD) data, a stereoscopic simulated interface surrounding the user. The optical motion capture system places markers on the user's head or whole body and mounts several cameras on the surrounding walls or on tripods; according to the captured head tilt and rotation, the picture shown to the user is transformed (zoomed, translated, tilted and so on). The system allows several users to enter and observe the same virtual environment simultaneously, and it can detect collisions between a user's avatar and the environment, for example a person touching a wall in the virtual environment and changing the color of that wall. Through the spherical camera the emulator can switch between simulation and a live remote picture; using the head angles measured by the capture system, the remote picture can likewise be zoomed and translated, giving a sense of remote presence.
This scheme uses an optical motion capture system, whose equipment is expensive. With wall-mounted cameras, capture is restricted to a fixed venue; with tripod-mounted cameras, calibration is tedious, and a large activity range may require the tripods to be moved and the system recalibrated repeatedly. When motion is complex, optical markers are easily confused or occluded, causing errors. Since no dedicated interactive devices such as haptics are used, the scheme mainly addresses vision and cannot give the user a multi-faceted experience of the virtual environment. For example, when the user touches a wall in the virtual environment, the scheme can only display the event on the picture and cannot convey it through touch or similar senses.
Utility model content
The utility model provides an immersive virtual reality system based on motion capture, so that a person in the real world can interact with the virtual environment in an all-round way through vision, touch, force and hearing.
To achieve this goal, the utility model provides an immersive virtual reality system based on motion capture, comprising: a motion capture device, an environment feedback device and a 3D virtual environment emulator, the motion capture device being connected, wirelessly or by wire, to a first interface of the 3D virtual environment emulator, and the 3D virtual environment emulator being connected, wirelessly or by wire, to the environment feedback device through a plurality of signal interfaces;
The motion capture device comprises:
a plurality of motion capture modules, bound respectively to different parts of the body, each motion capture module comprising: a 3-axis MEMS acceleration sensor for measuring an acceleration signal, a 3-axis MEMS angular-rate sensor for measuring an angular velocity signal, and a 3-axis MEMS magnetometer for measuring a geomagnetic signal;
a central processing chip, connected to the plurality of motion capture modules, which receives the acceleration, angular velocity and geomagnetic signals and sends the posture and position information of the human body generated from them to the 3D virtual environment emulator through the first interface;
The environment feedback device comprises: a plurality of different environment feedback devices, each connected to the 3D virtual environment emulator and used to feed video, audio, force and touch control signals back to different parts of the human body.
In one embodiment, the central processing chip is an MCU, DSP or FPGA.
In one embodiment, the number of motion capture modules is 3, bound respectively to the head, the trunk and the hips, or to the head, one of the two upper arms and one of the two forearms.
In one embodiment, the number of motion capture modules is 6, bound respectively to the head, the hips, both thighs and both shanks, or to the head, the trunk, the hips, one of the two upper arms, one of the two forearms and one of the two hands.
In one embodiment, the number of motion capture modules is 9, bound respectively to the head, trunk, hips, both thighs, both shanks, one of the two upper arms and one of the two forearms, or to the head, trunk, hips, both upper arms, both forearms and both hands.
In one embodiment, the number of motion capture modules is 11, bound respectively to the head, trunk, hips, both thighs, both shanks, both feet, one of the two upper arms and one of the two forearms, or to the head, trunk, hips, both thighs, both shanks, both upper arms and both forearms.
In one embodiment, the number of motion capture modules is 15, bound respectively to the head, trunk, hips, both thighs, both shanks, both feet, both upper arms, both forearms and both hands.
In one embodiment, the number of motion capture modules is 17, bound respectively to the head, trunk, hips, both thighs, both shanks, both feet, both upper arms, both forearms, both hands and both shoulders.
In one embodiment, the number of motion capture modules is 18 to 20, bound respectively to the head, trunk, hips, both thighs, both shanks, both feet, both upper arms, both forearms, both hands, both shoulders and 1 to 3 hand-held props.
In one embodiment, the environment feedback device comprises: a 3D helmet or 3D glasses for feeding the video control signal back to the human eye.
In one embodiment, the environment feedback device comprises: force feedback gloves, a force feedback jacket, a force feedback exoskeleton, a controlled treadmill and electrical-stimulation patches for feeding the force control signal back to the human body.
In one embodiment, the environment feedback device comprises: a loudspeaker system for feeding the audio control signal back to the human ear.
In one embodiment, the motion capture module further comprises: a radio-frequency chip for wireless transmission, connected to the central processing chip.
Further, the motion capture module also comprises: a power supply and a voltage conversion circuit.
The beneficial effects of the utility model are as follows. The motion capture modules used in its motion capture system are small and light, and do not affect the motion of the human body when strapped on; the sampling rate is high, so complex, high-speed motion can be captured; the modules can be configured flexibly to capture the motion of a body part (such as the head), the whole body, or hand-held devices; capture is not restricted by the venue and is not affected by occlusion by objects in the real environment; and the cost of the motion capture system is relatively low. Because the utility model introduces the real-world human body (including trunk, limbs and hand-held props) and its motion into the virtual world in real time, maps it onto the corresponding avatar, and feeds the effect of the virtual environment on the avatar back to the real-world person's senses in an appropriate way and in real time, it greatly improves the immersion of virtual reality and increases the interactivity between avatar and virtual environment, giving people a more vivid experience.
Brief description of the drawings
To illustrate the technical solutions of the utility model embodiments or of the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the utility model; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of the immersive virtual reality system based on motion capture in an embodiment of the utility model;
Fig. 2 is a schematic structural diagram of a motion capture module in an embodiment of the utility model;
Fig. 3 is a schematic diagram of the virtual environment of the 3D virtual environment emulator in an embodiment of the utility model;
Fig. 4 is a schematic structural diagram of the speech recognition system of the 3D virtual environment emulator in an embodiment of the utility model.
Detailed description of the embodiments
The technical solutions in the embodiments of the utility model are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the utility model, not all of them. All other embodiments obtained from them by those of ordinary skill in the art without creative effort fall within the protection scope of the utility model.
As shown in Fig. 1, this embodiment provides an immersive virtual reality system based on motion capture, comprising: a motion capture device 101, an environment feedback device 102 and a 3D virtual environment emulator 103. The motion capture device 101 is connected, wirelessly or by cable, to interface 1 of the 3D virtual environment emulator 103, and the 3D virtual environment emulator 103 is connected, wirelessly or by cable, to the environment feedback device 102 through a plurality of signal interfaces (interfaces 2 to 8 in Fig. 1; the number of interfaces in the utility model is not limited to this).
As shown in Fig. 1, the motion capture device 101 comprises a plurality of motion capture modules (modules 1 to 18 in the figure).
In practice the number of motion capture modules can be chosen as circumstances require. In one embodiment, the number of motion capture modules is 3, bound respectively to the head, the trunk and the hips, or to the head, one of the two upper arms (left or right upper arm) and one of the two forearms (left or right forearm).
In one embodiment, the number of motion capture modules is 6, bound respectively to the head, the hips, both thighs (left and right thigh) and both shanks (left and right shank), or to the head, the trunk, the hips, one of the two upper arms, one of the two forearms and one of the two hands (left or right hand).
In one embodiment, the number of motion capture modules is 9, bound respectively to the head, trunk, hips, both thighs, both shanks, one of the two upper arms and one of the two forearms, or to the head, trunk, hips, both upper arms, both forearms and both hands.
In one embodiment, the number of motion capture modules is 11, bound respectively to the head, trunk, hips, both thighs, both shanks, both feet (left and right foot), one of the two upper arms and one of the two forearms, or to the head, trunk, hips, both thighs, both shanks, both upper arms and both forearms.
In one embodiment, the number of motion capture modules is 15, bound respectively to the head, trunk, hips, both thighs, both shanks, both feet, both upper arms, both forearms and both hands.
In one embodiment, the number of motion capture modules is 17, bound respectively to the head, trunk, hips, both thighs, both shanks, both feet, both upper arms, both forearms, both hands and both shoulders.
In one embodiment, the number of motion capture modules is 18 to 20, of which 17 are bound to the human body, respectively to the head, trunk, hips, both thighs, both shanks, both feet, both upper arms, both forearms, both hands and both shoulders, and 1 to 3 are bound to hand-held props.
The cases of 3, 6, 9, 11, 15, 17 and 18 to 20 modules above are simple illustrations given only by way of example; the number and binding positions of the motion capture modules in the utility model are not limited to them.
In practice the motion capture modules can be bound not only to the human body but also to a hand-held prop. So in another embodiment the number of motion capture modules is 18 (modules 1 to 18 in the figure): modules 1 to 17 are bound respectively to the hips, head, chest, two thighs, two shanks, two feet, two shoulders, two upper arms, two forearms and two hands, and module 18 is bound to a hand-held prop.
As shown in Fig. 2, each motion capture module 201 includes: a 3-axis MEMS acceleration sensor, a 3-axis MEMS angular-rate sensor (also called a gyroscope) and a 3-axis MEMS magnetometer (also called an electronic compass).
The 3-axis MEMS acceleration sensor measures the acceleration signal, the 3-axis MEMS angular-rate sensor measures the angular velocity signal, and the 3-axis MEMS magnetometer measures the geomagnetic signal. The acceleration, angular velocity and geomagnetic signals are sent over data transmission bus 1 to the central processing chip 203, which processes them to generate displacement and orientation information.
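For concreteness, the data flow just described can be pictured as a stream of per-module samples delivered to the central processing chip. The following minimal Python sketch shows one plausible shape for such a sample; the field names and units are illustrative assumptions, not definitions from the utility model.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class MocapSample:
    """One reading from a motion capture module (names and units assumed)."""
    module_id: int                      # which body part the module is bound to
    timestamp: float                    # seconds
    accel: Tuple[float, float, float]   # 3-axis MEMS accelerometer, m/s^2
    gyro: Tuple[float, float, float]    # 3-axis MEMS angular-rate sensor, rad/s
    mag: Tuple[float, float, float]     # 3-axis MEMS magnetometer, normalized field

# The central processing chip consumes a stream of MocapSample values over
# data transmission bus 1, fuses them into per-segment orientation and
# displacement, and forwards the result to the 3D virtual environment emulator.
```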
Preferably, the motion capture module 201 may further comprise: a microprocessor 202, connected to the 3-axis MEMS acceleration sensor, the 3-axis MEMS angular-rate sensor and the 3-axis MEMS magnetometer.
The microprocessor 202 receives the acceleration, angular velocity and geomagnetic signals and integrates the angular velocity signal to generate orientation information. The integral formula is
$$\theta_t = \theta_0 + \int_0^t \omega_\tau \, d\tau$$
where $\theta_t$ and $\theta_0$ are spatial orientations and $\omega_t$ is the angular velocity; the orientation information follows from this integral. The integration error is then corrected using the acceleration signal and the geomagnetic signal to generate corrected orientation information, and the geomagnetic signal, the acceleration signal and the corrected orientation information are output to the central processing chip 203.
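As an illustration of this integrate-then-correct scheme, the following is a minimal single-axis Python sketch: gyroscope integration accumulates drift, and an absolute orientation estimate derived from the accelerometer (gravity) and magnetometer (geomagnetic field) pulls it back, in the style of a complementary filter. A real module tracks full 3D orientation (typically with quaternions), and the gain value is an assumed tuning constant, not a parameter from the patent.

```python
def integrate_orientation(theta0, gyro_samples, abs_refs, dt, gain=0.02):
    """Single-axis orientation tracking: theta_t = theta_0 + integral of omega,
    corrected each step toward an absolute reference computed from the
    accelerometer (gravity) and magnetometer (geomagnetic field).

    theta0       -- initial orientation, rad
    gyro_samples -- angular-rate samples, rad/s
    abs_refs     -- per-sample absolute orientation estimates, rad
    gain         -- correction gain (assumed tuning value)
    """
    theta = theta0
    for omega, ref in zip(gyro_samples, abs_refs):
        theta += omega * dt            # rate integration: accurate short-term, drifts
        theta += gain * (ref - theta)  # absolute reference removes the drift
    return theta
```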
The central processing chip 203 is connected to the microprocessor 202 through data transmission bus 1, receives the geomagnetic signal, the acceleration signal and the corrected orientation information output by the microprocessor, and double-integrates the acceleration signal to generate displacement information. The double-integral formula is
$$P = \int_0^T \left( v_0 + \int_0^t a_\tau \, d\tau \right) dt$$
where $P$ is the displacement, $v$ the velocity, $a$ the acceleration, $T$ the end instant, $0$ the initial instant and $t$ an intermediate instant.
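A direct Python transcription of this double integral, using simple Euler steps, might look as follows; it also shows why the corrections described next are necessary, since the position error of an uncorrected double integral grows quadratically with time.

```python
def displacement_from_accel(accel_samples, dt, v0=0.0):
    """Double-integrate acceleration: v_t = v0 + integral of a, P = integral of v_t.
    Uncorrected, sensor bias makes the position error grow roughly as t^2,
    which is why biomechanical and contact constraints are applied afterwards."""
    v, p = v0, 0.0
    for a in accel_samples:
        v += a * dt   # velocity from acceleration
        p += v * dt   # displacement from velocity
    return p
```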
In practice, to make the orientation information obtained above more accurate, the orientation and displacement information must be corrected according to biomechanical constraints and constraints of contact with the outside world. The biomechanical correction formula is $P = P_a + K(P_\theta - P_a)$, where $P_a$ is the displacement of a bone computed by double integration of acceleration, $P_\theta$ is the displacement of the same bone computed from the connectivity of the bones, the spatial orientation of each bone and the spatial position of the root point, and $K$ is a scale factor computed by Kalman filtering or another method, whose size depends on the relative errors of $P_a$ and $P_\theta$. Only the displacement correction derived from bone connectivity is given here; other biomechanical constraints, such as the degrees of freedom permitted at each joint and the permitted range of relative motion between bones, are not elaborated. The correction formula for contact with the outside world is $P' = P + (P_o - P_c)$, where $P'$ is the corrected displacement of a body part, $P$ is the computed displacement of that part before correction, $P_c$ is the computed pre-correction displacement of the body part at the contact point, and $P_o$ is the displacement of the contact point in the outside world. For example, when a person is judged to be standing on one grounded leg, the displacement of the ground at the contact point minus the computed displacement of the grounded sole gives a difference that is added to the computed displacement of every body site, yielding the corrected whole-body displacement. The same correction method applies to whole-body velocity and to other types of contact. The central processing chip 203 is a microprocessor (an MCU, DSP, FPGA or similar hardware device). Processing the acceleration, angular velocity and geomagnetic signals in a microprocessor to generate displacement and orientation information is common practice in this field and not a technical improvement of the present application, so it is not described further here.
In general, the biomechanical constraints of the human body include the connectivity of the joints and the range-of-motion constraints of each joint (the rotational degrees of freedom a joint permits, the relative displacement it permits, and so on). Constraints between the human body and the outside world include contact constraints between the body and known features of the environment such as the ground, walls and steps.
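The two correction formulas above translate directly into code. The sketch below is a one-dimensional illustration under assumed inputs; in the actual system these quantities are 3D vectors per bone, and K would come from a Kalman filter as described.

```python
def fuse_bone_displacement(p_a, p_theta, k):
    """Biomechanical fusion: P = P_a + K * (P_theta - P_a).
    p_a     -- bone displacement from double-integrated acceleration
    p_theta -- displacement of the same bone from skeleton connectivity,
               bone orientations and the root position
    k       -- blending factor, e.g. from Kalman filtering; depends on the
               relative error of the two estimates
    """
    return p_a + k * (p_theta - p_a)

def apply_contact_correction(body_displacements, p_world, p_contact):
    """External-contact correction: P' = P + (P_o - P_c).
    Shift every body site by the offset between the true world position of
    the contact point (p_world, e.g. the floor under the grounded foot) and
    the computed position of the contacting body part (p_contact)."""
    offset = p_world - p_contact
    return [p + offset for p in body_displacements]
```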
The 3D virtual environment emulator 103 is in fact a computer on which simulation software is installed; as hardware it is simply a host machine, and its core is the simulation software it runs. When the computer is switched on and the simulation software runs, it produces a 3D virtual environment as shown in Fig. 3, comprising a virtual scene (a wilderness, a building, and so on), a virtual avatar corresponding to the user, and a series of virtual objects such as articles and animals. These three can interact with one another as in the real world and obey certain real-world physical laws (Newton's laws, universal gravitation, and so on).
The 3D virtual environment emulator 103 generates a virtual avatar corresponding to the user and the 3D virtual environment around that avatar, and maps the received orientation and displacement information onto the avatar, so that the avatar synchronously performs the same actions as the real human body. At the same time, according to the avatar's viewing angle and its interaction with the virtual environment, the emulator sends the corresponding video, audio, force and touch control signals to the environment feedback device 102 through the respective signal interfaces.
Through interface 1 with the motion capture device 101, the 3D virtual environment emulator 103 carries the captured real-world information (actions, motion, limbs, viewing angle, and so on) into the virtual world: when the real human body moves, the avatar in the virtual world synchronously performs the corresponding action. According to the interaction between the avatar and the virtual world, the emulator provides the corresponding control signals to the environment feedback device 102 through its interfaces, giving the real-world person the perceptions the body would have in the virtual world. For example, when the body position and viewing angle change, the image the avatar should see after the change is shown through the 3D helmet/glasses, a naked-eye 3D system or another 3D display device; when the avatar and the virtual environment exert forces on each other, the emulator produces the corresponding control signals and, through the interfaces, drives the matching force feedback devices or electrical-stimulation patches so that the real human body feels the corresponding force. The 3D virtual environment emulator 103 may also include a speech recognition system. As shown in Fig. 4, the speech recognition system comprises speech training and speech recognition: in training, an acoustic model is built from a large amount of speech data with a training algorithm; in recognition, features are extracted from the input speech and matched against the previously built acoustic model to produce the recognition result. A real-world person can thus talk, through a microphone and the sound system, with the virtual world or with other people who have entered the same virtual world. The environment feedback device 102 feeds the perceptions of the body in the virtual world back to the body in the real world; these perceptions include images, sounds and the interaction forces between person and environment. It comprises a plurality of different feedback devices, each connected to the 3D virtual environment emulator and used to feed video, audio, force and touch control signals back to different parts of the human body.
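The patent only outlines the train-then-match structure of Fig. 4. As a toy stand-in for that structure, the sketch below stores one averaged feature template per command and matches input features to the nearest template; real systems use richer features (e.g. MFCCs) and statistical acoustic models (HMMs, neural networks), so this is illustrative only.

```python
import numpy as np

def train_acoustic_model(training_data):
    """'Training': reduce each command's example feature vectors (equal-length
    1-D arrays, placeholder features) to a single mean template."""
    return {label: np.mean(np.stack(examples), axis=0)
            for label, examples in training_data.items()}

def recognize(model, features):
    """'Recognition': return the command whose template is closest to the
    features extracted from the input speech."""
    return min(model, key=lambda label: np.linalg.norm(model[label] - features))
```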
The environment feedback devices mainly include: the 3D helmet or 3D glasses, force feedback gloves, a force feedback jacket, a force feedback exoskeleton, a controlled treadmill, electrical-stimulation patches and a sound system. The force feedback jacket, gloves and exoskeleton are all force feedback devices of similar principle: a driver applies a force to some part of the human body. An electrical-stimulation patch is an electrode patch; patches are stuck on the skin, and applying a voltage between two patches stimulates the nerves or muscles between them. These environment feedback devices are all existing equipment and are not described in detail here.
Image information in the virtual world is fed back to the real-world person through the 3D helmet/glasses or a naked-eye 3D display. Sound information in the virtual world is fed back through the sound system, and the interaction between person and environment in the virtual world is fed back through peripherals such as the force feedback jacket, force feedback gloves, electrical-stimulation patches, force feedback exoskeleton or controlled treadmill. A simple example: when a person grabs an object in the virtual environment, the 3D virtual environment emulator 103 generates a control signal from the position at which the person touches the object in the virtual environment and from the object's own characteristics, and sends it to the force feedback gloves, where the drivers at the corresponding sites apply forces to the corresponding parts of the hand, giving the person the sensation of really having grasped the object.
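A sketch of how such a grab might be turned into a glove command follows. The linear spring force law and the saturation limit are assumptions for illustration; the utility model does not specify the control law.

```python
MAX_FORCE_N = 20.0  # assumed actuator saturation limit, newtons

def glove_force_command(site, stiffness, penetration_depth):
    """Map a virtual contact to a drive command for one glove actuator site:
    force grows with the object's stiffness and with how deeply the virtual
    hand penetrates the object, capped at the assumed actuator limit."""
    force = min(stiffness * penetration_depth, MAX_FORCE_N)
    return {"site": site, "force_n": force}
```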
As shown in Fig. 2, the motion capture module further comprises: a radio-frequency chip for wireless transmission (for example a 2.4 GHz chip), connected to the central processing chip 203 through data transmission bus 2. Through the 2.4 GHz chip, the central processing chip 203 can connect wirelessly to the 3D virtual environment emulator 103 and also to each of the other motion capture modules.
Further, the motion capture module also comprises: a power supply and voltage conversion circuit; as shown in Fig. 2, the power supply and voltage conversion circuit comprises a battery, a power chip and so on.
The immersive virtual reality system of the utility model is described in detail below with a concrete example.
Suppose that in this embodiment the user's avatar only fights external objects with ranged magic and does not fight hand to hand. 17 motion capture modules 201 are bound over the user's whole body, at the head, thoracic vertebrae, hips, shoulders (×2), upper arms (×2), forearms (×2), hands (×2), thighs (×2), shanks (×2) and feet (×2). Each module comprises a 3-axis MEMS acceleration sensor, a 3-axis MEMS angular-rate sensor, a 3-axis MEMS magnetometer and so on. Integrating the angular velocity gives the orientation of the motion capture module 201; meanwhile, measuring the geomagnetic field and the acceleration of gravity gives the module's orientation relative to the directions of gravity and the magnetic field, and this absolute orientation calibrates the integrated orientation to eliminate the integration error of the angular velocity. Each motion capture module sends its acceleration, angular velocity, spatial attitude and other information wirelessly to the central processing chip 203. The chip 203 double-integrates the acceleration signals to obtain the displacement of each body site, and corrects the integration errors of displacement and orientation according to biomechanical constraints and contact constraints with the outside world. Take contact with flat ground as an example: when some body site is the lowest point of the body, its vertical displacement is close to the ground, and its velocity and acceleration are close to 0, that site is judged to be in contact with the ground. Besides the modules bound to the body, a motion capture module can also be mounted on the user's hand-held game prop (such as a magic wand), so the motion capture device 101 measures not only the motion of the human body but also the position and spatial attitude of the hand-held prop.
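That ground-contact test can be written down almost verbatim. The thresholds in the sketch below are assumed values; the patent states only "close to the ground" and "close to 0".

```python
def is_ground_contact(site, all_sites, ground_z,
                      eps_z=0.02, eps_v=0.05, eps_a=0.5):
    """Heuristic from the description: a body site is in ground contact when
    it is the lowest point of the body, its height is near the floor, and its
    vertical velocity and acceleration are near zero.

    site      -- dict with keys 'z' (m), 'v' (m/s), 'a' (m/s^2)
    all_sites -- list of such dicts covering the whole body
    ground_z  -- floor height, m
    """
    lowest = min(s["z"] for s in all_sites)
    return (site["z"] <= lowest + 1e-9          # lowest point of the body
            and abs(site["z"] - ground_z) < eps_z
            and abs(site["v"]) < eps_v
            and abs(site["a"]) < eps_a)
```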
The environment feedback device comprises 3D glasses, a sound system, a number of electrical-stimulation patches stuck on the user's body, a controlled treadmill and so on. The 3D glasses worn by the user display the 3D virtual environment; the sound system feeds back the various sounds of the virtual environment; the electrical-stimulation patches feed back the environment's various stimuli to the avatar; and the controlled treadmill limits the person's actual range of activity while allowing the user to run, walk or jump.
As when running a game on a computer and entering the game environment: once the computer with the virtual-environment simulation software is switched on and the software is running, the 3D virtual environment emulator 103 produces a 3D virtual environment around the user's avatar, containing things that do not exist in the real world, such as Warcraft-style monsters that attack with magic. The avatar can also release magic with a hand or with a prop (such as a magic wand); this is realized mainly through visual effects generated by the simulation software, triggered by the hand making a specific gesture or the mouth speaking a specific incantation. Monsters in the virtual environment may attack the user's avatar, and the avatar may in turn attack the monsters of the virtual world or other players' avatars. Facing an attack from a monster or another player's avatar, the user can dodge or likewise release magic to ward it off. When the user dodges or runs in the real world, the motion capture device makes the avatar in the virtual world perform the corresponding action synchronously. According to the user's movement in the real world, the controlled treadmill moves accordingly, so that the person's range of motion in the real world stays locked in place while the avatar's range of motion in the virtual world is unrestricted. If the user's avatar is hit by a monster or another player's avatar in the virtual world, the 3D virtual environment emulator 103 produces a stimulation signal matching the strength of the attack on the electrical-stimulation patch at the corresponding body site, so the user feels really hit.
The implementation process of the utility model is explained below from another angle, using the example above:
Before the implementation process is described, the similarities and differences between the motion-capture-based immersive virtual reality game of the utility model and an ordinary 3D role-playing game are first explained.
Similarity: in both, the user controls a virtual avatar that acts and experiences in a virtual 3D environment. Differences: first, the immersive 3D virtual reality game is controlled by the user's own actions and speech, just as a real person controls his own body, whereas an ordinary 3D role-playing game is controlled with mouse and keyboard. Second, an ordinary 3D role-playing game shows only a flat image on a display, and the player can only watch the interaction of his avatar with the game environment, not experience it with other senses; an immersive 3D virtual reality game using the technology of the utility model provides the corresponding 3D virtual-environment image as the avatar's viewing angle changes, so the user visually seems to stand inside the virtual environment, and through the environment feedback devices the user can experience the interaction between avatar and virtual environment with other parts of the body and other senses, as if the body were really in the virtual environment.
The implementation process of the utility model, i.e. the implementation of a motion-capture-based 3D virtual reality game, is as follows:
First, the 3D virtual-environment simulation software is designed and developed. This includes the scenario design, avatar design, game-object design (monsters, NPCs and so on), game-prop design, game-skill design and special-effects design familiar from ordinary 3D role-playing game development; it also includes, unlike ordinary 3D games, the mapping between the captured motion parameters and the avatar's motion, the speech recognition system, and the environment feedback control signals and information generated from the interaction between environment and avatar. In this embodiment, the user's avatar experiences a virtual 3D wizard world, and the interaction between avatar and environment is mainly magic. Completing the design of the 3D virtual-environment simulation software is the equivalent of developing the game software.
Then the motion capture system and the environment feedback system are configured, just as a user who has bought game software must configure a computer with mouse, keyboard, display and other interactive devices. In this embodiment, because the whole body enters the 3D virtual environment and a magic wand is held, the motion capture device is configured as a 17-module whole-body capture system plus 1 motion capture module bound to the game prop. Because the avatar in this embodiment only fights the objects of the virtual environment with magic and not hand to hand, the environment feedback system uses, besides the 3D glasses and the sound system, only electrical-stimulation patches to simulate the sensation of the avatar's body being hit by magic. In addition, because the scene of the 3D virtual world is very large while the real-world venue is limited, a controlled treadmill is additionally used to confine the real person's range of activity.
Finally, the motion-capture-based immersive 3D virtual reality game is experienced; this corresponds to actually starting to play once hardware and software are ready. The user puts on the whole-body motion capture modules, enters his body parameters into the central processing chip, performs a few required actions as instructed to calibrate the binding errors of the modules (these operations are needed only at first use), then connects the wiring among the three systems, powers on, and starts the 3D virtual environment emulator to experience a "real" immersive 3D virtual reality world.
The beneficial effects of the utility model are as follows. The motion capture modules used in its motion capture system are small and light, and do not affect the motion of the human body when strapped on; the sampling rate is high, so complex, high-speed motion can be captured; the modules can be configured flexibly to capture the motion of a body part (such as the head), the whole body, or hand-held devices; capture is not restricted by the venue and is not affected by occlusion by objects in the real environment; and the cost of the motion capture system is relatively low. Because the utility model introduces the real-world human body (including trunk, limbs and hand-held props) and its motion into the virtual world in real time, maps it onto the corresponding avatar, and feeds the effect of the virtual environment on the avatar back to the real-world person's senses in an appropriate way and in real time, it greatly improves the immersion of virtual reality and increases the interactivity between avatar and virtual environment, giving people a more vivid experience.
Those skilled in the art should understand that embodiments of the utility model may be provided as a method, a system or a computer program product. Therefore, the utility model may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the utility model may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM and optical storage) containing computer-usable program code.
The utility model is described with reference to flowcharts and/or block diagrams of the method, the device (system) and the computer program product according to embodiments of the utility model. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks, can be realized by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable device produce means for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that realize the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Specific embodiments have been applied in the utility model to expound its principle and implementation; the explanation of the above embodiments is only meant to help understand the method of the utility model and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementation and scope of application according to the idea of the utility model. In summary, the contents of this description should not be construed as limiting the utility model.

Claims (14)

1. An immersive virtual reality system based on motion capture, characterized in that the system comprises: a motion capture device, an environment feedback device and a 3D virtual environment emulator, the motion capture device being connected, wirelessly or by wire, to a first interface of the 3D virtual environment emulator, and the 3D virtual environment emulator being connected, wirelessly or by wire, to the environment feedback device through a plurality of signal interfaces;
The motion capture device comprises:
a plurality of motion capture modules, bound respectively to different parts of the body, each motion capture module comprising: a 3-axis MEMS acceleration sensor for measuring an acceleration signal, a 3-axis MEMS angular-rate sensor for measuring an angular velocity signal, and a 3-axis MEMS magnetometer for measuring a geomagnetic signal;
a central processing chip, connected to the plurality of motion capture modules, which receives the acceleration, angular velocity and geomagnetic signals and sends the posture and position information of the human body generated from them to the 3D virtual environment emulator through the first interface;
The environment feedback device comprises: a plurality of different environment feedback devices, each connected to the 3D virtual environment emulator and used to feed video, audio, force and touch control signals back to different parts of the human body.
2. The system according to claim 1, characterized in that the central processing chip is an MCU, DSP or FPGA.
3. The system according to claim 1, characterized in that the number of motion capture modules is 3, bound respectively to the head, the trunk and the hips, or to the head, one of the two upper arms and one of the two forearms.
4. The system according to claim 1, characterized in that the number of motion capture modules is 6, bound respectively to the head, the hips, both thighs and both shanks, or to the head, the trunk, the hips, one of the two upper arms, one of the two forearms and one of the two hands.
5. The system according to claim 1, characterized in that the number of motion capture modules is 9, bound respectively to the head, trunk, hips, both thighs, both shanks, one of the two upper arms and one of the two forearms, or to the head, trunk, hips, both upper arms, both forearms and both hands.
6. The system according to claim 1, characterized in that the number of motion capture modules is 11, bound respectively to the head, trunk, hips, both thighs, both shanks, both feet, one of the two upper arms and one of the two forearms, or to the head, trunk, hips, both thighs, both shanks, both upper arms and both forearms.
7. The system according to claim 1, characterized in that the number of motion capture modules is 15, bound respectively to the head, trunk, hips, both thighs, both shanks, both feet, both upper arms, both forearms and both hands.
8. The system according to claim 1, characterized in that the number of motion capture modules is 17, bound respectively to the head, trunk, hips, both thighs, both shanks, both feet, both upper arms, both forearms, both hands and both shoulders.
9. The system according to claim 1, characterized in that the number of motion capture modules is 18 to 20, bound respectively to the head, trunk, hips, both thighs, both shanks, both feet, both upper arms, both forearms, both hands, both shoulders and 1 to 3 hand-held props.
10. The system according to claim 1, characterized in that the environment feedback device comprises: a 3D helmet or 3D glasses for feeding the video control signal back to the human eye.
11. The system according to claim 1, characterized in that the environment feedback device comprises: force feedback gloves, a force feedback jacket, a force feedback exoskeleton, a controlled treadmill and electrical-stimulation patches for feeding the force control signal back to the human body.
12. The system according to claim 1, characterized in that the environment feedback device comprises: a loudspeaker system for feeding the audio control signal back to the human ear.
13. The system according to claim 1, characterized in that the motion capture module further comprises: a radio-frequency chip for wireless transmission, connected to the central processing chip.
14. The system according to claim 1, characterized in that the motion capture module further comprises: a power supply and a voltage conversion circuit.
CN201320558125.7U 2013-09-09 2013-09-09 Immersion type virtual reality system based on movement capture Expired - Lifetime CN203405772U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201320558125.7U CN203405772U (en) 2013-09-09 2013-09-09 Immersion type virtual reality system based on movement capture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201320558125.7U CN203405772U (en) 2013-09-09 2013-09-09 Immersion type virtual reality system based on movement capture

Publications (1)

Publication Number Publication Date
CN203405772U true CN203405772U (en) 2014-01-22

Family

ID=49941702

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201320558125.7U Expired - Lifetime CN203405772U (en) 2013-09-09 2013-09-09 Immersion type virtual reality system based on movement capture

Country Status (1)

Country Link
CN (1) CN203405772U (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015165162A1 (en) * 2014-04-29 2015-11-05 诺力科技有限公司 Machine movement sensing method and assemblies, and movement sensing system
CN109388142A (en) * 2015-04-30 2019-02-26 广东虚拟现实科技有限公司 Method and system for virtual reality roaming control based on inertial sensors
CN107919066A (en) * 2016-10-10 2018-04-17 北京七展国际数字科技有限公司 Immersive display system and method with a hyperbolic curved screen
CN106485973A (en) * 2016-10-21 2017-03-08 上海申电教育培训有限公司 Electric power skills training platform based on virtual reality technology
CN106843483A (en) * 2017-01-20 2017-06-13 深圳市京华信息技术有限公司 Virtual reality device and control method thereof
WO2018177075A1 (en) * 2017-03-31 2018-10-04 腾讯科技(深圳)有限公司 Method and apparatus for simulating human body in virtual reality, storage medium, and electronic apparatus
CN107469315A (en) * 2017-07-24 2017-12-15 烟台中飞海装科技有限公司 Fighting training system
CN109419604A (en) * 2017-08-29 2019-03-05 深圳市掌网科技股份有限公司 Lower limb rehabilitation training method and system based on virtual reality
US11037601B2 2017-12-15 2021-06-15 Snap Inc. Spherical video editing
US11380362B2 2017-12-15 2022-07-05 Snap Inc. Spherical video editing
CN110495166A (en) * 2017-12-15 2019-11-22 斯纳普公司 Spherical video editing
DE102018203433A1 2018-03-07 2019-09-12 Bayerische Motoren Werke Aktiengesellschaft Method for determining a comfort state of at least one vehicle occupant of a vehicle
CN109618183A (en) * 2018-11-29 2019-04-12 北京字节跳动网络技术有限公司 Video special-effect adding method and apparatus, terminal device and storage medium
CN110108159A (en) * 2019-06-03 2019-08-09 武汉灏存科技有限公司 Simulation system and method for large-space multi-person interaction
CN110108159B (en) * 2019-06-03 2024-05-17 武汉灏存科技有限公司 Simulation system and method for large-space multi-person interaction
CN110515466A (en) * 2019-08-30 2019-11-29 贵州电网有限责任公司 Motion capture system based on virtual reality scenarios

Similar Documents

Publication Publication Date Title
CN203405772U (en) Immersion type virtual reality system based on movement capture
CN103488291B (en) Immersion virtual reality system based on motion capture
CN106648116B (en) Virtual reality integrated system based on motion capture
JP6973388B2 (en) Information processing equipment, information processing methods and programs
US20090046056A1 (en) Human motion tracking device
CN107533233B (en) System and method for augmented reality
CN101579238B (en) Human motion capture three-dimensional playback system and method thereof
CN103759739B (en) Multimode motion measurement and analysis system
US20210349529A1 (en) Avatar tracking and rendering in virtual reality
CN206497423U (en) Virtual reality integrated system with an inertial motion capture device
KR20210058958A (en) Systems and methods for generating complementary data for visual display
CN201431466Y (en) Human motion capture and three-dimensional representation system
CN104197987A (en) Combined-type motion capturing system
CN106601062A (en) Interactive method for simulating mine disaster escape training
RU2107328C1 (en) Method for tracing and displaying of position and orientation of user in three-dimensional space and device which implements said method
JP2001504605A (en) Method for tracking and displaying a user's location and orientation in space, method for presenting a virtual environment to a user, and systems for implementing these methods
CN106843484B (en) Method for fusing indoor positioning data and motion capture data
CN106873787A (en) Gesture interaction system and method for virtual training teaching
CN105892626A (en) Lower limb movement simulation control device used in virtual reality environment
US20190344449A1 (en) Apparatus Control Systems and Method
US20180216959A1 (en) A Combined Motion Capture System
KR102162922B1 (en) Virtual reality-based hand rehabilitation system with haptic feedback
Dunbar et al. Augmenting human spatial navigation via sensory substitution
US11887259B2 (en) Method, system, and apparatus for full-body tracking with magnetic fields in virtual reality and augmented reality applications
CN107243147A (en) Boxing training virtual reality system based on motion-sensing sensors and implementation method thereof

Legal Events

Date Code Title Description
C14 Grant of patent or utility model
GR01 Patent grant
C53 Correction of patent of invention or patent application
CB03 Change of inventor or designer information

Inventor after: Dai Ruoli

Inventor after: Liu Haoyang

Inventor after: Li Longwei

Inventor after: Chen Jinzhou

Inventor before: Liu Haoyang

Inventor before: Dai Ruoli

Inventor before: Li Longwei

Inventor before: Chen Jinzhou

COR Change of bibliographic data

Free format text: CORRECT: INVENTOR; FROM: LIU HAOYANG DAI RUOLI LI LONGWEI CHEN JINZHOU TO: DAI RUOLI LIU HAOYANG LI LONGWEI CHEN JINZHOU

CX01 Expiry of patent term

Granted publication date: 20140122

CX01 Expiry of patent term