CN109144273A - A kind of virtual fire-fighting experiential method based on VR technology - Google Patents
- Publication number
- Publication number: CN109144273A (application CN201811055005.9A)
- Authority
- CN
- China
- Prior art keywords
- hand
- node
- frame
- model
- finger
- Prior art date
- Legal status (an assumption by Google, not a legal conclusion): Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/02—Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/012—Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
Abstract
The invention discloses a virtual fire-fighting experience method based on VR technology, using a virtual fire-fighting experience device comprising a head-mounted display device, a gesture recognition component, and a computer. The method proceeds as follows: the computer first transmits the fire-fighting picture to the head-mounted display device to present a virtual fire-fighting scene; the experiencing person interacts with virtual objects in the scene through a handle or the gesture recognition component; the gesture recognition component captures image information of the hand and converts it into position information for each node of the hand in the virtual scene; the position information of each hand node is corrected by an algorithm that reduces hand-model jitter; and the computer obtains feedback information from the head-mounted display, the handle, and the gesture recognition component in real time. The invention reduces the cost of fire safety education, improves its safety, and increases its interest.
Description
Technical field
The present invention relates to the field of virtual fire-fighting experience, and in particular to a virtual fire-fighting experience method based on VR technology.
Background technique
In recent years, fire safety education has been a focus of attention across society. Teaching methods built on textbooks, educational videos, and safety drills are all rather dull and fail to achieve the ideal teaching effect. Fire safety experience halls built on this basis improve the interest of the experience, but constructing such a hall requires a large amount of equipment and considerable floor space, so the overall cost is high; and during escape drills there is a possibility of safety accidents such as trampling, so certain risks remain.
With the development of computer hardware, virtual reality technology has matured, and applying it to education has become a popular research direction. Through virtual reality, a virtual teaching scene is simulated in which key knowledge points are explained in detail, theories and concepts are summarized, and learners are guided at the sensory level. Learning relevant knowledge and skills through active interaction, guidance, and exploration can stimulate learners' interest and sense of creativity, allow them to give full play to their imagination, and achieve the teaching purpose.
At present, when a gesture recognition component based on infrared cameras identifies gestures and transmits gesture data under poor ambient light, or while other infrared devices are operating nearby, errors in the gesture data (which amplify the natural tremor of the human hand) cause the hand model to jitter after the data are received, degrading the user's operating experience.
Summary of the invention
The invention discloses a virtual fire-fighting experience method based on VR technology. Its purpose is to use advanced virtual reality technology to reduce the cost of fire safety education, improve its safety, and increase its interest.
To achieve the above purpose, the following technical scheme is adopted:
A virtual fire-fighting experience device, comprising:
1) A head-mounted display device, comprising a head-mounted display and a handle that interacts wirelessly with it; both the head-mounted display and the handle contain built-in positioning modules. The two positioning modules form a positioning system that simultaneously tracks the head-mounted display and the handle in space. The device meets the hardware requirements for displaying a virtual scene and for handle interaction. An HTC Vive head-mounted display device may be used.
2) A gesture recognition component with a built-in positioning module. A Leap Motion controller or a Fingo gesture recognition component may be used. The Fingo component tracks the user's gestures with infrared cameras and identifies them in real time through a gesture recognition algorithm; by defining different gestures, the user can control the movement of a virtual character in the virtual scene and interact with virtual objects.
3) A computer connected to the head-mounted display device and the gesture recognition component.
The computer first transmits the fire-fighting picture to the head-mounted display device to present the virtual fire-fighting scene. The experiencing person interacts with virtual objects in the scene through the handle or the gesture recognition component, and the computer obtains position information, image information, key feedback information, and so on from the head-mounted display, the handle, and the gesture recognition component in real time.
The virtual fire-fighting experience device uses virtual reality technology; it teaches a fairly complete set of fire safety knowledge points; it offers multiple scenes to experience; it supports multiple interaction modes, including handle interaction and bare-hand gesture interaction; and it evaluates and analyzes the user's practice results to help the user better master fire safety knowledge.
The virtual fire-fighting scenes include a fire safety knowledge learning scene, a fire-fighting equipment practice scene, a fire escape scene, a safety hazard inspection scene, and an evaluation scene.
The fire safety knowledge learning scene is used for learning fire safety knowledge, including everyday fire safety common sense, recognition of fire safety signs, and knowledge of how to use fire-fighting equipment.
The fire-fighting equipment practice scene is used to learn and practice the use of fire-fighting equipment, including fire extinguishers, safety ropes, and fire blankets.
The fire escape scene simulates various living scenes after a fire breaks out. Using the fire safety knowledge already learned, the user finds a safe escape route, aided by tools such as fire extinguishers, to escape. The scenes include common settings such as classrooms, residences, laboratories, and bedrooms, and contain the necessary high-precision models such as furniture and desks, as well as flame and other effects.
The safety hazard inspection scene tests how well the user has learned everyday fire prevention knowledge. Ten safety hazards are set up; the user passes after finding all ten within a certain time, otherwise a failure is prompted along with the locations of the hazards that were not found.
In the evaluation scene, after an escape succeeds or fails, the escape operation is evaluated: the evaluation lists the correct and incorrect actions and offers a chance to practice the incorrect actions again.
The virtual scenes are modeled with 3DMax software, which can build finely detailed models that simulate real objects such as furniture and fire extinguishers; displayed in the virtual scene, they give the user an immersive feeling.
The virtual scenes are built and developed with the Unity 3D game engine, which can realistically simulate flame effects, collision effects, and so on. System development of the VR project uses the SteamVR development kit, and logic scripts are written in C#; operations such as movement, touching objects, and trigger events in the virtual scene can be implemented, making the virtual scene more realistic.
A virtual fire-fighting experience method based on VR technology uses the virtual fire-fighting experience device; the specific method includes:
The computer first transmits the fire-fighting picture to the head-mounted display device to present the virtual fire-fighting scene. The experiencing person interacts with virtual objects in the scene through the handle or the gesture recognition component. The gesture recognition component captures image information of the hand and converts it into position information for each node of the hand in the virtual scene; the position information of each node is corrected by the algorithm that reduces hand-model jitter; and the computer obtains feedback information from the head-mounted display, the handle, and the gesture recognition component in real time.
The feedback information includes position information, image information, key feedback information, and so on.
In bare-hand control scenes that use the gesture recognition component, the gesture recognition algorithm is optimized: an algorithm that reduces hand-model jitter is designed, as follows.
When a gesture recognition component based on infrared cameras identifies gestures and transmits gesture data under poor ambient light, or while other infrared devices are operating nearby, errors in the gesture data (which amplify the natural tremor of the human hand) cause the hand model to jitter after the data are received. This falls into three situations. One: the entire hand model translates within a certain range, with an amplitude that clearly does not match the actual motion of the person's hand. Two: most of the hand (such as the palm) translates within a small range, but individual fingers shift rapidly (similar to a finger clicking a mouse). Three: data are lost and the virtual hand model disappears, which lowers the gesture recognition rate and harms the user experience. To address this, a reasonable data-smoothing algorithm, the hand-model jitter reduction algorithm, is designed.
Here, "hand" refers to the model of the hand in the virtual scene.
The hand-model jitter reduction algorithm comprises:
The position information of the hand covers 21 nodes: each of the five fingers has 4 nodes, of which the fingertip is 1 node and the 3 joints correspond to three nodes, and the wrist is 1 independent node. The gesture recognition component can establish a three-dimensional coordinate system for the positions of the 21 nodes in the virtual fire-fighting scene.
Situation one: let the thickness of the hand model be 1A (units) and the thickness of the real hand be 1B (units). If the entire hand model translates within a range of 1A to 2A units while the real hand moves with an amplitude within 0 to 0.5B, then the hand model is judged to clearly not match the actual motion of the person's hand (for example, the real hand is stationary while the hand model is in a jittering state in the virtual scene).
1) Judging that the hand model clearly does not match the actual hand motion, specifically:
The hand model has 21 nodes. Take one of them and record its current position as P1, stored in an array; the position in the next frame is P2, and so on, until the data of M frames are stored in the array. If the relative distance t between any two points in the array satisfies t < λ, and all 21 nodes meet this condition, then the hand model is judged to be clearly inconsistent with the actual motion of the person's hand.
Here M is a value set according to the gesture recognition component, t is the relative distance between any two points in the array, and λ is the set distance between the farthest point and the center point used to judge jitter.
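A minimal sketch of this jitter test, assuming per-frame snapshots of all 21 node positions are available (Python for brevity; the parameter names M and λ follow the text, everything else is illustrative):

```python
import itertools
import math

def is_jittering(frames, lam):
    """frames: M frames, each a list of 21 (x, y, z) node positions.
    Returns True when, for every node, the distance t between any two of
    its M recorded positions satisfies t < lam, i.e. the whole hand model
    hovers inside a small region while the real hand is effectively still."""
    for n in range(len(frames[0])):
        positions = [frame[n] for frame in frames]
        for a, b in itertools.combinations(positions, 2):
            if math.dist(a, b) >= lam:
                return False  # at least one node genuinely moved
    return True
```
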
2) Processing method:
Take one node of the hand model and record its position over frames 1 to M: the node position in frame 1 is P1 (P1x, P1y, P1z), in frame 2 it is P2 (P2x, P2y, P2z), and so on, up to PM (PMx, PMy, PMz) in frame M. A stable point P0 (P0x, P0y, P0z) is given by the average over the M frames: P0x = (P1x + P2x + … + PMx)/M, P0y = (P1y + P2y + … + PMy)/M, P0z = (P1z + P2z + … + PMz)/M, which determines the coordinates of the stable point P0.
When each new frame arrives, P0 + θ(P − P0) is used as the current frame position after jitter reduction and is displayed by the system, where P is the current-frame position coordinate of the node received from the gesture recognition component, θ is the compression ratio applied to the jitter displacement, and θ(P − P0) is the compressed node displacement.
All 21 nodes are processed as described in step 2).
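The correction P0 + θ(P − P0) for a single node can be sketched as follows (illustrative Python; the real system would apply this per node, per frame):

```python
def smooth_node(history, current, theta):
    """history: the node's last M recorded positions; current: the raw
    position P received from the gesture recognition component; theta:
    compression ratio in (0, 1). The stable point P0 is the per-axis mean
    of the M frames, and the displayed position is P0 + theta * (P - P0)."""
    m = len(history)
    p0 = tuple(sum(p[axis] for p in history) / m for axis in range(3))
    return tuple(p0[axis] + theta * (current[axis] - p0[axis])
                 for axis in range(3))
```

With theta = 0.5, a raw 1-unit jump away from a steady history is displayed as a 0.5-unit move, halving the visible jitter amplitude.
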
Situation two: when a finger of the real hand is occluded and the real hand is relatively still, the palm of the model is static while a finger of the model oscillates back and forth.
A) Judging that a finger of the real hand is occluded and the real hand is relatively still, specifically:
The hand model has 21 nodes. Take one node and record its first-frame position as Q1, stored in an array; the next frame's position is Q2, and so on, until the data QN of N frames are stored. Over the position information Q1 to QN of the first N frames, for a given node the distance s between its positions in any two frames satisfies s < γ. If 5 to 20 of the 21 nodes satisfy s < γ, the real hand is judged to be relatively still.
Here N is a value set according to the gesture recognition component, s is the distance between the node's positions in any two of the N frames, and γ is the set threshold.
B) When a finger of the real hand is occluded and the real hand is relatively still, each node on a finger of the model alternates between two points within 1 to 3 frames; the fingertip node is used to judge whether a finger of the model is oscillating back and forth.
Take the fingertip node of each finger of the hand model and record its position data over L frames in an array; compute the distance between each pair of positions in the array. If within the L frames the distance between any one frame's position and another frame's position satisfies u < ε or u > λm, then the finger of the model is oscillating back and forth; the two positions farthest apart are taken as the two alternation points A and B.
Here L is a value set according to the gesture recognition component; u is the distance between any two of the node's L frames, ε is the set threshold, and λm is the set jitter distance threshold for the node.
All 21 nodes are processed as described in step B).
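Step B) for one fingertip node might look like the following sketch (illustrative Python; the "u < ε or u > λm" condition is reproduced exactly as the text states it, and all names are assumptions):

```python
import itertools
import math

def detect_reciprocal_shake(positions, eps, lam_m):
    """positions: one fingertip node's last L recorded positions.
    Flags reciprocal shake when some pair of frames is nearly coincident
    (u < eps) or farther apart than the jitter threshold (u > lam_m);
    returns the two farthest-apart positions as the alternation points
    (A, B), or None when no shake is flagged."""
    flagged = False
    far_a, far_b, far_u = None, None, -1.0
    for a, b in itertools.combinations(positions, 2):
        u = math.dist(a, b)
        if u < eps or u > lam_m:
            flagged = True
        if u > far_u:
            far_a, far_b, far_u = a, b, u
    return (far_a, far_b) if flagged else None
```
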
C) The back-and-forth oscillation of the model's finger is corrected with a smoothing algorithm, specifically:
a. Given the two endpoints of the finger's oscillation, the two alternation points A and B, let U be the vector from the root node of that finger of the model's hand to the fingertip of the same finger;
b. From the oscillation B→A, obtain the vector BA; the angle is computed from cos θ = (U · BA) / (|U| |BA|);
c. If the angle θ between vector BA and vector U is less than 90 degrees, A is the correct stable point; if the angle θ is greater than 90 degrees, B is the correct stable point;
d. The data at the correct stable point are taken as the finger data of the model's hand.
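Steps a–c reduce to a sign test, sketched below (illustrative Python; the patent's "vector U × vector BA" notation is read here as a dot product, since the formula divides by |U||BA| to obtain cos θ):

```python
def pick_stable_point(a, b, u):
    """a, b: the two alternation points; u: vector from the finger's root
    node to its fingertip. cos(theta) between BA = A - B and U is positive
    exactly when theta < 90 degrees, in which case A is the correct stable
    point; otherwise B is. Assumes A != B so |BA| > 0."""
    ba = tuple(a[i] - b[i] for i in range(3))
    dot = sum(u[i] * ba[i] for i in range(3))
    # The sign of the dot product decides theta < 90 vs theta > 90;
    # dividing by |U||BA| never changes the sign.
    return a if dot > 0 else b
```
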
Situation three: when the gesture recognition component identifies a real gesture and the real hand moves with large amplitude, the recognition environment degrades; once the hand moves beyond the recognition range, the hand data of the entire model may be absent, causing the model to disappear. If at that moment the user is holding an object with the virtual hand in the virtual scene, the disappearance of the hand model causes the object to drop, which harms the user experience.
Judgment method: the system receives no hand data for the model.
Processing method: record the M frames of data before the loss, restore the last frame as the current data, and show a prompt in the virtual scene (for example, "hand data lost"). If the model's hand data are detected again, gestures are simulated from the new signal.
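The loss-handling step can be sketched as a small frame buffer (illustrative Python; names are assumptions, and a Unity implementation would surface the prompt in the scene UI):

```python
from collections import deque

class HandDataBuffer:
    """Keeps the last M good frames. When the sensor reports no hand data
    (frame is None), the most recent good frame is re-displayed and a
    prompt such as 'hand data lost' is raised; when data return, normal
    tracking resumes from the new signal."""
    def __init__(self, m):
        self.frames = deque(maxlen=m)
        self.prompt = None

    def update(self, frame):
        if frame is None:                       # hand data lost
            self.prompt = "hand data lost"
            return self.frames[-1] if self.frames else None
        self.prompt = None                      # tracking recovered
        self.frames.append(frame)
        return frame
```
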
Compared with the prior art, the present invention has the following advantages:
By using the hand-model jitter reduction algorithm, the present invention optimizes the gesture recognition algorithm in bare-hand control scenes and corrects the hand nodes in the virtual scene, so that the computer obtains the feedback information of the gesture recognition component more accurately and the recognition efficiency of the component improves. The user thus enjoys better interactivity and a better sense of immersion in the virtual fire-fighting experience, which reduces the cost of fire safety education, improves its safety, and increases its interest.
Description of the drawings
Fig. 1 is a flow diagram of the virtual fire-fighting experience method based on VR technology of the present invention;
Fig. 2 is a structural schematic diagram of the 21 nodes of the human hand model;
Fig. 3 is a schematic diagram of the hand model clearly not matching the actual motion of the person's hand;
Fig. 4 is a schematic diagram of the palm of the model being static while a finger of the model oscillates back and forth;
Fig. 5 is a schematic diagram of the vector U from the root node of a finger of the model's hand to the fingertip of the same finger;
Fig. 6 is a schematic diagram of the angle θ between vector BA and vector U being less than 90 degrees;
Fig. 7 is a schematic diagram of the angle θ between vector BA and vector U being greater than 90 degrees.
Specific embodiment
As shown in Fig. 1, the virtual fire-fighting experience method based on VR technology of the present invention uses the virtual fire-fighting experience device, which comprises:
1) A head-mounted display device, comprising a head-mounted display and a handle that interacts wirelessly with it; both contain built-in positioning modules. The two positioning modules form a positioning system that simultaneously tracks the head-mounted display and the handle in space. The device meets the hardware requirements for displaying a virtual scene and for handle interaction. An HTC Vive head-mounted display device may be used.
2) A gesture recognition component with a built-in positioning module; a Fingo gesture recognition component is used. The Fingo component tracks the user's gestures with infrared cameras and identifies them in real time through a gesture recognition algorithm; by defining different gestures, the user can control the movement of a virtual character in the virtual scene and interact with virtual objects.
3) A computer connected to the head-mounted display device and the gesture recognition component.
The computer first transmits the fire-fighting picture to the head-mounted display device to present the virtual fire-fighting scene. The experiencing person interacts with virtual objects in the scene through the handle or the gesture recognition component, and the computer obtains position information, image information, key feedback information, and so on from the head-mounted display, the handle, and the gesture recognition component in real time.
The virtual fire-fighting experience device uses virtual reality technology; it teaches a fairly complete set of fire safety knowledge points; it offers multiple scenes to experience; it supports multiple interaction modes, including handle interaction and bare-hand gesture interaction; and it evaluates and analyzes the user's practice results to help the user better master fire safety knowledge.
The virtual fire-fighting scenes include a fire safety knowledge learning scene, a fire-fighting equipment practice scene, a fire escape scene, a safety hazard inspection scene, and an evaluation scene.
The fire safety knowledge learning scene is used for learning fire safety knowledge, including everyday fire safety common sense, recognition of fire safety signs, and knowledge of how to use fire-fighting equipment.
The fire-fighting equipment practice scene is used to learn and practice the use of fire-fighting equipment, including fire extinguishers, safety ropes, and fire blankets.
The fire escape scene simulates various living scenes after a fire breaks out. Using the fire safety knowledge already learned, the user finds a safe escape route, aided by tools such as fire extinguishers, to escape. The scenes include common settings such as classrooms, residences, laboratories, and bedrooms, and contain the necessary high-precision models such as furniture and desks, as well as flame and other effects.
The safety hazard inspection scene tests how well the user has learned everyday fire prevention knowledge. Ten safety hazards are set up; the user passes after finding all ten within a certain time, otherwise a failure is prompted along with the locations of the hazards that were not found.
In the evaluation scene, after an escape succeeds or fails, the escape operation is evaluated: the evaluation lists the correct and incorrect actions and offers a chance to practice the incorrect actions again.
The virtual scenes are modeled with 3DMax software, which can build finely detailed models that simulate real objects such as furniture and fire extinguishers; displayed in the virtual scene, they give the user an immersive feeling.
The virtual scenes are built and developed with the Unity 3D game engine, which can realistically simulate flame effects, collision effects, and so on. System development of the VR project uses the SteamVR development kit, and logic scripts are written in C#; operations such as movement, touching objects, and trigger events in the virtual scene can be implemented, making the virtual scene more realistic.
A virtual fire-fighting experience method based on VR technology uses the virtual fire-fighting experience device; the specific method includes:
The computer first transmits the fire-fighting picture to the head-mounted display device to present the virtual fire-fighting scene. The experiencing person interacts with virtual objects in the scene through the handle or the gesture recognition component. The gesture recognition component captures image information of the hand and converts it into position information for each node of the hand in the virtual scene; the position information of each node is corrected by the algorithm that reduces hand-model jitter; and the computer obtains feedback information from the head-mounted display, the handle, and the gesture recognition component in real time. The feedback information includes position information, image information, key feedback information, and so on. Here, "hand" refers to the model of the hand in the virtual scene.
The hand-model jitter reduction algorithm comprises:
As shown in Fig. 2, the position information of the hand covers 21 nodes: each of the five fingers has 4 nodes, of which the fingertip is 1 node and the 3 joints correspond to three nodes, and the wrist is 1 independent node. Specifically, the thumb fingertip is node 1 and the 3 joints of the thumb are nodes 2, 3, and 4; the index fingertip is node 5 and the 3 joints of the index finger are nodes 6, 7, and 8; the middle fingertip is node 9 and the 3 joints of the middle finger are nodes 10, 11, and 12; the ring fingertip is node 13 and the 3 joints of the ring finger are nodes 14, 15, and 16; the little fingertip is node 17 and the 3 joints of the little finger are nodes 18, 19, and 20; and the wrist is node 21. The gesture recognition component can establish a three-dimensional coordinate system for the positions of the 21 nodes in the virtual fire-fighting scene.
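The explicit numbering above (thumb tip = node 1 through wrist = node 21) follows a simple pattern that can be checked in a few lines (illustrative Python; the function name and index convention are assumptions):

```python
def node_number(finger, part):
    """finger: 0=thumb, 1=index, 2=middle, 3=ring, 4=little;
    part: 0 for the fingertip, 1..3 for the three joints.
    Returns the 1-based node number used in Fig. 2; the wrist is
    handled separately as node 21."""
    return finger * 4 + part + 1

WRIST_NODE = 21
```
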
Situation one: let the thickness of the hand model be 1A (units) and the thickness of the real hand be 1B (units). If the entire hand model translates within a range of 1A to 2A units while the real hand moves with an amplitude within 0 to 0.5B, then the hand model is judged to clearly not match the actual motion of the person's hand (for example, the real hand is stationary while the hand model is in a jittering state in the virtual scene). As shown in Fig. 3, the entire hand model translates within a certain range, with an amplitude that clearly does not match the actual hand motion, moving back and forth as the arrow indicates.
Judging that the hand model clearly does not match the actual hand motion, specifically:
The hand model has 21 nodes. Take one of them and record its current position as P1, stored in an array; the next frame's position is P2, and so on, until the data of M frames are stored (here M can be set to different values according to the actual gesture recognition component). If the relative distance t between any two points in the array satisfies t < λ, and all 21 nodes meet this condition, then the hand model is judged to be clearly inconsistent with the actual motion of the person's hand.
Here M is a value set according to the gesture recognition component, t is the relative distance between any two points in the array, and λ is the set distance between the farthest point and the center point used to judge jitter.
Processing method:
In this case the displacement of the hand is small. Take one node and record its position over M frames: the node position in frame 1 is P1 (P1x, P1y, P1z), in frame 2 it is P2 (P2x, P2y, P2z), and so on, up to PM (PMx, PMy, PMz) in frame M. A stable point P0 (P0x, P0y, P0z) can be given by the average over the M frames: P0x = (P1x + P2x + … + PMx)/M, P0y = (P1y + P2y + … + PMy)/M, P0z = (P1z + P2z + … + PMz)/M, which determines the coordinates of the stable point P0.
When each new frame arrives, P0 + θ(P − P0) is used as the current frame position after jitter reduction and is displayed by the system, where P is the current-frame position coordinate of the node received from the gesture recognition component, P0 is the stable position, θ is the compression ratio applied to the jitter displacement, and θ(P − P0) is the compressed node displacement. The effect is to reduce the jitter amplitude so that the node moves only slightly around the stable point.
All 21 nodes are processed in this way.
Situation two: when a finger of the real hand is occluded and the real hand is relatively still, the palm of the model is static while a finger of the model oscillates back and forth. As shown in Fig. 4, when part of the hand is occluded, the gesture recognition module may identify an incorrect hand position; near the critical case, the virtual hand model shows a finger rapidly and sharply deflecting toward the palm and then returning to the correct gesture, in a back-and-forth motion.
Judging that a finger of the real hand is occluded and the real hand is relatively still, specifically:
The hand model has 21 nodes. Take one node and record its current position as Q1, stored in an array; the next frame's position is Q2, and so on, until the data QN of N frames are stored. Over the position information Q1 to QN of the first N frames, for a given node the distance s between its positions in any two frames satisfies s < γ. If 5 to 20 of the 21 nodes satisfy s < γ, the real hand is judged to be relatively still.
Here N is a value set according to the gesture recognition component, s is the distance between the node's positions in any two of the N frames, and γ is the set threshold for judging that a finger of the model oscillates back and forth.
When a finger of the real hand is occluded and the real hand is relatively stationary, each node on a finger of the model hand oscillates between two reciprocating points within 1 to 3 frames; the fingertip nodes are used to judge whether a finger of the model is shaking back and forth. For example, with the correct position P0 and the erroneous jittered position P0′, the node stays near P0 for several frames, jumps to P0′, stays there for several frames, then jumps back near P0, moving back and forth between the points P0 and P0′. By recording the position of each node in every frame, it can be determined whether the node undergoes such a reciprocating motion, i.e. whether this back-and-forth motion recurs within a few frames.
In this scenario, the correct node position is usually the stable point of the fingertip position in the gesture state where the fingertip is close to the palm.
Take the node at the fingertip of each finger of the model hand; for one such node, record its position data over L frames in an array A[], and compute the distance between every pair of position data in A[]; if the distance between the position of any frame in the L frames and that of another frame satisfies u < ε or u > λm, the finger of the model is judged to be shaking back and forth, and the two positions with the maximum distance are taken as the two reciprocating points A and B;
where L is a value set according to the gesture recognition component, u is the distance between any two of the L frames of the same node, ε is a set threshold, and λm is the set jitter-distance threshold of the node; array A[] records the position data of a node of the hand in the virtual scene over a period of time.
The back-and-forth shaking of the finger of the model is corrected with a smoothing algorithm, specifically comprising:
1. The two endpoints of the finger's reciprocation are the two reciprocating points A and B; U is the vector pointing from the root node of the finger of the model hand to the fingertip of the same finger, as shown in Fig. 5.
2. The vector BA, pointing from reciprocating point B to A, is obtained; the angle is computed from cos θ = (U · BA) / (|U| · |BA|).
3. If the angle θ between vector BA and vector U is less than 90 degrees, as shown in Fig. 6, A is the correct stable point; if the angle θ is greater than 90 degrees, as shown in Fig. 7, B is the correct stable point.
4. The data at the correct stable point are set as the finger data of the hand of the model.
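The stable-point choice in steps 1 to 4 reduces to the sign of the dot product U · BA (cos θ > 0 exactly when θ < 90 degrees); a minimal Python sketch, with illustrative coordinates of our own:

```python
def correct_stable_point(a, b, root, tip):
    """Pick the correct stable point among reciprocating points A and B.
    U points from the finger's root node to its fingertip; BA points from
    B to A.  cos(theta) = U.BA / (|U||BA|) is positive exactly when the
    angle is below 90 degrees, so the sign of the dot product decides:
    positive -> A is the stable point, otherwise B."""
    u = tuple(t - r for t, r in zip(tip, root))
    ba = tuple(ai - bi for ai, bi in zip(a, b))
    dot = sum(ui * vi for ui, vi in zip(u, ba))
    return a if dot > 0 else b
```

For a finger pointing along +y (root at the origin, tip at (0, 1, 0)), the reciprocating point lying farther along the finger direction is chosen, matching Figs. 6 and 7.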
Situation three: when the gesture recognition component is identifying a real gesture and the real hand moves with a large amplitude, the recognition environment deteriorates; once the hand moves beyond the recognition range, the hand data of the entire model may be absent, which causes the model to disappear. If the user is holding an object with the virtual hand in the virtual scene at that moment, the disappearance of the hand model causes the object to drop, among other problems, which harms the user experience.
Judgment method:
The system receives no hand data for the model.
Processing method:
Record the M frames of data before the loss first occurred, restore the last frame as the current data, and display a prompt in the virtual scene (for example, "hand data lost"); if hand data for the model are detected again, simulate the gesture from the new signal.
Claims (3)
1. A virtual fire-fighting experience method based on VR technology, characterized in that virtual fire-fighting experience equipment is used, comprising:
a head-mounted display device, comprising a head-mounted display and a handle in wireless interaction with the head-mounted display, the head-mounted display and the handle each having a built-in positioning module;
a gesture recognition component with a built-in positioning module; and
a computer connected to the head-mounted display device and the gesture recognition component;
the method specifically comprising:
the computer first transmits the fire-fighting picture to the head-mounted display device to provide a virtual fire-fighting scene; the experiencing person interacts with virtual objects in the virtual fire-fighting scene through the handle or the gesture recognition component; the gesture recognition component acquires image information of the hand and converts it into position information of each node of the hand in the virtual scene; the position information of each node of the hand is corrected by an algorithm for reducing hand-model jitter; and the computer obtains feedback information from the head-mounted display, the handle, and the gesture recognition component in real time.
2. The virtual fire-fighting experience method based on VR technology according to claim 1, characterized in that the feedback information comprises position information, image information, and key-press feedback information.
3. The virtual fire-fighting experience method based on VR technology according to claim 1, characterized in that the algorithm for reducing hand-model jitter comprises:
the position information of the nodes of the hand covers 21 nodes: each of the 5 fingers has 4 nodes, of which the fingertip is 1 node and the 3 joints correspond to 3 nodes; the wrist is 1 independent node;
Situation one: the model of the hand clearly does not match the hand motion of the person's real hand;
1) judging that the model of the hand clearly does not match the hand motion of the real hand, specifically comprising:
the model of the hand has 21 nodes; take one of them and record its current position P1 in an array; the position data of this node in the next frame is P2, and so on, until M frames of data have been stored in the array; if the relative positional distance t < λ holds for any two points in the array, and all 21 nodes satisfy this condition, it is judged that the model of the hand currently does not match the hand motion of the real hand;
where M is a value set according to the gesture recognition component, t is the relative positional distance between any two points in the array, and λ is the set distance between the farthest point and the central point used to judge jitter;
2) the processing method:
take one node of the model of the hand and record its position information from the 1st to the Mth frame: the node position of the 1st frame is P1(P1x, P1y, P1z), the node position of the 2nd frame is P2(P2x, P2y, P2z), ..., and the node position of the Mth frame is PM(PMx, PMy, PMz); the stable point P0(P0x, P0y, P0z) is represented by the average of the M frames of data: P0x = (P1x + P2x + ... + PMx)/M, P0y = (P1y + P2y + ... + PMy)/M, P0z = (P1z + P2z + ... + PMz)/M, which determines the coordinates of the stable point P0;
when each frame of data is received, P0 + θ(P − P0) is used as the current-frame position with reduced jitter amplitude, and this result is shown by the system, where P is the position coordinate of the current frame that the node receives from the gesture recognition component, θ is the compression ratio applied to the jitter displacement, and θ(P − P0) is the node displacement after compression;
All 21 nodes are processed in this way;
Situation two: a finger of the real hand is occluded while the real hand is relatively stationary; the palm of the model is stationary but a finger of the model shakes back and forth;
A) a finger of the real hand is occluded and the real hand is relatively stationary; the specific judgment method comprises:
the hand model has 21 nodes; take one of them and record its position in the first frame as Q1 in an array; the position data of the next frame is Q2, and so on, until N frames of data have been stored in the array, giving the position information Q1 to QN of the preceding N frames; for a given node, let s be the distance between its positions in any two of the N frames; if 5 to 20 of the 21 nodes satisfy the positional distance s < γ, the real hand is judged to be in a relatively stationary state;
where N is a value set according to the gesture recognition component, s is the distance between the node's positions in any two of the N frames, and γ is a set threshold;
B) when a finger of the real hand is occluded and the real hand is relatively stationary, each node on a finger of the model hand oscillates between two reciprocating points within 1 to 3 frames; the fingertip nodes are used to judge whether a finger of the model is shaking back and forth;
take the node at the fingertip of each finger of the model hand; for one such node, record its position data over L frames in an array and compute the distance between every pair of position data in the array; if the distance between the position of any frame in the L frames and that of another frame satisfies u < ε or u > λm, the finger of the model is judged to be shaking back and forth, and the two positions with the maximum distance are taken as the two reciprocating points A and B;
where L is a value set according to the gesture recognition component, u is the distance between any two of the L frames of the same node, ε is a set threshold, and λm is the set jitter-distance threshold of the node;
All 21 nodes are processed in this way;
C) correcting the back-and-forth shaking of the finger of the model with a smoothing algorithm, specifically comprising:
a. the two endpoints of the finger's reciprocation are the two reciprocating points A and B; U is the vector pointing from the root node of the finger of the model hand to the fingertip of the same finger;
b. the vector BA, pointing from reciprocating point B to A, is obtained; the angle is computed from cos θ = (U · BA) / (|U| · |BA|);
c. if the angle θ between vector BA and vector U is less than 90 degrees, A is the correct stable point; if the angle θ between vector BA and vector U is greater than 90 degrees, B is the correct stable point;
d. the data at the correct stable point are set as the finger data of the hand of the model;
Situation three: the system receives no hand data for the model;
record the T frames of data before the loss first occurred, restore the last frame as the current data, and display a prompt in the virtual scene; if hand data for the model are detected again, simulate the gesture from the new signal.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811055005.9A CN109144273B (en) | 2018-09-11 | 2018-09-11 | Virtual fire experience method based on VR technology |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109144273A true CN109144273A (en) | 2019-01-04 |
CN109144273B CN109144273B (en) | 2021-07-27 |
Family
ID=64824599
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811055005.9A Active CN109144273B (en) | 2018-09-11 | 2018-09-11 | Virtual fire experience method based on VR technology |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109144273B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102930753A (en) * | 2012-10-17 | 2013-02-13 | 中国石油化工股份有限公司 | Gas station virtual training system and application |
CN104750397A (en) * | 2015-04-09 | 2015-07-01 | 重庆邮电大学 | Somatosensory-based natural interaction method for virtual mine |
US20170235377A1 (en) * | 2015-02-13 | 2017-08-17 | Leap Motion, Inc. | Systems and methods of creating a realistic grab experience in virtual reality/augmented reality environments |
CN108196686A (en) * | 2018-03-13 | 2018-06-22 | 北京无远弗届科技有限公司 | A kind of hand motion posture captures equipment, method and virtual reality interactive system |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109992107A (en) * | 2019-02-28 | 2019-07-09 | 济南大学 | Virtual control device and its control method |
CN109992107B (en) * | 2019-02-28 | 2023-02-24 | 济南大学 | Virtual control device and control method thereof |
CN109767668A (en) * | 2019-03-05 | 2019-05-17 | 郑州万特电气股份有限公司 | Virtual Fire Training device based on Unity3D |
CN109767668B (en) * | 2019-03-05 | 2021-04-20 | 郑州万特电气股份有限公司 | Unity 3D-based virtual fire-fighting training device |
CN110827414A (en) * | 2019-11-05 | 2020-02-21 | 江西服装学院 | Virtual digital library experience device based on VR technique |
CN111369854A (en) * | 2020-03-20 | 2020-07-03 | 广西生态工程职业技术学院 | Vr virtual reality laboratory operating system and method |
CN112102667A (en) * | 2020-09-27 | 2020-12-18 | 国家电网有限公司技术学院分公司 | Video teaching system and method based on VR interaction |
CN112835449A (en) * | 2021-02-03 | 2021-05-25 | 青岛航特教研科技有限公司 | Virtual reality and somatosensory device interaction-based safety somatosensory education system |
CN113223364A (en) * | 2021-06-29 | 2021-08-06 | 中国人民解放军海军工程大学 | Submarine cable diving buoy simulation training system |
CN115454240A (en) * | 2022-09-05 | 2022-12-09 | 无锡雪浪数制科技有限公司 | Meta universe virtual reality interaction experience system and method |
CN115454240B (en) * | 2022-09-05 | 2024-02-13 | 无锡雪浪数制科技有限公司 | Meta universe virtual reality interaction experience system and method |
Also Published As
Publication number | Publication date |
---|---|
CN109144273B (en) | 2021-07-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109144273A (en) | A kind of virtual fire-fighting experiential method based on VR technology | |
US8512043B2 (en) | Body motion training and qualification system and method | |
Zanbaka et al. | Comparison of path visualizations and cognitive measures relative to travel technique in a virtual environment | |
CA2554498C (en) | Body motion training and qualification system and method | |
CN110162163B (en) | Virtual fire-fighting exercise method and system based on somatosensory and VR technology | |
CN102331840B (en) | User selection and navigation based on looped motions | |
US20090258703A1 (en) | Motion Assessment Using a Game Controller | |
CN105252532A (en) | Method of cooperative flexible attitude control for motion capture robot | |
KR101389894B1 (en) | Virtual reality simulation apparatus and method using motion capture technology and | |
Pamplona et al. | The image-based data glove | |
CN108961910A (en) | A kind of VR fire drill device | |
WO2013111146A2 (en) | System and method of providing virtual human on human combat training operations | |
CN110148330A (en) | Around machine check training system before a kind of Aircraft based on virtual reality | |
CN110333776A (en) | A kind of military equipment operation training system and method based on wearable device | |
Li et al. | A Fire Drill Training System Based on VR and Kinect Somatosensory Technologies. | |
Tsai | Personal basketball coach: Tactic training through wireless virtual reality | |
TWI423114B (en) | Interactive device and operating method thereof | |
CN209417968U (en) | A kind of fire-fighting drill electronic sand table based on virtual reality | |
Pai et al. | Home Fitness and Rehabilitation Support System Implemented by Combining Deep Images and Machine Learning Using Unity Game Engine. | |
Colvin et al. | Multiple user motion capture and systems engineering | |
Kang et al. | Integrated augmented and virtual reality technologies for realistic fire drill training | |
CN108693964A (en) | Simulated environment display system and method | |
KR20140046197A (en) | An apparatus and method for providing gesture recognition and computer-readable medium having thereon program | |
TWI767633B (en) | Simulation virtual classroom | |
Adam | Towards more realism: Improving immersion of a virtual human-robot working cell and discussing the comparability with its real-world representation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||