CN108170270A - Gesture tracking method for a VR helmet - Google Patents

Gesture tracking method for a VR helmet

Info

Publication number
CN108170270A
CN108170270A (application CN201711458563.5A)
Authority
CN
China
Prior art keywords
gesture
helmets
standard
gestures
operating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711458563.5A
Other languages
Chinese (zh)
Inventor
胡蔚萌
余贵飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Land Carving Technology Co Ltd
Original Assignee
Wuhan Land Carving Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Land Carving Technology Co Ltd filed Critical Wuhan Land Carving Technology Co Ltd
Priority to CN201711458563.5A
Publication of CN108170270A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a gesture tracking method for a VR helmet, comprising the following steps. S01: acquire a training gesture image with a camera; the VR helmet outputs the training gesture image as a standard gesture. S02: repeat step S01 to acquire different training gesture images and build a standard gesture library. S03: acquire an operating gesture image with the camera; the VR helmet separates the operating gesture out of the operating gesture image. S04: read the standard gesture library and search for the standard gesture that matches the operating gesture. S05: according to the operation instruction corresponding to the matched standard gesture, output the operating gesture as the corresponding operation instruction. The beneficial effects of the invention are: by building a standard gesture library and storing standard gesture feature information, the method significantly speeds up the resolution of operating gesture trajectories, facilitates gesture tracking, and offers strong flexibility and user customizability.

Description

Gesture tracking method for a VR helmet
Technical field
The present invention relates to the technical field of VR equipment, and more particularly to a gesture tracking method for a VR helmet.
Background technology
Virtual reality technology is an important branch of simulation technology. It uses computers, together with related techniques and software and hardware tools, to generate images and scenes that are dynamic in real time, three-dimensional, and realistically textured, so that the system can imitate various human perceptions and interact with the user through sensors. Since virtual reality technology first emerged in 1963, its theory has become fairly mature, and in recent years it has been extensively researched and applied in industries such as military simulation, entertainment and gaming, medicine, and architecture.
In the interaction process of existing virtual reality devices, gesture recognition is available in addition to traditional button operation: a camera in the device captures images of the field of view, the VR helmet separates and recognizes the hand image, and the hand image is matched against a model to classify the gesture or to track gesture coordinates. The separation and recognition step is built on model training, and convolutional neural networks are commonly used to train on the gesture data.
When an existing virtual reality device tracks gesture coordinates, it typically acquires gesture depth data and trains a regression model with a CNN. The core of training an existing CNN is extracting features by convolution over two-dimensional images; training only on gesture depth data therefore yields information extracted from a two-dimensional plane, and the three-dimensional structure of the hand is largely left unused. Because the information the CNN collects is planar, training is difficult, the resulting data error is large, and the tracked gesture coordinates are not accurate enough.
Summary of the invention
The technical problem to be solved by the present invention is to overcome the above shortcomings of the prior art and to provide a gesture tracking method for a VR helmet.
The technical solution adopted by the present invention to solve the above technical problem is as follows:
A gesture tracking method for a VR helmet is provided, comprising the following steps:
S01: acquire a training gesture image using a camera; the VR helmet outputs the training gesture image as a standard gesture;
S02: repeat step S01 to acquire different training gesture images and build a standard gesture library;
S03: acquire an operating gesture image using the camera; the VR helmet separates the operating gesture out of the operating gesture image;
S04: read the standard gesture library and search for the standard gesture that matches the operating gesture;
S05: according to the operation instruction corresponding to the matched standard gesture, output the operating gesture as the corresponding operation instruction.
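As a non-limiting illustration of how steps S01 to S05 fit together, the following Python sketch builds a standard gesture library and dispatches the operation instruction of the matched gesture. The class names, the Euclidean distance metric, and the matching threshold are assumptions of the sketch, not requirements of this disclosure.

```python
# Illustrative sketch of the S01-S05 flow; names, metric and threshold are assumptions.
from dataclasses import dataclass, field

@dataclass
class StandardGesture:
    name: str            # e.g. "swipe_left"
    features: list       # gesture feature information extracted in S01.4
    instruction: str     # operation instruction bound to this standard gesture

@dataclass
class GestureLibrary:
    gestures: list = field(default_factory=list)

    def add(self, gesture: StandardGesture) -> None:
        """S01/S02: add a trained standard gesture to the library."""
        self.gestures.append(gesture)

    def match(self, features, threshold: float = 0.15):
        """S04: search the library for the standard gesture closest to the operating gesture."""
        best, best_dist = None, float("inf")
        for g in self.gestures:
            dist = sum((a - b) ** 2 for a, b in zip(g.features, features)) ** 0.5
            if dist < best_dist:
                best, best_dist = g, dist
        return best if best_dist <= threshold else None

def dispatch(operating_features, library: GestureLibrary):
    """S05: output the operation instruction corresponding to the matched standard gesture."""
    match = library.match(operating_features)
    return match.instruction if match else None
```

If no standard gesture is close enough, `dispatch` returns `None`, which corresponds to the optimization branch described for step S05 below.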
The beneficial effects of the invention are as follows: by building a standard gesture library and storing standard gesture feature information, the invention significantly speeds up the resolution of operating gesture trajectories, facilitates gesture tracking, and offers strong flexibility and user customizability.
Based on the above technical solution, the present invention can be further improved as follows.
Further, step S01 specifically comprises the following steps:
S01.1: acquire a training gesture image using the camera;
S01.2: the VR helmet marks the three-dimensional gesture key points of the training gesture image and establishes a corresponding three-dimensional coordinate system;
S01.3: label the three-dimensional gesture key points with coordinates and optimize them into a three-dimensional gesture curve trajectory;
S01.4: extract gesture feature information from the three-dimensional gesture curve trajectory;
S01.5: output the gesture feature information as a standard gesture and save it to the server.
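A minimal sketch of sub-steps S01.2 to S01.5 is given below, assuming a key-point detector and a server-side storage function are supplied by the caller; the spline fitting and the stand-in feature are illustrative choices, not part of the claimed method.

```python
# Minimal sketch of S01.2-S01.5; detect_keypoints, save_to_server, the spline
# fit and the stand-in feature are assumptions used only for illustration.
import numpy as np
from scipy.interpolate import splprep, splev

def build_standard_gesture(depth_image, detect_keypoints, save_to_server):
    # S01.2: mark 3D gesture key points (x, y, depth) in a common coordinate system
    keypoints = detect_keypoints(depth_image)            # ndarray of shape (N, 3), N >= 4

    # S01.3: optimize the labelled key points into a smooth 3D gesture curve trajectory
    tck, _ = splprep(keypoints.T, s=1.0)
    curve = np.array(splev(np.linspace(0.0, 1.0, 64), tck)).T

    # S01.4: extract gesture feature information from the curve trajectory
    tangents = np.gradient(curve, axis=0)
    features = np.linalg.norm(tangents, axis=1)          # simple stand-in feature

    # S01.5: output as a standard gesture and save it to the server
    save_to_server({"curve": curve.tolist(), "features": features.tolist()})
    return features
```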
The advantage of this further scheme is that, by building a standard gesture library and storing standard gesture feature information, the resolution of operating gesture trajectories is significantly accelerated and gesture tracking is facilitated.
Further, the three-dimensional coordinate system established in step S01.2 is a CNN network.
The advantage of this further scheme is that the CNN network makes full use of the three-dimensional information of the operating gesture, which improves the training effect of the convolutional neural network, is sufficient for high-accuracy simulation scenarios, and broadens the range of uses of the VR helmet.
Further, the information extracted in step S01.4 also includes the three-dimensional gesture curve normal vectors, the curvature, and the target feature vector information of the gesture depth image.
The advantage of this further scheme is that, while feature values are used for pre-screening to improve analysis efficiency, combinations of multiple feature vectors are also compared for matching, which both guarantees (through the feature vectors) and improves (through the multiple combinations) the recognition accuracy of operating gestures.
Further, in step S05, if no standard gesture matching the operating gesture is found in the standard gesture library, the operating gesture is optimized and the optimized operating gesture is output as the corresponding operation instruction.
Further, the camera is a depth camera.
The advantage of this further scheme is that the range of gestures that can be recognized and parsed is broadened, improving the effectiveness of gesture tracking on the VR helmet.
Description of the drawings
Fig. 1 is a flow chart of the gesture tracking method for a VR helmet according to the present invention.
Specific embodiment
The principles and features of the present invention are described below with reference to the accompanying drawings; the examples given serve only to explain the present invention and are not intended to limit its scope.
The gesture tracking method for a VR helmet provided in this embodiment is described in detail below with reference to the drawings.
As shown in Fig. 1, a gesture tracking method for a VR helmet comprises the following steps:
S01: acquire a training gesture image using a camera; the VR helmet outputs the training gesture image as a standard gesture;
S02: repeat step S01 to acquire different training gesture images and build a standard gesture library;
S03: acquire an operating gesture image using the camera; the VR helmet separates the operating gesture out of the operating gesture image;
S04: read the standard gesture library and search for the standard gesture that matches the operating gesture;
S05: according to the operation instruction corresponding to the matched standard gesture, output the operating gesture as the corresponding operation instruction.
In step S05, if no standard gesture matching the operating gesture is found in the standard gesture library, the operating gesture is optimized and the optimized operating gesture is output as the corresponding operation instruction.
The camera is a depth camera.
Step S01 specifically comprises the following steps:
S01.1: acquire a training gesture image using the camera;
S01.2: the VR helmet marks the three-dimensional gesture key points of the training gesture image and establishes a corresponding three-dimensional coordinate system;
S01.3: label the three-dimensional gesture key points with coordinates and optimize them into a three-dimensional gesture curve trajectory;
S01.4: extract gesture feature information from the three-dimensional gesture curve trajectory;
S01.5: output the gesture feature information as a standard gesture and save it to the server.
Using a random forest algorithm, the gesture depth image blocks in the training gesture image are separated from the depth image of the background objects; the gesture depth image block corresponding to each three-dimensional gesture image is extracted as a gesture depth image, and the gesture depth image is further denoised to remove noise, completing the image separation processing.
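One way to realize this separation step, sketched below under stated assumptions, is a per-pixel random forest classifier that labels hand versus background on the depth image, followed by median-filter denoising. The per-pixel features, the forest hyperparameters, and the use of scikit-learn are assumptions of the sketch; the disclosure only requires a random-forest-based separation with subsequent denoising.

```python
# Hedged sketch of the separation step: a per-pixel random forest labels hand
# vs. background on the depth image, then a median filter removes noise.
# The feature construction and hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from scipy.ndimage import median_filter

def pixel_features(depth: np.ndarray) -> np.ndarray:
    """Per-pixel features: raw depth plus simple local depth gradients."""
    dy, dx = np.gradient(depth.astype(np.float32))
    return np.stack([depth, dx, dy], axis=-1).reshape(-1, 3)

def train_separator(depth_images, hand_masks) -> RandomForestClassifier:
    X = np.concatenate([pixel_features(d) for d in depth_images])
    y = np.concatenate([m.reshape(-1) for m in hand_masks])   # 1 = hand, 0 = background
    return RandomForestClassifier(n_estimators=50, max_depth=12).fit(X, y)

def extract_hand_depth(depth: np.ndarray, forest: RandomForestClassifier) -> np.ndarray:
    mask = forest.predict(pixel_features(depth)).reshape(depth.shape)
    hand = np.where(mask > 0, depth, 0)       # keep hand pixels, drop the background
    return median_filter(hand, size=3)        # denoise the gesture depth image
```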
The denoised hand depth image is normalized and down-sampled to obtain a two-dimensional image with a resolution of 256 × 256 pixels. The choice of 256 × 256 pixels is based on experience with VR image generation: this resolution preserves the image information as completely as possible while reducing the amount of data the subsequent image processing system has to handle. In this embodiment, the ultimate goal of gesture recognition is to judge the three-dimensional gesture automatically and to generate control signals from changes in the three-dimensional gesture image features, driving the next operation of the VR head-mounted device.
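The normalization step may be realized, for example, as follows; scaling the depth values to the 0 to 1 range and using OpenCV for the down-sampling are assumptions of this sketch.

```python
# Sketch of the normalization step: scale the denoised hand depth image to the
# 0-1 range and down-sample it to 256 x 256. OpenCV is an assumed dependency.
import cv2
import numpy as np

def normalize_hand_depth(hand_depth: np.ndarray) -> np.ndarray:
    span = float(hand_depth.max() - hand_depth.min())
    scaled = (hand_depth - hand_depth.min()) / (span if span > 0 else 1.0)
    return cv2.resize(scaled.astype(np.float32), (256, 256),
                      interpolation=cv2.INTER_AREA)
```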
The three-dimensional coordinate system established in step S01.2 is a CNN network.
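The disclosure does not specify the architecture of this CNN. The following PyTorch sketch shows one plausible network that regresses three-dimensional key-point coordinates from the 256 × 256 gesture depth image; the number of key points and the layer sizes are assumptions of the sketch.

```python
# One plausible CNN for regressing 3D gesture key points from a 256 x 256
# depth image; layer sizes and the 21-key-point convention are assumptions.
import torch
import torch.nn as nn

class GestureKeypointCNN(nn.Module):
    def __init__(self, num_keypoints: int = 21):
        super().__init__()
        self.num_keypoints = num_keypoints
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 256 -> 128
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 128 -> 64
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.AdaptiveAvgPool2d(1),                               # global feature
        )
        self.head = nn.Linear(64, num_keypoints * 3)               # (x, y, z) per key point

    def forward(self, depth: torch.Tensor) -> torch.Tensor:
        # depth: (batch, 1, 256, 256) normalized gesture depth image
        feats = self.backbone(depth).flatten(1)
        return self.head(feats).view(-1, self.num_keypoints, 3)
```

Training such a network with a coordinate regression loss (for example, mean squared error against labelled key points) corresponds to the regression training described in the background section.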
The information extracted in step S01.4 also includes the three-dimensional gesture curve normal vectors, the curvature, and the target feature vector information of the gesture depth image.
Selectable feature vectors include, but are not limited to, the following: the vector formed by the inclination angles of the vectors from the starting point of the gesture trajectory to each subsequent point, taken in point-set order, and the vector formed by the inclination angles of the vectors between any two adjacent points of the gesture trajectory.
The feature vectors of the gesture trajectory are selected according to the following principle: the selected feature vector combination must express the characteristics of the gesture trajectory as fully as possible, such that if any feature is removed from the selected combination, the remaining combination can no longer express the characteristics of the gesture trajectory as fully.
A preferred gesture feature vector combination is: the vector formed by the inclination angles of the vectors from the starting point of the gesture trajectory to each subsequent point, taken in point-set order, together with the vector formed by the inclination angles of the vectors between any two adjacent points of the gesture trajectory; the gesture recognition module compresses each feature vector in this gesture feature vector combination.
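By way of illustration, the preferred feature vectors and one simple form of compression may be computed as sketched below; projecting onto the x-y plane to obtain the inclination angle and using an angle histogram as the compression are assumptions of the sketch, since the disclosure does not define them precisely.

```python
# Sketch of the preferred trajectory feature vectors and a simple compression.
# The x-y projection for the inclination angle and the histogram compression
# are assumptions; the disclosure does not define them precisely.
import numpy as np

def inclination(v: np.ndarray) -> float:
    """Inclination angle (radians) of a vector projected onto the x-y plane."""
    return float(np.arctan2(v[1], v[0]))

def trajectory_features(points: np.ndarray):
    """points: (N, 3) gesture trajectory in point-set order."""
    start_angles = np.array([inclination(p - points[0]) for p in points[1:]])
    adjacent_angles = np.array([inclination(b - a)
                                for a, b in zip(points[:-1], points[1:])])
    return start_angles, adjacent_angles

def compress(angles: np.ndarray, bins: int = 16) -> np.ndarray:
    """One simple 'compression' of a feature vector: a normalized angle histogram."""
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi), density=True)
    return hist
```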
The trained CNN network serves as the feature extractor for three-dimensional gestures. The depth camera acquires depth images of the real-time operating gesture motion; the feature extractor extracts the normal vectors, the curvature, and the gesture depth image information from these real-time depth images and outputs the three-dimensional coordinates of the three-dimensional gesture in each image, and the VR helmet tracks the recognized three-dimensional gesture.
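A sketch of this runtime loop is given below; `read_depth_frame`, `preprocess`, and `update_tracked_pose` stand in for the device and application interfaces, which are not specified by the disclosure.

```python
# Runtime sketch: the trained CNN acts as the feature extractor on each depth
# frame and the VR helmet tracks the predicted 3D gesture coordinates.
# read_depth_frame, preprocess and update_tracked_pose are assumed callables.
import torch

def track_gestures(model, read_depth_frame, preprocess, update_tracked_pose):
    model.eval()
    with torch.no_grad():
        while True:
            frame = read_depth_frame()              # depth image from the depth camera
            if frame is None:                       # camera stream ended
                break
            depth = preprocess(frame)               # separate hand, denoise, resize to 256 x 256
            tensor = torch.from_numpy(depth)[None, None].float()
            keypoints = model(tensor)[0]            # (num_keypoints, 3) gesture coordinates
            update_tracked_pose(keypoints.numpy())  # the VR helmet tracks the 3D gesture
```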
The foregoing is merely a preferred embodiment of the present invention and is not intended to limit the invention. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Claims (6)

  1. A gesture tracking method for a VR helmet, characterized in that it comprises the following steps:
    S01: acquire a training gesture image using a camera; the VR helmet outputs the training gesture image as a standard gesture;
    S02: repeat step S01 to acquire different training gesture images and build a standard gesture library;
    S03: acquire an operating gesture image using the camera; the VR helmet separates the operating gesture out of the operating gesture image;
    S04: read the standard gesture library and search for the standard gesture that matches the operating gesture;
    S05: according to the operation instruction corresponding to the matched standard gesture, output the operating gesture as the corresponding operation instruction.
  2. The gesture tracking method for a VR helmet according to claim 1, characterized in that step S01 specifically comprises the following steps:
    S01.1: acquire a training gesture image using the camera;
    S01.2: the VR helmet marks the three-dimensional gesture key points of the training gesture image and establishes a corresponding three-dimensional coordinate system;
    S01.3: label the three-dimensional gesture key points with coordinates and optimize them into a three-dimensional gesture curve trajectory;
    S01.4: extract gesture feature information from the three-dimensional gesture curve trajectory;
    S01.5: output the gesture feature information as a standard gesture and save it to a server.
  3. The gesture tracking method for a VR helmet according to claim 2, characterized in that the three-dimensional coordinate system established in step S01.2 is a CNN network.
  4. The gesture tracking method for a VR helmet according to claim 3, characterized in that the information extracted in step S01.4 also includes the three-dimensional gesture curve normal vectors, the curvature, and the target feature vector information of the gesture depth image.
  5. The gesture tracking method for a VR helmet according to claim 1, characterized in that in step S05, if no standard gesture matching the operating gesture is found in the standard gesture library, the operating gesture is optimized and the optimized operating gesture is output as the corresponding operation instruction.
  6. The gesture tracking method for a VR helmet according to claim 1, characterized in that the camera is a depth camera.
CN201711458563.5A 2017-12-28 2017-12-28 A kind of gesture tracking method of VR helmets Pending CN108170270A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711458563.5A CN108170270A (en) 2017-12-28 2017-12-28 A kind of gesture tracking method of VR helmets

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711458563.5A CN108170270A (en) 2017-12-28 2017-12-28 A kind of gesture tracking method of VR helmets

Publications (1)

Publication Number Publication Date
CN108170270A 2018-06-15

Family

ID=62518952

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711458563.5A Pending CN108170270A (en) 2017-12-28 2017-12-28 A kind of gesture tracking method of VR helmets

Country Status (1)

Country Link
CN (1) CN108170270A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111596767A (en) * 2020-05-27 2020-08-28 广州市大湾区虚拟现实研究院 Gesture capturing method and device based on virtual reality
CN111596767B (en) * 2020-05-27 2023-05-30 广州市大湾区虚拟现实研究院 Gesture capturing method and device based on virtual reality

Similar Documents

Publication Publication Date Title
CN106648103B (en) A kind of the gesture tracking method and VR helmet of VR helmet
CN109359538B (en) Training method of convolutional neural network, gesture recognition method, device and equipment
WO2020108362A1 (en) Body posture detection method, apparatus and device, and storage medium
CN111243093B (en) Three-dimensional face grid generation method, device, equipment and storage medium
CN104123545B (en) A kind of real-time human facial feature extraction and expression recognition method
CN104240277B (en) Augmented reality exchange method and system based on Face datection
CN109472198B (en) Gesture robust video smiling face recognition method
CN106778628A (en) A kind of facial expression method for catching based on TOF depth cameras
Feng et al. Depth-projection-map-based bag of contour fragments for robust hand gesture recognition
CN105528082A (en) Three-dimensional space and hand gesture recognition tracing interactive method, device and system
CN111696028A (en) Method and device for processing cartoon of real scene image, computer equipment and storage medium
CN102157007A (en) Performance-driven method and device for producing face animation
CN102184008A (en) Interactive projection system and method
CN111667005B (en) Human interactive system adopting RGBD visual sensing
CN108171133A (en) A kind of dynamic gesture identification method of feature based covariance matrix
CN107944459A (en) A kind of RGB D object identification methods
Baby et al. Dynamic vision sensors for human activity recognition
CN109543644B (en) Multi-modal gesture recognition method
Zhang et al. Multimodal spatiotemporal networks for sign language recognition
CN112232258A (en) Information processing method and device and computer readable storage medium
CN100487732C (en) Method for generating cartoon portrait based on photo of human face
Fortin et al. Handling occlusions in real-time augmented reality: dealing with movable real and virtual objects
Zhang et al. Activity object detection based on improved faster R-CNN
CN110188630A (en) A kind of face identification method and camera
CN108170270A (en) A kind of gesture tracking method of VR helmets

Legal Events

Date Code Title Description
PB01 Publication (application publication date: 2018-06-15)
WD01 Invention patent application deemed withdrawn after publication