CN112667087A - Unity-based gesture recognition operation interaction realization method and system - Google Patents

Unity-based gesture recognition operation interaction realization method and system

Info

Publication number
CN112667087A
Authority
CN
China
Prior art keywords
gesture
interaction
unity
dimensional object
ray
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110015484.7A
Other languages
Chinese (zh)
Inventor
康望才
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Hankun Industrial Co Ltd
Original Assignee
Hunan Hankun Industrial Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Hankun Industrial Co Ltd filed Critical Hunan Hankun Industrial Co Ltd
Priority to CN202110015484.7A
Publication of CN112667087A
Legal status: Pending

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a Unity-based gesture recognition operation interaction realization method and system. The method comprises the following steps: acquiring gesture data of a real gesture in real space by using a gesture recognition algorithm, and extracting key points of the gesture data as feature vectors of the gesture; restoring the gesture according to the extracted feature vectors and drawing a corresponding gesture model in Unity; establishing interaction between the real gesture in real space and the gesture model drawn in Unity, controlling the movement of the gesture model through motion recognition of the real gesture, and controlling the interaction between the real gesture and a three-dimensional object in Unity through an interaction algorithm using a selected interaction operation mode. The invention offers good interactivity and strong engagement; the gesture recognition algorithm and the interaction algorithm provide accurate interactive control of the gesture with a high degree of automation; and the user's sense of immersion is strong, so that applying the invention to education can improve the corresponding training effect.

Description

Unity-based gesture recognition operation interaction realization method and system
Technical Field
The invention relates to the technical field of virtual reality, and particularly discloses a method and a system for realizing gesture recognition operation interaction based on Unity.
Background
With the rapid popularization of computers and the internet, human-computer interaction has gradually become an important part of people's daily lives. Research on natural human-computer interaction modes can reduce the difficulty of operation, avoid straining a single part of the body, and reduce the probability of conditions such as cervical spondylosis and lumbar disc herniation. Traditional human-computer interaction modes such as the keyboard, mouse, remote control and touch screen all require the human to adapt to the machine and to operate according to preset specifications, but current technological development gives human-computer interaction more choices. Human-computer interaction based on gesture recognition mainly adopts direct operation, so that human-computer interaction technology gradually shifts from being machine-centered to being human-centered, which conforms to natural interaction habits. Therefore, gesture recognition is increasingly being developed and applied in engineering.
With the rapid development of human-computer interaction technology, gesture recognition technology has also risen to new heights; in recent years, traces of gesture recognition can be seen at consumer electronics shows, digital exhibitions, home appliance exhibitions and even automobile shows. Gesture recognition is a topic in computer science and language technology whose aim is to recognize human gestures through mathematical algorithms. Gestures may originate from any body motion or state, but typically originate from the face or hands. Current focuses in the field include emotion recognition from the face and hand gesture recognition. Users can control or interact with devices using simple gestures without touching them. Recognition of posture, gait and human behavior are also subjects of gesture recognition technology. Gesture recognition can be viewed as a way for computers to understand human body language, building a richer bridge between machines and humans than plain text user interfaces or even the GUI (Graphical User Interface). In the existing VR field, the principle of gesture recognition is to capture and model the shape of the hand using various sensors such as infrared sensors and cameras, forming a sequence of frames of model information, and then to convert this information sequence into corresponding instructions that the machine can recognize, such as opening, switching menus and moving, so as to complete control; however, the means of interaction between the gesture and the object model remain single.
Therefore, in the existing VR field, the means of interaction between gestures and object models are single, which is a technical problem to be solved urgently.
Disclosure of Invention
The invention provides a Unity-based gesture recognition operation interaction realization method and system, and aims to solve the technical problem that the means of interaction between gestures and object models in the existing VR field are single.
One aspect of the invention relates to a Unity-based gesture recognition operation interaction implementation method, which comprises the following steps:
acquiring gesture data of a real gesture in a real space by using a gesture recognition algorithm, and extracting key points of the gesture data as feature vectors of the gesture;
restoring the gesture according to the extracted feature vector of the gesture, and drawing a corresponding gesture model in Unity;
the method comprises the steps of establishing interaction between a real gesture in a real space and a gesture model drawn in the Unity, controlling movement of the gesture model through motion recognition of the real gesture, and controlling interaction between the real gesture and a three-dimensional object in the Unity through an interaction algorithm by using a selected interaction operation mode, wherein the interaction operation mode comprises a touch interaction operation mode and a movement interaction operation mode.
Further, the step of controlling the interaction of the real gesture and the three-dimensional object in Unity through an interaction algorithm by using the selected interactive operation mode comprises the following steps:
if the selected interactive operation mode is identified to be the touch interactive operation mode, judging whether the gesture model touches the three-dimensional object;
and if the gesture model touches the three-dimensional object, performing interactive calculation, and controlling the three-dimensional object to move along with the gesture model so as to realize touch interaction of gesture recognition.
Further, the step of performing interactive calculation if the gesture model touches the three-dimensional object and controlling the three-dimensional object to move along with the gesture model to realize touch interaction of gesture recognition comprises the following steps:
and performing corresponding contact judgment according to the collision distance between the gesture model and the three-dimensional object, wherein the coordinate point of the gesture model is assumed to be A(x₁, y₁, z₁) and the coordinate point of the three-dimensional object is B(x₂, y₂, z₂); the collision distance between the gesture model and the three-dimensional object is then obtained by the following formula:
d_A,B = √((x₁ − x₂)² + (y₁ − y₂)² + (z₁ − z₂)²)
wherein d_A,B is the collision distance between the gesture model and the three-dimensional object; x₁, y₁ and z₁ are the x-axis, y-axis and z-axis coordinates of coordinate point A of the gesture model; and x₂, y₂ and z₂ are the x-axis, y-axis and z-axis coordinates of coordinate point B of the three-dimensional object.
Further, the step of controlling the gesture model to interact with the three-dimensional object in Unity through an interaction algorithm by using the selected interactive operation mode comprises the following steps:
if the selected interactive operation mode is identified to be the mobile interactive operation mode, drawing a motion ray according to an interactive algorithm;
and judging whether the drawn motion ray hits the movable object, and if the motion ray hits the movable object, controlling the movable object to move along with the shot motion ray so as to realize the movement interaction of gesture recognition.
Further, the step of judging whether the drawn motion ray hits the movable object, and if the motion ray hits the movable object, controlling the movable object to move along with the shot motion ray to realize the movement interaction of gesture recognition includes:
suppose that point C(x, y, z) is the starting point of the ray and its direction vector is D(x₃, y₃, z₃); then the motion ray is given by the following formula:
ray=new Ray(C,D·d)
where ray is the motion ray and d is the ray length.
Another aspect of the present invention relates to a Unity-based gesture recognition operation interaction implementation system, including:
the extraction module is used for acquiring gesture data of a real gesture in a real space by utilizing a gesture recognition algorithm and extracting key points of the gesture data as feature vectors of the gesture;
the drawing module is used for restoring the gesture according to the extracted feature vector of the gesture and drawing a corresponding gesture model in Unity;
the interaction module is used for establishing interaction between a real gesture in a real space and a gesture model drawn in the Unity, controlling the movement of the gesture model through motion recognition of the real gesture, and controlling interaction between the real gesture and a three-dimensional object in the Unity through an interaction algorithm by using a selected interaction operation mode, wherein the interaction operation mode comprises a touch interaction operation mode and a mobile interaction operation mode.
Further, the interaction module comprises:
the judging unit is used for judging whether the gesture model touches the three-dimensional object or not if the selected interactive operation mode is identified to be the touch interactive operation mode;
and the first interaction unit is used for performing interaction calculation if the gesture model touches the three-dimensional object, and controlling the three-dimensional object to move along with the gesture model so as to realize touch interaction of gesture recognition.
Further, the first interaction unit comprises a first calculation subunit,
a first calculating subunit, configured to perform corresponding contact determination according to the collision distance between the gesture model and the three-dimensional object, assuming that the coordinate point of the gesture model is A(x₁, y₁, z₁) and the coordinate point of the three-dimensional object is B(x₂, y₂, z₂); the collision distance between the gesture model and the three-dimensional object is then obtained by the following formula:
d_A,B = √((x₁ − x₂)² + (y₁ − y₂)² + (z₁ − z₂)²)
wherein d_A,B is the collision distance between the gesture model and the three-dimensional object; x₁, y₁ and z₁ are the x-axis, y-axis and z-axis coordinates of coordinate point A of the gesture model; and x₂, y₂ and z₂ are the x-axis, y-axis and z-axis coordinates of coordinate point B of the three-dimensional object.
Further, the interaction module comprises:
the drawing unit is used for drawing the motion ray according to an interactive algorithm if the selected interactive operation mode is identified as the mobile interactive operation mode;
and the second interaction unit is used for judging whether the drawn motion ray hits the movable object, and controlling the movable object to move along with the shot motion ray to realize the movement interaction of gesture recognition if the drawn motion ray hits the movable object.
Further, the second interaction unit comprises a second calculation subunit,
a second calculation subunit, configured to assume point C(x, y, z) as the starting point of the ray and its direction vector as D(x₃, y₃, z₃); then the motion ray is given by the following formula:
ray=new Ray(C,D·d)
where ray is the motion ray and d is the ray length.
The beneficial effects obtained by the invention are as follows:
the invention provides a gesture recognition operation interaction realization method and system based on Unity, which are characterized in that gesture data of a real gesture in a real space is obtained by utilizing a gesture recognition algorithm, and key points of the gesture data are extracted as feature vectors of the gesture; restoring the gesture according to the extracted feature vector of the gesture, and drawing a corresponding gesture model in Unity; the method comprises the steps of establishing interaction between a real gesture in a real space and a gesture model drawn in the Unity, controlling the movement of the gesture model through motion recognition of the real gesture, and controlling the interaction between the real gesture and a three-dimensional object in the Unity through an interaction algorithm by using a selected interaction operation mode. The gesture recognition operation interaction implementation method and system based on Unity provided by the invention have the advantages of good interactivity and strong interestingness; the gesture recognition algorithm and the interaction algorithm are used for carrying out accurate and interactive control on the gesture, and the automation degree is high; the user is immersed and felt strong, can promote corresponding training effect in being applied to the education.
Drawings
FIG. 1 is a schematic flowchart illustrating an embodiment of a Unity-based gesture recognition operation interaction implementation method according to the present invention;
FIG. 2 is a flowchart illustrating an embodiment of the steps shown in FIG. 1 for establishing interaction between a real gesture in real space and a gesture model drawn in Unity, controlling movement of the gesture model by motion recognition of the real gesture, and controlling interaction between the real gesture and a three-dimensional object in Unity through an interaction algorithm using a selected interaction operation mode;
FIG. 3 is a flowchart illustrating an embodiment of the steps shown in FIG. 1 for establishing interaction between a real gesture in real space and a gesture model drawn in Unity, controlling movement of the gesture model by motion recognition of the real gesture, and controlling interaction between the real gesture and a three-dimensional object in Unity through an interaction algorithm using a selected interaction operation mode;
FIG. 4 is a functional block diagram of an embodiment of a Unity-based gesture recognition operation interaction implementation system according to the present invention;
FIG. 5 is a functional block diagram of a first embodiment of the interaction module shown in FIG. 4;
FIG. 6 is a functional block diagram of an embodiment of the first interactive unit shown in FIG. 5;
FIG. 7 is a functional block diagram of a second embodiment of the interaction module shown in FIG. 4;
fig. 8 is a functional block diagram of an embodiment of the second interaction unit shown in fig. 7.
The reference numbers illustrate:
10. an extraction module; 20. a drawing module; 30. an interaction module; 31. a judgment unit; 32. a first interaction unit; 321. a first calculation subunit; 33. a drawing unit; 34. a second interaction unit; 341. a second calculation subunit.
Detailed Description
In order to better understand the technical solution, the technical solution will be described in detail with reference to the drawings and the specific embodiments.
As shown in fig. 1, a first embodiment of the present invention provides a Unity-based gesture recognition operation interaction implementing method, including the following steps:
s100, acquiring gesture data of a real gesture in a real space by using a gesture recognition algorithm, and extracting key points of the gesture data to serve as feature vectors of the gesture.
In the development environment of Unity3D, gesture data of a real gesture in a real space is acquired by a gesture recognition algorithm, and a key point of the gesture data is extracted as a feature vector of the gesture. Wherein the gesture data comprises a gesture type, a gesture posture, a gesture coordinate, a gesture moving direction and/or a gesture moving speed. The feature vectors of the gesture include gesture coordinates, gesture displacement, and/or gesture acceleration.
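The original disclosure does not give a concrete data layout for the gesture data and feature vector; purely as an illustration, a minimal C# container along the lines described above might look as follows (all type and field names here are assumptions, not part of the original text):
using UnityEngine;
// Hypothetical container for the extracted gesture features (illustrative only).
public enum GestureType { Unknown, Ok, Grab, Point }
public struct GestureFeature
{
    public GestureType Type;       // gesture type
    public Quaternion Posture;     // gesture posture (orientation)
    public Vector3 Position;       // gesture coordinate in recognition space (millimeters)
    public Vector3 Displacement;   // gesture displacement since the previous frame
    public Vector3 Velocity;       // gesture moving direction and speed
    public Vector3 Acceleration;   // gesture acceleration
    public Vector3[] KeyPoints;    // extracted key points (e.g. fingertips, palm center)
}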
And S200, restoring the gesture according to the extracted feature vector of the gesture, and drawing a corresponding gesture model in Unity.
And restoring the gesture by using the feature vectors extracted from the key points of the gesture data, and drawing a corresponding gesture model in Unity. The gesture model is a three-dimensional gesture model. The gesture model includes a gesture type and/or a gesture posture. The drawing of the gesture model comprises gesture rendering, and the execution steps of the gesture rendering are as follows:
the rendering method is a piece of code that contains a series of graphics and decides how best to render these objects as pixels on the screen. The rendering method may employ any desired method. There is nothing to say that the rendering method must display the mesh graphics by uploading those vertices and triangles to the GPU. The voxel rendering method may first decide to voxel the grid and then display it using a custom voxel rendering system.
The main reason for having different rendering methods is that different optimization strategies can be used to render the same concept, and these strategies have different benefits and trade-offs. Some strategies may not fully support certain features. In this embodiment, three rendering methods are provided.
1) The Baked renderer: this rendering method can only draw mesh graphics. The strategy is to bake all graphics into a single mesh so that all graphics can be drawn with one draw call.
2) The Dynamic renderer: this rendering method can only draw mesh graphics. The strategy is to perform some custom mesh processing to ensure that all graphics can be drawn with the same material regardless of the material used, and to try to ensure that all graphics can be batched automatically by Unity's dynamic batching.
3) The Dynamic Text renderer: this rendering method can only draw text graphics. The strategy is simply to use Unity's built-in dynamic font generation to create a mesh representing the text that needs to be displayed.
The rendering method is powerful in that it can view all graphics at once, which makes it possible to perform optimizations that would otherwise be difficult (or impossible) without this information.
To render the corresponding model in Unity, create a new empty GameObject in the hierarchy and attach the GraphicRenderer component to it. The GraphicRenderer is the main component and manages all graphics attached to it. Press the "New Group" button to create a new baked group. This creates a new group that can render graphics; since the "Baked" option was selected when creating the group, it will use the baked rendering method. The group displays all the different options of the rendering method and lists the added features at the bottom.
Now that there is a group, add some graphics for it to render: create a new empty GameObject as a child of the renderer object, then add the MeshGraphic component to it and assign a mesh to its mesh field. At this point the cube should appear in the scene view and the game view.
The cube is pure white at this stage. The vertex colors of the mesh can be altered to tint it, but otherwise it remains a solid color, because all renderers use an unlit shader by default, which does not interact with lighting.
Return to the graphics renderer and note the section named Graphic Features at the bottom of the interface. Press the + button to add a new feature, then select the Texture option from the drop-down menu; a new element is added to the list.
The graphics have just been given the ability to wrap a texture on their surface. Returning to the graphic, an attribute field appears to which a texture can be assigned. Continue by selecting the texture to assign to the mesh. Upon completion, a button named Refresh Atlas is displayed; the same button is also present on the graphics renderer. Pressing this button updates all the textures used by the graphics renderer, which is necessary for any texture-related change to be observed. Pressing the button now shows the texture mapped onto the graphic.
At this time, the rendering of the gesture data to the graphic image is completed, and the set texture is given thereto.
Step S300, establishing interaction between a real gesture in a real space and a gesture model drawn in the Unity, controlling the movement of the gesture model through motion recognition of the real gesture, and controlling the interaction between the real gesture and a three-dimensional object in the Unity through an interaction algorithm by using a selected interaction operation mode, wherein the interaction operation mode comprises a touch interaction operation mode and a movement interaction operation mode.
The interaction between the real gesture in the real space and the gesture model drawn in the Unity is established, the movement of the drawn gesture model is controlled through the action recognition of the real gesture, and the interaction between the real gesture and the three-dimensional object in the Unity is controlled through an interaction algorithm by using the selected interaction operation mode.
Gesture recognition generally follows a view coordinate system, i.e. a right-handed coordinate system, which recognizes hand motion in mm (millimeters), while objects in the Unity3D scene are displayed with reference to a left-handed coordinate system in m (meters). Therefore, to realize interaction between the real hand in real space and the virtual hand in the virtual space created by Unity3D, a coordinate transformation must first be performed between the two, with the following formulas:
(x,y,z)=S(x′,y′,z′)×R (1)
d=d′×R (2)
wherein (x, y, z) and d represent the coordinates and direction vector in the Unity3D scene; (x′, y′, z′) and d′ represent the coordinates and direction vector recognized on the gesture-capture side; and R is the coordinate transformation matrix of formula (3), which maps the right-handed recognition coordinate system into Unity's left-handed coordinate system.
S is the corresponding scale factor, representing the scale change of the coordinate transformation (set according to the needs of the system scene); thus, every 1 mm of real movement corresponds to a movement of S m in the Unity scene.
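As an illustrative sketch of formulas (1) and (2), the conversion below assumes the common case in which the right-handed recognition coordinate system is mapped into Unity's left-handed system by negating the z-axis; the actual matrix R of formula (3) is reproduced only as an image in the original publication, so this particular choice of R is an assumption:
using UnityEngine;
public static class HandCoordinateMapper
{
    // Scale S: recognition data is in millimeters, Unity works in meters.
    const float S = 0.001f;
    // Assumed transformation matrix R: identity apart from a z-axis flip
    // (right-handed -> left-handed). Replace with the system's actual R.
    static readonly Matrix4x4 R = Matrix4x4.Scale(new Vector3(1f, 1f, -1f));
    // (x, y, z) = S(x', y', z') x R  -- formula (1)
    public static Vector3 MapPosition(Vector3 recognized)
    {
        return R.MultiplyPoint3x4(recognized) * S;
    }
    // d = d' x R                     -- formula (2)
    public static Vector3 MapDirection(Vector3 recognizedDirection)
    {
        return R.MultiplyVector(recognizedDirection).normalized;
    }
}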
Firstly, the touch interactive operation mode mainly handles how the rendered gesture model touches the interactive object in Unity. The execution steps of the touch interactive operation mode are as follows:
loading an InteractionManager component on the hand model generated in Unity, which is important for managing and monitoring gestures; then loading Unity's general collision components, such as Rigidbody and Collider, for collision contact monitoring. At this point the processing on the gesture side is finished;
the main interactive part is placed on the interactive object. The interactive object also needs to load some Unity components, such as Rigidbody and BoxCollider, and then the main listening script InteractionButton is mounted, which is mainly used to monitor execution. The state of the interactive object and the gesture model is then monitored in Update; contact between the interactive object and the gesture model is handled by monitoring contact on the interactive object through methods such as OnCollisionEnter(Collision collision) and OnCollisionExit(Collision collision). The state of the interactive object is recorded when contact occurs, and restored when the objects separate.
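A minimal sketch of such a listening script on the interactive object is given below; it assumes the gesture model carries the tag "HandModel", and the class name, tag and state handling are illustrative rather than the patent's actual InteractionButton implementation:
using UnityEngine;
// Attach to an interactive object that also carries a Rigidbody and a BoxCollider.
[RequireComponent(typeof(Rigidbody), typeof(BoxCollider))]
public class TouchInteractionListener : MonoBehaviour
{
    Vector3 savedPosition;       // state recorded when contact begins
    Quaternion savedRotation;
    Transform followTarget;      // gesture model currently in contact
    void Update()
    {
        // While touched, the object follows the gesture model.
        if (followTarget != null)
            transform.position = followTarget.position;
    }
    void OnCollisionEnter(Collision collision)
    {
        if (!collision.gameObject.CompareTag("HandModel")) return;
        savedPosition = transform.position;   // record the state on contact
        savedRotation = transform.rotation;
        followTarget = collision.transform;
    }
    void OnCollisionExit(Collision collision)
    {
        if (!collision.gameObject.CompareTag("HandModel")) return;
        followTarget = null;
        transform.position = savedPosition;   // restore the state on separation
        transform.rotation = savedRotation;
    }
}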
Secondly, the mobile interactive operation mode comprises gesture model assisted ray generation and ray confirmation, and the execution steps are as follows:
1. gesture model assisted ray generation
The starting point and ending point, or the starting point and direction, of a ray are basically what must be known to draw a ray in Unity, so that the drawing can be performed according to specific coordinate information. The auxiliary ray is the same: the corresponding known information needs to be obtained through corresponding calculations.
Specifically, acquire the coordinates of the current VR camera and of the gesture model in Unity. First, convert the y value of the camera's rotation Euler angles into a quaternion and record the y deviation angle; second, acquire the frame currently being rendered and the gesture rendering information in that frame;
curFrame = DataProvider.CurrentFrame.TransformedCopy(Transform.Identity);
The starting coordinates are then calculated from the camera position and the product of the previously recorded rotation (deviation angle) and the offset:
ProjectionOrigin = Camera.main.transform.position + CurrentRotation * -RightOrigin;
Next, find the offset:
Offset = curFrame.Hands[whichHand].Fingers[1].Bone(Bone.BoneType.TYPE_METACARPAL).Center.ToVector3() - ProjectionOrigin;
where Fingers[1] represents the index finger and TYPE_METACARPAL represents the metacarpal bone.
Finally, the starting point can be obtained as the center position of the thumb metacarpal bone of the hand rendered in the current frame:
originPos = curFrame.Hands[whichHand].Fingers[0].Bone(Bone.BoneType.TYPE_METACARPAL).Center.ToVector3();
The direction is as follows:
Direction=ProjectionOrigin+Offset*MotionScalingFactor;
where MotionScalingFactor represents the movement scaling amount.
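Tying the fragments above together, a hedged sketch of the auxiliary-ray computation might look as follows; RightOrigin and MotionScalingFactor are taken from the fragments above and are assumed to be supplied by the hand-tracking integration rather than by standard Unity APIs:
using UnityEngine;
public class GestureRayBuilder : MonoBehaviour
{
    public Vector3 RightOrigin;             // ray-origin offset, set per scene
    public float MotionScalingFactor = 1f;  // movement scaling amount
    public Vector3 ProjectionOrigin { get; private set; }
    public Vector3 Direction { get; private set; }
    // indexMetacarpalCenter stands for
    // curFrame.Hands[whichHand].Fingers[1].Bone(Bone.BoneType.TYPE_METACARPAL).Center.ToVector3();
    // its retrieval depends on the hand-tracking SDK and is assumed to happen elsewhere.
    public void Build(Quaternion currentRotation, Vector3 indexMetacarpalCenter)
    {
        // Start coordinates: camera position plus the rotated origin offset.
        ProjectionOrigin = Camera.main.transform.position + currentRotation * -RightOrigin;
        // Offset between the index-finger metacarpal center and the projection origin.
        Vector3 offset = indexMetacarpalCenter - ProjectionOrigin;
        // Direction, scaled by the movement amount.
        Direction = ProjectionOrigin + offset * MotionScalingFactor;
    }
}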
2. Ray confirmation
If displacement is needed, the gesture must be held for a set time to confirm it, and only the OK gesture is accepted as the judgment condition for movement each time. For this determination, a gesture-judging script HandStateChecker is added to the scene for auxiliary judgment; it provides an external call interface that reports whether the current gesture is actually an OK gesture. The calculation traverses the five fingers of the hand, determines whether the distance between the fingertips of the thumb and the index finger (along their vector direction) is smaller than a specified value such as 5 cm or 10 cm, and additionally requires that the difference of the fingertip direction vectors of the other adjacent fingers is not smaller than that value; if these conditions hold, the gesture is judged to be an OK gesture. After the gesture is acquired, whether it has been held for the set time is judged externally, and then the confirmation interaction can be carried out.
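A minimal sketch of the confirmation logic described above is given below; it assumes a HandStateChecker-style component exposing an IsOkGesture() query, and the member names, hold time and structure are illustrative assumptions:
using UnityEngine;
// Confirms a movement only after the OK gesture has been held for a set time.
public class OkGestureConfirmer : MonoBehaviour
{
    public float holdTime = 1.5f;   // assumed confirmation time in seconds
    float heldFor;
    // Stand-in for the external interface of the HandStateChecker script:
    // returns true while the current hand pose is judged to be an OK gesture.
    public System.Func<bool> IsOkGesture = () => false;
    public bool Confirmed { get; private set; }
    void Update()
    {
        if (IsOkGesture())
        {
            heldFor += Time.deltaTime;
            Confirmed = heldFor >= holdTime;   // confirmed once the set time has elapsed
        }
        else
        {
            heldFor = 0f;
            Confirmed = false;
        }
    }
}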
Further, please refer to fig. 2, fig. 2 is a detailed flowchart of a first embodiment of step S300 shown in fig. 1, in this embodiment, step S300 includes:
step S310, if the selected interactive operation mode is the touch interactive operation mode, judging whether the gesture model touches the three-dimensional object.
And if the selected interactive operation mode is recognized to be the touch interactive operation mode, further judging whether the drawn gesture model touches the three-dimensional object.
And S320, if the gesture model touches the three-dimensional object, performing interactive calculation, and controlling the three-dimensional object to move along with the gesture model so as to realize touch interaction of gesture recognition.
If the gesture model touches the three-dimensional object, interactive calculation is carried out between the gesture model and the three-dimensional object, and the three-dimensional object is controlled to move along with the gesture model so as to realize touch interaction of gesture recognition.
Corresponding contact judgment is performed according to the collision distance between the gesture model and the three-dimensional object: the distance between the gesture model and the model of the interactive object is calculated, i.e. the distance between two points in three-dimensional space, and the contact judgment is made according to this collision distance.
Assume that the coordinate point of the gesture model is A(x₁, y₁, z₁) and the coordinate point of the three-dimensional object is B(x₂, y₂, z₂); then the collision distance between the gesture model and the three-dimensional object is obtained by the following formula:
d_A,B = √((x₁ − x₂)² + (y₁ − y₂)² + (z₁ − z₂)²)
wherein d_A,B is the collision distance between the gesture model and the three-dimensional object; x₁, y₁ and z₁ are the x-axis, y-axis and z-axis coordinates of coordinate point A of the gesture model; and x₂, y₂ and z₂ are the x-axis, y-axis and z-axis coordinates of coordinate point B of the three-dimensional object.
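As a sketch, the contact judgment above maps directly onto Unity's Vector3.Distance; the contact threshold value used here is an assumption chosen for illustration:
using UnityEngine;
public static class ContactJudge
{
    // Returns true when the collision distance d(A, B) falls below the threshold.
    public static bool IsTouching(Vector3 gesturePoint,    // coordinate point A of the gesture model
                                  Vector3 objectPoint,     // coordinate point B of the three-dimensional object
                                  float threshold = 0.05f) // assumed contact distance in meters
    {
        float dAB = Vector3.Distance(gesturePoint, objectPoint); // Euclidean distance as in the formula above
        return dAB < threshold;
    }
}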
Preferably, please refer to fig. 3, fig. 3 is a detailed flowchart of a second embodiment of step S300 shown in fig. 1, and on the basis of the first embodiment, the step S300 includes:
and step S330, if the selected interactive operation mode is identified to be the mobile interactive operation mode, drawing a motion ray according to an interactive algorithm.
And step S340, judging whether the drawn motion ray hits the movable object, and if the motion ray hits the movable object, controlling the movable object to move along with the shot motion ray so as to realize the movement interaction of gesture recognition.
The movement interaction based on gesture recognition of Unity includes gesture confirmation recognition operations and ray generation calculations.
Using gestures for movement operations requires an auxiliary tool for real-time monitoring of the moving point, together with a confirmation gesture for the movement decision. The details are as follows:
first, gesture confirmation recognition operation
After the ray object is generated by the corresponding gesture model in the VR headset + Unity3D environment, the movement interaction needs to be completed by a confirmation gesture. In this embodiment, the movement determination is performed by setting an OK gesture.
The identification process is mainly divided into the following steps:
step S1, acquiring gesture actions through a camera or related hardware.
Gesture information of the user is monitored within the effective monitoring range of the camera or related gesture hardware, and shooting continues for 2-3 seconds to maintain gesture acquisition. A gesture recognition time of 2-3 seconds is a range acceptable to the general public both physically and psychologically.
Step S2 is a basic judgment of the gesture motion.
The number of palms appearing in the field of view is judged from the number of captured three-dimensional hand-center coordinates, so as to determine whether the user is operating with one hand or two: one-hand operation means that only one hand is present in the field of view when the gesture recognition operation is performed, while in two-hand operation both hands may be present in the field of view.
And step S3, OK gesture recognition.
Let S₁~S₅ represent the five fingers, where S₁ represents the thumb and S₅ represents the little finger, and so on. On this basis, in combination with the gesture, the following determinations can be made:
① (Right hand) In the coordinate system of the recognition space of the camera or related gesture hardware, the absolute coordinate values of S₁ and S₂ on the X-axis and Y-axis are smaller than those of S₃, S₄ and S₅, while on the Z-axis S₁ and S₂ are greater than S₃, S₄ and S₅;
② the fingertips of S₁ and S₂ are touching, so their fingertip Euclidean distance must be less than the threshold d;
③ the angle α₃₄ between S₃ and S₄ and the angle α₄₅ between S₄ and S₅ must be greater than the set angle α′;
④ S₃, S₄ and S₅ are in an open (extended) state, so the included angle between each of S₃, S₄, S₅ and the vertical direction of the palm must satisfy the set angle β′.
The Euclidean distance d between the three-dimensional fingertip position coordinates of S₁ and S₂ is calculated by formula (4). The angle α₃₄ between S₃ and S₄ and the angle α₄₅ between S₄ and S₅ are then calculated respectively: with vector V₁ and vector V₂ representing the directions of two adjacent fingers, the included angle α between the two fingers can be calculated by formula (5), and the corresponding included angle β between a finger and the perpendicular vector of the palm can also be obtained by formula (5).
d = √((x₁ − x₂)² + (y₁ − y₂)² + (z₁ − z₂)²) (4)
α = arccos((V₁ · V₂) / (|V₁| · |V₂|)) (5)
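A sketch of the geometric tests behind formulas (4) and (5) using Unity's vector helpers; the threshold values d and α′ below are illustrative assumptions:
using UnityEngine;
public static class OkGestureGeometry
{
    // Formula (4): Euclidean distance between the thumb and index fingertips.
    public static bool ThumbIndexTouching(Vector3 thumbTip, Vector3 indexTip, float d = 0.05f)
    {
        return Vector3.Distance(thumbTip, indexTip) < d;
    }
    // Formula (5): included angle between two adjacent finger direction vectors,
    // alpha = arccos(V1 . V2 / (|V1| |V2|)); Vector3.Angle computes this in degrees.
    public static bool FingersSpread(Vector3 v1, Vector3 v2, float alphaPrime = 15f)
    {
        return Vector3.Angle(v1, v2) > alphaPrime;
    }
}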
Ray generation calculation
The original plan was to cast a ray from the palm recognized by the gesture, or from the position between the thumb and index finger, and to monitor the moving position through the ray's collision detection. However, this method has a problem: when the real hand hovers in real space there is slight shake and slight rotation, perhaps of only a few millimeters, but mapped into the Unity virtual space this may become a difference of tens of millimeters, and the practical effect on the ray is that its far end jitters constantly.
Finally, this problem is avoided through a corresponding design. Specifically, auxiliary information is introduced: the position and orientation of the current rendering camera are combined with the position and orientation of the gesture for the rotation, roughly forming a triangle, which better avoids the jitter problem.
The specific design is as follows:
First, the Euler angles are converted into a quaternion. Rotation in space is expressed with quaternions, from which the rotation matrix is derived; constructing a quaternion from Euler angles works in the same way as constructing a rotation matrix from Euler angles, namely by combining three elemental rotations (Elemental Rotations). The expression for the rotation quaternion q(x, y, z, w) is then as follows (here an example is given in the XYZ order):
q = q_x · q_y · q_z, where q_x, q_y and q_z are the elemental rotation quaternions about the x, y and z axes respectively.
Note that this is not a matrix operation but a multiplication of quaternions; that is, given an Euler rotation (X, Y and Z degrees around the x, y and z axes respectively), the corresponding quaternion is q = (x, y, z, w), where:
x = sin(X/2)·cos(Y/2)·cos(Z/2) + cos(X/2)·sin(Y/2)·sin(Z/2)
y = cos(X/2)·sin(Y/2)·cos(Z/2) − sin(X/2)·cos(Y/2)·sin(Z/2)
z = cos(X/2)·cos(Y/2)·sin(Z/2) + sin(X/2)·sin(Y/2)·cos(Z/2)
w = cos(X/2)·cos(Y/2)·cos(Z/2) − sin(X/2)·sin(Y/2)·sin(Z/2)
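In Unity this conversion can be performed with the built-in Quaternion.Euler (which applies Unity's own rotation order), or constructed manually in XYZ order to mirror the expansion above; the helper below is a sketch for illustration:
using UnityEngine;
public static class EulerToQuaternionHelper
{
    // Built-in conversion; Unity applies its own fixed rotation order (ZXY),
    // which is what a camera's transform.rotation already encodes.
    public static Quaternion FromUnityEuler(Vector3 eulerDegrees)
    {
        return Quaternion.Euler(eulerDegrees);
    }
    // Manual XYZ-order construction q = qx * qy * qz, matching the expansion above.
    public static Quaternion FromEulerXYZ(float xDeg, float yDeg, float zDeg)
    {
        Quaternion qx = Quaternion.AngleAxis(xDeg, Vector3.right);
        Quaternion qy = Quaternion.AngleAxis(yDeg, Vector3.up);
        Quaternion qz = Quaternion.AngleAxis(zDeg, Vector3.forward);
        return qx * qy * qz;
    }
}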
Second, the starting position of the ray is acquired by multiplying the three-dimensional coordinates by the quaternion, which involves the rotation and translation operations of the matrix. The specific formulas are as follows:
Matrix rotation (the three elemental rotations about the x, y and z axes):
R_x(θ) = [[1, 0, 0], [0, cos θ, −sin θ], [0, sin θ, cos θ]]
R_y(θ) = [[cos θ, 0, sin θ], [0, 1, 0], [−sin θ, 0, cos θ]]
R_z(θ) = [[cos θ, −sin θ, 0], [sin θ, cos θ, 0], [0, 0, 1]]
Matrix translation (homogeneous form):
T = [[1, 0, 0, t_x], [0, 1, 0, t_y], [0, 0, 1, t_z], [0, 0, 0, 1]]
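As a sketch of how the rotation and translation above are applied in Unity to obtain the ray's starting position, the helper below builds a combined transform with Matrix4x4.TRS; the class, method and parameter names are assumptions:
using UnityEngine;
public static class RayOriginTransform
{
    // Builds a combined rotation + translation matrix and applies it to a local offset,
    // i.e. worldPoint = T * R * localOffset in homogeneous coordinates.
    public static Vector3 TransformOffset(Vector3 cameraPosition,
                                          Quaternion cameraRotation,
                                          Vector3 localOffset)
    {
        Matrix4x4 trs = Matrix4x4.TRS(cameraPosition, cameraRotation, Vector3.one);
        return trs.MultiplyPoint3x4(localOffset);
    }
}
For a pure rotation and translation this is equivalent to cameraPosition + cameraRotation * localOffset, which matches the ProjectionOrigin calculation shown earlier.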
Finally, ray generation is involved. Abstracted into mathematics, a vector's direction and length are determined by two points in space; in this generation method the starting point and direction are obtained, and the length along that direction still needs to be expressed mathematically. The default length can be assumed to be d, and when a collision object is detected the length becomes d′.
Suppose that point C(x, y, z) is the starting point of the ray and its direction vector is D(x₃, y₃, z₃); then the motion ray is given by the following formula:
ray=new Ray(C,D·d) (15)
where ray is the motion ray and d is the ray length.
Its mathematical vector representation is:
P = C + d·D (16)
wherein P is the resulting vector (a point on the ray) in mathematical vector form, D is the direction vector, and C is the initial (starting-point) vector.
According to the Unity-based gesture recognition operation interaction implementation method provided by this embodiment, gesture data of a real gesture in real space is acquired by using a gesture recognition algorithm, and key points of the gesture data are extracted as feature vectors of the gesture; the gesture is restored according to the extracted feature vectors, and a corresponding gesture model is drawn in Unity; interaction between the real gesture in real space and the gesture model drawn in Unity is established, the movement of the gesture model is controlled through motion recognition of the real gesture, and the interaction between the real gesture and a three-dimensional object in Unity is controlled through an interaction algorithm using a selected interaction operation mode. The Unity-based gesture recognition operation interaction implementation method provided by this embodiment offers good interactivity and strong engagement; the gesture recognition algorithm and the interaction algorithm provide accurate interactive control of the gesture with a high degree of automation; and the user's sense of immersion is strong, so that applying the method to education can improve the corresponding training effect.
As shown in fig. 4, fig. 4 is a functional block diagram of an embodiment of a Unity-based gesture recognition operation interaction implementation system provided by the present invention, in this embodiment, the system includes an extraction module 10, a drawing module 20, and an interaction module 30, where the extraction module 10 is configured to obtain gesture data of a real gesture in a real space by using a gesture recognition algorithm, and extract a key point of the gesture data as a feature vector of the gesture; the drawing module 20 is used for restoring the gesture according to the extracted feature vector of the gesture and drawing a corresponding gesture model in Unity; and the interaction module 30 is used for establishing interaction between a real gesture in a real space and a gesture model drawn in the Unity, controlling the movement of the gesture model through motion recognition of the real gesture, and controlling the interaction between the real gesture and the three-dimensional object in the Unity through an interaction algorithm by using a selected interaction operation mode, wherein the interaction operation mode comprises a touch interaction operation mode and a movement interaction operation mode.
In the development environment of Unity3D, gesture data of a real gesture in a real space is acquired by a gesture recognition algorithm, and a key point of the gesture data is extracted as a feature vector of the gesture. Wherein the gesture data comprises a gesture type, a gesture posture, a gesture coordinate, a gesture moving direction and/or a gesture moving speed. The feature vectors of the gesture include gesture coordinates, gesture displacement, and/or gesture acceleration.
And restoring the gesture by using the feature vectors extracted from the key points of the gesture data, and drawing a corresponding gesture model in Unity. The gesture model is a three-dimensional gesture model. The gesture model includes a gesture type and/or a gesture posture. The drawing of the gesture model comprises gesture rendering, and the execution steps of the gesture rendering are as follows:
the rendering method is a piece of code that contains a series of graphics and decides how best to render these objects as pixels on the screen. The rendering method may employ any desired method. There is nothing to say that the rendering method must display the mesh graphics by uploading those vertices and triangles to the GPU. The voxel rendering method may first decide to voxel the grid and then display it using a custom voxel rendering system.
The main reason for having different rendering methods is that different optimization strategies can be used to render the same concept, and these strategies have different benefits and trade-offs. Some strategies may not fully support certain features. In this embodiment, three rendering methods are provided.
1) The Baked renderer: this rendering method can only draw mesh graphics. The strategy is to bake all graphics into a single mesh so that all graphics can be drawn with one draw call.
2) The Dynamic renderer: this rendering method can only draw mesh graphics. The strategy is to perform some custom mesh processing to ensure that all graphics can be drawn with the same material regardless of the material used, and to try to ensure that all graphics can be batched automatically by Unity's dynamic batching.
3) The Dynamic Text renderer: this rendering method can only draw text graphics. The strategy is simply to use Unity's built-in dynamic font generation to create a mesh representing the text that needs to be displayed.
The rendering method is powerful in that it can view all graphics at once, which makes it possible to perform optimizations that would otherwise be difficult (or impossible) without this information.
To render the corresponding model in Unity, create a new empty GameObject in the hierarchy and attach the GraphicRenderer component to it. The GraphicRenderer is the main component and manages all graphics attached to it. Press the "New Group" button to create a new baked group. This creates a new group that can render graphics; since the "Baked" option was selected when creating the group, it will use the baked rendering method. The group displays all the different options of the rendering method and lists the added features at the bottom.
Now that there is a group, add some graphics for it to render: create a new empty GameObject as a child of the renderer object, then add the MeshGraphic component to it and assign a mesh to its mesh field. At this point the cube should appear in the scene view and the game view.
The cube is pure white at this stage. The vertex colors of the mesh can be altered to tint it, but otherwise it remains a solid color, because all renderers use an unlit shader by default, which does not interact with lighting.
Return to the graphics renderer and note the section named Graphic Features at the bottom of the interface. Press the + button to add a new feature, then select the Texture option from the drop-down menu; a new element is added to the list.
The graphics have just been given the ability to wrap a texture on their surface. Returning to the graphic, an attribute field appears to which a texture can be assigned. Continue by selecting the texture to assign to the mesh. Upon completion, a button named Refresh Atlas is displayed; the same button is also present on the graphics renderer. Pressing this button updates all the textures used by the graphics renderer, which is necessary for any texture-related change to be observed. Pressing the button now shows the texture mapped onto the graphic.
At this time, the rendering of the gesture data to the graphic image is completed, and the set texture is given thereto.
The interaction between the real gesture in the real space and the gesture model drawn in the Unity is established, the movement of the drawn gesture model is controlled through the action recognition of the real gesture, and the interaction between the real gesture and the three-dimensional object in the Unity is controlled through an interaction algorithm by using the selected interaction operation mode.
Gesture recognition generally follows a view coordinate system, i.e. a right-handed coordinate system, which recognizes hand motion in mm (millimeters), while objects in the Unity3D scene are displayed with reference to a left-handed coordinate system in m (meters). Therefore, to realize interaction between the real hand in real space and the virtual hand in the virtual space created by Unity3D, a coordinate transformation must first be performed between the two, with the following formulas:
(x,y,z)=S(x′,y′,z′)×R (17)
d=d′×R (18)
wherein (x, y, z) and d represent the coordinates and direction vector in the Unity3D scene; (x′, y′, z′) and d′ represent the coordinates and direction vector recognized on the gesture-capture side; and R is the coordinate transformation matrix of formula (19), which maps the right-handed recognition coordinate system into Unity's left-handed coordinate system.
S is the corresponding scale factor, representing the scale change of the coordinate transformation (set according to the needs of the system scene); thus, for every 1 mm of real movement, S m is moved in the Unity scene.
Firstly, the touch interactive operation mode mainly handles how the rendered gesture model touches the interactive object in Unity. The execution steps of the touch interactive operation mode are as follows:
loading an InteractionManager component on the hand model generated in Unity, which is important for managing and monitoring gestures; then loading Unity's general collision components, such as Rigidbody and Collider, for collision contact monitoring. At this point the processing on the gesture side is finished;
the main interactive part is placed on the interactive object. The interactive object also needs to load some Unity components, such as Rigidbody and BoxCollider, and then the main listening script InteractionButton is mounted, which is mainly used to monitor execution. The state of the interactive object and the gesture model is then monitored in Update; contact between the interactive object and the gesture model is handled by monitoring contact on the interactive object through methods such as OnCollisionEnter(Collision collision) and OnCollisionExit(Collision collision). The state of the interactive object is recorded when contact occurs, and restored when the objects separate.
Secondly, the mobile interactive operation mode comprises gesture model assisted ray generation and ray confirmation, and the execution steps are as follows:
1. gesture model assisted ray generation
The starting point and ending point, or the starting point and direction, of a ray are basically what must be known to draw a ray in Unity, so that the drawing can be performed according to specific coordinate information. The auxiliary ray is the same: the corresponding known information needs to be obtained through corresponding calculations.
Specifically, acquire the coordinates of the current VR camera and of the gesture model in Unity. First, convert the y value of the camera's rotation Euler angles into a quaternion and record the y deviation angle; second, acquire the frame currently being rendered and the gesture rendering information in that frame;
curFrame = DataProvider.CurrentFrame.TransformedCopy(Transform.Identity);
The starting coordinates are then calculated from the camera position and the product of the previously recorded rotation (deviation angle) and the offset:
ProjectionOrigin = Camera.main.transform.position + CurrentRotation * -RightOrigin;
Next, find the offset:
Offset = curFrame.Hands[whichHand].Fingers[1].Bone(Bone.BoneType.TYPE_METACARPAL).Center.ToVector3() - ProjectionOrigin;
where Fingers[1] represents the index finger and TYPE_METACARPAL represents the metacarpal bone.
Finally, the starting point can be obtained as the center position of the thumb metacarpal bone of the hand rendered in the current frame:
originPos = curFrame.Hands[whichHand].Fingers[0].Bone(Bone.BoneType.TYPE_METACARPAL).Center.ToVector3();
The direction is as follows:
Direction=ProjectionOrigin+Offset*MotionScalingFactor;
where MotionScalingFactor represents the movement scaling amount.
2. Ray confirmation
If displacement is needed, the gesture must be held for a set time to confirm it, and only the OK gesture is accepted as the judgment condition for movement each time. For this determination, a gesture-judging script HandStateChecker is added to the scene for auxiliary judgment; it provides an external call interface that reports whether the current gesture is actually an OK gesture. The calculation traverses the five fingers of the hand, determines whether the distance between the fingertips of the thumb and the index finger (along their vector direction) is smaller than a specified value such as 5 cm or 10 cm, and additionally requires that the difference of the fingertip direction vectors of the other adjacent fingers is not smaller than that value; if these conditions hold, the gesture is judged to be an OK gesture. After the gesture is acquired, whether it has been held for the set time is judged externally, and then the confirmation interaction can be carried out.
Further, please refer to fig. 5, fig. 5 is a functional module schematic diagram of a first embodiment of the interaction module shown in fig. 4, in this embodiment, the interaction module 30 includes a determining unit 31 and a first interaction unit 32, where the determining unit 31 is configured to determine whether the gesture model touches the three-dimensional object if it is recognized that the selected interaction operation mode is the touch interaction operation mode; the first interaction unit 32 is configured to perform interaction calculation when the gesture model touches the three-dimensional object, and control the three-dimensional object to move along with the gesture model to implement touch interaction of gesture recognition.
Preferably, please refer to fig. 6; fig. 6 is a functional module schematic diagram of an embodiment of the first interaction unit shown in fig. 5. In this embodiment, the first interaction unit 32 includes a first calculating subunit 321, where the first calculating subunit 321 is configured to perform corresponding contact determination according to the collision distance between the gesture model and the three-dimensional object, assuming that the coordinate point of the gesture model is A(x₁, y₁, z₁) and the coordinate point of the three-dimensional object is B(x₂, y₂, z₂); the collision distance between the gesture model and the three-dimensional object is then obtained by the following formula:
d_A,B = √((x₁ − x₂)² + (y₁ − y₂)² + (z₁ − z₂)²) (20)
wherein d_A,B is the collision distance between the gesture model and the three-dimensional object; x₁, y₁ and z₁ are the x-axis, y-axis and z-axis coordinates of coordinate point A of the gesture model; and x₂, y₂ and z₂ are the x-axis, y-axis and z-axis coordinates of coordinate point B of the three-dimensional object.
Further, please refer to fig. 7, fig. 7 is a functional module schematic diagram of a second embodiment of the interaction module shown in fig. 4, and on the basis of the first embodiment, the interaction module 30 includes a drawing unit 33 and a second interaction unit 34, where the drawing unit 33 is configured to draw a motion ray according to an interaction algorithm if it is recognized that the selected interaction operation mode is the mobile interaction operation mode; and the second interaction unit 34 is configured to determine whether the drawn motion ray hits the movable object, and if the drawn motion ray hits the movable object, control the movable object to move along with the shot motion ray to implement the movement interaction of gesture recognition.
Preferably, referring to fig. 8, fig. 8 is a functional module schematic diagram of an embodiment of the second interaction unit shown in fig. 7; in this embodiment, the second interaction unit 34 comprises a second calculation subunit 341, wherein the second calculation subunit 341 is configured to assume that point C(x, y, z) is the starting point of the ray and its direction vector is D(x₃, y₃, z₃); then the motion ray is given by the following formula:
ray=new Ray(C,D·d) (21)
where ray is the motion ray and d is the ray length.
According to the Unity-based gesture recognition operation interaction implementation system provided by this embodiment, gesture data of a real gesture in real space is acquired by using a gesture recognition algorithm, and key points of the gesture data are extracted as feature vectors of the gesture; the gesture is restored according to the extracted feature vectors, and a corresponding gesture model is drawn in Unity; interaction between the real gesture in real space and the gesture model drawn in Unity is established, the movement of the gesture model is controlled through motion recognition of the real gesture, and the interaction between the real gesture and a three-dimensional object in Unity is controlled through an interaction algorithm using a selected interaction operation mode. The Unity-based gesture recognition operation interaction implementation system provided by this embodiment offers good interactivity and strong engagement; the gesture recognition algorithm and the interaction algorithm provide accurate interactive control of the gesture with a high degree of automation; and the user's sense of immersion is strong, so that applying the system to education can improve the corresponding training effect.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention. It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A gesture recognition operation interaction implementation method based on Unity is characterized by comprising the following steps:
acquiring gesture data of a real gesture in a real space by using a gesture recognition algorithm, and extracting key points of the gesture data as feature vectors of the gesture;
restoring the gesture according to the extracted feature vector of the gesture, and drawing a corresponding gesture model in Unity;
the method comprises the steps of establishing interaction between a real gesture in a real space and a gesture model drawn in the Unity, controlling the movement of the gesture model through motion recognition of the real gesture, and controlling interaction between the real gesture and a three-dimensional object in the Unity through an interaction algorithm by using a selected interaction operation mode, wherein the interaction operation mode comprises a touch interaction operation mode and a mobile interaction operation mode.
2. The Unity-based gesture recognition operation interaction implementation method of claim 1,
the step of controlling the interaction of the real gesture with the three-dimensional object in Unity through an interaction algorithm using the selected interactive operation mode comprises:
if the selected interactive operation mode is recognized to be the touch interactive operation mode, judging whether the gesture model touches the three-dimensional object;
and if the gesture model touches the three-dimensional object, performing interactive calculation, and controlling the three-dimensional object to move along with the gesture model so as to realize touch interaction of gesture recognition.
3. The Unity-based gesture recognition operation interaction implementation method of claim 2,
the step of performing interactive calculation if the gesture model touches the three-dimensional object and controlling the three-dimensional object to follow the gesture model so as to realize touch interaction of gesture recognition comprises the following steps:
performing a corresponding contact judgment according to the collision distance between the gesture model and the three-dimensional object; assuming that the coordinate point of the gesture model is A(x1, y1, z1) and the coordinate point of the three-dimensional object is B(x2, y2, z2), the collision distance between the gesture model and the three-dimensional object is obtained by the following formula:
dA,B = √((x1 − x2)² + (y1 − y2)² + (z1 − z2)²)
wherein dA,B is the collision distance between the gesture model and the three-dimensional object; x1, y1 and z1 are respectively the abscissa, the ordinate and the vertical-axis coordinate of coordinate point A of the gesture model; and x2, y2 and z2 are respectively the abscissa, the ordinate and the vertical-axis coordinate of coordinate point B of the three-dimensional object.
4. The Unity-based gesture recognition operation interaction implementation method of claim 1,
the step of controlling the gesture model to interact with the three-dimensional object in Unity through an interaction algorithm by using the selected interactive operation mode comprises the following steps:
if the selected interactive operation mode is identified to be the mobile interactive operation mode, drawing a motion ray according to an interactive algorithm;
and judging whether the drawn motion ray hits a movable object, and if the drawn motion ray hits the movable object, controlling the movable object to follow the emitted motion ray so as to realize the movement interaction of gesture recognition.
5. The Unity-based gesture recognition operation interaction implementation method of claim 4,
the step of judging whether the drawn motion ray hits a movable object, and if the drawn motion ray hits the movable object, controlling the movable object to follow the emitted motion ray so as to realize the movement interaction of gesture recognition comprises the following steps:
supposing that point C(x, y, z) is the starting point of the ray and its direction vector is D(x3, y3, z3), the motion ray is obtained by the following formula:
ray=new Ray(C,D·d)
where ray is the motion ray and d is the ray length.
6. A Unity-based gesture recognition operation interaction implementation system is characterized by comprising:
the extraction module (10) is used for acquiring gesture data of a real gesture in a real space by utilizing a gesture recognition algorithm and extracting key points of the gesture data as feature vectors of the gesture;
a drawing module (20) for restoring the gesture according to the extracted feature vector of the gesture and drawing a corresponding gesture model in Unity;
the interaction module (30) is used for establishing interaction between a real gesture in a real space and a gesture model drawn in the Unity, controlling movement of the gesture model through motion recognition of the real gesture, and controlling interaction between the real gesture and a three-dimensional object in the Unity through an interaction algorithm by using a selected interaction operation mode, wherein the interaction operation mode comprises a touch interaction operation mode and a movement interaction operation mode.
7. The Unity-based gesture recognition operation interaction implementation system of claim 6,
the interaction module (30) comprises:
the judging unit (31) is used for judging whether the gesture model touches the three-dimensional object or not if the selected interactive operation mode is recognized as the touch interactive operation mode;
and the first interaction unit (32) is used for performing interaction calculation when the gesture model touches the three-dimensional object, and controlling the three-dimensional object to move along with the gesture model so as to realize touch interaction of gesture recognition.
8. The Unity-based gesture recognition operation interaction implementation system of claim 7,
the first interaction unit (32) comprises a first calculation subunit (321),
the first calculating subunit (321) is used for carrying out a corresponding contact judgment according to the collision distance between the gesture model and the three-dimensional object; assuming that the coordinate point of the gesture model is A(x1, y1, z1) and the coordinate point of the three-dimensional object is B(x2, y2, z2), the collision distance between the gesture model and the three-dimensional object is obtained by the following formula:
dA,B = √((x1 − x2)² + (y1 − y2)² + (z1 − z2)²)
wherein dA,B is the collision distance between the gesture model and the three-dimensional object; x1, y1 and z1 are respectively the abscissa, the ordinate and the vertical-axis coordinate of coordinate point A of the gesture model; and x2, y2 and z2 are respectively the abscissa, the ordinate and the vertical-axis coordinate of coordinate point B of the three-dimensional object.
9. The Unity-based gesture recognition operation interaction implementation system of claim 6,
the interaction module (30) comprises:
a drawing unit (33) for drawing a motion ray according to an interaction algorithm if the selected interaction operation mode is identified as the mobile interaction operation mode;
and the second interaction unit (34) is used for judging whether the drawn motion ray hits a movable object, and controlling the movable object to follow the emitted motion ray to realize the movement interaction of gesture recognition if the motion ray hits the movable object.
10. The Unity-based gesture recognition operation interaction implementation system of claim 9,
the second interaction unit (34) comprises a second calculation subunit (341),
the second calculation subunit (341) is configured to assume that point C(x, y, z) is the starting point of the ray and that its direction vector is D(x3, y3, z3); the motion ray is then obtained by the following formula:
ray=new Ray(C,D·d)
where ray is the motion ray and d is the ray length.
CN202110015484.7A 2021-01-06 2021-01-06 Unity-based gesture recognition operation interaction realization method and system Pending CN112667087A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110015484.7A CN112667087A (en) 2021-01-06 2021-01-06 Unity-based gesture recognition operation interaction realization method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110015484.7A CN112667087A (en) 2021-01-06 2021-01-06 Unity-based gesture recognition operation interaction realization method and system

Publications (1)

Publication Number Publication Date
CN112667087A true CN112667087A (en) 2021-04-16

Family

ID=75413302

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110015484.7A Pending CN112667087A (en) 2021-01-06 2021-01-06 Unity-based gesture recognition operation interaction realization method and system

Country Status (1)

Country Link
CN (1) CN112667087A (en)

Citations (1)


Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110515455A (en) * 2019-07-25 2019-11-29 山东科技大学 It is a kind of based on the dummy assembly method cooperateed in Leap Motion and local area network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhao Xinyi et al., "Research on Virtual Object Manipulation Technology Based on Leap Motion", Mechanical & Electrical Engineering Technology *

Similar Documents

Publication Publication Date Title
US20230025269A1 (en) User-Defined Virtual Interaction Space and Manipulation of Virtual Cameras with Vectors
US10761612B2 (en) Gesture recognition techniques
Cabral et al. On the usability of gesture interfaces in virtual reality environments
Rautaray Real time hand gesture recognition system for dynamic applications
JP3777830B2 (en) Computer program generation apparatus and computer program generation method
JP3777650B2 (en) Interface equipment
Rautaray et al. Real time multiple hand gesture recognition system for human computer interaction
JP2023169411A (en) Information processing device, control method of information processing device, and program
US20080062169A1 (en) Method Of Enabling To Model Virtual Objects
Smith et al. Digital foam interaction techniques for 3D modeling
CN111643899A (en) Virtual article display method and device, electronic equipment and storage medium
Choi et al. 3D hand pose estimation on conventional capacitive touchscreens
Lang et al. A multimodal smartwatch-based interaction concept for immersive environments
KR101525011B1 (en) tangible virtual reality display control device based on NUI, and method thereof
CN112667087A (en) Unity-based gesture recognition operation interaction realization method and system
Liu et al. COMTIS: Customizable touchless interaction system for large screen visualization
Humberston et al. Hands on: interactive animation of precision manipulation and contact
CN113703577A (en) Drawing method and device, computer equipment and storage medium
Varga et al. Survey and investigation of hand motion processing technologies for compliance with shape conceptualization
Tran et al. A hand gesture recognition library for a 3D viewer supported by kinect's depth sensor
Shaikh et al. Hand Gesture Recognition Using Open CV
TWI435280B (en) Gesture recognition interaction system
Piumsomboon Natural hand interaction for augmented reality.
Wenzhen et al. A Master-Slave Hand System for Virtual reality Interaction
Oshita et al. Character motion control by hands and principal component analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20210416