CN113569635A - Gesture recognition method and system - Google Patents

Gesture recognition method and system

Info

Publication number
CN113569635A
CN113569635A (application CN202110689187.0A)
Authority
CN
China
Prior art keywords
gesture, screen, cloud data, gesture recognition, point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110689187.0A
Other languages
Chinese (zh)
Other versions
CN113569635B (en)
Inventor
杨毅
龙杰
王品
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Eai Technology Co ltd
Original Assignee
Huizhou Yuedeng Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huizhou Yuedeng Intelligent Technology Co ltd
Priority to CN202110689187.0A
Publication of CN113569635A
Application granted
Publication of CN113569635B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G08 SIGNALLING
    • G08C TRANSMISSION SYSTEMS FOR MEASURED VALUES, CONTROL OR SIMILAR SIGNALS
    • G08C17/00 Arrangements for transmitting signals characterised by the use of a wireless electrical link
    • G08C17/02 Arrangements for transmitting signals characterised by the use of a wireless electrical link using a radio link

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a gesture recognition method and system. The gesture recognition method comprises the following steps: acquiring environmental point cloud data in real time, and constructing a virtual two-dimensional plane according to the environmental point cloud data; constructing a screen interaction plane corresponding to a display device screen by adopting a preset point marking method; collecting motion point cloud data of a gesture on the screen interaction plane, and performing gesture recognition according to the motion point cloud data; and generating a corresponding control instruction according to the recognized gesture, and sending the control instruction to the terminal device through wireless transmission. The invention can create a screen interaction plane in equal proportion to the display device screen, so that gestures are accurately matched to the screen, enabling portable non-contact interaction with multiple devices.

Description

Gesture recognition method and system
Technical Field
The invention relates to the technical field of information processing, in particular to a gesture recognition method and system.
Background
With the development of multimedia technology, the interaction technologies used with it have also developed rapidly, and higher requirements are placed on their reliability, usability, unobtrusiveness and user experience. Human-computer interaction is no longer limited to simple key operations with a keyboard and mouse; more interaction technologies, such as gesture interaction, expression interaction and posture interaction, have been proposed and applied.
A mid-air gesture is a non-contact gesture that lets the user operate bare-handed; in essence it is a natural human-computer interaction mode that imposes no encumbrance on the user's gesture interaction. Mid-air gestures naturally use finger, wrist and arm movements to express interactive intent, mainly pointing, waving the hand, making a fist, rotating the palm and the like, and offer advantages such as a larger interaction space and higher flexibility.
As small-screen computing devices such as smart phones and smart bracelets, rings and watches continue to proliferate, users mostly interact with existing screens through remote controllers or WeChat mini-programs, which is inconvenient. Furthermore, these display devices require a variety of operation gestures, and some gestures may conflict with previously defined ones.
Disclosure of Invention
In view of the above technical problems, embodiments of the present invention provide a gesture recognition method and system capable of recognizing a specific gesture to control a terminal device.
A first aspect of an embodiment of the present invention provides a gesture recognition method, where the method includes: acquiring environmental point cloud data in real time, and constructing a virtual two-dimensional plane according to the environmental point cloud data; constructing a screen interaction plane corresponding to a display device screen by adopting a preset point marking method; collecting motion point cloud data of a gesture on the screen interaction plane, and performing gesture recognition according to the motion point cloud data; and generating a corresponding control instruction according to the recognized gesture, and sending the control instruction to the terminal device through wireless transmission.
Optionally, the preset point labeling method includes a five-point labeling method, where the five points include four vertices and a center point of the display device, and the step of constructing the screen interaction plane corresponding to the screen of the display device by using the preset point labeling method includes: and constructing a screen interaction plane corresponding to the screen of the display equipment according to the position information of the five mark points.
Optionally, the preset point marking method includes a two-point marking method, and the step of constructing the screen interaction plane corresponding to the display device screen by using the preset point marking method includes: the method comprises the steps of sending parameter information of a screen of the display equipment to a laser radar, determining two mark points according to the parameter information, and constructing a screen interaction plane corresponding to the screen of the display equipment according to position information of the two mark points.
Optionally, the step of generating a corresponding control instruction according to the recognized gesture includes: searching a preset mapping relation according to the recognized gesture, and acquiring a control instruction corresponding to the gesture, wherein the preset mapping relation comprises the corresponding relation between the gesture and the control instruction.
Optionally, the method further comprises: saving parameter information of the screen interaction plane and setting a preset wake-up gesture; monitoring the wake-up gesture through a laser radar, and acquiring position information of the preset wake-up gesture; and constructing a second screen interaction plane according to the position information and the parameter information.
Optionally, the method further comprises: if no gesture is recognized within a preset time, monitoring the wake-up gesture through the laser radar, and acquiring the position information of the preset wake-up gesture.
Optionally, the step of performing gesture recognition according to the motion point cloud data includes: and acquiring the coordinate position of the gesture on the screen interaction plane according to the motion point cloud data, and recognizing the gesture according to the coordinate position.
Optionally, the step of performing gesture recognition according to the motion point cloud data includes: and establishing a deep neural network, inputting the collected motion point cloud data of the gesture into the deep neural network for training, and performing gesture recognition according to the features extracted by the deep neural network.
Optionally, the method further comprises: when the wake-up gesture is detected, judging whether the display device is powered on, and if not, sending a power-on instruction to the display device.
The second aspect of the embodiments of the invention provides a gesture recognition system, which comprises a freely movable laser radar and a terminal device in communication with the laser radar, wherein the laser radar is used for transmitting radar signals, receiving reflected signals, acquiring environmental point cloud data from the reflected signals, and executing any one of the gesture recognition methods above.
According to the technical scheme, the screen interaction plane corresponding to the display device screen is constructed by a preset point marking method, so that gestures are accurately matched to the display device screen and portable non-contact interaction with the display device is realized. Meanwhile, different wake-up gestures can be set for different display devices, so that non-contact interaction with several display devices can be realized accurately, and the operation is more user-friendly.
Drawings
FIG. 1 is a schematic flow chart diagram illustrating a gesture recognition method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of establishing a screen interaction plane according to the present invention;
FIG. 3 is a flowchart illustrating a gesture recognition method according to another embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, the present invention provides a gesture recognition method, which includes the following steps:
Step S10: acquiring environmental point cloud data in real time, and constructing a virtual two-dimensional plane according to the environmental point cloud data.
In the invention, the environmental point cloud data are acquired in real time by a laser radar; the environmental point cloud data describe a two-dimensional plane centered on the laser radar, and this two-dimensional plane is an invisible virtual plane. Preferably, the two-dimensional plane is a sector plane.
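As an illustration of step S10, the sketch below converts one lidar sweep into Cartesian points on such a virtual plane and clips them to a sector. It is a minimal sketch assuming the radar driver yields (angle, distance) pairs; names such as `scan_to_points` are illustrative, not taken from the patent.
```python
import math

def scan_to_points(scan):
    """Convert one lidar sweep of (angle_rad, distance_m) pairs into
    Cartesian points on the virtual two-dimensional plane, with the
    lidar itself at the origin (the 'circle center' of the plane)."""
    points = []
    for angle, dist in scan:
        if dist <= 0:          # drop invalid returns
            continue
        points.append((dist * math.cos(angle), dist * math.sin(angle)))
    return points

def clip_to_sector(points, half_angle_rad=math.pi / 4):
    """Keep only points inside the preferred sector-shaped plane
    (the 90-degree field here is an assumed value)."""
    return [(x, y) for x, y in points
            if abs(math.atan2(y, x)) <= half_angle_rad]
```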
Step S20: constructing a screen interaction plane corresponding to the display device screen by adopting a preset point marking method.
In one embodiment of the present invention, the preset point marking method includes a five-point marking method, where the five points include the four vertices and the center point of the display device. In use, the display device can display the five mark points via an application program loaded on it. Referring to fig. 2, the laser radar on the table emits laser light, and the user sequentially clicks five mark points, for example the four vertices 1, 2, 4, 5 and the center point 3 of the display device plane, on the virtual two-dimensional plane constructed by the laser radar, following the numeric prompts displayed in sequence on the real display device screen; a screen interaction plane corresponding to the display device screen is then constructed from the position information of the five mark points. Because the display device may be fixed anywhere, clicking the mark points on the virtual two-dimensional plane defines the equal-proportion position of the display device screen on the virtual plane, so multi-device, portable interaction can be achieved.
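A minimal sketch of the five-point calibration follows. It assumes the five clicked marks arrive as 2D coordinates on the virtual plane, with points 1, 2, 4, 5 ordered top-left, top-right, bottom-left, bottom-right and point 3 the center, as in the Fig. 2 numbering; the consistency tolerance is an assumed value, not from the patent.
```python
import numpy as np

def plane_from_five_points(p1, p2, p3, p4, p5, tol=0.05):
    """Five-point calibration: p1, p2, p4, p5 are the screen corners
    (assumed order: top-left, top-right, bottom-left, bottom-right) and
    p3 is the center point, following the numbering of Fig. 2. All
    inputs are 2D coordinates measured on the virtual plane."""
    corners = np.array([p1, p2, p4, p5], dtype=float)
    center = corners.mean(axis=0)
    # The clicked center lets us sanity-check the four corner clicks;
    # tol (meters) is an assumed tolerance.
    if np.linalg.norm(center - np.asarray(p3, dtype=float)) > tol:
        raise ValueError("center mark inconsistent with the corner marks")
    return {
        "corners": corners,
        "center": center,
        "width": float(np.linalg.norm(corners[1] - corners[0])),
        "height": float(np.linalg.norm(corners[2] - corners[0])),
    }
```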
In another embodiment of the present invention, the preset point marking method includes a two-point marking method, and the step of constructing the screen interaction plane corresponding to the screen of the display device by using the preset point marking method includes: the method comprises the steps of sending parameter information of a screen of the display equipment to a laser radar, determining two mark points according to the parameter information, and constructing a screen interaction plane corresponding to the screen of the display equipment according to position information of the two mark points.
Specifically, the display device sends parameter information such as the shape and aspect ratio of its screen to the laser radar, and the laser radar determines two mark points from the received parameter information. The user defines the length, width or diagonal of the screen interaction plane with the two mark points; the laser radar derives the length, width and diagonal from the two points, compares them with parameters such as the screen's shape and aspect ratio, and calculates the scale factor between the display device screen and the screen interaction plane. This embodiment can therefore create a screen interaction plane in equal proportion to the display device screen from the position information of only two points.
For example, the display device screen may be square, rectangular or circular, or another screen type. Taking a circular screen as an example, the user only needs to confirm two points, the center and any rim point, and the laser radar can determine the size of the virtual circular screen from the frame size sent by the display device. The user can thus create an accurately proportioned screen interaction plane by calibrating only a few points.
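The two-point variant can be sketched as below, under the assumption that the display device sends a small parameter record (the `shape` and `aspect_ratio` field names are invented for the sketch) and that the two marks define the diagonal of a rectangular screen or the center and a rim point of a circular one.
```python
import math

def plane_from_two_points(pa, pb, screen_info):
    """Two-point calibration: screen_info carries the parameters the
    display device sends to the lidar. The two marked points give one
    reference length, taken here as the diagonal for a rectangle or
    the center-to-rim distance for a circle."""
    span = math.dist(pa, pb)
    if screen_info["shape"] == "circle":
        # pa is the center, pb any rim point: span is the radius.
        return {"shape": "circle", "center": pa, "radius": span}
    aspect = screen_info["aspect_ratio"]          # width / height
    # Treat the marked span as the diagonal and recover width/height
    # in the same proportion as the physical screen.
    height = span / math.sqrt(1 + aspect ** 2)
    return {"shape": "rect", "origin": pa,
            "width": aspect * height, "height": height}
```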
Step S30: collecting the motion point cloud data of the gesture on the screen interaction plane, and performing gesture recognition according to the motion point cloud data.
In one embodiment, motion point cloud data of a gesture on the screen interaction plane are collected, the coordinate position of the gesture on the screen interaction plane is obtained from the motion point cloud data, and gesture recognition is performed according to the coordinate position. Specifically, the laser radar monitors the motion point cloud data of the gesture on the virtual plane, calculates the relative distances of the point cloud data along the y-axis and z-axis directions of the virtual plane, computes the coordinate position of the gesture on the virtual plane from these relative distances, and performs gesture recognition according to the coordinate position.
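A sketch of that coordinate mapping, reusing the rectangle produced by the five-point calibration sketch above; treating the first corner as the origin and the corner-to-corner vectors as axes is an assumption of this sketch.
```python
import numpy as np

def gesture_to_screen_coords(point, plane, screen_w_px, screen_h_px):
    """Map one gesture point on the interaction plane to a pixel position
    on the display screen."""
    origin = plane["corners"][0]
    u_axis = plane["corners"][1] - origin   # along the screen width
    v_axis = plane["corners"][2] - origin   # along the screen height
    d = np.asarray(point, dtype=float) - origin
    u = np.dot(d, u_axis) / np.dot(u_axis, u_axis)   # 0..1 across width
    v = np.dot(d, v_axis) / np.dot(v_axis, v_axis)   # 0..1 down height
    return int(u * screen_w_px), int(v * screen_h_px)
```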
In the invention, the laser radar may not be placed on a horizontal plane, and when it is not, the acquired data may be unstable. A gyroscope is therefore built into the laser radar, and the acquired points are attitude-corrected by means such as quaternions, ensuring the stability and correctness of the data.
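The attitude correction can be pictured as a standard quaternion-to-rotation-matrix conversion applied to the raw points, assuming the gyroscope supplies a unit quaternion in (w, x, y, z) order.
```python
import numpy as np

def correct_attitude(points_xyz, q):
    """Rotate raw lidar points by the gyroscope attitude quaternion
    q = (w, x, y, z), so data captured by a tilted lidar are expressed
    in a level frame. points_xyz is an (N, 3) array; the rotation is
    p' = q p q* written as a matrix product."""
    w, x, y, z = q
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    return np.asarray(points_xyz, dtype=float) @ R.T
```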
In one embodiment, a deep neural network can be established, the collected motion point cloud data of gestures are input into the deep neural network for training, and gesture recognition is performed according to the features extracted by the deep neural network. Specifically, gesture data from a large number of different hand operations are fed into the deep neural network for training and characteristic parameters are extracted; any mature deep neural network from the prior art may be adopted. The trained gesture recognition model is then loaded into the laser radar, improving its recognition rate and accuracy.
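Since the patent leaves the network architecture open, the following is only a placeholder-sized PyTorch classifier over fixed-length gesture tracks; the layer sizes, input length and class count are all assumptions of this sketch.
```python
import torch
import torch.nn as nn

class GestureNet(nn.Module):
    """Small classifier over fixed-length gesture tracks: the input is a
    flattened sequence of n_points (x, y) samples from the motion point
    cloud, the output is a score per known gesture class."""
    def __init__(self, n_points=32, n_classes=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_points * 2, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):            # x: (batch, n_points * 2)
        return self.net(x)

# Typical training setup; the trained weights would then be loaded
# into the lidar as the text describes.
model = GestureNet()
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```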
Because different gestures produce different motion point clouds on the virtual plane, when the user performs different gestures, for example tapping, dragging or sliding, a variety of gestures can be defined on the virtual plane, making relatively complex gesture operations practical. The gesture is obtained by analyzing the motion point cloud and confirmed from the sliding angle and distance of the hand; motion point clouds at different moments can be separated, and the motion point clouds of several consecutive moments spliced together and analyzed to identify the gesture, thereby achieving non-contact gesture recognition and completing the human-computer interaction.
For example, consider a "one" (straight-line) gesture. The laser radar acquires environmental point cloud data and constructs a virtual two-dimensional plane with itself as the coordinate origin, then collects the motion point cloud data of the gesture on the screen interaction plane. If, in the stream of consecutive point clouds the laser radar receives, the abscissa of each point changes while the ordinate does not, fitting the point cloud data yields an angle that stays constant (or may be regarded as zero); the gesture is then taken to be a straight "one" stroke and treated as a control gesture, for example triggering a power-on command. If instead both the abscissa and the ordinate of each point change, and fitting shows the angle sweeping from 0 to 180 degrees in the first half and from 180 to 360 degrees in the second half, the gesture is taken to be a wave line and likewise treated as a control gesture, for example triggering a power-off command.
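The angle-fitting logic of this example can be sketched as below; the 15 and 300 degree thresholds are illustrative values, not taken from the patent.
```python
import numpy as np

def classify_stroke(track):
    """Fit the heading angle along a time-ordered (x, y) track, as in the
    example: a near-constant heading reads as a straight 'one' stroke,
    while a heading sweeping 0 -> 180 -> 360 degrees reads as a wave line."""
    pts = np.asarray(track, dtype=float)
    deltas = np.diff(pts, axis=0)                       # per-step motion
    headings = np.unwrap(np.arctan2(deltas[:, 1], deltas[:, 0]))
    sweep = np.degrees(headings.max() - headings.min())
    if sweep < 15:          # heading barely changes
        return "straight"   # e.g. mapped to a power-on command
    if sweep > 300:         # heading sweeps roughly a full turn
        return "wave"       # e.g. mapped to a power-off command
    return "unknown"
```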
Step S40: generating a corresponding control instruction according to the recognized gesture, and sending the control instruction to the terminal device through wireless transmission.
In the invention, a preset mapping relation is searched according to the recognized gesture to obtain the control instruction corresponding to the gesture, where the preset mapping relation contains the correspondence between gestures and control instructions. Specifically, after the laser radar recognizes a gesture, it looks up the corresponding control instruction in the preset mapping relation, encodes the instruction, packs it into a small data packet, and sends the packet to the intelligent terminal device by wireless transmission such as Bluetooth or WiFi.
For example, when the laser radar recognizes a tapping gesture, the control instruction corresponding to the tapping gesture is found in the mapping relation maintained in the laser radar's memory, and the laser radar sends the control instruction to the terminal device through a communication protocol to control it, for example to power it on.
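A sketch of the lookup-and-send path: the mapping table contents are invented for illustration, and a UDP datagram over WiFi stands in for the Bluetooth/WiFi channel the text mentions.
```python
import json
import socket

# Assumed gesture-to-instruction table; the patent only requires that
# such a mapping exist in the lidar's memory.
GESTURE_COMMANDS = {"tap": "POWER_ON", "wave": "POWER_OFF"}

def send_command(gesture, host, port=9000):
    """Look up the recognized gesture, pack the instruction into a small
    data packet and push it to the terminal device."""
    command = GESTURE_COMMANDS.get(gesture)
    if command is None:
        return
    packet = json.dumps({"cmd": command}).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(packet, (host, port))
```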
The terminal device of the present invention may be an intelligent terminal device. Before step S10, the laser radar is communicatively connected with the intelligent terminal device in advance, for example through Bluetooth, WiFi or USB. The intelligent terminal device includes an intelligent robot, a mechanical arm and the like, or may be a computer, a smart television, a projector or various display devices.
When the intelligent terminal device receives the data sent by the laser radar, it decodes the received data into a control instruction it can recognize. The invention can create a screen interaction plane in equal proportion to the display device screen, so that gestures are accurately matched to the screen, realizing portable non-contact interaction with multiple devices.
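The terminal-device side of the lookup-and-send sketch above might look like this, with placeholder actions standing in for whatever the intelligent terminal actually does with the instruction.
```python
import json
import socket

def run_terminal_listener(port=9000):
    """Receive the data packet sent by the lidar, decode it back into an
    instruction the terminal recognizes, and dispatch it."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("", port))
        while True:
            data, _addr = sock.recvfrom(1024)
            command = json.loads(data.decode("utf-8"))["cmd"]
            if command == "POWER_ON":
                print("terminal: powering on")    # placeholder action
            elif command == "POWER_OFF":
                print("terminal: powering off")   # placeholder action
```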
In one embodiment of the present invention, the created screen interaction plane is virtual; the user may know its approximate position right after creating it but may forget the exact position after a long time. Therefore, to let the user quickly recover the position of the screen interaction plane, referring to fig. 3, the invention further includes the following method:
and S100, saving the parameter information of the screen interaction plane and setting a preset awakening gesture.
Step S200: monitoring the wake-up gesture through the laser radar, and acquiring the position information of the preset wake-up gesture.
Step S300: constructing a second screen interaction plane according to the position information and the parameter information.
After the laser radar acquires a screen interaction plane, it stores the plane's parameter information, and a preset wake-up gesture is set; the wake-up gesture is, for example, three fingers touching the laser radar's sector simultaneously. Thereafter the laser radar monitors for the wake-up gesture by emitting laser light. When the wake-up gesture is detected, its position information is acquired, and a second screen interaction plane is established from the stored parameter information of the screen interaction plane and the position information of the wake-up gesture. The second screen interaction plane is essentially no different from the originally established one; only the position parameters change. The user can then interact with the display device through operations such as tapping, dragging and sliding on the second screen interaction plane. In addition, the wake-up gesture is not limited to three fingers touching the sector of the laser radar simultaneously; other wake-up gestures may be used (e.g. four fingers simultaneously, five fingers making a fist, etc.). Preferably, when there are multiple display devices, a dedicated wake-up gesture can be assigned to each display device screen, that is, each screen is assigned an ID, so that the user can wake up the screen interaction plane to be operated by its ID number, and can even operate multiple display devices on the same laser radar sector scanning plane through different ID numbers.
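Re-anchoring can be sketched as reusing the saved size parameters at the wake-up position; placing the wake point at the plane's top-left corner is an assumption of this sketch (the text also allows anchoring relative to the laser radar).
```python
import numpy as np

def rebuild_plane(saved_plane, wake_position):
    """Build the second screen interaction plane: reuse the stored
    width/height of the original plane and change only the position
    parameters, anchored at where the wake-up gesture was detected."""
    origin = np.asarray(wake_position, dtype=float)
    w, h = saved_plane["width"], saved_plane["height"]
    corners = np.array([origin,
                        origin + [w, 0.0],
                        origin + [0.0, h],
                        origin + [w, h]])
    return {**saved_plane, "corners": corners, "center": corners.mean(axis=0)}
```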
In the above embodiment, if the laser radar does not recognize a gesture within a preset time, it monitors for the wake-up gesture with its laser and acquires the position information of the preset wake-up gesture. Specifically, if the laser radar detects no gesture within a certain time, it stops detecting gestures on the screen interaction plane and returns to monitoring for the wake-up gesture. The wake-up gesture can be recorded before or after the screen interaction plane calibration points are first set, and during recording the display device screen can prompt the user to perform the gesture several times, making the operation more user-friendly.
When the laser radar creates a screen interaction plane from the wake-up gesture, it creates a plane identical to the original one with the position of the wake-up gesture as the origin; alternatively, the plane can be created with the position of the laser radar as the frame of reference, which is not limited here. To facilitate operation and make the interaction more user-friendly, an information prompt such as a mouse icon is shown on the display screen when a screen interaction plane is created, so the user can operate the screen much like a notebook touchpad.
Preferably, after the laser radar detects the wake-up gesture and creates a new screen interaction plane, it only collects data inside the screen interaction plane and watches for the wake-up gesture, filtering out other noise data; this improves the laser radar's recognition accuracy and increases its processing speed.
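The noise filtering can be pictured as a simple region-of-interest test against the active rectangle from the sketches above; the margin value is an assumed tolerance.
```python
def filter_to_plane(points, plane, margin=0.02):
    """Keep only returns inside the active interaction rectangle (plus a
    small margin, in meters) and discard everything else as noise."""
    x0, y0 = plane["corners"][0]
    x1, y1 = plane["corners"][3]
    lo_x, hi_x = sorted((x0, x1))
    lo_y, hi_y = sorted((y0, y1))
    return [(x, y) for x, y in points
            if lo_x - margin <= x <= hi_x + margin
            and lo_y - margin <= y <= hi_y + margin]
```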
When the laser radar recognizes the wake-up gesture, it can first judge whether the display device is powered on; if not, it sends a command to power the display device on, so that the display device can be started by gesture.
Compared with other laser radar devices used for gesture recognition, the laser radar adopted by the invention overcomes drawbacks of the traditional approach such as difficult calibration and inflexible combination with other equipment. In use, only the laser radar needs to be fixed; it adapts to a wide variety of devices, and the fixing method is free of special restrictions.
The invention also provides a gesture recognition system, which comprises a freely movable laser radar and a terminal device in communication with the laser radar, wherein the laser radar is used for transmitting radar signals, receiving reflected signals, acquiring environmental point cloud data from the reflected signals, and executing the gesture recognition method of the above embodiments.
The system acquires the user's motion point cloud data on the virtual two-dimensional plane formed by the laser radar, obtains the coordinate position of the gesture from the collected motion point cloud data, recognizes the gesture according to the coordinate position, and wirelessly transmits the recognized gesture's instruction to the terminal device, thereby achieving non-contact interaction with the intelligent terminal device.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A method of gesture recognition, the method comprising:
acquiring environmental point cloud data in real time, and constructing a virtual two-dimensional plane according to the environmental point cloud data;
constructing a screen interaction plane corresponding to a display device screen by adopting a preset point marking method;
collecting motion point cloud data of a gesture on a screen interaction plane, and performing gesture recognition according to the motion point cloud data;
and generating a corresponding control instruction according to the recognized gesture, and sending the control instruction to the terminal device through wireless transmission.
2. The gesture recognition method according to claim 1, wherein the preset point labeling method comprises a five-point labeling method, the five points comprise four vertices and a center point of the display device, and the step of constructing the screen interaction plane corresponding to the screen of the display device by using the preset point labeling method comprises:
and constructing a screen interaction plane corresponding to the screen of the display equipment according to the position information of the five mark points.
3. The gesture recognition method according to claim 1, wherein the preset point marking method comprises a two-point marking method, and the step of constructing the screen interaction plane corresponding to the screen of the display device by using the preset point marking method comprises:
the method comprises the steps of sending parameter information of a screen of the display equipment to a laser radar, determining two mark points according to the parameter information, and constructing a screen interaction plane corresponding to the screen of the display equipment according to position information of the two mark points.
4. The gesture recognition method according to claim 1, wherein the step of generating the corresponding control instruction according to the recognized gesture comprises:
searching a preset mapping relation according to the recognized gesture, and acquiring a control instruction corresponding to the gesture, wherein the preset mapping relation comprises the corresponding relation between the gesture and the control instruction.
5. The gesture recognition method according to claim 1, further comprising:
saving parameter information of a screen interaction plane and setting a preset wake-up gesture;
monitoring a wake-up gesture through a laser radar, and acquiring position information of a preset wake-up gesture;
and constructing a second screen interaction plane according to the position information and the parameter information.
6. The gesture recognition method according to claim 5, further comprising:
and if the gesture is not recognized within a preset time, monitoring the wake-up gesture through the laser radar, and acquiring the position information of the preset wake-up gesture.
7. The gesture recognition method according to claim 1, wherein the step of performing gesture recognition according to the motion point cloud data comprises:
and acquiring the coordinate position of the gesture on the screen interaction plane according to the motion point cloud data, and recognizing the gesture according to the coordinate position.
8. The gesture recognition method according to claim 1, wherein the step of performing gesture recognition according to the motion point cloud data comprises:
and establishing a deep neural network, inputting the collected motion point cloud data of the gesture into the deep neural network for training, and performing gesture recognition according to the features extracted by the deep neural network.
9. The gesture recognition method according to claim 5, further comprising:
and when the wake-up gesture is detected, judging whether the display device is started, and if not, sending a start-up instruction to the display device.
10. A gesture recognition system, characterized in that the system comprises a freely movable laser radar and a terminal device in communication with the laser radar, wherein the laser radar is used for transmitting radar signals, receiving reflected signals, acquiring environmental point cloud data according to the reflected signals, and executing the gesture recognition method according to any one of claims 1-9.
CN202110689187.0A 2021-06-22 2021-06-22 Gesture recognition method and system Active CN113569635B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110689187.0A CN113569635B (en) 2021-06-22 2021-06-22 Gesture recognition method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110689187.0A CN113569635B (en) 2021-06-22 2021-06-22 Gesture recognition method and system

Publications (2)

Publication Number Publication Date
CN113569635A (en) 2021-10-29
CN113569635B CN113569635B (en) 2024-07-16

Family

ID=78162384

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110689187.0A Active CN113569635B (en) 2021-06-22 2021-06-22 Gesture recognition method and system

Country Status (1)

Country Link
CN (1) CN113569635B (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080120577A1 (en) * 2006-11-20 2008-05-22 Samsung Electronics Co., Ltd. Method and apparatus for controlling user interface of electronic device using virtual plane
CN102473041A (en) * 2009-08-12 2012-05-23 岛根县 Image recognition device, operation determination method, and program
KR20120070133A (en) * 2010-12-21 2012-06-29 한국전자통신연구원 Apparatus for providing virtual touch interface using camera and method thereof
CN107390922A (en) * 2017-06-30 2017-11-24 广东欧珀移动通信有限公司 virtual touch control method, device, storage medium and terminal
CN108932060A (en) * 2018-09-07 2018-12-04 深圳众赢时代科技有限公司 Gesture three-dimensional interaction shadow casting technique
CN112418089A (en) * 2020-11-23 2021-02-26 森思泰克河北科技有限公司 Gesture recognition method and device and terminal


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114343617A (en) * 2021-12-10 2022-04-15 中国科学院深圳先进技术研究院 Patient gait real-time prediction method based on edge cloud cooperation
CN114978333A (en) * 2022-05-25 2022-08-30 深圳玩智商科技有限公司 Identification equipment, system and method
CN114978333B (en) * 2022-05-25 2024-01-23 深圳玩智商科技有限公司 Identification equipment, system and method

Also Published As

Publication number Publication date
CN113569635B (en) 2024-07-16

Similar Documents

Publication Publication Date Title
US11914792B2 (en) Systems and methods of tracking moving hands and recognizing gestural interactions
US20230205151A1 (en) Systems and methods of gestural interaction in a pervasive computing environment
US20220374092A1 (en) Multi-function stylus with sensor controller
CN104102335B (en) A kind of gestural control method, device and system
US11366528B2 (en) Gesture movement recognition method, apparatus, and device
US20130194173A1 (en) Touch free control of electronic systems and associated methods
US20120038652A1 (en) Accepting motion-based character input on mobile computing devices
Kang et al. Minuet: Multimodal interaction with an internet of things
WO2015100146A1 (en) Multiple hover point gestures
CN113569635B (en) Gesture recognition method and system
CN102789312A (en) User interaction system and method
CN103324348A (en) Windows desktop control method based on intelligent mobile terminals
CN106933385B (en) A kind of implementation method of the low-power consumption sky mouse pen based on three-dimensional ultrasonic positioning
CN105260022B (en) A kind of method and device based on gesture control sectional drawing
CN108027648A (en) The gesture input method and wearable device of a kind of wearable device
CN106598422B (en) hybrid control method, control system and electronic equipment
CN112328158A (en) Interactive method, display device, transmitting device, interactive system and storage medium
KR20150118377A (en) Information inputting system and method by movements of finger
EP3779645B1 (en) Electronic device determining method and system, computer system, and readable storage medium
CN110083226B (en) Virtual space positioning method and device
CN111399728B (en) Setting method, electronic device and storage medium
CN112540683B (en) Intelligent ring, handwritten character recognition method and electronic equipment
US20240103625A1 (en) Interaction method and apparatus, electronic device, storage medium, and computer program product
CN215219638U (en) Man-machine interaction system based on gesture recognition
US20240070994A1 (en) One-handed zoom operation for ar/vr devices

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220721

Address after: 518000 room 1602, block a, building 7, Shenzhen International Innovation Valley, Dashi Road, Xili community, Xili street, Nanshan District, Shenzhen City, Guangdong Province

Applicant after: SHENZHEN EAI TECHNOLOGY Co.,Ltd.

Address before: 516000 room 2317-04, 23rd floor, innovation building, 106 Dongxin Avenue, Dongxing District, Dongjiang Industrial Park, Zhongkai high tech Zone, Huizhou City, Guangdong Province

Applicant before: Huizhou yuedeng Intelligent Technology Co.,Ltd.

GR01 Patent grant