CN111709384A - AR gesture recognition method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN111709384A
CN111709384A (application CN202010572370.8A; granted as CN111709384B)
Authority
CN
China
Prior art keywords
image
joint
hand
gesture
black
Prior art date
Legal status
Granted
Application number
CN202010572370.8A
Other languages
Chinese (zh)
Other versions
CN111709384B (en)
Inventor
裴增阳 (Pei Zengyang)
Current Assignee
Beijing Si Tech Information Technology Co Ltd
Original Assignee
Beijing Si Tech Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Si Tech Information Technology Co Ltd filed Critical Beijing Si Tech Information Technology Co Ltd
Priority to CN202010572370.8A
Publication of CN111709384A
Application granted
Publication of CN111709384B
Active legal status
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G06V40/28: Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Psychiatry (AREA)
  • Multimedia (AREA)
  • Social Psychology (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An embodiment of the present application provides an AR gesture recognition method and device, an electronic device, and a storage medium. The AR gesture recognition method comprises the following steps: acquiring and recording a first image of a user gesture; identifying first contour information of a hand in the first image; acquiring first position information of the joint points of the hand according to the first contour information; binarizing the first image according to the first position information of the joint points to obtain a joint black-and-white image, wherein the joint points and the background have the same color; and inputting the joint black-and-white image into a preset neural network model to identify the gesture type of the user. The method completes gesture type recognition and can improve both its accuracy and efficiency.

Description

AR gesture recognition method and device, electronic equipment and storage medium
Technical Field
The application relates to the technical field of computer networks, in particular to an AR gesture recognition method and device, electronic equipment and a storage medium.
Background
With the arrival of 5G, AR technology has brought great convenience to daily life; in particular, combined with applications in 5G scenarios, marketing scenarios built around AR are becoming more and more prominent.
To give users an immersive experience of 5G, users can interact with a robot freely and at close range, winning free gifts through interactive games; this attracts more users to the experience, helps retain subscribers, and publicizes the capabilities of the card-and-coupon center. However, gesture recognition in the prior art is inefficient.
In view of the above problems, no effective technical solution has yet been proposed.
Disclosure of Invention
An object of the embodiments of the present application is to provide an AR gesture recognition method, apparatus, electronic device, and storage medium, which can improve recognition accuracy and efficiency.
In a first aspect, an embodiment of the present application provides an AR gesture recognition method, including the following steps:
acquiring and recording a first image of a user gesture;
identifying first contour information of a hand in the first image;
acquiring first position information of a joint point of the hand according to the first contour information;
performing binarization processing on the first image according to first position information of the joint point to obtain a joint black-and-white image, wherein the joint point and the background have the same color;
and inputting the joint black-and-white image into a preset neural network model, and identifying the gesture type of the user.
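The five steps above can be sketched as a minimal end-to-end pipeline. This is an illustrative sketch only: the patent does not fix the joint-detection or classification models, so `detect_joints` and `classify` below are stand-in stubs, and the pixel-difference threshold is an assumed value.

```python
import numpy as np

def hand_contour_mask(frame, background, thresh=30):
    """Step 2: pixels that differ from the background are treated as hand."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return diff > thresh  # boolean hand-region mask

def detect_joints(mask):
    """Step 3 (stub): a trained keypoint model would run here; as a
    placeholder we return the mask centroid as one fake 'joint point'."""
    ys, xs = np.nonzero(mask)
    return [(int(ys.mean()), int(xs.mean()))] if len(ys) else []

def joint_black_white_image(mask, joints):
    """Step 4: background black, hand white, joint pixels black again,
    so the joint points and the background share the same color."""
    img = np.where(mask, 255, 0).astype(np.uint8)
    for y, x in joints:
        img[y, x] = 0
    return img

def classify(bw_image):
    """Step 5 (stub): the preset neural network model would run here."""
    return "rock"  # placeholder gesture label

def recognize(frame, background):
    mask = hand_contour_mask(frame, background)  # steps 1-2
    joints = detect_joints(mask)                 # step 3
    bw = joint_black_white_image(mask, joints)   # step 4
    return classify(bw)                          # step 5
```

Each stub would be replaced by the corresponding trained model in a real deployment; only the data flow between the five steps is taken from the method as claimed.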
According to the AR gesture recognition method provided by the embodiment of the application, a first image of a user gesture is acquired and recorded; first contour information of a hand in the first image is identified; first position information of the joint points of the hand is acquired according to the first contour information; the first image is binarized according to the first position information of the joint points to obtain a joint black-and-white image, wherein the joint points and the background have the same color; and the joint black-and-white image is input into a preset neural network model to identify the gesture type of the user. Gesture type recognition is thereby completed, and its accuracy and efficiency can be improved.
Optionally, in the AR gesture recognition method according to the embodiment of the present application, the method further includes:
acquiring a sample data set, wherein the sample data set comprises a plurality of sample data, and each sample data comprises a sample joint black-and-white image and a corresponding gesture type;
and inputting the plurality of sample data into a CNN (convolutional neural network) model for training to obtain a preset convolutional neural network model.
Optionally, in the AR gesture recognition method according to an embodiment of the present application, the step of recognizing the first contour information of the hand in the first image includes:
acquiring an image in which the hand of the user does not appear in the camera view, as a background image;
and acquiring first contour information of the hand in the first image according to the background image.
Optionally, in the AR gesture recognition method according to an embodiment of the present application, the step of obtaining first position information of a joint point of the hand according to the first contour information includes:
and inputting the first contour information into a first neural network model to acquire first position information of each joint point of the hand.
Optionally, in the AR gesture recognition method according to the embodiment of the present application, the step of performing binarization processing on the first image according to the first position information of the joint point to obtain a black-and-white joint image includes:
carrying out binarization processing on a background area in the first image to convert the background area into black;
and converting the color of each joint point in the hand region corresponding to the first outline information in the first image into black, thereby obtaining a joint black-and-white image.
In a second aspect, an embodiment of the present application further provides an AR gesture recognition apparatus, including:
the acquisition module is used for acquiring and recording a first image of a user gesture;
a first recognition module that recognizes first contour information of a hand in the first image;
the first acquisition module is used for acquiring first position information of joint points of the hand according to the first contour information;
the binarization module is used for carrying out binarization processing on the first image according to the first position information of the joint point to obtain a joint black-white image, wherein the joint point and the background have the same color;
and the second identification module is used for inputting the joint black-and-white image into a preset neural network model and identifying the gesture type of the user.
Optionally, in the AR gesture recognition apparatus according to this embodiment of the present application, the apparatus further includes:
the second acquisition module is used for acquiring a sample data set, wherein the sample data set comprises a plurality of sample data, and each sample data comprises a sample joint black-and-white image and a corresponding gesture type;
and the training module is used for inputting the plurality of sample data into a CNN (convolutional neural network) model for training, so as to obtain a preset convolutional neural network model.
Optionally, in the AR gesture recognition apparatus according to an embodiment of the present application, the first recognition module includes:
a first acquisition unit configured to acquire an image in which the hand of the user does not appear in the camera view, as a background image;
and the identification unit is used for acquiring first contour information of the hand in the first image according to the background image.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor and a memory, where the memory stores computer-readable instructions, and when the computer-readable instructions are executed by the processor, the steps in the method as provided in the first aspect are executed.
In a fourth aspect, embodiments of the present application provide a storage medium, on which a computer program is stored, where the computer program, when executed by a processor, performs the steps in the method as provided in the first aspect.
As can be seen from the above, the AR gesture recognition method, apparatus, electronic device and storage medium provided in the embodiments of the present application acquire and record a first image of a user gesture; identify first contour information of a hand in the first image; acquire first position information of the joint points of the hand according to the first contour information; binarize the first image according to the first position information of the joint points to obtain a joint black-and-white image, wherein the joint points and the background have the same color; and input the joint black-and-white image into a preset neural network model to identify the gesture type of the user. Gesture type recognition is thereby completed, and its accuracy and efficiency can be improved.
Additional features and advantages of the present application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the embodiments of the present application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting its scope; those skilled in the art can obtain other related drawings from these drawings without inventive effort.
Fig. 1 is a flowchart of an AR gesture recognition method according to an embodiment of the present disclosure.
Fig. 2 is a schematic structural diagram of an AR gesture recognition apparatus according to an embodiment of the present disclosure.
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
Referring to fig. 1, fig. 1 is a flowchart of an AR gesture recognition method used in a robot or a terminal device in some embodiments of the present application. The method comprises the following steps:
s101, collecting and recording a first image of the gesture of the user.
S102, identifying first contour information of the hand in the first image.
S103, acquiring first position information of the joint point of the hand according to the first contour information.
And S104, carrying out binarization processing on the first image according to the first position information of the joint point to obtain a joint black-and-white image, wherein the joint point and the background have the same color.
And S105, inputting the joint black-and-white image into a preset neural network model, and identifying the gesture type of the user.
In step S101, a first image recording the gesture of the user may be captured by a camera provided on the robot or the terminal device. After the robot gives a game prompt, or the display screen displays one, the gesture made by the user begins to be collected, such as rock, paper or scissors, or a gesture from another finger-guessing game.
In step S102, when recognizing the first contour information of the hand in the first image, a pre-trained neural network may be used. The background in the first image can be identified by comparison with the original background image, so that the first contour of the hand is recognized and the first contour information of the hand obtained.
In some embodiments, step S102 comprises the following sub-steps:
S1021, acquiring an image in which the user's hand does not appear in the camera view, as a background image; and S1022, acquiring the first contour information of the hand in the first image according to the background image. In step S1021, to improve accuracy, a small semi-enclosed region may be set in front of the lens, ensuring that the image captured by the camera is fixed, i.e. the original background image, whenever the user's hand is outside the region. In step S1022, the background image and the first image are adjusted to the same size specification, and the portion of the first image added relative to the background image is identified, so that the image of the hand region is extracted and the first contour information obtained.
In step S103, an algorithm commonly used in the prior art may be adopted to identify the first position information of each joint point; for example, the first contour information may be input into a first neural network model to obtain the first position information of each joint point of the hand. Of course, the method is not limited thereto.
In step S104, it should be understood that in the joint black-and-white image the background area is black, the joint points of the hand area are black, and the other positions of the hand area are white. In some embodiments, step S104 comprises: binarizing the background area in the first image so that it turns black; and converting the color of each joint point in the hand region corresponding to the first contour information in the first image into black, thereby obtaining the joint black-and-white image.
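The two binarization sub-operations can be sketched as follows. Painting a small neighbourhood around each joint point (the `radius` parameter) is an assumption made for visibility; the patent only requires that the joint points share the background colour.

```python
import numpy as np

def binarize_joints(first_image, hand_mask, joint_points, radius=1):
    """Background area -> black (0); hand area -> white (255); then each
    joint point (plus an assumed small neighbourhood) is painted black,
    so joints and background end up the same color."""
    out = np.zeros(first_image.shape, dtype=np.uint8)  # background black
    out[hand_mask] = 255                               # hand region white
    h, w = out.shape
    for y, x in joint_points:
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        out[y0:y1, x0:x1] = 0                          # joint turns black
    return out
```

The resulting image encodes the hand silhouette and joint layout in a single binary channel, which is what the preset neural network model consumes in step S105.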
In step S105, the preset neural network model may be a model commonly used in the prior art, or may be trained independently; for example, a CNN (convolutional neural network) model may be trained to obtain the preset convolutional neural network model. Specifically, in some embodiments, the method further comprises a model-training step, comprising: S106, acquiring a sample data set, wherein the sample data set comprises a plurality of sample data, and each sample data comprises a sample joint black-and-white image and a corresponding gesture type; and S107, inputting the plurality of sample data into the CNN model for training to obtain the preset convolutional neural network model. During training, not all sample data need be used; training can stop as soon as the accuracy of the CNN model exceeds a preset threshold.
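The early-stopping behaviour described above (stop once accuracy exceeds a preset threshold, without consuming all sample data) can be sketched as a plain training loop; `model_step` is a hypothetical callable standing in for one CNN training step that reports the current validation accuracy, and the threshold value is illustrative.

```python
def train_until_threshold(model_step, sample_batches,
                          acc_threshold=0.95, max_epochs=50):
    """Feed sample batches to the training step and stop as soon as the
    reported accuracy exceeds the preset threshold, so the full sample
    data set need not be exhausted."""
    acc = 0.0
    for epoch in range(max_epochs):
        for batch in sample_batches:
            acc = model_step(batch)
            if acc > acc_threshold:
                return acc, epoch  # early stop: threshold reached
    return acc, max_epochs
```

In practice `model_step` would run one optimizer step of the CNN on the batch of sample joint black-and-white images and labels, then evaluate on a held-out split.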
As can be seen from the above, in the embodiment of the application, a first image of a user gesture is acquired and recorded; first contour information of the hand in the first image is identified; first position information of the joint points of the hand is acquired according to the first contour information; the first image is binarized according to the first position information of the joint points to obtain a joint black-and-white image, wherein the joint points and the background have the same color; and the joint black-and-white image is input into a preset neural network model to identify the gesture type of the user. Gesture type recognition is thereby completed, and its accuracy and efficiency improved.
Referring to fig. 2, fig. 2 is a structural diagram of an AR gesture recognition apparatus according to an embodiment of the present disclosure, where the AR gesture recognition apparatus includes: the system comprises an acquisition module 201, a first identification module 202, a first acquisition module 203, a binarization module 204 and a second identification module 205.
The acquisition module 201 is configured to acquire and record a first image of a user gesture. The first image may be captured by a camera provided on the robot or the terminal device; after the robot gives a game prompt, or the display screen displays one, the gesture made by the user begins to be collected, such as rock, paper or scissors, or a gesture from another finger-guessing game.
The first identification module 202 is configured to identify first contour information of a hand in the first image; a pre-trained neural network may be used for the recognition. The background in the first image can be identified by comparison with the original background image, so that the first contour of the hand is recognized and the first contour information obtained. The first identification module 202 comprises: a first acquisition unit configured to acquire an image in which the user's hand does not appear in the camera view, as a background image; and a recognition unit configured to acquire the first contour information of the hand in the first image according to the background image. To improve accuracy, a small semi-enclosed region may be set in front of the lens, ensuring that the image captured by the camera is fixed, i.e. the original background image, whenever the user's hand is outside the region. The background image and the first image may be adjusted to the same size specification, and the portion of the first image added relative to the background image identified, so as to extract the hand region from the first image.
The first obtaining module 203 is configured to obtain first position information of a joint point of the hand according to the first contour information; the first position information of each joint point can be identified by using an algorithm common in the prior art, for example, the first contour information can be input into a first neural network model to obtain the first position information of each joint point of the hand. Of course, it is not limited thereto.
The binarization module 204 is configured to perform binarization processing on the first image according to first position information of a joint point to obtain a black-and-white joint image, where the joint point and a background have the same color; it can be understood that in the joint black-and-white image, the background area is black, the joint points of the hand area are black, and other positions of the hand area are white. It is understood that, in some embodiments, the binarization module 204 is configured to binarize the background region in the first image to make it turn into black; and converting the color of each joint point in the hand region corresponding to the first outline information in the first image into black, thereby obtaining a joint black-and-white image.
The second recognition module 205 is configured to input the joint black-and-white image into a preset neural network model to recognize the gesture type of the user. The preset neural network model may be a model commonly used in the prior art, or may be trained independently; for example, a CNN (convolutional neural network) model may be trained to obtain the preset convolutional neural network model.
In some embodiments, the apparatus further comprises: the second acquisition module is used for acquiring a sample data set, wherein the sample data set comprises a plurality of sample data, and each sample data comprises a sample joint black-and-white image and a corresponding gesture type; and the training module is used for inputting the plurality of sample data into the cnn convolutional neural network model for training so as to obtain a preset convolutional neural network model.
As can be seen from the above, in the embodiment of the application, a first image of a user gesture is acquired and recorded; first contour information of the hand in the first image is identified; first position information of the joint points of the hand is acquired according to the first contour information; the first image is binarized according to the first position information of the joint points to obtain a joint black-and-white image, wherein the joint points and the background have the same color; and the joint black-and-white image is input into a preset neural network model to identify the gesture type of the user. Gesture type recognition is thereby completed, and its accuracy and efficiency improved.
Referring to fig. 3, fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. The present disclosure provides an electronic device 3 comprising a processor 301 and a memory 302, interconnected and communicating with each other via a communication bus 303 and/or another form of connection mechanism (not shown). The memory 302 stores a computer program executable by the processor 301; when the computing device runs, the processor 301 executes the computer program to perform the method in any optional implementation of the above embodiments, realizing the following functions: acquiring and recording a first image of a user gesture; identifying first contour information of a hand in the first image; acquiring first position information of the joint points of the hand according to the first contour information; binarizing the first image according to the first position information of the joint points to obtain a joint black-and-white image, wherein the joint points and the background have the same color; and inputting the joint black-and-white image into a preset neural network model to identify the gesture type of the user.
An embodiment of the present application provides a storage medium storing a computer program which, when executed by a processor, performs the method in any optional implementation of the above embodiments, realizing the following functions: acquiring and recording a first image of a user gesture; identifying first contour information of a hand in the first image; acquiring first position information of the joint points of the hand according to the first contour information; binarizing the first image according to the first position information of the joint points to obtain a joint black-and-white image, wherein the joint points and the background have the same color; and inputting the joint black-and-white image into a preset neural network model to identify the gesture type of the user.
The storage medium may be implemented by any type of volatile or nonvolatile storage device or combination thereof, such as a Static Random Access Memory (SRAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), an Erasable Programmable Read-Only Memory (EPROM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), a magnetic Memory, a flash Memory, a magnetic disk, or an optical disk.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
In addition, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Furthermore, the functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. An AR gesture recognition method is characterized by comprising the following steps:
acquiring and recording a first image of a user gesture;
identifying first contour information of a hand in the first image;
acquiring first position information of a joint point of the hand according to the first contour information;
performing binarization processing on the first image according to first position information of the joint point to obtain a joint black-and-white image, wherein the joint point and the background have the same color;
and inputting the joint black-and-white image into a preset neural network model, and identifying the gesture type of the user.
2. The AR gesture recognition method of claim 1, further comprising:
acquiring a sample data set, wherein the sample data set comprises a plurality of sample data, and each sample data comprises a sample joint black-and-white image and a corresponding gesture type;
and inputting the plurality of sample data into a CNN (convolutional neural network) model for training to obtain a preset convolutional neural network model.
3. The AR gesture recognition method of claim 1, wherein the step of recognizing the first contour information of the hand in the first image comprises:
acquiring, as a background image, an image of the scene in which the hand of the user does not appear;
and acquiring first contour information of the hand in the first image according to the background image.
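The background-image approach of claim 3 is essentially frame differencing: pixels that differ from the pre-captured background belong to the hand. A minimal hypothetical sketch, assuming grayscale frames stored as nested lists and an arbitrary threshold of 30 (both invented for illustration; the patent does not specify a differencing method):

```python
# Derive a coarse hand mask by differencing a frame against the background.

def hand_mask(frame, background, threshold=30):
    """Return a 2D boolean mask: True where the frame differs from the background."""
    h, w = len(frame), len(frame[0])
    return [[abs(frame[r][c] - background[r][c]) > threshold
             for c in range(w)] for r in range(h)]

background = [[10, 10, 10],
              [10, 10, 10]]
frame      = [[10, 200, 10],   # bright "hand" pixels appear in column 1
              [10, 210, 10]]
mask = hand_mask(frame, background)
```

The first contour information of the claim would then be extracted from the boundary of this mask, e.g. with a contour-tracing routine.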
4. The AR gesture recognition method according to claim 1, wherein the step of obtaining first position information of joint points of the hand according to the first contour information comprises:
and inputting the first contour information into a first neural network model to acquire first position information of each joint point of the hand.
5. The AR gesture recognition method according to claim 1, wherein the step of binarizing the first image according to the first position information of the joint point to obtain the joint black and white image comprises:
performing binarization processing on a background region in the first image to convert the background region into black;
and converting the color of each joint point in the hand region corresponding to the first contour information in the first image into black, thereby obtaining the joint black-and-white image.
6. An AR gesture recognition apparatus, comprising:
the acquisition module is used for acquiring and recording a first image of a user gesture;
the first recognition module is used for recognizing first contour information of a hand in the first image;
the first acquisition module is used for acquiring first position information of joint points of the hand according to the first contour information;
the binarization module is used for performing binarization processing on the first image according to the first position information of the joint points to obtain a joint black-and-white image, wherein the joint points and the background have the same color;
and the second recognition module is used for inputting the joint black-and-white image into a preset neural network model to identify the gesture type of the user.
7. The AR gesture recognition device of claim 6, wherein the device further comprises:
the second acquisition module is used for acquiring a sample data set, wherein the sample data set comprises a plurality of sample data, and each sample data comprises a sample joint black-and-white image and a corresponding gesture type;
and the training module is used for inputting the plurality of sample data into a convolutional neural network (CNN) model for training to obtain the preset neural network model.
8. The AR gesture recognition device of claim 6, wherein the first recognition module comprises:
the first acquisition unit is used for acquiring, as a background image, an image in which the hand of the user does not appear;
and the identification unit is used for acquiring first contour information of the hand in the first image according to the background image.
9. An electronic device comprising a processor and a memory, wherein the memory stores computer-readable instructions which, when executed by the processor, perform the steps of the method according to any one of claims 1 to 5.
10. A storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, performs the steps of the method according to any one of claims 1-5.
CN202010572370.8A 2020-06-22 2020-06-22 AR gesture recognition method and device, electronic equipment and storage medium Active CN111709384B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010572370.8A CN111709384B (en) 2020-06-22 2020-06-22 AR gesture recognition method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111709384A true CN111709384A (en) 2020-09-25
CN111709384B CN111709384B (en) 2023-06-30

Family

ID=72542060


Country Status (1)

Country Link
CN (1) CN111709384B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108573226A (en) * 2018-04-08 2018-09-25 浙江大学 Method for locating key points of Drosophila larva body segments based on cascaded pose regression
CN108734194A (en) * 2018-04-09 2018-11-02 浙江工业大学 Human joint point recognition method based on a single depth map for virtual reality
CN109190559A (en) * 2018-08-31 2019-01-11 深圳先进技术研究院 Gesture recognition method, gesture recognition device and electronic equipment
CN109359514A (en) * 2018-08-30 2019-02-19 浙江工业大学 Joint gesture tracking and recognition strategy method for desktop VR
CN111178170A (en) * 2019-12-12 2020-05-19 青岛小鸟看看科技有限公司 Gesture recognition method and electronic equipment


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KUNAL CHANDA et al.: "A New Hand Gesture Recognition Scheme for Similarity Measurement in a Vision Based Barehanded Approach", IEEE *
TIAN Yuan et al.: "Real-time gesture recognition method based on Kinect", Computer Engineering and Design, no. 06, pages 229-234 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113627265A (en) * 2021-07-13 2021-11-09 深圳市创客火科技有限公司 Unmanned aerial vehicle control method and device and computer readable storage medium
CN113840177A (en) * 2021-09-22 2021-12-24 广州博冠信息科技有限公司 Live broadcast interaction method and device, storage medium and electronic equipment
CN113840177B (en) * 2021-09-22 2024-04-30 广州博冠信息科技有限公司 Live interaction method and device, storage medium and electronic equipment



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant