CN116766213B - Bionic hand control method, system and equipment based on image processing

Info

Publication number: CN116766213B
Application number: CN202311068254.2A
Authority: CN (China)
Prior art keywords: anchor point, hand, information, acquiring, map
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other versions: CN116766213A (Chinese, zh)
Inventors: 刘兆伟, 李明亮, 姜丰, 苏航
Current assignee: Yantai University (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Yantai University
Application filed by Yantai University; priority to CN202311068254.2A (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Publication of CN116766213A; application granted; publication of CN116766213B
Classification: User Interface Of Digital Computer (AREA)

Abstract

The application relates to the technical field of image processing, and in particular to a bionic hand control method, system and device based on image processing. The control method determines joint regions by acquiring joint information in a hand image; the resulting initial anchor point region set map undergoes feature processing, variance screening, regression detection and screening to yield the optimal anchor point set of the current hand image to be tested. The optimal anchor point information of the current hand image and that of adjacent-frame hand images are combined by weighted averaging to obtain the key hand node set of the current hand image, and the gesture recognition result is determined from the coordinate information of the key hand node set. The gesture recognition result is transmitted to the bionic hand, which simulates the gesture and gives the corresponding interaction result, so that the bionic hand can be controlled accurately and flexibly. Because key information is screened out of a large amount of image information before the gesture recognition result is computed, computational efficiency is improved while the accuracy of the control result is preserved.

Description

Bionic hand control method, system and equipment based on image processing
Technical Field
The application relates to the technical field of image processing, in particular to a bionic hand control method, a bionic hand control system and bionic hand control equipment based on image processing.
Background
Current bionic hand control methods are mainly realized with contact devices or non-contact devices. Contact-based gesture recognition for a bionic hand transmits hand data through a glove fitted with sensors, but this approach is expensive and easily affected by external factors (such as dropping, soaking, or sensor failure). Non-contact methods avoid the high cost of contact devices: image processing technology is used to obtain a gesture recognition result from captured pictures, which is then transmitted to the bionic hand to realize the control process.
Gesture recognition based on image processing enables dexterous control of the bionic hand, but the prior art still has problems. The similarity, occlusion and diversity of gestures in images pose great challenges to gesture recognition, so recognition accuracy is low; in addition, images contain a large amount of gesture data, and processing this volume of image data makes gesture recognition inefficient.
Disclosure of Invention
The application aims to provide a bionic hand control method, a bionic hand control system and bionic hand control equipment based on image processing.
The technical scheme of the application is as follows:
a bionic hand control method based on image processing comprises the following operations:
s1, acquiring joint information in a hand diagram to be detected, and marking a corresponding area of the joint information to obtain a joint area; the joint region is subjected to anchor point extraction processing to obtain an initial anchor point region set map;
s2, carrying out feature extraction processing on the initial anchor point region set map to obtain a multi-scale feature map;
obtaining a preferred anchor point set based on the multi-scale feature map;
acquiring the confidence coefficient of each anchor point in each joint region in the preferred anchor point set, and carrying out regression detection and screening processing to obtain an optimal anchor point set;
s3, acquiring a plurality of adjacent frame hand diagrams of the hand diagram to be tested, and acquiring a key hand node set of the hand diagram to be tested based on an optimal anchor point set in the plurality of adjacent frame hand diagrams;
s4, obtaining finger angle information based on the key hand node set; the finger angle information is matched with a preset gesture recognition rule, and a gesture recognition result is obtained;
s5, inputting the gesture recognition result into the bionic hand, and outputting a gesture interaction result according to a preset gesture interaction rule.
The bionic hand control method as described above, before the feature extraction processing is performed on the initial anchor point region set map in S2, further includes: acquiring the confidence coefficient of each anchor point in each joint region in the initial anchor point region set map, deleting the anchor point with the lowest confidence coefficient in each joint region, and obtaining a first anchor point region set map; the first anchor point region set map is used for executing the feature extraction processing in S2.
According to the bionic hand control method, the operation of obtaining the multi-scale feature map through feature extraction processing of the initial anchor point region set map in the S2 specifically comprises the following steps: acquiring a low-level characteristic layer and a high-level characteristic layer of the initial anchor point region set map; the low-level feature layer and the high-level feature layer are respectively subjected to up-sampling treatment and then are fused to obtain a fusion feature map; and the fusion feature map is subjected to downsampling treatment to obtain the multi-scale feature map.
In the bionic hand control method, the operation of obtaining the preferred anchor point set based on the multi-scale feature map in the step S2 specifically comprises the following steps: and acquiring the neighborhood pixel value variance of each anchor point in the multi-scale feature map, deleting the anchor points with neighborhood pixel value variances exceeding a variance threshold value, and obtaining the optimal anchor point set.
In the bionic hand control method as described above, the operation of obtaining the key hand node set of the hand graph to be measured based on the optimal anchor point set in the hand graph of the plurality of adjacent frames in S3 specifically includes:
the operations of S1 and S2 are executed on the plurality of adjacent frame hand diagrams to obtain the optimal anchor point sets of the plurality of adjacent frame hand diagrams, forming a first anchor point set;
acquiring coordinate values and confidence degrees of anchor points in a first anchor point set to obtain a first information set; acquiring coordinate values and confidence degrees of an optimal anchor point set of the hand graph to be detected, and acquiring a second information set;
and carrying out weighted average processing on the first information set and the second information set to obtain a key hand node set of the hand graph to be tested.
In the bionic hand control method described above, the operation of obtaining the finger angle information based on the key hand node set in S4 specifically includes: acquiring the two-dimensional vector coordinates of each key hand node in the key hand node set to obtain a two-dimensional vector coordinate set; and obtaining the bending angle of each finger based on the two-dimensional vector coordinates to obtain the finger angle information.
According to the bionic hand control method, the preset gesture recognition rule in S4 includes:
if, in the finger angle information, the thumb angle is larger than the thumb closing angle threshold and the index finger angle, middle finger angle, ring finger angle and little finger angle are all larger than the finger closing angle threshold, the gesture recognition result is a stone gesture;
if, in the finger angle information, the thumb angle is larger than the thumb closing angle threshold and the index finger angle, middle finger angle, ring finger angle and little finger angle are all smaller than the finger opening angle threshold, the gesture recognition result is a scissors gesture;
if, in the finger angle information, the thumb angle, index finger angle, middle finger angle, ring finger angle and little finger angle are all smaller than the finger opening angle threshold, the gesture recognition result is a cloth gesture.
A biomimetic hand control system based on image processing, comprising:
the initial anchor point region set diagram generating module is used for acquiring joint information in the hand diagram to be detected, marking a corresponding region of the joint information and obtaining a joint region; the joint region is subjected to anchor point extraction processing to obtain an initial anchor point region set map;
the optimal anchor point set generation module is used for obtaining a multi-scale feature map through feature extraction processing of the initial anchor point region set map; obtaining a preferred anchor point set based on the multi-scale feature map; acquiring the confidence coefficient of each anchor point in each joint region in the preferred anchor point set, and carrying out regression detection and screening processing to obtain an optimal anchor point set;
the key hand node set generating module is used for acquiring a plurality of adjacent frame hand diagrams of the hand diagram to be tested and obtaining a key hand node set of the hand diagram to be tested based on an optimal anchor point set in the adjacent frame hand diagrams;
the gesture recognition result generation module is used for obtaining finger angle information based on the key hand node set; the finger angle information is matched with a preset gesture recognition rule, and a gesture recognition result is obtained;
and the gesture interaction result output module is used for inputting the gesture recognition result to the bionic hand and outputting a gesture interaction result according to a preset gesture interaction rule.
The bionic hand control equipment based on image processing comprises a processor and a memory, wherein the bionic hand control method based on the image processing is realized when the processor executes a computer program stored in the memory.
A computer readable storage medium for storing a computer program, wherein the computer program when executed by a processor implements the bionic hand control method based on image processing described above.
The application has the beneficial effects that:
the application provides a bionic hand control method based on image processing, which comprises the steps of determining a joint region by acquiring joint information in a hand image, and selecting an anchor point in the joint region to obtain an initial anchor point region collection chart; the initial anchor point region set diagram is subjected to feature processing, variance screening, regression detection and screening processing to obtain an optimal anchor point set of the current hand diagram to be detected; obtaining a key hand node set of the current hand graph to be tested after weighted average processing of the optimal anchor point information of the current hand graph to be tested and the optimal anchor point information of the hand graph of the adjacent frame; acquiring finger angles through coordinate information of the key hand node set, and determining gesture recognition results; the gesture recognition result is transmitted to the bionic hand in a communication way, the bionic hand is simulated, and corresponding interaction results are given, so that the bionic hand can be accurately and flexibly controlled; and key information is screened out from a large amount of image information to calculate gesture recognition results, so that the calculation efficiency is improved on the basis of ensuring the accuracy of control results.
Drawings
The aspects and advantages of the present application will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the application.
In the drawings:
FIG. 1 is a schematic diagram of key hand nodes in an embodiment;
FIG. 2 is a process diagram of a gesture recognition result transmitted to a simulated hand in an embodiment;
fig. 3 is a diagram of a bionic hand interaction process in an embodiment.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings.
The embodiment provides a bionic hand control method based on image processing, which comprises the following operations:
s1, acquiring joint information in a hand diagram to be detected, and marking a corresponding area of the joint information to obtain a joint area; the joint region is subjected to anchor point extraction processing to obtain an initial anchor point region set map;
s2, carrying out feature extraction processing on the initial anchor point region set map to obtain a multi-scale feature map;
obtaining a preferred anchor point set based on the multi-scale feature map;
acquiring the confidence coefficient of each anchor point in each joint region in the preferred anchor point set, and carrying out regression detection and screening processing to obtain an optimal anchor point set;
s3, acquiring a plurality of adjacent frame hand diagrams of the hand diagram to be tested, and acquiring a key hand node set of the hand diagram to be tested based on an optimal anchor point set in the plurality of adjacent frame hand diagrams;
s4, obtaining finger angle information based on the key hand node set; the finger angle information is matched with a preset gesture recognition rule, and a gesture recognition result is obtained;
s5, inputting the gesture recognition result into the bionic hand, and outputting a gesture interaction result according to a preset gesture interaction rule.
S1, acquiring joint information in a hand diagram to be detected, and marking a corresponding region of the joint information to obtain a joint region; and the joint region is subjected to anchor point extraction processing to obtain an initial anchor point region set map.
A hand image (either a palm image or a back-of-hand image) is captured to obtain the hand image to be detected. The corresponding regions are marked according to the joint feature information of the hand to obtain the joint regions; several anchor points are then selected in each joint region as candidate key joint nodes and distributed uniformly within the corresponding joint region, yielding the initial anchor point region set map.
The joint regions are the regions where the finger joints are located (specifically, where the joint lines are), the fingertip regions (on the back of the hand, the nail regions), and the midpoint region between the thumb-palm joint and the wrist. Taking the middle finger as an example, it has a fingertip, a first joint, a second joint and a third joint connected to the palm, so its joint regions comprise a fingertip region, a first joint region, a second joint region and a third joint region. The size of a joint region can be set as required.
In this embodiment, the operation of capturing the hand image is implemented by calling the device's RGB camera through OpenCV, and the operation of obtaining the joint region is implemented with a BlazeFace model.
S2, carrying out feature extraction processing on the initial anchor point region set map to obtain a multi-scale feature map; obtaining a preferred anchor point set based on the multi-scale feature map; and acquiring the confidence coefficient of each anchor point in each joint region in the preferred anchor point set, and obtaining the optimal anchor point set through regression detection and screening processing.
In order to improve the accuracy of the image processing result, more proper anchor points are conveniently selected in each joint region, and before the initial anchor point region set diagram is subjected to feature extraction processing, the method further comprises the following steps: acquiring the confidence coefficient of each anchor point in each joint region in the initial anchor point region set map, deleting the anchor point with the lowest confidence coefficient in each joint region, and obtaining a first anchor point region set map; the first anchor point region set map is used to perform the feature extraction processing in S2.
The confidence of an anchor point is calculated as follows:

c_p = f_p / F_p

where c_p is the confidence of the p-th anchor point, which directly reflects the proportion of the p-th anchor point's feature information within the total similar features; f_p is the feature information of the p-th anchor point, and F_p is the feature information known to be similar to the p-th anchor point. The anchor points are then screened according to their confidence: the anchor point with the lowest confidence (lowest reliability) in each joint region is deleted, and the anchor points with higher confidence are kept for subsequent processing.
In order to ensure the accuracy of the calculation result and improve the calculation efficiency, the operation of obtaining the confidence coefficient of the anchor point and screening the anchor point is realized by using a non-maximum value suppression algorithm.
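As a minimal illustration of this screening step, the sketch below (all names and the share-based confidence are hypothetical simplifications) computes each anchor's confidence as its share of the region's total feature response and deletes the lowest-confidence anchor per joint region:

```python
def screen_anchors(regions):
    """Drop the lowest-confidence anchor in each joint region.

    `regions` maps a joint-region name to a list of (anchor_id, feature)
    pairs, where `feature` is a scalar feature response. Confidence is
    taken as the anchor's share of the region's total feature response.
    """
    kept = {}
    for name, anchors in regions.items():
        total = sum(f for _, f in anchors)
        scored = [(aid, f / total) for aid, f in anchors]
        worst = min(scored, key=lambda s: s[1])[0]  # lowest confidence
        kept[name] = [aid for aid, _ in scored if aid != worst]
    return kept
```

For example, `screen_anchors({"fingertip": [("a", 3.0), ("b", 1.0), ("c", 4.0)]})` removes anchor "b", whose confidence 1/8 is lowest in its region.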
The operation of obtaining the multi-scale feature map through feature extraction processing of the initial anchor point region set map or the first anchor point region set map comprises the following steps: acquiring a low-level characteristic layer and a high-level characteristic layer of an initial anchor point region set diagram or a first anchor point region set diagram; the low-level feature layer and the high-level feature layer are respectively subjected to up-sampling treatment and then fused to obtain a fused feature map; and carrying out downsampling treatment on the fusion feature map to obtain a multi-scale feature map.
Extracting a low-level characteristic layer and a high-level characteristic layer of the initial anchor point region set map, up-sampling the high-level characteristic layer and the low-level characteristic layer by using an encoder through a deconvolution method, fusing characteristics, and down-sampling the fused characteristic map by using a decoder through a convolution method to obtain a multi-scale characteristic map. In order to ensure the accuracy of the calculation result and improve the calculation efficiency, the embodiment selects the operation of realizing the feature extraction processing by using the ResNet network.
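A toy, framework-free sketch of the fuse-then-downsample idea described above (nearest-neighbor upsampling and 2x2 average pooling stand in for the deconvolution and convolution steps; the tiny shapes are purely illustrative):

```python
def upsample2x(fm):
    """Nearest-neighbor 2x upsampling of a 2D feature map (list of lists)."""
    out = []
    for row in fm:
        wide = [v for v in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out

def fuse(a, b):
    """Element-wise average of two same-sized feature maps."""
    return [[(x + y) / 2 for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def downsample2x(fm):
    """2x2 average pooling."""
    out = []
    for i in range(0, len(fm), 2):
        row = []
        for j in range(0, len(fm[0]), 2):
            block = [fm[i][j], fm[i][j + 1], fm[i + 1][j], fm[i + 1][j + 1]]
            row.append(sum(block) / 4)
        out.append(row)
    return out

low = [[1.0, 2.0], [3.0, 4.0]]    # low-level feature layer (toy 2x2)
high = [[5.0, 6.0], [7.0, 8.0]]   # high-level feature layer (toy 2x2)
fused = fuse(upsample2x(low), upsample2x(high))  # both upsampled, then fused
multi_scale = downsample2x(fused)                # downsampled multi-scale map
```

In the actual method these steps are carried out by a ResNet-based encoder/decoder; the sketch only shows the data flow: upsample both layers, fuse them, then downsample the fusion.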
Based on the multi-scale feature map, the operation of obtaining the preferred anchor point set is as follows: acquire the neighborhood pixel value variance of each anchor point in each joint region in the multi-scale feature map, and delete the anchor points whose neighborhood pixel value variance exceeds a variance threshold to obtain the preferred anchor point set. Specifically, after the multi-scale feature map is obtained, the variance of the surrounding pixels is calculated for the anchor points in each joint region to obtain a corresponding variance image; the anchor points whose neighborhood pixel value variance exceeds the variance threshold are deleted, and the remaining anchor points form the preferred anchor point set.
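The variance screening can be sketched as follows (the 3x3 neighborhood size and the function names are illustrative choices, not specified by the method):

```python
from statistics import pvariance

def neighborhood_variance(img, x, y, r=1):
    """Population variance of the (2r+1)x(2r+1) pixel neighborhood of (x, y)."""
    vals = [img[i][j]
            for i in range(max(0, y - r), min(len(img), y + r + 1))
            for j in range(max(0, x - r), min(len(img[0]), x + r + 1))]
    return pvariance(vals)

def prefer_anchors(img, anchors, var_threshold):
    """Keep anchors whose neighborhood pixel-value variance is within the threshold."""
    return [(x, y) for x, y in anchors
            if neighborhood_variance(img, x, y) <= var_threshold]
```

An anchor sitting next to an outlier pixel gets a large neighborhood variance and is dropped, which is the screening effect the method relies on.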
Then, a convolutional neural network is used to acquire the confidence of each anchor point in the preferred anchor point set. Taking the target bounding-box coordinate information in the standard anchor point data as the regression target label and regression task target, a regression network learns the mapping between the anchor point features in the preferred anchor point set and the standard key hand nodes; the trained regression model keeps, in each joint region, the anchor point whose confidence is closest to the standard value, thereby retaining the anchor point closest to the standard key hand node features and yielding the optimal anchor point set.
S3, acquiring a plurality of adjacent frame hand images of the hand image to be detected, and acquiring a key hand node set of the hand image to be detected based on an optimal anchor point set in the plurality of adjacent frame hand images.
Based on the optimal anchor point sets in the plurality of adjacent frame hand diagrams, the operation of obtaining the key hand node set of the hand graph to be tested is as follows: the operations of S1 and S2 are executed on the plurality of adjacent frame hand diagrams to obtain their optimal anchor point sets, which form a first anchor point set; the coordinate values and confidences of the anchor points in the first anchor point set are acquired to obtain a first information set; the coordinate values and confidences of the optimal anchor point set of the hand graph to be detected are acquired to obtain a second information set; and the first information set and the second information set are weighted-averaged to obtain the key hand node set of the hand graph to be tested.
Specifically, taking the acquisition time of the image to be detected as the midpoint, several hand images within a period before and after that time are acquired to obtain the plurality of adjacent frame hand diagrams, and the operations of S1 and S2 are executed on them to obtain their respective optimal anchor point sets, forming the first anchor point set. Then, the coordinate values and confidences of the anchor points in the first anchor point set are acquired to form the first information set, and the coordinate values and confidences of the optimal anchor point set of the hand graph to be detected are acquired to form the second information set. Taking the confidences as weights, the anchor point coordinate values of the same key region in the first and second information sets are weighted-averaged, which reduces the positional error of the key hand nodes in a dynamic environment; stable and accurate coordinate values of the key hand nodes of the hand image to be detected are thus obtained, forming the key hand node set of the hand image to be detected.
The coordinate value set H of all anchor points in the first information set and the second information set is:

H = {h_1, h_2, …, h_p, …}

where h_p is the coordinate value of the p-th anchor point. The weights of the anchor points form a matrix W = {w_1, w_2, …, w_p, …}, where w_p represents the confidence of the p-th anchor point. With the total number of images to be weighted-averaged set to j and the weighted-average result denoted by H̄, then

H̄ = (Σ_{i=1}^{j} W_i H_i) / (Σ_{i=1}^{j} W_i)
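A minimal sketch of this confidence-weighted average, assuming for illustration one anchor coordinate and its confidence per frame:

```python
def weighted_key_node(coords, confidences):
    """Confidence-weighted average of one anchor's (x, y) coordinates
    across the frame to be tested and its adjacent frames."""
    total_w = sum(confidences)
    x = sum(w * c[0] for w, c in zip(confidences, coords)) / total_w
    y = sum(w * c[1] for w, c in zip(confidences, coords)) / total_w
    return (x, y)

# Coordinates of the same key region in three frames, with confidences as weights.
coords = [(10.0, 20.0), (12.0, 22.0), (11.0, 21.0)]
confs = [0.5, 0.25, 0.25]
node = weighted_key_node(coords, confs)
```

The higher-confidence frame dominates the result, which is how the method damps jitter from individual frames in a dynamic scene.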
The positions of 21 key hand nodes in the key hand node set of the hand graph to be measured obtained based on the method are shown in fig. 1.
S4, obtaining finger angle information based on the key hand node set; and matching the finger angle information with a preset gesture recognition rule to obtain a gesture recognition result.
Based on the key hand node set, the operation of obtaining the finger angle information is as follows: acquire the two-dimensional vector coordinates of each key hand node in the key hand node set to obtain a two-dimensional vector coordinate set; and obtain the bending angle of each finger based on the two-dimensional vector coordinates, yielding the finger angle information.
In this embodiment, the key hand node of the wrist or ulna region is used as the origin; for example, the number 0 key hand node in fig. 1 is used as the origin of coordinates. The two-dimensional vector coordinates of the other key hand nodes are obtained, and the finger angle information is obtained by calculating the cosine of each finger's included angle. The two-dimensional vector coordinate set A of the key hand node set is expressed as A = {a_0, a_1, …, a_20}, and the cosine of the included angle between two vectors a and b is calculated as

cos θ = (x_a·x_b + y_a·y_b) / (|a|·|b|)

where (x_a, y_a) and (x_b, y_b) are the components of vectors a and b, and so on for the other vectors.

The thumb angle θ_1, index finger angle θ_2, middle finger angle θ_3, ring finger angle θ_4 and little finger angle θ_5 are each obtained by applying this cosine formula to the two vectors formed by the corresponding finger's key hand nodes.
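With node 0 as the coordinate origin, a finger's bending angle can be estimated from the cosine formula above; in the sketch below the two vectors are chosen only for illustration (the patent does not specify which node pairs form each finger's vectors):

```python
import math

def included_angle(a, b):
    """Angle (degrees) between 2-D vectors a and b via the cosine formula."""
    dot = a[0] * b[0] + a[1] * b[1]
    norm = math.hypot(*a) * math.hypot(*b)
    return math.degrees(math.acos(dot / norm))

# Vectors from the coordinate origin (key hand node 0) toward two joint
# nodes of one finger; collinear vectors mean a straight finger.
straight = included_angle((0.0, 1.0), (0.0, 2.0))  # collinear vectors
bent = included_angle((0.0, 1.0), (1.0, 1.0))      # finger bent sideways
```

Here `straight` evaluates to 0 degrees and `bent` to 45 degrees, matching the cosine values 1 and 1/√2.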
based on a large amount of experimental data, a finger closure threshold value capable of being used for quickly obtaining gesture recognition results is obtainedThumb closure angle thresholdThreshold value of finger opening angleAnd obtaining a preset gesture recognition rule based on the threshold.
TABLE 1 Preset gesture recognition rule list

Gesture result   Thumb angle                        Index/middle/ring/little finger angles
Stone            > thumb closing angle threshold    all > finger closing angle threshold
Scissors         > thumb closing angle threshold    all < finger opening angle threshold
Cloth            < finger opening angle threshold   all < finger opening angle threshold
The preset gesture recognition rules (see table 1) include: if the thumb angle is larger than the thumb closing angle threshold and the index, middle, ring and little finger angles are all larger than the finger closing angle threshold, the gesture recognition result is a stone gesture; if the thumb angle is larger than the thumb closing angle threshold and the index, middle, ring and little finger angles are all smaller than the finger opening angle threshold, the gesture recognition result is a scissors gesture; if the thumb, index, middle, ring and little finger angles are all smaller than the finger opening angle threshold, the gesture recognition result is a cloth gesture.
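These rules can be expressed directly in code; the threshold values below are placeholders, since the experimentally determined numbers are not disclosed in the patent:

```python
# Placeholder thresholds (degrees); the actual values are determined from
# experimental data in the patent and are not disclosed.
THUMB_CLOSE = 30.0
FINGER_CLOSE = 40.0
FINGER_OPEN = 20.0

def recognize(thumb, index, middle, ring, little):
    """Match finger angles against the preset gesture recognition rules."""
    others = (index, middle, ring, little)
    if thumb > THUMB_CLOSE and all(a > FINGER_CLOSE for a in others):
        return "stone"
    if thumb > THUMB_CLOSE and all(a < FINGER_OPEN for a in others):
        return "scissors"
    if thumb < FINGER_OPEN and all(a < FINGER_OPEN for a in others):
        return "cloth"
    return "unknown"
```

Angles that fall between the opening and closing thresholds match no rule, so a transitional hand pose yields no gesture rather than a wrong one.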
According to the preset gesture recognition rule, a gesture recognition result can be accurately obtained.
S5, inputting the gesture recognition result into the bionic hand, and outputting the gesture interaction result according to a preset gesture interaction rule.
The gesture recognition result can be sent to the server of the bionic hand over the UDP protocol, as shown in FIG. 2. The bionic hand then selects the game conclusion matching the current gesture recognition result according to the preset gesture interaction rule and outputs the corresponding gesture interaction result; the gesture interaction is shown in fig. 3. Both the UDP protocol and the preset gesture interaction rule are prior art and can be adjusted by the user as needed, so they are not described in detail here to save space.
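A minimal sketch of the UDP transmission step; the address, port and plain-text message format are illustrative assumptions, as the patent does not specify them:

```python
import socket

def send_gesture(result, host="127.0.0.1", port=9000):
    """Send a gesture recognition result to the bionic hand's server via UDP."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(result.encode("utf-8"), (host, port))
```

Because UDP is connectionless, the sender does not block waiting for the bionic hand, which suits a per-frame recognition loop.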
The embodiment also provides a bionic hand control system based on image processing, which comprises:
the initial anchor point region set diagram generating module is used for acquiring joint information in the hand diagram to be detected, marking a corresponding region of the joint information and obtaining a joint region; the joint region is subjected to anchor point extraction processing to obtain an initial anchor point region set map;
the optimal anchor point set generation module is used for obtaining a multi-scale feature map through feature extraction processing of the initial anchor point region set map; obtaining a preferred anchor point set based on the multi-scale feature map; acquiring confidence coefficient of each anchor point in each joint region in a preferred anchor point set, and obtaining an optimal anchor point set through regression detection and screening treatment;
the key hand node set generating module is used for acquiring a plurality of adjacent frame hand diagrams of the hand diagram to be detected and acquiring a key hand node set of the hand diagram to be detected based on an optimal anchor point set in the adjacent frame hand diagrams;
the gesture recognition result generation module is used for obtaining finger angle information based on the key hand node set; matching the finger angle information with a preset gesture recognition rule to obtain a gesture recognition result;
the gesture interaction result output module is used for inputting the gesture recognition result to the bionic hand and outputting the gesture interaction result according to the preset gesture interaction rule.
The embodiment also provides bionic hand control equipment based on image processing, which comprises a processor and a memory, wherein the bionic hand control method based on the image processing is realized when the processor executes a computer program stored in the memory.
The present embodiment also provides a computer readable storage medium for storing a computer program, where the computer program when executed by a processor implements the above bionic hand control method based on image processing.
The embodiment provides a bionic hand control method based on image processing. The method determines joint regions by acquiring joint information in a hand image and selects anchor points in the joint regions to obtain an initial anchor point region set map; the initial anchor point region set map undergoes feature processing, variance screening, regression detection and screening to yield the optimal anchor point set of the current hand image to be tested. The optimal anchor point information of the current hand image and that of adjacent-frame hand images are combined by weighted averaging to obtain the key hand node set of the current hand image; finger angles are computed from the coordinate information of the key hand node set, and the gesture recognition result is determined. The gesture recognition result is transmitted to the bionic hand, which simulates the gesture and gives the corresponding interaction result, so that the bionic hand can be controlled accurately and flexibly. Because key information is screened out of a large amount of image information before the gesture recognition result is computed, computational efficiency is improved while the accuracy of the control result is preserved.

Claims (8)

1. The bionic hand control method based on image processing is characterized by comprising the following operations:
s1, acquiring joint information in a hand diagram to be tested, and marking the region corresponding to the joint information to obtain a joint region; the joint region is subjected to anchor point extraction processing to obtain an initial anchor point region set map;
s2, carrying out feature extraction processing on the initial anchor point region set map to obtain a multi-scale feature map;
obtaining a preferred anchor point set based on the multi-scale feature map;
acquiring the confidence coefficient of each anchor point in each joint region in the preferred anchor point set, and carrying out regression detection and screening processing to obtain an optimal anchor point set;
before the feature extraction processing, S2 further includes: acquiring the confidence coefficient of each anchor point in each joint region in the initial anchor point region set map, and deleting the anchor point with the lowest confidence coefficient in each anchor point region to obtain a first anchor point region set map; the first anchor point region set map is used for executing the feature extraction processing in S2;
s3, acquiring a plurality of adjacent-frame hand diagrams of the hand diagram to be tested, and obtaining a key hand node set of the hand diagram to be tested based on the optimal anchor point sets in the plurality of adjacent-frame hand diagrams; specifically: performing the operations of S1 and S2 on the plurality of adjacent-frame hand diagrams to obtain the optimal anchor point sets therein, yielding a first anchor point set; acquiring coordinate values and confidence coefficients of the anchor points in the first anchor point set to obtain a first information set; acquiring coordinate values and confidence coefficients of the optimal anchor point set of the hand diagram to be tested to obtain a second information set; and carrying out weighted average processing on the first information set and the second information set to obtain the key hand node set of the hand diagram to be tested;
s4, obtaining finger angle information based on the key hand node set; the finger angle information is matched with a preset gesture recognition rule, and a gesture recognition result is obtained;
s5, inputting the gesture recognition result into the bionic hand, and outputting a gesture interaction result according to a preset gesture interaction rule.
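As an illustrative sketch only (not part of the claimed method), the weighted average processing of S3 might be implemented as follows, under the assumption that each anchor is stored as an [x, y, confidence] triple and that the confidence coefficients themselves serve as the averaging weights; all names here are hypothetical:

```python
import numpy as np

def fuse_keypoints(current, neighbors):
    """Confidence-weighted average of optimal anchor information across frames.

    current:   (K, 3) array of [x, y, confidence] for the hand image under test
    neighbors: list of (K, 3) arrays from the adjacent-frame hand images
    Returns a (K, 2) array of fused key hand node coordinates.
    """
    frames = np.stack([np.asarray(current, float)]
                      + [np.asarray(n, float) for n in neighbors])  # (F, K, 3)
    coords, conf = frames[..., :2], frames[..., 2]
    weights = conf / conf.sum(axis=0, keepdims=True)  # normalise per keypoint
    return (coords * weights[..., None]).sum(axis=0)
```

With equal confidences this reduces to a plain mean; a frame with higher confidence pulls the fused node toward its own coordinates.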
2. The bionic hand control method according to claim 1, wherein the operation of obtaining the multi-scale feature map by feature extraction processing of the initial anchor point region set map in S2 is specifically:
acquiring a low-level characteristic layer and a high-level characteristic layer of the initial anchor point region set map;
the low-level feature layer and the high-level feature layer are respectively subjected to up-sampling processing and then fused to obtain a fusion feature map;
and the fusion feature map is subjected to down-sampling processing to obtain the multi-scale feature map.
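A minimal NumPy sketch of one plausible reading of the fusion in the claim above: nearest-neighbour up-sampling, element-wise addition, and average-pooling down-sampling stand in for whatever concrete operators an implementation would actually use, and the shapes assumed (a low-level (H, W, C) map and a high-level (H/2, W/2, C) map) are illustrative:

```python
import numpy as np

def upsample2x(fmap):
    """Nearest-neighbour 2x up-sampling of an (H, W, C) feature map."""
    return fmap.repeat(2, axis=0).repeat(2, axis=1)

def downsample2x(fmap):
    """2x2 average pooling of an (H, W, C) feature map."""
    h, w, c = fmap.shape
    return fmap.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def multiscale_feature_maps(low, high):
    """Bring the coarser high-level map up to the low-level resolution, sum
    element-wise to form the fusion feature map, then pool it down again to
    yield two scales of the multi-scale output."""
    fused = low + upsample2x(high)
    return [fused, downsample2x(fused)]
```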
3. The bionic hand control method according to claim 1, wherein the operation of obtaining the preferred anchor point set based on the multi-scale feature map in S2 is specifically:
and acquiring the neighborhood pixel value variance of each anchor point in the multi-scale feature map, and deleting the anchor points whose neighborhood pixel value variance exceeds a variance threshold to obtain the preferred anchor point set.
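The variance screening in the claim above can be sketched as follows; the neighbourhood radius and the threshold value are illustrative choices, not values taken from the patent:

```python
import numpy as np

def variance_screen(image, anchors, radius=1, var_threshold=100.0):
    """Keep only anchors whose neighbourhood pixel-value variance is within
    the threshold.

    image:   (H, W) grayscale intensity or feature array
    anchors: iterable of (row, col) anchor coordinates
    """
    h, w = image.shape
    kept = []
    for r, c in anchors:
        # Clip the neighbourhood window to the image borders
        r0, r1 = max(r - radius, 0), min(r + radius + 1, h)
        c0, c1 = max(c - radius, 0), min(c + radius + 1, w)
        if image[r0:r1, c0:c1].var() <= var_threshold:
            kept.append((r, c))
    return kept
```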
4. The bionic hand control method according to claim 1, wherein the operation of obtaining the finger angle information based on the key hand node set in S4 is specifically:
acquiring two-dimensional vector coordinates of each key hand node in the key hand node set to obtain a two-dimensional vector coordinate set; and obtaining the bending angle of each finger based on the two-dimensional vector coordinates, and obtaining the finger angle information.
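One standard way to obtain a bend angle from two-dimensional node coordinates is the angle between the two vectors spanned at a joint, shown below as an illustrative sketch (the patent does not fix which three nodes define each finger's angle):

```python
import numpy as np

def joint_angle(a, b, c):
    """Bend angle (degrees) at node b, spanned by the 2D vectors b->a and
    b->c; a, b, c are (x, y) coordinates of three key hand nodes."""
    a, b, c = np.asarray(a, float), np.asarray(b, float), np.asarray(c, float)
    v1, v2 = a - b, c - b
    cosang = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    # Clip guards against tiny floating-point excursions outside [-1, 1]
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
```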
5. The bionic hand control method according to claim 1, wherein the preset gesture recognition rule in S4 includes:
if, in the finger angle information, the thumb angle is larger than the thumb closing angle threshold and the index finger angle, the middle finger angle, the ring finger angle and the little finger angle are all larger than the finger closing angle threshold, then the gesture recognition result is a stone gesture;
if, in the finger angle information, the thumb angle is larger than the thumb closing angle threshold and the index finger angle, the middle finger angle, the ring finger angle and the little finger angle are all smaller than the finger opening angle threshold, then the gesture recognition result is a scissors gesture;
if the thumb angle, the index finger angle, the middle finger angle, the ring finger angle and the little finger angle in the finger angle information are all smaller than the finger opening angle threshold, then the gesture recognition result is a cloth gesture.
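The rule matching in the claim above can be expressed directly in code. The threshold values below are illustrative, and the convention assumed (following the claim's comparisons) is that a larger angle means a more tightly closed finger:

```python
def classify_gesture(angles, thumb_close=120.0, finger_close=120.0, finger_open=60.0):
    """Preset gesture recognition rules for stone / scissors / cloth.

    angles: dict with keys 'thumb', 'index', 'middle', 'ring', 'little'
    Returns 'stone', 'scissors', 'cloth', or None if no rule fires.
    """
    others = [angles[f] for f in ('index', 'middle', 'ring', 'little')]
    if angles['thumb'] > thumb_close and all(a > finger_close for a in others):
        return 'stone'       # all fingers closed
    if angles['thumb'] > thumb_close and all(a < finger_open for a in others):
        return 'scissors'    # thumb closed, four fingers open
    if all(a < finger_open for a in [angles['thumb']] + others):
        return 'cloth'       # all five fingers open
    return None
```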
6. A bionic hand control system based on image processing, comprising:
the initial anchor point region set diagram generating module is used for acquiring joint information in the hand diagram to be detected, marking a corresponding region of the joint information and obtaining a joint region; the joint region is subjected to anchor point extraction processing to obtain an initial anchor point region set map;
the optimal anchor point set generation module is used for obtaining a multi-scale feature map through feature extraction processing of the initial anchor point region set map; obtaining a preferred anchor point set based on the multi-scale feature map; and acquiring the confidence coefficient of each anchor point in each joint region in the preferred anchor point set, and carrying out regression detection and screening processing to obtain an optimal anchor point set; before the feature extraction processing, the optimal anchor point set generation module further performs: acquiring the confidence coefficient of each anchor point in each joint region in the initial anchor point region set map, and deleting the anchor point with the lowest confidence coefficient in each anchor point region to obtain a first anchor point region set map; the first anchor point region set map is used for executing the feature extraction processing in the optimal anchor point set generation module;
the key hand node set generating module is used for acquiring a plurality of adjacent-frame hand diagrams of the hand diagram to be tested and obtaining a key hand node set of the hand diagram to be tested based on the optimal anchor point sets in the adjacent-frame hand diagrams; specifically: performing the operations of the initial anchor point region set diagram generating module and the optimal anchor point set generation module on the plurality of adjacent-frame hand diagrams to obtain the optimal anchor point sets therein, yielding a first anchor point set; acquiring coordinate values and confidence coefficients of the anchor points in the first anchor point set to obtain a first information set; acquiring coordinate values and confidence coefficients of the optimal anchor point set of the hand diagram to be tested to obtain a second information set; and carrying out weighted average processing on the first information set and the second information set to obtain the key hand node set of the hand diagram to be tested;
the gesture recognition result generation module is used for obtaining finger angle information based on the key hand node set; the finger angle information is matched with a preset gesture recognition rule, and a gesture recognition result is obtained;
and the gesture interaction result output module is used for inputting the gesture recognition result to the bionic hand and outputting a gesture interaction result according to a preset gesture interaction rule.
7. An image processing-based bionic hand control apparatus comprising a processor and a memory, wherein the processor implements the image processing-based bionic hand control method according to any one of claims 1-5 when executing a computer program stored in the memory.
8. A computer readable storage medium for storing a computer program, wherein the computer program when executed by a processor implements the image processing based biomimetic hand control method according to any one of claims 1-5.
CN202311068254.2A 2023-08-24 2023-08-24 Bionic hand control method, system and equipment based on image processing Active CN116766213B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311068254.2A CN116766213B (en) 2023-08-24 2023-08-24 Bionic hand control method, system and equipment based on image processing

Publications (2)

Publication Number Publication Date
CN116766213A CN116766213A (en) 2023-09-19
CN116766213B true CN116766213B (en) 2023-11-03

Family

ID=87989896

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311068254.2A Active CN116766213B (en) 2023-08-24 2023-08-24 Bionic hand control method, system and equipment based on image processing

Country Status (1)

Country Link
CN (1) CN116766213B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117784941A (en) * 2024-02-23 2024-03-29 浙江强脑科技有限公司 Gesture control method of bionic hand, storage medium, control device and bionic hand

Citations (16)

Publication number Priority date Publication date Assignee Title
CN109086659A (en) * 2018-06-13 2018-12-25 深圳市感动智能科技有限公司 A kind of Human bodys' response method and apparatus based on multimode road Fusion Features
WO2019055701A1 (en) * 2017-09-13 2019-03-21 Vanderbilt University Continuum robots with multi-scale motion through equilibrium modulation
WO2019080203A1 (en) * 2017-10-25 2019-05-02 南京阿凡达机器人科技有限公司 Gesture recognition method and system for robot, and robot
CN112016370A (en) * 2019-05-31 2020-12-01 北京易讯理想科技有限公司 Static gesture recognition method based on morphology
CN112784810A (en) * 2021-02-08 2021-05-11 风变科技(深圳)有限公司 Gesture recognition method and device, computer equipment and storage medium
WO2021098587A1 (en) * 2019-11-20 2021-05-27 Oppo广东移动通信有限公司 Gesture analysis method, apparatus and device, and computer-readable storage medium
CN113183133A (en) * 2021-04-28 2021-07-30 华南理工大学 Gesture interaction method, system, device and medium for multi-degree-of-freedom robot
CA3136990A1 (en) * 2020-10-29 2022-04-29 10353744 Canada Ltd. A human body key point detection method, apparatus, computer device and storage medium
CN114494728A (en) * 2022-02-10 2022-05-13 北京工业大学 Small target detection method based on deep learning
CN114549809A (en) * 2022-02-23 2022-05-27 深圳Tcl新技术有限公司 Gesture recognition method and related equipment
WO2022166243A1 (en) * 2021-02-07 2022-08-11 青岛小鸟看看科技有限公司 Method, apparatus and system for detecting and identifying pinching gesture
CN114898464A (en) * 2022-05-09 2022-08-12 南通大学 Lightweight accurate finger language intelligent algorithm identification method based on machine vision
CN115063836A (en) * 2022-06-10 2022-09-16 烟台大学 Pedestrian tracking and re-identification method based on deep learning
AU2021240188B1 (en) * 2021-09-16 2023-02-23 Sensetime International Pte. Ltd. Face-hand correlation degree detection method and apparatus, device and storage medium
CN115951783A (en) * 2023-02-04 2023-04-11 西安电子科技大学 Computer man-machine interaction method based on gesture recognition
CN116188695A (en) * 2023-02-28 2023-05-30 华中科技大学 Construction method of three-dimensional hand gesture model and three-dimensional hand gesture estimation method

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
TWI790764B (en) * 2021-09-30 2023-01-21 宏碁股份有限公司 Three-dimensional gesture detection device and three-dimensional gesture detection method


Non-Patent Citations (2)

Title
Gesture recognition and simulation based on an ellipse model; Fang Kui et al.; Computer Simulation (Issue 03); pp. 275-278 *
Gesture recognition and mechanical palm simulation device; Xu Biao et al.; Computer Products and Circulation (Issue 10); pp. 146-147 *


Similar Documents

Publication Publication Date Title
CN110532984B (en) Key point detection method, gesture recognition method, device and system
CN111488824B (en) Motion prompting method, device, electronic equipment and storage medium
CN109448090B (en) Image processing method, device, electronic equipment and storage medium
WO2021103648A1 (en) Hand key point detection method, gesture recognition method, and related devices
US10043308B2 (en) Image processing method and apparatus for three-dimensional reconstruction
CN108595008B (en) Human-computer interaction method based on eye movement control
CN112819947A (en) Three-dimensional face reconstruction method and device, electronic equipment and storage medium
CN111062263B (en) Method, apparatus, computer apparatus and storage medium for hand gesture estimation
CN109919971B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN116766213B (en) Bionic hand control method, system and equipment based on image processing
CN112926423A (en) Kneading gesture detection and recognition method, device and system
CN111401318B (en) Action recognition method and device
CN107832736B (en) Real-time human body action recognition method and real-time human body action recognition device
CN110163864B (en) Image segmentation method and device, computer equipment and storage medium
US20220351405A1 (en) Pose determination method and device and non-transitory storage medium
CN112651380A (en) Face recognition method, face recognition device, terminal equipment and storage medium
CN111103981B (en) Control instruction generation method and device
JP2017037424A (en) Learning device, recognition device, learning program and recognition program
CN111199169A (en) Image processing method and device
KR20230080938A (en) Method and apparatus of gesture recognition and classification using convolutional block attention module
WO2019180666A1 (en) Computer vision training using paired image data
CN110728172B (en) Point cloud-based face key point detection method, device and system and storage medium
CN114202554A (en) Mark generation method, model training method, mark generation device, model training device, mark method, mark device, storage medium and equipment
CN114581535B (en) Method, device, storage medium and equipment for marking key points of user bones in image
CN108876713B (en) Mapping method and device of two-dimensional template image, terminal equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant