CN113449600B - Two-hand gesture segmentation algorithm based on 3D data - Google Patents

Two-hand gesture segmentation algorithm based on 3D data

Info

Publication number
CN113449600B
CN113449600B CN202110591002.2A
Authority
CN
China
Prior art keywords
point
outer contour
points
arm
hand
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110591002.2A
Other languages
Chinese (zh)
Other versions
CN113449600A (en)
Inventor
王臣豪
吴珩
张一彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo Chunjian Electronic Technology Co ltd
Original Assignee
Ningbo Chunjian Electronic Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo Chunjian Electronic Technology Co ltd filed Critical Ningbo Chunjian Electronic Technology Co ltd
Priority to CN202110591002.2A
Publication of CN113449600A
Application granted
Publication of CN113449600B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 - Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a two-hand gesture segmentation algorithm based on 3D data, which relates to the technical field of gesture recognition and comprises the following steps: converting the depth values of pixel points into a grayscale image; constructing a gray-level histogram and determining a threshold segmentation interval; performing preliminary segmentation through adaptive threshold segmentation to obtain a plurality of connected regions; screening out connected regions according to the number of outer-contour break points and the area of each region, so as to obtain the two arm connected regions of the two hands; determining the palm, the palm center point, the fingertip point and the hand axis direction of each hand; and determining one cutting point on each side of the hand axis, precisely cutting each arm connected region along the straight line through the two cutting points, and retaining the finger and palm portions. The method performs a preliminary segmentation based on the depth data of the image by combining the gray-level histogram with threshold segmentation, then precisely cuts away the arm portion, finally realizing two-hand gesture segmentation and improving the accuracy of image segmentation.

Description

Two-hand gesture segmentation algorithm based on 3D data
Technical Field
The invention relates to the technical field of gesture recognition, in particular to a two-hand gesture segmentation algorithm based on 3D data.
Background
Gesture recognition is a way for computers to understand human body language; its aim is to recognize human gestures by means of mathematical algorithms. A user can control or interact with a device using simple gestures without touching it. Gesture segmentation is a key step in the gesture recognition process, and its quality directly influences the subsequent gesture analysis and the final gesture recognition.
At present, existing gesture segmentation algorithms are aimed at single-hand gesture segmentation and cannot segment two-hand gestures, so the corresponding human-computer interaction processes can only recognize single-hand gestures and lack diversity. Single-hand gesture segmentation mostly adopts algorithms such as foreground object extraction, based on three-dimensional image data acquired by a binocular camera or a TOF (Time of Flight) camera. However, in a static scene the foreground extraction algorithm cannot segment the image, and in a complex dynamic scene it easily confuses foreground and background, so foreground-background segmentation cannot be completed accurately.
Therefore, how to overcome the problems that prior-art gesture segmentation algorithms can only segment single-hand gestures and have low image segmentation accuracy has become an important technical problem to be solved by those skilled in the art.
Disclosure of Invention
The invention aims to provide a two-hand gesture segmentation algorithm based on 3D data, so as to solve the problems that prior-art gesture segmentation algorithms can only perform single-hand gesture segmentation and have low image segmentation accuracy. Preferred variants of the technical solutions provided by the present invention can produce the technical effects described below.
In order to achieve the above purpose, the present invention provides the following technical solutions:
the invention provides a two-hand gesture segmentation algorithm based on 3D data, which comprises the following steps:
converting the depth value of the input pixel point into a gray level image according to a conversion equation of the depth information and the gray level value of the image;
constructing a gray level histogram according to the gray level image, and determining a threshold segmentation interval;
performing preliminary segmentation through adaptive threshold segmentation, and dividing the thresholded pixel clusters into a plurality of connected regions;
calculating the number of outer-contour break points and the area of each connected region, and screening out connected regions according to these values to obtain the two arm connected regions of the two hands;
determining the maximum inscribed circle of each arm connected region by iterating over its pixel points, so as to determine the palm and the palm center point;
determining the highest point of each arm connected region and setting it as the fingertip point;
determining the hand axis direction of each hand from the fingertip point and the palm center point;
and determining one cutting point on each side of the hand axis, precisely cutting the arm connected region along the straight line through the two cutting points, and retaining the finger and palm portion, wherein each cutting point is an outer-contour point of the arm connected region located at the junction of the palm and the arm.
Preferably, the point of each arm connected region farthest from the fingertip point is determined and set as the arm end point; the outer-contour points falling within a circle centered at the palm center point with radius R are extracted from the outer-contour points of each arm connected region; the extracted points are divided into two sets with the hand axis as the boundary; and the cutting point is the outer-contour point in each set closest to the arm end point, where R > r and r is the radius of the maximum inscribed circle of the arm connected region.
Preferably, the value range of R is 1.2r ≤ R ≤ 1.8r.
Preferably, the conversion equation of the depth information and the gray value of the image is:
gray_{i,j} = (currentDist_{i,j} - nearDist) / (farDist - nearDist) * 255
where currentDist_{i,j} is the depth value of the current pixel point, nearDist is the minimum working distance of the depth camera, farDist is the maximum working distance of the depth camera, and gray_{i,j} is the gray value converted from the depth value.
Preferably, the width of the threshold segmentation interval is determined according to the physical thickness of a human hand, and the corresponding gray value is calculated by the formula:
(width-nearDist)/(farDist-nearDist)*255
where width is the width of the threshold segmentation interval, nearDist is the minimum working distance of the depth camera, farDist is the maximum working distance of the depth camera.
Preferably, the width of the threshold segmentation interval is in the range of 8cm-10cm.
Preferably, the rejection threshold for the number of outer-contour break points is T and the rejection threshold for the area is B; when the number of outer-contour break points of a connected region is greater than T and/or the area of the connected region is smaller than B, that connected region is screened out and removed.
Preferably, the direction of the hand axis is calculated as:
θ = arctan((y_{m2} - y_{m1}) / (x_{m2} - x_{m1}))
where m1 is the palm center point, m2 is the fingertip point, θ is the inclination angle of the hand axis, and x and y are the x-axis and y-axis coordinates of the respective points.
Preferably, before determining the threshold segmentation interval, the gray level histogram is smoothed to remove burrs.
Preferably, the depth information of the image is acquired by the TOF module.
According to the technical scheme provided by the invention, the two-hand gesture segmentation algorithm based on 3D data converts the depth values of pixel points into gray values, thereby obtaining a grayscale image and a gray-level histogram; preliminary segmentation is performed by adaptive threshold segmentation, and the thresholded pixel clusters are divided into a plurality of connected regions; the number of outer-contour break points and the area of each connected region are calculated, and regions are screened out to obtain the two arm connected regions of the two hands; the palm and palm center point are determined by iterating over the pixels of each arm connected region; the fingertip point and hand axis direction of each hand are determined; and the outer-contour points of the arm connected region located at the junction of the palm and the arm on either side of the hand axis are determined as cutting points, the arm connected region is precisely divided along the straight line through the two cutting points, and the finger and palm portion is retained. With this arrangement, the connected regions corresponding to the body and to other noise at the same depth are removed according to the number of outer-contour break points and the area, and the two arm connected regions corresponding to the hands are retained, so that gesture segmentation of both hands is realized. In addition, the invention converts the depth data of the image into gray values, completes the preliminary segmentation of the image by combining the gray-level histogram with threshold segmentation, and extracts cutting points near the junction of the palm and the arm for precise segmentation, thereby improving the accuracy of image segmentation.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a grayscale image of a two-hand gesture segmentation algorithm based on 3D data in an embodiment of the invention;
FIG. 2 is a gray level histogram of a two-hand gesture segmentation algorithm based on 3D data in an embodiment of the present invention;
FIG. 3 is a schematic diagram of a threshold segmentation result of a two-hand gesture segmentation algorithm based on 3D data in an embodiment of the present invention;
FIG. 4 is a schematic diagram of an arm connected domain of a two-hand gesture segmentation algorithm based on 3D data according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a palm determined by a two-hand gesture segmentation algorithm based on 3D data in an embodiment of the present invention;
FIG. 6 is a schematic diagram of fingertip points and arm points determined by a two-hand gesture segmentation algorithm based on 3D data in an embodiment of the present invention;
FIG. 7 is a schematic diagram of a cut point determined by a two-hand gesture segmentation algorithm based on 3D data in an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention are described in detail below. It is apparent that the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without inventive effort fall within the protection scope of the present invention.
The purpose of the present embodiment is to provide a two-hand gesture segmentation algorithm based on 3D data, so as to solve the problems that in the prior art, the gesture segmentation algorithm can only perform one-hand gesture segmentation, and the accuracy of image segmentation is low.
Hereinafter, embodiments will be described with reference to the drawings. The embodiments shown below do not limit the invention described in the claims, and not all of the configurations described in the following embodiments are necessarily required by the solutions of the invention defined in the claims.
Referring to fig. 1-7, the two-hand gesture segmentation algorithm based on 3D data provided by the present invention includes the following steps:
s01: and converting the depth value of the input pixel point into a gray image according to a conversion equation of the depth information and the gray value of the image. The conversion equation of the depth information and the gray value of the image is:
gray i,j =(currentDist i,j -nearDist)/(farDist-nearDist)*255
wherein currentDist is a compound i,j For the depth value of the current pixel point, the nearest dist is the minimum working distance of the depth camera, the far dist is the maximum working distance of the depth camera, and the gray i,j Is the gray value after the depth value conversion.
In a specific embodiment, the depth information of the image includes the depth values of the pixel points and is acquired by a TOF module. The TOF module consists of a light-pulse transmitting and receiving unit, a camera unit and so on: the transmitting unit continuously emits light pulses toward the target object, the pulses are reflected by the target and received by the receiving unit, and the distance to the target is calculated from the transmit-receive time difference or phase difference, thereby generating the depth information, which is combined with the camera unit to present the target object as an image.
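As an illustration of step S01, the following sketch applies the conversion equation pixel by pixel with NumPy; the function name, the assumption that the depth map and the working distances share the same unit (e.g. millimetres), and the clamping to [0, 255] are illustrative choices not stated in the patent.

```python
import numpy as np

def depth_to_gray(depth_map, near_dist, far_dist):
    """Convert a depth map into an 8-bit grayscale image using
    gray = (currentDist - nearDist) / (farDist - nearDist) * 255."""
    gray = (depth_map.astype(np.float32) - near_dist) / (far_dist - near_dist) * 255.0
    # Clamp values that fall outside the camera's working range before casting.
    return np.clip(gray, 0.0, 255.0).astype(np.uint8)
```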
S02: and constructing a gray level histogram according to the gray level image converted by the depth information of the image, and determining a threshold segmentation interval. And calculating an effective peak index and a height value in the gray level histogram according to the constructed gray level histogram, wherein the peak index corresponds to the gray level value and the height value corresponds to the number of pixel points. In a specific embodiment, the threshold segmentation section may be determined according to the physical thickness of a human hand, which is typically 2cm-3cm, and the width of the threshold segmentation section is set to 8cm-10cm in consideration of the influence of noise in the image and the inclination of the hand. The width of the threshold segmentation section is determined by combining an empirical threshold with the physical thickness of a human hand, and the set basis is to reserve more arm areas on the premise of completely reserving the arm areas and completely eliminating the body areas so as to determine arm points in the subsequent steps.
In a specific embodiment, before the threshold segmentation interval is determined, the gray-level histogram is also smoothed to remove spikes, improving the accuracy of the peak indices and heights.
The calculation formula of the gray value corresponding to the width of the threshold segmentation interval is as follows:
(width-nearDist)/(farDist-nearDist)*255
where width is the width of the threshold segmentation interval, nearDist is the minimum working distance of the depth camera, farDist is the maximum working distance of the depth camera.
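A minimal sketch of step S02 follows, assuming OpenCV and NumPy; the smoothing window, the minimum peak height, the placement of the interval relative to the peak, and the choice of the first (nearest) peak as the hands are heuristics of this sketch, and the gray-value width of the interval would be derived from the 8 cm-10 cm physical width via the conversion above.

```python
import cv2
import numpy as np

def find_threshold_interval(gray_img, interval_width_gray):
    """Build and smooth the gray-level histogram, then choose a threshold
    segmentation interval of the given gray-value width near the first peak."""
    hist = cv2.calcHist([gray_img], [0], None, [256], [0, 256]).flatten()
    # Moving-average smoothing removes spikes before peak detection.
    smooth = np.convolve(hist, np.ones(5) / 5.0, mode="same")
    # Ignore gray level 0 (no valid depth) and keep peaks above a minimum count.
    min_count = 200
    peaks = [g for g in range(1, 255)
             if smooth[g] > min_count
             and smooth[g] >= smooth[g - 1]
             and smooth[g] >= smooth[g + 1]]
    if not peaks:
        return None
    peak = peaks[0]                                   # nearest significant surface
    # Start the interval slightly in front of the peak so the hand's thickness
    # is fully covered (heuristic placement, not from the patent).
    lo = max(0, peak - interval_width_gray // 4)
    hi = min(255, lo + interval_width_gray)
    return lo, hi
```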
S03: and performing primary segmentation by self-adaptive threshold segmentation, wherein the pixel point cluster after the threshold segmentation is divided into a plurality of connected areas, and the primary segmentation is based on gray values corresponding to the width of the threshold segmentation section. The communication area includes the arm area of the hands, including the fingers, palms and arms, and possibly the body area and other noise areas of the same depth.
S04: and calculating the number and the area of outer contour folding points of each communication region, and screening and removing the communication regions according to the number and the area of the outer contour folding points to obtain two arm communication regions of two hands. In a specific embodiment, due to errors of the TOF module and the non-smooth characteristic of the body area, the areas of the body area and the arm area after threshold segmentation and the number of outer contour folding points are greatly different, the communication areas with more outer contour folding points and too small areas are screened out and removed, and the left arm communication areas of two hands are left. When screening and rejecting, the rejecting node counting the number of the outer contour folding points is T, the rejecting node counting the area is B, and if the number of the outer contour folding points of the communication area is greater than T and/or the area of the communication area is smaller than B, the communication area is screened out and rejected. That is, the connected region is eliminated by satisfying at least one of the two elimination conditions that the number of the outer contour folding points is greater than T and the area is smaller than B.
The rejection thresholds for the number of outer-contour break points and for the area can be obtained through combined testing, so that the body region and noise regions can be distinguished from the arm connected regions.
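A sketch of the screening in step S04 follows; using the vertex count of a cv2.approxPolyDP polygon as the number of outer-contour break points and the 0.01 arc-length epsilon are assumptions of this sketch, and T and B stand for the empirically tuned rejection thresholds.

```python
import cv2

def filter_arm_regions(regions, max_break_points_T, min_area_B):
    """Reject regions with too many outer-contour break points or too small an
    area; the survivors should be the two arm connected regions of the hands."""
    kept = []
    for region_mask, area in regions:
        contours, _ = cv2.findContours(region_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            continue
        outer = max(contours, key=cv2.contourArea)
        # Polygonal approximation of the outer contour; its vertices are taken
        # here as the break points (epsilon factor is a heuristic).
        approx = cv2.approxPolyDP(outer, 0.01 * cv2.arcLength(outer, True), True)
        if len(approx) > max_break_points_T or area < min_area_B:
            continue                              # screened out and removed
        kept.append((region_mask, outer))
    return kept
```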
S05: and respectively determining the maximum inscribed circles of the arm connected domains through the pixel points of the iterative arm connected domains so as to determine palm and palm center points. The hand segmentation is performed based on the depth information of the image, and the arms are inevitably present in the image, so that the arm portions need to be cut away, leaving the portions where the fingers and palm are located. The palm can be regarded as an approximate circle, and the maximum inscribed circle in the arm communication domain is found through iterating the pixel points of the arm communication domain, wherein the position of the inscribed circle is the palm, and the center of the inscribed circle is the palm center point. Because the algorithm for determining the palm points is applicable to only a single connected region, the arm connected regions of both hands need to be separately processed in the subsequent step. In a specific embodiment, the outer contour of the arm connected domain is optimized before the iterative computation.
S06: the highest point of each arm connected domain is determined and set as the pointing point. The pointing point allows the general direction of the hand to be determined, so as to determine the direction of the arm and subsequently determine the point of the arm, and the algorithm is applicable to the general direction of the arm being the vertical direction.
S07: the hand axis direction of each hand is determined by the point of the tip and the point of the palm. The direction calculation formula of the hand shaft is:
Figure BDA0003089301740000081
wherein m1 is a palm center point, m2 is a fingertip point, θ is an inclination angle of a hand axis, and x and y are an x-axis coordinate and a y-axis coordinate of the point, respectively.
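The following sketch combines steps S06 and S07; taking the smallest row index as the highest point and using atan2 instead of arctan (to avoid division by zero for vertical hands) are illustrative assumptions.

```python
import math
import numpy as np

def fingertip_and_axis(region_mask, palm_center_pt):
    """Find the fingertip point (highest pixel of the region) and the
    inclination angle of the hand axis through palm center m1 and fingertip m2."""
    ys, xs = np.nonzero(region_mask)
    top = int(np.argmin(ys))                      # smallest row index = highest point
    fingertip = (int(xs[top]), int(ys[top]))      # m2
    dx = fingertip[0] - palm_center_pt[0]         # x_m2 - x_m1
    dy = fingertip[1] - palm_center_pt[1]         # y_m2 - y_m1
    theta = math.atan2(dy, dx)                    # inclination angle of the hand axis
    return fingertip, theta
```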
S08: and determining a cutting point on each side of the hand shaft, precisely cutting the arm communication domain by taking the straight line where the two cutting points are positioned as a cutting line, and reserving the finger and the palm part, wherein the cutting point is a point positioned at the joint position of the palm and the arm in the outer contour point of the arm communication domain. The arm communicating region is divided into two parts by the cutting line, and the finger and palm parts are the parts where the palm center points are located.
In a specific embodiment, the method for determining the cutting points is as follows: first, the arm end point is determined, which is the point of the arm connected region farthest from the fingertip point; second, a circle centered at the palm center point with radius R is determined, where R > r and r is the radius of the maximum inscribed circle of the arm region; then, the outer-contour points falling within this circle of radius R are extracted from the outer-contour points of the arm connected region; finally, the extracted outer-contour points are divided into two sets with the hand axis as the boundary, and the cutting point in each set is the outer-contour point closest to the arm end point; that is, on each side of the hand axis the extracted outer-contour point closest to the arm end point is found, and these two points are the cutting points. The two cutting points are found for the arm connected region of each hand, so as to cut the corresponding region. The value range of R is 1.2r ≤ R ≤ 1.8r, and in a specific example R is set to 1.5r.
Since for most people the length of the palm along the hand axis is slightly greater than the width of the palm, the boundary line between palm and arm does not intersect the maximum inscribed circle of the arm connected region, and the cutting line determined with R = 1.5r better matches the real boundary between the human palm and arm.
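The sketch below illustrates the cutting-point search of step S08; the contour layout assumed (an OpenCV contour of shape (N, 1, 2)) and the use of a 2D cross product to decide the side of the hand axis are implementation assumptions.

```python
import numpy as np

def cutting_points(outer_contour, palm_center_pt, fingertip, r, k=1.5):
    """Return up to two cutting points: on each side of the hand axis, the
    outer-contour point inside the circle of radius R = k*r around the palm
    center that lies closest to the arm end point."""
    pts = outer_contour.reshape(-1, 2).astype(np.float32)
    m1 = np.asarray(palm_center_pt, dtype=np.float32)   # palm center point
    m2 = np.asarray(fingertip, dtype=np.float32)        # fingertip point
    # Arm end point: the contour point farthest from the fingertip point.
    arm_end = pts[np.argmax(np.linalg.norm(pts - m2, axis=1))]
    # Keep only contour points inside the circle of radius R around the palm.
    near = pts[np.linalg.norm(pts - m1, axis=1) <= k * r]
    # Split the kept points into two sets on either side of the hand axis
    # (line through m1 and m2), using the sign of the 2D cross product.
    axis = m2 - m1
    side = axis[0] * (near[:, 1] - m1[1]) - axis[1] * (near[:, 0] - m1[0])
    cuts = []
    for sel in (side > 0, side < 0):
        cand = near[sel]
        if len(cand):
            cuts.append(cand[np.argmin(np.linalg.norm(cand - arm_end, axis=1))])
    return cuts      # the two cutting points define the cutting line
```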
S09: after the parts of the fingers and the palm are obtained, defining the left hand on the left side of the image and the right hand on the right side of the image, and filling colors into the left hand and the right hand so as to facilitate the subsequent gesture recognition of the two hands.
According to the technical scheme provided by the invention, the depth information of the three-dimensional image is obtained through the TOF module and converted into gray values; a grayscale image comprising several connected regions, such as the hand regions, the body region and noise regions, is obtained by combining the gray-level histogram with threshold segmentation; the number of outer-contour break points and the area of each connected region are then calculated, and regions with too many break points or too small an area are removed to obtain the arm connected regions of the two hands, completing the preliminary segmentation of both hands. Next, the outer-contour points of each arm connected region lying within a concentric circle of radius 1.5 times that of its maximum inscribed circle are extracted; on each side of the hand axis, the extracted point closest to the arm end point is determined, giving the two cutting points for separating palm from arm. Each arm connected region is divided into two parts by the cutting line, and the part containing the palm center point is the region formed by the fingers and palm, completing the precise segmentation of both hands.
Through this arrangement, the arm connected regions of the two hands are obtained by eliminating the connected regions that do not meet the conditions, and gesture segmentation of both hands is then realized, breaking through the technical barrier that previously only single-hand gesture segmentation could be performed.
It is to be understood that the same or similar parts of the above embodiments may refer to one another, and that content not described in detail in one embodiment may refer to the same or similar content in other embodiments. The schemes provided by the invention include their respective basic schemes, which are independent of one another and do not restrict each other, but they may also be combined with each other where no conflict arises, thereby jointly achieving several effects.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A two-hand gesture segmentation method based on 3D data is characterized by comprising the following steps:
converting the depth value of the input pixel point into a gray level image according to a conversion equation of the depth information and the gray level value of the image;
constructing a gray level histogram according to the gray level image, and determining a threshold segmentation interval;
performing preliminary segmentation through adaptive threshold segmentation, and dividing the thresholded pixel clusters into a plurality of connected regions; calculating the number of outer-contour break points and the area of each connected region, and screening out connected regions according to these values to obtain the two arm connected regions of the two hands; that is, after the hands are segmented by the adaptive threshold, the remaining pixels comprise the hand portions of the two hands, body portions and other noise portions at the same depth; the thresholded pixel clusters are divided into a plurality of connected regions, the number of outer-contour break points and the area of each connected region are calculated, the connected regions are screened, and regions with too many break points or too small an area are eliminated, the remaining regions being the two connected regions of the hands;
determining the maximum inscribed circle of each arm connected region by iterating over its pixel points, so as to determine the palm and the palm center point;
determining the highest point of each arm connected region and setting it as the fingertip point;
determining the hand axis direction of each hand from the fingertip point and the palm center point;
determining one cutting point on each side of the hand axis, precisely cutting the arm connected region along the straight line through the two cutting points, and retaining the finger and palm portion, wherein each cutting point is an outer-contour point of the arm connected region located at the junction of the palm and the arm; the method for determining the cutting points comprises: (1) determining the arm end point, which is the point of the arm connected region farthest from the fingertip point; (2) determining a circle centered at the palm center point with radius R, where R > r and r is the radius of the maximum inscribed circle of the arm region; (3) extracting, from the outer-contour points of the arm connected region, the outer-contour points falling within the circle of radius R; (4) dividing the extracted outer-contour points into two sets with the hand axis as the boundary, the cutting point in each set being the outer-contour point closest to the arm end point, i.e. on each side of the hand axis the extracted outer-contour point closest to the arm end point is found, and these two outer-contour points are the cutting points.
2. The two-hand gesture segmentation method according to claim 1, wherein the point of each arm connected region farthest from the fingertip point is determined and set as the arm end point; the outer-contour points falling within a circle centered at the palm center point with radius R are extracted from the outer-contour points of the arm connected region; the extracted outer-contour points are divided into two sets with the hand axis as the boundary; and the cutting point is the outer-contour point in each set closest to the arm end point, wherein R > r and r is the radius of the maximum inscribed circle of the arm connected region.
3. The two-hand gesture segmentation method according to claim 2, wherein the value range of R is 1.2r ≤ R ≤ 1.8r.
4. The method of two-hand gesture segmentation according to claim 1, wherein the conversion equation of the depth information and the gray value of the image is:
gray_{i,j} = (currentDist_{i,j} - nearDist) / (farDist - nearDist) * 255
where currentDist_{i,j} is the depth value of the current pixel point, nearDist is the minimum working distance of the depth camera, farDist is the maximum working distance of the depth camera, and gray_{i,j} is the gray value converted from the depth value.
5. The method for segmenting a two-hand gesture according to claim 1, wherein the width of the threshold segmentation interval is determined according to the physical thickness of a human hand, and the corresponding gray value is calculated according to the formula:
(width-nearDist)/(farDist-nearDist)*255
where width is the width of the threshold segmentation interval, nearDist is the minimum working distance of the depth camera, farDist is the maximum working distance of the depth camera.
6. The method of two-hand gesture segmentation of claim 5, wherein the width of the threshold segmentation interval ranges from 8cm to 10cm.
7. The two-hand gesture segmentation method according to claim 1, wherein the rejection threshold for the number of outer-contour break points is T and the rejection threshold for the area is B; when the number of outer-contour break points of a connected region is greater than T and/or the area of the connected region is smaller than B, the connected region is screened out and removed.
8. The method of dividing a hand gesture according to claim 1, wherein the direction of the hand axis is calculated as:
θ = arctan((y_{m2} - y_{m1}) / (x_{m2} - x_{m1}))
where m1 is the palm center point, m2 is the fingertip point, θ is the inclination angle of the hand axis, and x and y are the x-axis and y-axis coordinates of the respective points.
9. The method of two-hand gesture segmentation according to claim 1, wherein the gray level histogram is smoothed to remove burrs before determining the threshold segmentation bins.
10. The method of claim 1, wherein the depth information of the image is acquired by a TOF module.
CN202110591002.2A 2021-05-28 2021-05-28 Two-hand gesture segmentation algorithm based on 3D data Active CN113449600B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110591002.2A CN113449600B (en) 2021-05-28 2021-05-28 Two-hand gesture segmentation algorithm based on 3D data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110591002.2A CN113449600B (en) 2021-05-28 2021-05-28 Two-hand gesture segmentation algorithm based on 3D data

Publications (2)

Publication Number Publication Date
CN113449600A CN113449600A (en) 2021-09-28
CN113449600B true CN113449600B (en) 2023-07-04

Family

ID=77810405

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110591002.2A Active CN113449600B (en) 2021-05-28 2021-05-28 Two-hand gesture segmentation algorithm based on 3D data

Country Status (1)

Country Link
CN (1) CN113449600B (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104834922B (en) * 2015-05-27 2017-11-21 电子科技大学 Gesture identification method based on hybrid neural networks
CN112116604A (en) * 2020-09-15 2020-12-22 无锡威莱斯电子有限公司 Automatic gesture segmentation algorithm based on 3D data

Also Published As

Publication number Publication date
CN113449600A (en) 2021-09-28

Similar Documents

Publication Publication Date Title
US20220383535A1 (en) Object Tracking Method and Device, Electronic Device, and Computer-Readable Storage Medium
CN103226387B (en) Video fingertip localization method based on Kinect
US10679146B2 (en) Touch classification
US10430951B2 (en) Method and device for straight line detection and image processing
CN104834922B (en) Gesture identification method based on hybrid neural networks
US20150253864A1 (en) Image Processor Comprising Gesture Recognition System with Finger Detection and Tracking Functionality
CN109919039B (en) Static gesture recognition method based on palm and finger characteristics
EP0722149B1 (en) Hough transform with fuzzy gradient and fuzzy voting
WO2016023264A1 (en) Fingerprint identification method and fingerprint identification device
CN106446773B (en) Full-automatic robust three-dimensional face detection method
CN111414837A (en) Gesture recognition method and device, computer equipment and storage medium
CN106971130A (en) A kind of gesture identification method using face as reference
CN107169479A (en) Intelligent mobile equipment sensitive data means of defence based on fingerprint authentication
US20160026857A1 (en) Image processor comprising gesture recognition system with static hand pose recognition based on dynamic warping
CN107272899B (en) VR (virtual reality) interaction method and device based on dynamic gestures and electronic equipment
CN106503619B (en) Gesture recognition method based on BP neural network
CN108520264A (en) A kind of hand contour feature optimization method based on depth image
CN112528836A (en) Palm vein information acquisition method, device, equipment and storage medium
CN103870071A (en) Touch source identification method and system
CN104915009B (en) The method and system of gesture anticipation
CN111309149A (en) Gesture recognition method and gesture recognition device
CN109670447B (en) Recognition methods, device and the readable storage medium storing program for executing of seal ballot paper full-filling block diagram picture
CN113449600B (en) Two-hand gesture segmentation algorithm based on 3D data
KR101967858B1 (en) Apparatus and method for separating objects based on 3D depth image
CN108268125A (en) A kind of motion gesture detection and tracking based on computer vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant