CN111968723A - Kinect-based upper limb active rehabilitation training method
- Publication number
- CN111968723A (application CN202010752710.5A)
- Authority
- CN
- China
- Prior art keywords: preset, information, gesture, kinect, image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H20/00—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
- G16H20/30—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
Abstract
The invention relates to the field of computer-aided rehabilitation medicine, and discloses a Kinect-based upper limb active rehabilitation training method comprising the following steps: S1: acquiring gesture information from the image information of the person to be tested through the Kinect, and tracking the gesture information of the person to be tested; S2: establishing a correspondence between the tracked gesture information and a preset virtual gesture in the Kinect; S3: training and guiding, according to a preset rehabilitation training process, the preset virtual gesture corresponding to the tracked gesture information. Through a computer controller, the invention provides continuous, accurate, rich and fatigue-free training for the patient's upper limbs during direct, goal-directed exercise training; it can plan the tasks to be completed under different functional modes, includes multiple kinds of functional training that make tedious training interesting and mobilize the patient's subjective initiative, and can be used to evaluate and record range of motion and the like.
Description
Technical Field
The invention relates to the field of computer-aided rehabilitation medicine, in particular to an upper limb active rehabilitation training method based on Kinect.
Background
Kinect is a motion-sensing device developed by Microsoft for the Xbox 360 game console. The Kinect is in fact a 3D somatosensory camera that lets a user play games through voice or body movements, without any handheld controller.
The application of computer technology has brought new treatment methods to modern rehabilitation medicine. In a study comparing computer-aided training with traditional training after stroke, Machiel Van der Loos et al. found that the computer-aided training method outperformed the traditional rehabilitation training method. Sung H et al. suggested that training chronic stroke patients in a virtual environment promotes cerebral cortex reorganization, which may play an important role in the recovery of motor function in patients with chronic stroke.
Because the Kinect frees the human body completely during human-computer interaction, enabling interaction without any burden, it can play a useful role in the field of rehabilitation.
Disclosure of Invention
In view of the current state of the prior art, the technical problem to be solved by the present invention is to provide a Kinect-based upper limb active rehabilitation training method that combines computer-aided rehabilitation training with computer vision technology.
The Kinect-based upper limb active rehabilitation training method comprises the following steps:
s1: acquiring gesture information in the image information of the person to be detected through the Kinect, and tracking the gesture information of the person to be detected;
s2: establishing a corresponding relation between the tracked gesture information and a preset virtual gesture in the Kinect;
s3: and training and guiding the preset virtual gesture corresponding to the tracking gesture information according to the preset rehabilitation training process.
Further, step S1 includes:
s11: carrying out skin color detection on the obtained image information of the person to be detected through the Kinect, and obtaining the image information after the skin color detection;
s12: acquiring depth information of image information of a person to be detected, performing depth segmentation on the acquired depth information according to a preset depth range, and acquiring image information after the depth segmentation;
s13: combining the image information after the depth segmentation with the image information after the skin color detection to obtain gesture information in the image information of the person to be detected;
s14: and tracking the acquired gesture information according to a preset gesture tracking algorithm.
Further, step S2 includes:
s21: marking preset gesture information according to a preset identification method, and storing the preset gesture information in a preset gesture library;
s22: and comparing the gesture information in the acquired image information of the person to be detected with preset gesture information in a preset gesture library, and identifying a gesture identifier corresponding to the gesture information in the image information of the person to be detected.
Further, step S11 includes:
s111: collecting RGB images corresponding to a preset frame in image information of a person to be detected through a Kinect, and carrying out skin color identification on the RGB images according to a preset skin color detection algorithm;
s112: and obtaining the image information after the skin color identification and storing the image information as binary image information.
Further, step S111 includes the steps of:
s1111: searching preset contour information in the acquired image information of the person to be detected by adopting a library function cvFindContours of OpenCV;
s1112: acquiring the number of profile information in the image information of the searched personnel to be detected;
s1113: filtering preset interference information according to a preset filtering algorithm;
s1114: acquiring the image information with the interference information filtered out, and carrying out edge detection on the acquired image information with the interference information filtered out;
s1115: and acquiring the image information after the edge detection.
Further, step S13 includes the steps of:
s131: acquiring a depth image corresponding to an RGB image corresponding to a preset frame in image information of a person to be detected acquired by a Kinect;
s132: performing depth segmentation on the depth image according to a preset depth segmentation algorithm, and acquiring points in a preset threshold range in the depth image after the depth segmentation;
s133: mapping points in a preset threshold range of the obtained depth image to the RGB image, and reserving points in the preset threshold range and corresponding to skin color in the RGB image;
s134: and acquiring and recording the three-dimensional information of the skin color corresponding points in the RGB image and in the preset threshold range reserved in the step S133.
Further, step S3 includes the steps of:
s31: acquiring a preset virtual task in a preset rehabilitation training process;
s32: comparing the tracked gesture information with a preset virtual gesture in a preset virtual task;
s33: and scoring the tracked gesture information according to a preset scoring principle, and displaying a scoring result on a preset position.
Further, the preset threshold range is updated according to a preset automatic updating process, which specifically comprises the following steps:
setting a preset fixed threshold value;
judging whether three-dimensional information of a skin color corresponding point in the RGB image is identified or not;
if so, acquiring the position information of the corresponding point;
and obtaining the preset threshold range of the current point according to the currently obtained position information of the corresponding point, and continuing to execute the step S132.
The invention at least comprises the following beneficial effects:
(1): in the upper limb rehabilitation training, a patient does not need to wear any equipment and can carry out burden-free interactive training;
(2): in the upper limb rehabilitation training, depth segmentation is carried out based on Kinect equipment, and real-time detection, identification and tracking of gestures are realized by combining skin color identification;
(3): the virtual hand gesture in the virtual environment is consistent with the gesture definition in the gesture recognition, the established mapping relation is one-to-one correspondence, and the driving speed can be increased
(4): the gestures detected and identified in real time can drive the virtual hands in the 3D rehabilitation training game in real time to complete the rehabilitation training task.
(5) The Kinect-based upper limb active rehabilitation training method is a mirror image movement capability, a computer controller provides continuous, accurate, rich and fatigue-free training for the upper limbs of patients during direct target movement training, tasks to be completed can be planned under different functional modes, meanwhile, a plurality of functional trainings are included, boring training is made interesting, the subjective motility of the patients is mobilized, and the Kinect-based upper limb active rehabilitation training method can be used for evaluating and recording movement ranges and the like.
(6) According to the Kinect-based upper limb active rehabilitation training method, the stroke rehabilitation process can be observed through machine training, the post-stroke rehabilitation quality is potentially improved, the average hospitalization day of a patient is shortened, the economic cost is reduced, and the daily life activity of the patient is improved.
Drawings
FIG. 1 is a first flowchart of a Kinect-based upper limb active rehabilitation training method;
FIG. 2 is a binarized image after skin color detection;
FIG. 3 is a binarized image after filtering interference;
FIG. 4 is the edge detection result after skin color detection;
FIG. 5 is the binarized image after depth segmentation;
FIG. 6 is a flow chart of a depth segmentation and skin color identification combined algorithm;
FIG. 7 is a mapped RGB image after depth segmentation;
FIG. 8 is a detected hand contour combined with depth segmentation and skin tone detection;
FIG. 9 is an example of a 3D game scene for rehabilitation training.
Detailed Description
The following are specific embodiments of the present invention and are further described with reference to the drawings, but the present invention is not limited to these embodiments.
In order to achieve the purpose, the following technical scheme is adopted:
in view of the current state of the prior art, the technical problem to be solved by the present invention is to provide a Kinect-based upper limb active rehabilitation training method that combines computer-aided rehabilitation training with computer vision technology.
The method comprises the following steps:
an upper limb active rehabilitation training method based on Kinect comprises the following steps:
s1: acquiring gesture information in the image information of the person to be detected through the Kinect, and tracking the gesture information of the person to be detected;
s2: establishing a corresponding relation between the tracked gesture information and a preset virtual gesture in the Kinect;
s3: and training and guiding the preset virtual gesture corresponding to the tracking gesture information according to the preset rehabilitation training process.
Wherein step S1: acquiring gesture information in the image information of the person to be detected through the Kinect, and tracking the gesture information of the person to be detected;
the method mainly comprises the following steps of obtaining gesture information of image information of a person to be detected by skin color detection and image depth segmentation, and specifically comprises the following steps:
s11: carrying out skin color detection on the obtained image information of the person to be detected through the Kinect, and obtaining the image information after the skin color detection;
s12: acquiring depth information of image information of a person to be detected, performing depth segmentation on the acquired depth information according to a preset depth range, and acquiring image information after the depth segmentation;
s13: combining the image information after the depth segmentation with the image information after the skin color detection to obtain gesture information in the image information of the person to be detected;
s14: and tracking the acquired gesture information according to a preset gesture tracking algorithm.
Wherein S11: carrying out skin color detection on the obtained image information of the person to be detected through the Kinect, and obtaining the image information after the skin color detection; the method comprises the following specific steps:
extracting the part of the image information of the person to be detected corresponding to skin color, setting a segmentation threshold according to the depth information of the hand in that image information, obtaining the hand information after depth segmentation, and extracting the hand information.
With the Kinect-based upper limb active rehabilitation training method, neither the position of the tested user's hand nor the position of the user needs to be constrained, so the method is suitable for gesture tracking against a relatively complex background. In addition, the depth information of the hand can be tracked, i.e., the gesture can be tracked in three-dimensional space.
When skin color detection is performed on the image information of the person to be detected, neither the user's background nor the placement of the hand is restricted. The resulting problem is that a large amount of interference information appears, which must be filtered out in subsequent processing.
The skin color detection method specifically comprises the following steps:
s111: collecting RGB images corresponding to a preset frame in image information of a person to be detected through a Kinect, and carrying out skin color identification on the RGB images according to a preset skin color detection algorithm;
s112: and obtaining the image information after the skin color identification and storing the image information as binary image information.
Further, step S111 includes the steps of:
s1111: searching preset contour information in the acquired image information of the person to be detected by adopting a library function cvFindContours of OpenCV;
s1112: acquiring the number of profile information in the image information of the searched personnel to be detected;
s1113: filtering preset interference information according to a preset filtering algorithm;
s1114: acquiring the image information with the interference information filtered out, and carrying out edge detection on the acquired image information with the interference information filtered out;
s1115: and acquiring the image information after the edge detection.
Namely: a statistical skin color model is established to detect skin color, i.e., skin color is detected through color space transformation and skin color modeling.
By calculating the contour region size, some small regions are filtered out.
Contours are searched using the OpenCV library function cvFindContours, which returns the number of contours found; after the contours are found, their areas are calculated with the cvContourArea function.
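The statistical skin color model described above can be sketched as a per-pixel test in YCbCr space. The BT.601 RGB-to-YCbCr conversion below is standard, but the exact Cb/Cr bounds are commonly cited illustrative values, not parameters taken from this patent:

```python
def rgb_to_ycbcr(r, g, b):
    # ITU-R BT.601 full-range RGB -> YCbCr conversion
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b, cb_range=(77, 127), cr_range=(133, 173)):
    # Skin pixels cluster in a compact CbCr region; the bounds are assumptions
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return cb_range[0] <= cb <= cb_range[1] and cr_range[0] <= cr <= cr_range[1]

def skin_mask(rgb_image):
    # Binarize: 1 for skin-colored pixels, 0 otherwise, matching step S112
    return [[1 if is_skin(*px) else 0 for px in row] for row in rgb_image]
```

The binary mask produced here corresponds to the stored binary image of step S112; contour search and small-region filtering then operate on it.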
As shown in fig. 2, the binarized image after skin color detection, the hand is detected and the face is also detected; moreover, due to the complexity of the background, some background regions whose YCbCr distribution is similar to skin color are also detected as skin, so this interference information must be filtered out.
As shown in fig. 3, after filtering, some small-area interference is removed; the three remaining regions are the hand, the face, and a large area of background interference far from the hand.
Fig. 4 shows the result of edge detection performed on the skin-color-detected image.
After skin color detection of the acquired image information of the person to be detected, step S2 comprises:
S21: marking preset gesture information according to a preset identification method, and storing the preset gesture information in a preset gesture library;
s22: and comparing the gesture information in the acquired image information of the person to be detected with preset gesture information in a preset gesture library, and identifying a gesture identifier corresponding to the gesture information in the image information of the person to be detected.
After the gesture information is recognized from the skin-color-detected image information of the person to be detected, it is compared with the preset gesture information labeled by the preset identification method; the recognized gesture information is thereby identified, and the identified gesture information is stored in the preset gesture library.
After the hand detection result is obtained, the gesture needs to be tracked. Therefore, in each frame, if a hand is detected, the relevant information of the hand in that frame is recorded for tracking. If the hand is detected in two consecutive frames, the movement vector of the hand is calculated from this information as the tracking result; the tracking result is then sent to the upper-layer application as control information to complete the specified actions, thus achieving the expected effect.
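The per-frame tracking step — record the hand when detected, derive a movement vector from two consecutive detections — can be sketched as follows. The function names are illustrative, and the sketch works on a 2D centroid for brevity, whereas the patent records three-dimensional hand information:

```python
def hand_center(mask):
    # Centroid of the binary hand mask; None when no hand pixels are present
    pts = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    if not pts:
        return None
    return (sum(p[0] for p in pts) / len(pts),
            sum(p[1] for p in pts) / len(pts))

def movement_vector(prev_center, cur_center):
    # Displacement between two consecutive detections, used as the tracking result
    if prev_center is None or cur_center is None:
        return None
    return (cur_center[0] - prev_center[0], cur_center[1] - prev_center[1])
```

In the real system the vector (plus depth) would be forwarded to the upper-layer 3D game as control information.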
The method uses the numbers 0 to 6 to label gestures and recognizes them by shape matching; specifically, the OpenCV library function cvMatchShapes compares the extracted gesture contour with every gesture template contour, and the smallest of the valid return values is taken as the recognition result to obtain the gesture.
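The decision rule — score the extracted contour against every template and keep the smallest score — can be sketched as below. A plain L1 distance over precomputed shape-feature vectors stands in for cvMatchShapes' Hu-moment comparison, and both the features and the template library are assumed inputs:

```python
def match_gesture(contour_features, template_library):
    # template_library: {gesture_id (0..6): feature vector for that template}.
    # Mimics the cvMatchShapes usage: compute a dissimilarity score against
    # every template and return the id with the smallest (best) score.
    def score(a, b):
        # Simplified stand-in for the Hu-moment distance
        return sum(abs(x - y) for x, y in zip(a, b))

    best_id, best_score = None, float("inf")
    for gid, feats in template_library.items():
        s = score(contour_features, feats)
        if s < best_score:
            best_id, best_score = gid, s
    return best_id, best_score
```

A real implementation would compute the feature vectors from the contours themselves (e.g. Hu moments) rather than receive them precomputed.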
Further, step S13 includes the steps of:
s131: acquiring a depth image corresponding to an RGB image corresponding to a preset frame in image information of a person to be detected acquired by a Kinect;
s132: performing depth segmentation on the depth image according to a preset depth segmentation algorithm, and acquiring points in a preset threshold range in the depth image after the depth segmentation;
s133: mapping points in a preset threshold range of the obtained depth image to the RGB image, and reserving points in the preset threshold range and corresponding to skin color in the RGB image;
s134: and acquiring and recording the three-dimensional information of the skin color corresponding points in the RGB image and in the preset threshold range reserved in the step S133.
In the depth segmentation method, the depth information of the image acquired by the Kinect is first stored in a two-dimensional array. Each element of this array represents the distance, in meters, from the Kinect to the point on the actual object corresponding to the depth pixel at that location. The depth image acquired by the Kinect can then be used for segmentation: a depth range is specified, and the image within that range is extracted.
Fig. 5 shows an image segmented according to a set depth range. Here the depth range is 0.5 m to 0.7 m; that is, the part of the image whose distance from the Kinect lies between 0.5 m and 0.7 m is extracted and binarized: pixels within the range take the value 1 and pixels outside it take the value 0. The contour information of the hand is displayed on the right side of the image.
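The depth segmentation just described reduces to a per-element range test on the two-dimensional depth array; a minimal sketch, using the 0.5–0.7 m example window:

```python
def depth_segment(depth, near=0.5, far=0.7):
    # depth: 2D array of distances from the Kinect, in meters.
    # Returns a binary mask: 1 for pixels inside [near, far], 0 outside.
    return [[1 if near <= d <= far else 0 for d in row] for row in depth]
```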
Further, step S3 includes the steps of:
s31: acquiring a preset virtual task in a preset rehabilitation training process;
s32: comparing the tracked gesture information with a preset virtual gesture in a preset virtual task;
s33: and scoring the tracked gesture information according to a preset scoring principle, and displaying a scoring result on a preset position.
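The patent leaves the "preset scoring principle" unspecified; the sketch below shows one plausible rule purely as an illustration — the base score, speed bonus, and time limit are invented parameters, not values from the patent:

```python
def score_gesture(tracked_id, target_id, elapsed_s, time_limit_s=10.0):
    # Hypothetical scoring rule: the correct gesture earns a base score,
    # plus a bonus that decays linearly with the time taken to perform it.
    if tracked_id != target_id:
        return 0
    base = 60
    bonus = max(0.0, 1.0 - elapsed_s / time_limit_s) * 40
    return int(base + bonus)
```

The returned score would then be displayed at the preset position in the 3D game scene, as in step S33.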
Further, the preset threshold range is updated according to a preset automatic updating process, which specifically comprises the following steps:
setting a preset fixed threshold value;
judging whether three-dimensional information of a skin color corresponding point in the RGB image is identified or not;
if so, acquiring the position information of the corresponding point;
and obtaining the preset threshold range of the current point according to the currently obtained position information of the corresponding point, and continuing to execute the step S132.
Against the complex background of the acquired image information of the person to be detected, methods using skin color detection alone and depth segmentation alone were both explored to extract the hand contour while restricting the user as little as possible, but various limitations always remained and the results were unsatisfactory.
The method and the device combine skin color detection and depth segmentation to detect the gesture of the person to be detected.
The basic flow of the combined algorithm is as follows:
after an RGB image and a corresponding depth image of a certain frame of image information of a person to be detected, which are acquired by Kinect, are acquired, firstly, skin color recognition is carried out on the RGB image, the result of the skin color recognition is stored as a binary image, then, the depth image is used for carrying out depth segmentation, only the image within a threshold value range is calculated, points within the threshold value range are mapped into the RGB image, if the mapped points correspond to skin colors, the points are reserved, and if not, the points are removed. Finally, a binary image of only the hand is obtained. And recording some three-dimensional information of the hand as tracking information to be provided for upper-layer application. The flow chart is shown in figure 5.
When segmenting with the depth information, it must be associated with the RGB information. The RGB information and depth information acquired by the Kinect have a mapping relation, and according to this relation the pixels of the depth-segmented image can be mapped into the RGB image. Fig. 7 shows the result of mapping the depth-segmented depth information onto the RGB image. The lower and upper depth thresholds are 0.5 m and 0.7 m respectively; as can be seen, all objects in the interval 0.5 m to 0.7 m from the camera are segmented, and information outside the interval is discarded. The skin-colored part of this figure is only the palm.
To reduce the restriction on the user, the method adopts a scheme that automatically tracks and updates the depth threshold range; that is, after initialization the depth segmentation threshold is updated automatically and is not limited to a fixed range. Specifically, a fixed threshold is first set and semi-automatic segmentation is performed; after a hand is detected, the depth of the hand's center point is taken as the threshold reference value for the depth segmentation of the next frame. If the hand moves out of the camera's field of view, it only needs to be placed within the fixed threshold range at the next detection to be detected again. This imposes essentially no limitation on the position of the user's hand. The final gesture detection result is shown in fig. 8.
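The automatic threshold update can be sketched as re-centering the depth window on the detected hand and falling back to the fixed initial window when the hand is lost. The 0.1 m half-width mirrors the 0.5–0.7 m example window but is otherwise an assumption:

```python
def update_depth_window(hand_center_depth, half_width=0.1, fallback=(0.5, 0.7)):
    # Re-center the depth segmentation window on the detected hand's depth.
    # When the hand was lost (None), fall back to the fixed initial window,
    # so the user only needs to return the hand to that range to be re-detected.
    if hand_center_depth is None:
        return fallback
    return (hand_center_depth - half_width, hand_center_depth + half_width)
```

Each frame, the window returned here feeds the next frame's depth segmentation, which is what frees the user's hand from a fixed working distance.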
As shown in fig. 9, the virtual hand gestures in the 3D rehabilitation training game scene are predefined, and the tracked gesture information of the person to be tested is associated with the virtual hand gestures in the scene. Virtual tasks are deployed in the scene, such as picking up objects placed in different orientations, with scoring. When driving the virtual hand to complete a task, the patient's upper limb inevitably moves along with it, thus achieving the goal of rehabilitation.
The deployed tasks may encompass the following motions: keeping the patient's upper limb in abduction, external rotation, elbow extension, forearm rotation, and wrist extension (or finger and thumb extension) stretches the muscles and counteracts the flexor spasm pattern of the upper limb; each stretch is held for 30 seconds until the stretched muscles feel loose.
When the system is started, the person to be tested stands about 1 m in front of the Kinect; the Kinect recognizes and tracks the person's hand in real time, maps the gesture to the virtual hand in the 3D rehabilitation game, and drives the virtual hand to complete the tasks preset by the rehabilitation nursing staff, thereby achieving active rehabilitation training.
Through its mirror-movement capability, the invention, via the computer controller, provides continuous, accurate, rich and fatigue-free training for the upper limbs of the person to be tested during direct, goal-directed exercise training; it can plan the tasks to be completed under different functional modes, includes multiple kinds of functional training, makes tedious training interesting, mobilizes the patient's subjective initiative, and can be used to evaluate and record range of motion and the like. In addition, the stroke rehabilitation process can be observed through machine training, potentially improving post-stroke rehabilitation quality — mainly reflected in a shorter average hospital stay, lower economic cost, and improved activities of daily living for the patient.
The specific embodiments described herein are merely illustrative of the spirit of the invention. Various modifications or additions may be made to the described embodiments or alternatives may be employed by those skilled in the art without departing from the spirit or ambit of the invention as defined in the appended claims.
Claims (8)
1. An upper limb active rehabilitation training method based on Kinect is characterized by comprising the following steps:
s1: acquiring gesture information in the image information of the person to be detected through the Kinect, and tracking the gesture information of the person to be detected;
s2: establishing a corresponding relation between the tracked gesture information and a preset virtual gesture in the Kinect;
s3: and training and guiding the preset virtual gesture corresponding to the tracked gesture information according to a preset rehabilitation training process.
2. The Kinect-based upper limb active rehabilitation training method as claimed in claim 1, wherein the step S1 comprises:
s11: carrying out skin color detection on the obtained image information of the person to be detected through the Kinect, and obtaining the image information after the skin color detection;
s12: acquiring depth information of image information of a person to be detected, performing depth segmentation on the acquired depth information according to a preset depth range, and acquiring image information after the depth segmentation;
s13: combining the image information after the depth segmentation with the image information after the skin color detection to obtain gesture information in the image information of the person to be detected;
s14: and tracking the acquired gesture information according to a preset gesture tracking algorithm.
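The pipeline of claim 2 keeps a pixel only if it passes both the skin-colour test (S11) and the depth-range test (S12). A minimal sketch of that fusion step (S13), assuming both intermediate results are stored as binary masks; the function name and toy data are illustrative only:

```python
import numpy as np

def fuse_masks(skin_mask, depth_mask):
    """Combine the skin-colour mask (S11) with the depth-segmentation
    mask (S12): only pixels passing both tests remain (S13)."""
    return (np.logical_and(skin_mask > 0, depth_mask > 0).astype(np.uint8)) * 255

# Toy 4x4 frame: only the overlap of the two masks survives.
skin = np.zeros((4, 4), np.uint8); skin[1:3, 1:4] = 255   # skin-coloured pixels
depth = np.zeros((4, 4), np.uint8); depth[0:3, 0:2] = 255  # pixels inside depth range
hand = fuse_masks(skin, depth)   # non-zero only at (1, 1) and (2, 1)
```

The resulting mask is then handed to the gesture-tracking algorithm of S14.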
3. The Kinect-based upper limb active rehabilitation training method as claimed in claim 1, wherein the step S2 comprises:
s21: marking preset gesture information according to a preset identification method, and storing the preset gesture information in a preset gesture library;
s22: and comparing the gesture information in the acquired image information of the person to be detected with preset gesture information in a preset gesture library, and identifying a gesture identifier corresponding to the gesture information in the image information of the person to be detected.
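Steps S21/S22 amount to a nearest-match lookup against a labelled gesture library. The patent does not disclose the comparison rule, so this sketch assumes each gesture is described by a small feature vector and matched by Euclidean distance; the gesture names and features are hypothetical:

```python
import numpy as np

# Hypothetical preset gesture library: identifier -> feature vector
# (e.g. per-finger extension flags); contents are illustrative only.
GESTURE_LIBRARY = {
    "open_palm": np.array([1.0, 1.0, 1.0, 1.0, 1.0]),
    "fist":      np.array([0.0, 0.0, 0.0, 0.0, 0.0]),
    "point":     np.array([0.0, 1.0, 0.0, 0.0, 0.0]),
}

def identify_gesture(features):
    """Return the identifier of the library gesture closest (Euclidean
    distance) to the observed feature vector (S22)."""
    return min(GESTURE_LIBRARY,
               key=lambda g: np.linalg.norm(GESTURE_LIBRARY[g] - features))

observed = np.array([0.1, 0.9, 0.1, 0.0, 0.1])  # noisy "pointing" hand
label = identify_gesture(observed)
```

Any richer matcher (template matching, a classifier) slots into the same lookup structure.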
4. The Kinect-based upper limb active rehabilitation training method as claimed in claim 2, wherein the step S11 comprises:
s111: collecting RGB images corresponding to a preset frame in image information of a person to be detected through a Kinect, and carrying out skin color identification on the RGB images according to a preset skin color detection algorithm;
s112: and obtaining the image information after the skin color identification and storing the image information as binary image information.
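The claim only says a "preset skin color detection algorithm" producing a binary image (S112). One common choice is thresholding in YCrCb space; the conversion below and the Cr/Cb ranges are assumptions, not taken from the patent:

```python
import numpy as np

def skin_mask_ycrcb(rgb):
    """Binary skin mask via YCrCb thresholding -- one common 'preset
    skin colour detection algorithm'; the ranges here are assumptions."""
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    cr = 128 + 0.5 * r - 0.4187 * g - 0.0813 * b   # red-difference chroma
    cb = 128 - 0.1687 * r - 0.3313 * g + 0.5 * b   # blue-difference chroma
    skin = (cr >= 133) & (cr <= 173) & (cb >= 77) & (cb <= 127)
    return (skin * 255).astype(np.uint8)           # binary image of S112

frame = np.zeros((2, 2, 3), np.uint8)
frame[0, 0] = (200, 150, 120)        # a typical skin tone; rest stays black
mask = skin_mask_ycrcb(frame)
```

The binary mask is what gets combined with the depth segmentation in S13.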
5. The Kinect-based upper limb active rehabilitation training method as claimed in claim 2, wherein the step S111 comprises the steps of:
s1111: searching preset contour information in the acquired image information of the person to be detected by adopting a library function cvFindContours of OpenCV;
s1112: acquiring the number of profile information in the image information of the searched personnel to be detected;
s1113: filtering preset interference information according to a preset filtering algorithm;
s1114: acquiring the image information with the interference information filtered out, and carrying out edge detection on the acquired image information with the interference information filtered out;
s1115: and acquiring the image information after the edge detection.
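In the claim, OpenCV's cvFindContours supplies the contours (S1111/S1112) and small "interference" contours are then filtered out (S1113). A minimal dependency-free sketch of that filtering step, assuming contours are lists of (x, y) vertices and using polygon area (the shoelace formula, which is what cv2.contourArea computes) as the criterion; the threshold value is an assumption:

```python
def contour_area(pts):
    """Absolute polygon area via the shoelace formula."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2.0

def filter_interference(contours, min_area=100.0):
    """Keep only contours large enough to be a hand (S1113)."""
    return [c for c in contours if contour_area(c) >= min_area]

hand_like = [(0, 0), (40, 0), (40, 30), (0, 30)]   # area 1200 -> kept
noise     = [(0, 0), (5, 0), (5, 5), (0, 5)]       # area 25   -> dropped
kept = filter_interference([hand_like, noise])
```

The surviving contours then go through edge detection (S1114/S1115).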
6. The Kinect-based upper limb active rehabilitation training method as claimed in claim 5, wherein the step S13 comprises the steps of:
s131: acquiring a depth image corresponding to an RGB image corresponding to a preset frame in image information of a person to be detected acquired by a Kinect;
s132: performing depth segmentation on the depth image according to a preset depth segmentation algorithm, and acquiring points in a preset threshold range in the depth image after the depth segmentation;
s133: mapping points in a preset threshold range of the obtained depth image to the RGB image, and reserving points in the preset threshold range and corresponding to skin color in the RGB image;
s134: and acquiring and recording the three-dimensional information of the skin color corresponding points in the RGB image and in the preset threshold range reserved in the step S133.
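Steps S131-S134 can be sketched as follows, assuming the depth image is already registered to the RGB image (the Kinect SDK provides such a mapping) and that the recorded "three-dimensional information" is the (row, column, depth) triple of each retained pixel; the depth window values are assumptions:

```python
import numpy as np

NEAR, FAR = 800, 1200   # preset depth threshold range in mm (assumed values)

def hand_points_3d(depth, skin_mask):
    """Keep pixels whose depth lies in [NEAR, FAR] (S132) AND which are
    skin-coloured in the registered RGB image (S133); record their
    (row, col, depth) triples as the 3-D information of S134."""
    in_range = (depth >= NEAR) & (depth <= FAR)
    keep = in_range & (skin_mask > 0)
    rows, cols = np.nonzero(keep)
    return [(int(r), int(c), int(depth[r, c])) for r, c in zip(rows, cols)]

depth = np.array([[1000, 2000],
                  [ 900, 1100]], np.uint16)   # mm
skin  = np.array([[255, 255],
                  [  0, 255]], np.uint8)      # from the skin detection step
pts = hand_points_3d(depth, skin)
```

Pixel (0, 1) fails the depth test and pixel (1, 0) fails the skin test, so only the two pixels passing both are recorded.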
7. The Kinect-based upper limb active rehabilitation training method as claimed in claim 3, wherein the step S3 comprises the steps of:
s31: acquiring a preset virtual task in a preset rehabilitation training process;
s32: comparing the tracked gesture information with a preset virtual gesture in a preset virtual task;
s33: and scoring the tracked gesture information according to a preset scoring principle, and displaying a scoring result on a preset position.
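The patent does not disclose the "preset scoring principle" of S33, so this sketch assumes a simple rule: 100 minus a penalty proportional to the mean joint-position error between the tracked gesture and the task's preset virtual gesture; the tolerance value is hypothetical:

```python
def score_gesture(tracked, target, tolerance=10.0):
    """Return a 0-100 score from the mean absolute error between
    corresponding joint coordinates (a hypothetical scoring principle)."""
    errs = [abs(a - b) for a, b in zip(tracked, target)]
    mean_err = sum(errs) / len(errs)
    return max(0.0, 100.0 * (1.0 - mean_err / tolerance))

target  = [10.0, 20.0, 30.0]   # preset virtual gesture joint positions
tracked = [11.0, 19.0, 30.0]   # tracked gesture, slightly off
s = score_gesture(tracked, target)   # high score for a close match
```

The returned score is what would be displayed at the preset position on screen.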
8. The Kinect-based active rehabilitation training method for upper limbs as claimed in claim 6, wherein the preset threshold range is updated according to a preset automatic updating process, and the specific steps comprise:
setting a preset fixed threshold value;
judging whether three-dimensional information of a skin color corresponding point in the RGB image is identified or not;
if so, acquiring the position information of the corresponding point;
and obtaining the preset threshold range of the current point according to the currently obtained position information of the corresponding point, and continuing to execute the step S132.
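Claim 8's automatic update can be read as: start from the preset fixed depth window; once a skin point is recognised, re-centre the window on that point's depth so the segmentation follows the hand between frames. A sketch under that reading, with the window values as assumptions:

```python
FIXED_RANGE = (800, 1200)   # preset fixed threshold (mm), assumed
HALF_WIDTH = 150            # assumed half-width of the tracking window

def update_threshold(detected_depth):
    """Return the depth threshold range for the next frame: the fixed
    default when no skin point was recognised, otherwise a window
    centred on the detected point's depth (then S132 is re-executed)."""
    if detected_depth is None:      # no skin-colour corresponding point
        return FIXED_RANGE
    return (detected_depth - HALF_WIDTH, detected_depth + HALF_WIDTH)

r0 = update_threshold(None)   # falls back to the fixed range
r1 = update_threshold(950)    # window re-centred on the hand
```

Re-centring each frame keeps the depth segmentation valid as the hand moves toward or away from the sensor.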
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010752710.5A CN111968723A (en) | 2020-07-30 | 2020-07-30 | Kinect-based upper limb active rehabilitation training method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111968723A (en) | 2020-11-20 |
Family
ID=73363692
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010752710.5A Pending CN111968723A (en) | 2020-07-30 | 2020-07-30 | Kinect-based upper limb active rehabilitation training method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111968723A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103230664A (en) * | 2013-04-17 | 2013-08-07 | 南通大学 | Upper limb movement rehabilitation training system and method based on Kinect sensor |
CN107247466A (en) * | 2017-06-12 | 2017-10-13 | 中山长峰智能自动化装备研究院有限公司 | Robot head gesture control method and system |
CN109003301A (en) * | 2018-07-06 | 2018-12-14 | 东南大学 | A kind of estimation method of human posture and rehabilitation training system based on OpenPose and Kinect |
Non-Patent Citations (4)
Title |
---|
Qian Chenghui et al.: "Design and research of a Kinect-based rehabilitation training ***", Journal of Jilin University (Information Science Edition) * |
Zhang Zhichang et al.: "Gesture recognition method based on somatosensory interaction technology in a rehabilitation cognitive assessment ***", Journal of Mathematical Medicine * |
Qu Chang et al.: "Development and application of a Kinect-based upper limb rehabilitation training ***", Chinese Journal of Biomedical Engineering * |
Chen Yibo: "Applied research on a Kinect-based rehabilitation medical ***", Practical Electronics * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112691002A (en) * | 2021-03-24 | 2021-04-23 | 上海傅利叶智能科技有限公司 | Control method and device based on gesture interaction rehabilitation robot and rehabilitation robot |
CN112691002B (en) * | 2021-03-24 | 2021-06-29 | 上海傅利叶智能科技有限公司 | Control device based on gesture interaction rehabilitation robot and rehabilitation robot |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106250867B (en) | A kind of implementation method of the skeleton tracking system based on depth data | |
US9898651B2 (en) | Upper-body skeleton extraction from depth maps | |
JP4860749B2 (en) | Apparatus, system, and method for determining compatibility with positioning instruction in person in image | |
CN104407694B (en) | The man-machine interaction method and device of a kind of combination face and gesture control | |
JP2021524113A (en) | Image processing methods and equipment, imaging equipment, and storage media | |
US7404774B1 (en) | Rule based body mechanics calculation | |
CN110739040A (en) | rehabilitation evaluation and training system for upper and lower limbs | |
CN105832343B (en) | Multidimensional vision hand function rehabilitation quantitative evaluation system and evaluation method | |
EP3120294A1 (en) | System and method for motion capture | |
US20140105456A1 (en) | Device, system and method for determining compliance with an instruction by a figure in an image | |
JP2012518236A (en) | Method and system for gesture recognition | |
US8565477B2 (en) | Visual target tracking | |
CN102609683A (en) | Automatic labeling method for human joint based on monocular video | |
CN111883229B (en) | Intelligent movement guidance method and system based on visual AI | |
Wang et al. | Feature evaluation of upper limb exercise rehabilitation interactive system based on kinect | |
KR20090084035A (en) | A real time motion recognizing method | |
CN108829233A (en) | A kind of exchange method and device | |
CN107247466B (en) | Robot head gesture control method and system | |
CN107329564B (en) | Man-machine finger guessing method based on gesture intelligent perception and man-machine cooperation mechanism | |
CN109126045A (en) | intelligent motion analysis and training system | |
Zhang et al. | Research on volleyball action standardization based on 3D dynamic model | |
CN111968723A (en) | Kinect-based upper limb active rehabilitation training method | |
CN110478860A (en) | The virtual rehabilitation system of hand function obstacle based on hand object natural interaction | |
KR101085536B1 (en) | Method for Designing Interface using Gesture recognition | |
CN112837339B (en) | Track drawing method and device based on motion capture technology |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20201120