CN112076440A - Double-person interactive upper limb rehabilitation system based on recognition screen and training method thereof - Google Patents


Info

Publication number
CN112076440A
Authority
CN
China
Prior art keywords
handle
screen
identification
training
contact
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010955057.2A
Other languages
Chinese (zh)
Inventor
王俊华
王兆坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Xiaokang Medical Technology Co ltd
Original Assignee
Guangzhou Xiaokang Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Xiaokang Medical Technology Co ltd filed Critical Guangzhou Xiaokang Medical Technology Co ltd
Priority to CN202010955057.2A priority Critical patent/CN112076440A/en
Publication of CN112076440A publication Critical patent/CN112076440A/en
Pending legal-status Critical Current

Classifications

    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63BAPPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B23/00Exercising apparatus specially adapted for particular parts of the body
    • A63B23/035Exercising apparatus specially adapted for particular parts of the body for limbs, i.e. upper or lower limbs, e.g. simultaneously
    • A63B23/12Exercising apparatus specially adapted for particular parts of the body for limbs, i.e. upper or lower limbs, e.g. simultaneously for upper limbs or related muscles, e.g. chest, upper back or shoulder muscles
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63BAPPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B24/00Electric or electronic controls for exercising apparatus of preceding groups; Controlling or monitoring of exercises, sportive games, training or athletic performances
    • A63B24/0075Means for generating exercise programs or schemes, e.g. computerized virtual trainer, e.g. using expert databases
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63BAPPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B71/00Games or sports accessories not covered in groups A63B1/00 - A63B69/00
    • A63B71/06Indicating or scoring devices for games or players, or for other sports activities
    • A63B71/0619Displays, user interfaces and indicating devices, specially adapted for sport equipment, e.g. display mounted on treadmills
    • A63B71/0622Visual, audio or audio-visual systems for entertaining, instructing or motivating the user
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/30ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63BAPPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B71/00Games or sports accessories not covered in groups A63B1/00 - A63B69/00
    • A63B71/06Indicating or scoring devices for games or players, or for other sports activities
    • A63B71/0619Displays, user interfaces and indicating devices, specially adapted for sport equipment, e.g. display mounted on treadmills
    • A63B71/0622Visual, audio or audio-visual systems for entertaining, instructing or motivating the user
    • A63B2071/0638Displaying moving images of recorded environment, e.g. virtual environment

Landscapes

  • Health & Medical Sciences (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Orthopedic Medicine & Surgery (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Epidemiology (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Rehabilitation Tools (AREA)

Abstract

The invention provides a double-person interactive upper limb rehabilitation system based on an identification screen, and a training method thereof. The system comprises the identification screen and an operating mechanism; a training mode selection module, a training mode generation module, an identification module, a virtual reality interaction module and an evaluation module are arranged in the identification screen. The training mode generation module generates a virtual training scene and a virtual target on the identification screen. The operating mechanism comprises a handle A and a handle B with different bottom structures, with which two patients complete double-person interactive training. The identification module identifies and distinguishes the handle A and the handle B in real time by their different bottom structures, and calculates the motion data of the two handles on the identification screen. The virtual reality interaction module makes the virtual target, the virtual scene and the two handles interact. The evaluation module evaluates the upper limb rehabilitation training of the patient A and the patient B respectively. The invention can accurately evaluate the rehabilitation training of a patient's upper limb.

Description

Double-person interactive upper limb rehabilitation system based on recognition screen and training method thereof
Technical Field
The invention relates to the field of medical rehabilitation training and evaluation, in particular to a double-person interactive upper limb rehabilitation system based on an identification screen and a training method thereof.
Background
Many patients with stroke, traumatic brain injury, spinal cord injury and similar conditions suffer from upper limb motor dysfunction, which seriously affects their quality of life; these patients need active rehabilitation therapy to restore function. At present, rehabilitation of upper limb motor dysfunction is considerably more difficult than rehabilitation of the lower limbs, and it is a key focus of research for many experts in the rehabilitation field.
At present, upper limb rehabilitation equipment and techniques are scarce: only simple, furniture-like devices such as sanding boards, hand-cranked wheels and wooden peg boards are available, and a patient operating such equipment finds the training dull and uninteresting, with poor rehabilitation results. In addition, these devices are operated by one person at a time, so they occupy a large floor area and are used inefficiently.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a double-person interactive upper limb rehabilitation system based on an identification screen and a training method thereof.
In order to achieve the purpose, the invention adopts the following technical scheme:
the double-person interactive upper limb rehabilitation system based on the recognition screen comprises the recognition screen and an operating mechanism, wherein a training mode selection module, a training mode generation module, a recognition module, a virtual reality interaction module and an evaluation module are arranged in the recognition screen, wherein:
The training mode selection module is used for selecting a training mode on the recognition screen according to the upper limb movement function evaluation result of the patient;
the training mode generating module is used for generating a virtual training scene and a virtual target on the identification screen according to the selected training mode;
the operating mechanism comprises a handle A and a handle B with different bottom structures, and is used for operating the virtual target generated on the recognition screen by the handle A and the handle B according to the training requirements to complete double interactive training;
the identification module is used for identifying and distinguishing the handle A and the handle B in real time according to coordinate values and coordinate quantities returned when the handle A and the handle B with different bottom structures are respectively contacted with the identification screen, and calculating motion data of the handle A and the handle B on the identification screen at the same time, wherein the motion data comprises one or more of real-time position information, motion speed and motion trail;
the virtual reality interaction module is used for acquiring motion data of a handle A and a handle B which are respectively operated by two persons on the recognition screen in real time, and enabling interaction among the virtual target, the virtual scene and the two handles to be formed according to the motion data of the handle A and the handle B on the recognition screen, so that two patients can finish double-person interaction upper limb rehabilitation training;
the evaluation module is used for respectively determining error areas and corresponding error times of the two patients according to the motion data of the handle A and the handle B and the motion trail of the virtual target; it is also used for respectively evaluating the upper limb rehabilitation training of the patient A and the patient B according to the counted error areas, the corresponding error times and the training completion; wherein an error area is an area on the identification screen where the patient did not operate in place.
Further, when the identification screen is an infrared screen, the bottoms of the handle A and the handle B are provided with island structures with different numbers, and the areas of contact areas, which are used for being in contact with the identification screen, on the island structures are larger than a preset area;
the recognition module is specifically used for determining whether the touch area returned for a contact position on the infrared screen is larger than the preset area, and if so, returning the coordinate value of the contact position on the infrared screen, the coordinate value being that of the center of the contact area on the infrared screen; the recognition module is also used for counting the number of returned coordinates in real time, determining the coordinate values of the handle A and/or the handle B on the infrared screen according to the counted number of coordinates and the corresponding coordinate values, and respectively calculating the motion data of the handle A and the handle B in real time according to the coordinate values of the handle A and the handle B on the infrared screen.
Further, in the identification module,
if the number of the coordinates returned at the same moment is the same as the number of the island structures in the handle A, calculating whether the distance between every two coordinates is within a preset distance range, if so, determining that the handle A operates on the infrared screen at the moment, and calculating the central position of the returned coordinates as the coordinate value of the handle A on the infrared screen at the moment;
if the number of the coordinates returned at the same moment is the same as the number of the island structures in the handle B, calculating whether the distance between every two coordinate values is within a preset distance range, if so, determining that the handle B operates on the infrared screen at the moment, and calculating the central position of the returned coordinates as the coordinate value of the handle B on the infrared screen at the moment;
and if the quantity of the coordinates returned at the same time is the sum of the quantity of the island structures in the handle A and the quantity of the island structures in the handle B, determining that the handle A and the handle B operate on the infrared screen at the same time, and respectively determining the coordinate values of the handle A and the handle B when the handle A and the handle B contact the infrared screen at the time through traversing the distance between every two returned coordinate values.
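As an illustration only (not part of the claimed subject matter), the infrared-screen identification logic described above can be sketched in Python; the island counts (two for handle A, three for handle B) and the pairwise distance range are assumed example values:

```python
from itertools import combinations
from math import dist

# Assumed example values: handle A has 2 island structures, handle B has 3,
# and island centers on the same handle lie 20-60 units apart.
ISLANDS_A, ISLANDS_B = 2, 3
MIN_DIST, MAX_DIST = 20.0, 60.0

def within_range(points):
    """All pairwise distances must fall in the preset range for one handle."""
    return all(MIN_DIST <= dist(p, q) <= MAX_DIST
               for p, q in combinations(points, 2))

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(points), sum(ys) / len(points))

def identify(coords):
    """Classify the coordinates returned by the infrared screen at one instant."""
    n = len(coords)
    if n == ISLANDS_A and within_range(coords):
        return {"A": centroid(coords)}
    if n == ISLANDS_B and within_range(coords):
        return {"B": centroid(coords)}
    if n == ISLANDS_A + ISLANDS_B:
        # Both handles on screen: traverse splits into a 2-point and a
        # 3-point group whose pairwise distances both fit the range.
        for group_a in combinations(coords, ISLANDS_A):
            group_b = [p for p in coords if p not in group_a]
            if within_range(group_a) and within_range(group_b):
                return {"A": centroid(group_a), "B": centroid(group_b)}
    return {}
```

The centroid of each valid group is taken as the handle's coordinate on the screen, matching the claim's "central position of the returned coordinates".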
Further, when the identification screen is a capacitive screen, the bottoms of the handle A and the handle B are provided with different numbers of contact structures, wherein the distance between all the contact structures of the same handle is smaller than the radius of a handle base;
the identification module traverses all contact points on the capacitive screen at the same moment to group the contact points according to the returned contact coordinates on the capacitive screen and the principle that the distance between all the contact points of the same handle on the capacitive screen is smaller than the radius of the base of the handle; and then calculating coordinate values of the handle A and the handle B on the capacitive screen at the moment according to the grouping result, and respectively calculating motion data of the handle A and the handle B according to the coordinate values of the handle A and the handle B on the capacitive screen.
Further, in the identification module,
when the number of the contact points belonging to the same group is the same as the number of the contact point structures arranged at the bottom of the handle A, identifying the contact points as the handle A, and calculating the coordinates of the central position of the group of the contact points as the coordinates of the handle A on the capacitive screen;
when the number of the contact points belonging to the same group is the same as the number of the contact point structures arranged at the bottom of the handle B, the contact points are identified as the handle B, and the coordinates of the central position of the group of the contact points are calculated as the coordinates of the handle B on the capacitive screen.
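For illustration, the capacitive-screen grouping and identification described above might be sketched as follows; the base radius and the contact counts per handle are assumed values, not taken from the patent:

```python
from math import dist

# Assumed geometry: handle bases have a 40-unit radius; handle A carries
# 2 contact studs, handle B carries 3.
BASE_RADIUS = 40.0
CONTACTS_A, CONTACTS_B = 2, 3

def group_touches(points):
    """Cluster touch points: two points belong to the same handle when they
    are closer than the base radius (single-linkage grouping)."""
    groups = []
    for p in points:
        near = [g for g in groups if any(dist(p, q) < BASE_RADIUS for q in g)]
        for g in near:
            groups.remove(g)
        groups.append(sum(near, []) + [p])
    return groups

def identify_capacitive(points):
    """Label each touch group by its size and return the group centers."""
    result = {}
    for g in group_touches(points):
        xs, ys = zip(*g)
        center = (sum(xs) / len(g), sum(ys) / len(g))
        if len(g) == CONTACTS_A:
            result["A"] = center
        elif len(g) == CONTACTS_B:
            result["B"] = center
    return result
```

The grouping rule is exactly the principle stated above: points of the same handle are always closer together than the handle base radius.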
Further, when two patients compete for interactive training,
the evaluation module is specifically used for respectively judging, according to the real-time position data of the two handles and the motion data of the virtual target, whether each of the two patients has touched the virtual target; if the virtual target is touched, a score is awarded, and if it is not touched, points are deducted or no score is given; the scores of the two patients, the error areas of the two patients on the identification screen and the error times in the corresponding error areas are counted respectively; and finally, the upper limb rehabilitation training of the patient A and the patient B is evaluated respectively according to the counted error areas, the corresponding error times and the training completion.
Further, when two patients are cooperatively trained,
the evaluation module is specifically used for judging, according to the real-time position data of the two handles and the motion data of the virtual target, whether the motion trail of the virtual target deviates from the preset motion trail; if it deviates, an error area is counted, the error area being the region of the motion track on the identification screen covered by the handle whose operation was deficient when the virtual target deviated; and finally, the upper limb rehabilitation training of the patient A and the patient B is evaluated respectively according to the counted error areas, the corresponding error times and the training completion.
On the other hand, the invention also discloses a training method of the double-person interactive upper limb rehabilitation system based on the identification screen, which comprises the following steps:
step 1, selecting a training mode on an identification screen according to an upper limb movement function evaluation result of a patient;
step 2, generating a virtual training scene and a virtual target on the recognition screen according to the selected training mode;
step 3, a patient A and a patient B respectively hold a handle A and a handle B with different bottom structures, a virtual target generated on an identification screen is operated according to training requirements, the identification screen identifies and distinguishes the handle A and the handle B in real time according to coordinate values and coordinate quantities returned when the handle A and the handle B are respectively contacted with the identification screen, motion data of the two handles on the identification screen is calculated, interaction is formed among the virtual target, a virtual scene and the two handles according to the motion data of the handle A and the handle B on the identification screen, and the two patients can finish double-person interactive upper limb rehabilitation training;
step 4, the identification screen respectively determines the error areas and corresponding error times of the two patients according to the motion data of the handle A and the handle B on the identification screen and the motion track of the virtual target; it also respectively evaluates the upper limb rehabilitation training of the patient A and the patient B according to the counted error areas, the corresponding error times and the training completion; wherein an error area is an area on the identification screen where the patient did not operate in place.
Further, when the identification screen is an infrared screen, the bottoms of the handle A and the handle B are provided with different numbers of island structures, and the areas of contact areas, which are used for being in contact with the identification screen, on the island structures are larger than a preset area; in step 3, the step of identifying and distinguishing the handle A and the handle B in real time by the identification screen according to the coordinate values and the coordinate quantity returned when the handle A and the handle B are respectively contacted with the identification screen, and calculating the motion data of the two handles on the identification screen comprises the following steps:
determining whether the touch area of the contact position on the infrared screen is larger than the preset area, if so, respectively returning the coordinate values of the contact position on the infrared screen; the coordinate value of the contact position on the infrared screen is the coordinate value of the contact area center on the infrared screen;
and counting the number of returned coordinates in real time, determining the coordinate values of the handle A and/or the handle B on the infrared screen according to the counted number of coordinates and the corresponding coordinate values, and respectively calculating the motion data of the handle A and the handle B in real time according to the coordinate values of the handle A and the handle B on the infrared screen.
Further, when the identification screen is a capacitive screen, the bottoms of the handle A and the handle B are provided with contact structures with different numbers; in step 3, the step of identifying and distinguishing the handle A and the handle B in real time by the identification screen according to the coordinate values and the coordinate quantity returned when the handle A and the handle B are respectively contacted with the identification screen, and calculating the motion data of the two handles on the identification screen comprises the following steps:
traversing all contact points on the capacitive screen at the same moment to group the contact points according to the returned contact coordinates on the capacitive screen and the principle that the distance between all the contact points of the same handle on the capacitive screen is smaller than the radius of the base of the handle;
calculating coordinate values of the handle A and the handle B on the capacitive screen at the moment according to the grouping result, and respectively calculating motion data of the handle A and the handle B according to the coordinate values of the handle A and the handle B on the capacitive screen; when the number of contact points belonging to the same group is the same as the number of contact point structures arranged at the bottom of the handle A, the contact points are identified as the handle A, and the coordinates of the central position of the group of contact points are calculated as the coordinates of the handle A on the capacitive screen; if the number of the contact points belonging to the same group is the same as the number of the contact point structures arranged at the bottom of the handle B, the contact points are identified as the handle B, and the coordinates of the central position of the group of the contact points are calculated as the coordinates of the handle B on the capacitive screen.
Compared with the prior art, the invention has the following advantages: because the handles A and B have different bottom structures, the data they return on contact with the same identification screen differ, so two patients, each holding one handle, can complete double-person interactive upper limb rehabilitation training on the same identification screen. This improves the patients' enthusiasm for rehabilitation training, provides immediate evaluation and feedback during upper limb rehabilitation training, enables automatic analysis of the causes of upper limb motor dysfunction after training, and significantly improves the effect of upper limb rehabilitation training.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a system block diagram of an embodiment of a double interactive upper limb rehabilitation system based on an identification screen according to the present invention;
FIG. 2 is a basic schematic diagram of an infrared screen according to the present invention;
FIG. 3 is a schematic view of the structure of a handle A according to an embodiment of the present invention;
FIG. 4 is a schematic structural view of a handle B according to an embodiment of the present invention;
FIG. 5 is a schematic view of the structure of a handle A according to another embodiment of the present invention;
FIG. 6 is a schematic structural view of a handle B according to another embodiment of the present invention;
FIG. 7 is a diagram illustrating a virtual target in accordance with one embodiment of the present invention;
FIG. 8 is a schematic diagram of rowing distance versus movement speed in a rowing game;
fig. 9 is a flowchart of a training method of the double-person interactive upper limb rehabilitation system based on the recognition screen.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, the embodiment of the invention discloses a double-person interactive upper limb rehabilitation system based on an identification screen, which comprises the identification screen and an operating mechanism, wherein a training mode selection module, a training mode generation module, an identification module, a virtual reality interaction module and an evaluation module are arranged in the identification screen, wherein:
The training mode selection module is used for selecting a training mode on the recognition screen according to the upper limb movement function evaluation result of the patient;
the training mode generating module is used for generating a virtual training scene and a virtual target on the identification screen according to the selected training mode;
the operating mechanism comprises a handle A and a handle B with different bottom structures, and is used for operating the virtual target generated on the recognition screen by the handle A and the handle B according to the training requirements to complete double interactive training;
the identification module is used for identifying and distinguishing the handle A and the handle B in real time according to coordinate values and coordinate quantities returned when the handle A and the handle B with different bottom structures are respectively contacted with the identification screen, and calculating motion data of the handle A and the handle B on the identification screen at the same time, wherein the motion data comprises one or more of real-time position information, motion speed and motion trail;
the virtual reality interaction module is used for acquiring motion data of a handle A and a handle B which are respectively operated by two persons on the recognition screen in real time, and enabling interaction among the virtual target, the virtual scene and the two handles to be formed according to the motion data of the handle A and the handle B on the recognition screen, so that two patients can finish double-person interaction upper limb rehabilitation training;
the evaluation module is used for respectively determining error areas and corresponding error times of the two patients according to the motion data of the handle A and the handle B and the motion trail of the virtual target; it is also used for respectively evaluating the upper limb rehabilitation training of the patient A and the patient B according to the counted error areas, the corresponding error times and the training completion; wherein an error area is an area on the identification screen where the patient did not operate in place.
During training, a patient A and a patient B stand in front of the same recognition screen, and a corresponding training environment and training mode are selected on the recognition screen according to the advice of a rehabilitation therapist. The training mode includes a single-hand or two-hand training mode, a double competition mode or a double cooperation mode, … … and the like; the training environment includes game type selection, virtual scene selection, etc., such as a rowing game, a fishing game, a whack-a-mole game, or a badminton game. In addition, a difficulty level is selected, the difficulty level being determined according to one or more factors including the speed of the virtual target, the trajectory type of the virtual target, the size or number of virtual targets and the motion area of the handle.
After a training environment and training mode are selected on the recognition screen, the training mode generation module generates a virtual scene and a virtual target on the recognition screen according to the selected options, and the patient A and the patient B operate the virtual target according to the rules of the corresponding game. For example, in the whack-a-mole game, the patient A and the patient B respectively use the handle A and the handle B to strike the virtual moles appearing on the recognition screen until the game ends; in the fishing game, the patient A and the patient B respectively use the handle A and the handle B to drag a virtual fishing net on the recognition screen towards the virtual fish in order to catch them. The game may be set to end when the game time reaches a preset duration.
When the patient A and the patient B hold the handle A and the handle B respectively to tap the identification screen or move on it, the different bottom structures of the two handles cause the data returned on contact with the identification screen to differ. From the returned data, the identification screen determines which operations belong to the handle A and which belong to the handle B and records them; that is, the position coordinates of the handle A and the handle B on the identification screen are recorded in real time, the motion tracks of the two handles are formed from these position coordinates, and the motion speed at each position is calculated in real time.
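A minimal sketch of deriving per-handle motion data (trajectory, speed, path length) from the timestamped positions the screen records; the data layout and field names are assumptions for illustration:

```python
from math import dist

def motion_data(samples):
    """samples: list of (t_seconds, (x, y)) recorded for one handle."""
    trajectory = [p for _, p in samples]
    # Speed between consecutive samples: distance moved over elapsed time.
    speeds = [dist(p0, p1) / (t1 - t0)
              for (t0, p0), (t1, p1) in zip(samples, samples[1:])]
    path_length = sum(dist(a, b) for a, b in zip(trajectory, trajectory[1:]))
    return {"trajectory": trajectory, "speeds": speeds,
            "path_length": path_length}
```

The same routine would be applied independently to the coordinate streams of handle A and handle B once the identification module has separated them.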
During training, a patient can train with a training aid suited to his or her own condition, such as a standing frame or an upper limb weight-support device. The training aid can also be selected on the recognition screen so that the selection is recorded, allowing the evaluation module to evaluate accurately according to the aids the patient used.
In the evaluation module, the upper limb training part corresponding to each area of the recognition screen is preset in the system. The system respectively determines the error areas and corresponding error times of the two patients according to the motion data of the handle A and the handle B and the motion trail of the virtual target, and finally evaluates the upper limb rehabilitation of the two patients according to the score statistics, the error areas and the error times in the corresponding error areas. The evaluation result and the composite score may be presented in the corresponding regions of the recognition screen, within each patient's field of view.
Of course, the correspondence between each area of the recognition screen and an upper limb training part is verified through experiments by experts or doctors: when a patient's hand-held handle moves into a given area, a certain part or joint of the patient's upper limb is mainly exercised. For example, the identification screen may be divided into regions such as an upper left region, a lower left region, an upper middle region, a lower middle region, an upper right region and a lower right region, with movement of the handle to the upper left region corresponding to the patient's right elbow joint, movement to the upper right region corresponding to the left elbow joint, and so on; the division of regions may be adapted as the case requires. Because the standing positions of the two patients differ, moving a handle to the same position on the identification screen exercises different parts for each patient, so several divisions of the screen regions are provided, corresponding respectively to the specific situations of the two patients.
For the evaluation module, when two patients are performing competitive training or cooperative training, the following two cases can be divided:
(1) when two patients compete for interactive training, namely when the two patients compete with each other to complete the training task:
the evaluation module is specifically used for judging, from the real-time position data of the two handles and the motion data of the virtual target, whether each patient has contacted the virtual target; if the target is contacted, a score is awarded, and if it is missed, points are deducted or no score is given; the scores of the two patients, their error areas on the identification screen and the error counts in the corresponding error areas are then counted separately; finally, the upper limb rehabilitation training conditions of patient A and patient B are evaluated respectively from the counted error areas, the corresponding error counts and the training completion condition.
Since the two patients are in competition, the score, the error region and the number of errors in the error region of the two patients need to be respectively counted to respectively evaluate the upper limb rehabilitation training conditions of the patient a and the patient B.
(2) When two patients perform collaborative interactive training, namely the two patients mutually assist to complete a training task:
the evaluation module is specifically used for judging, from the real-time position data of the two handles and the motion data of the virtual target, whether the motion track of the virtual target deviates from the preset motion track; if it deviates, an error area is counted, the error area being the motion track area, on the identification screen, of the handle operated by the weaker side when the virtual target deviates; finally, the upper limb rehabilitation training conditions of patient A and patient B are evaluated respectively from the counted error areas, the corresponding error counts and the training completion condition.
Similarly, since the two patients are in a cooperative relationship, the movement tracks of the virtual targets operated by the two patients together need to be determined, and then the patient is determined to have a fault according to whether the movement tracks deviate from the preset movement tracks and the deviation directions, so that a fault area is determined, namely the movement track area of the weak side of the operating handle on the recognition screen, and finally the upper limb rehabilitation training conditions of the patient A and the patient B are respectively evaluated according to the fault area and the fault times.
Whether the training is competitive or cooperative, since the upper limb training part corresponding to each area of the infrared screen is preset in the system, the rehabilitation condition of each patient can be evaluated from the error areas and corresponding error counts of patient A and patient B, providing a reference for doctors. Doctors can thus judge roughly which parts or joints of the two patients' upper limbs are more severely impaired and which less so; after several training cycles, comparing the error-area data recorded before and after indicates roughly which parts or joints have recovered well and which poorly, so that the upper limb rehabilitation exercises can be adjusted accordingly, helping the patient recover quickly.
Specifically, the identification screen in the embodiment of the present invention may be an infrared screen, a resistive screen, a surface acoustic wave screen, a capacitive screen or the like; when handle A and handle B operate on any of these identification screens, the screen can identify the operations of handle A and handle B.
The infrared screen in the embodiment of the present invention is an infrared touch screen capable of supporting multi-touch: an electronic device that detects and locates an object contacting the screen by means of an infrared matrix densely distributed in the X and Y directions. The infrared screen has a circuit-board outer frame in front of the display, with infrared transmitting tubes and infrared receiving tubes arranged around the screen in one-to-one correspondence, forming a crossed horizontal and vertical infrared matrix, as shown in fig. 2. When handle A or handle B touches the screen, it blocks the horizontal and vertical infrared rays passing through that position, from which the coordinate of the touch point on the screen is determined. Any non-transparent object operating on the infrared screen interrupts the infrared rays at the touch point and thereby performs a touch operation; consequently, the infrared screen can also return the contact area of an object touching it.
Therefore, the contact areas of handle A and handle B that touch the infrared screen could be made different, with the operations of the two handles then recognized by area. However, owing to the detection precision of the infrared hardware, the contact area returned by the infrared screen often carries a large error; for example, the returned contact area of handle A tends to fluctuate randomly within a range of 5% to 50%, an error large enough to cause recognition mistakes. Moreover, in rehabilitation training the sizes of handle A and handle B cannot differ too much, otherwise the handles become awkward for rehabilitation patients to grasp.
Considering that the infrared screen's detection error for a single contact area is very large and that the sizes of handle A and handle B cannot differ greatly, in the embodiment of the invention an island-type contact bottom surface is adopted when the screen is an infrared screen, and the two handles are distinguished by the relative positions of their contact surfaces.
Referring specifically to fig. 3 and 4, the bottoms of handle A and handle B have different numbers of island structures. To eliminate interference from non-handle objects such as finger touches, in the embodiment of the invention the contact areas on the island structures that touch the identification screen are all larger than a preset area, i.e., the area over which each island structure contacts the screen exceeds the preset area. The identification module is specifically used for identifying and calculating the motion data of handle A and handle B from the contact areas and the coordinates returned for the contact positions on the infrared screen.
In one embodiment of the present invention, as shown in fig. 3 and 4, the bottom of handle A has 2 island structures, i.e., the bottom of handle A is divided into two island regions A and B, so that the infrared screen detects two contact surfaces simultaneously; the bottom of handle B is a plane structure, though it may also be an island structure.
Further, the recognition module is specifically configured to determine whether the touch area of each contact on the infrared screen is larger than the preset area and, if so, to return the coordinate value of that contact on the infrared screen; the coordinate value of a contact is the coordinate of the centre of its contact area. The recognition module then identifies and calculates the motion data of handle A and handle B in real time from the number of returned coordinates and their values.
The coordinates of a contact position and its contact area can be obtained through the SDK development kit of the infrared screen, the coordinate of a contact being the centre of its contact area. In the embodiment of the invention, the identification module determines whether a touch operation exists on the infrared screen, eliminates interference from non-handle objects such as finger touches by checking whether the contact area at the operating position is larger than the preset area, and returns only those coordinates whose contact area exceeds the preset area.
As shown in fig. 3, denote the area values of regions A and B of handle A as S_1A and S_2A, and the positions of their centres of gravity (i.e., the points whose coordinates are returned) as P_1A(x_1A, y_1A) and P_2A(x_2A, y_2A). A point C_A(x_c, y_c) can then be set as the midpoint of P_1A and P_2A; this is the central position of the handle and is taken as the handle's position point in the embodiment of the invention. As shown in fig. 4, since handle B has only one region, its position coordinate is taken to coincide with the coordinate of the touch point returned by the infrared screen, recorded as C_B(x_c, y_c), with area value S_2. Let the preset area used to eliminate interference from non-handle objects such as finger touches be S_min; then
S_1A > S_min, S_2A > S_min, S_2 > S_min    (1)
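As an illustration of the area filter around formula (1), here is a minimal Python sketch. The `(x, y, area)` tuple format, the `filter_contacts` name and the threshold value are assumptions made for the example, not the actual SDK interface:

```python
S_MIN = 100.0  # assumed value of the preset area S_min, in sensor units


def filter_contacts(contacts, s_min=S_MIN):
    """Keep only contacts whose area exceeds s_min and return their centre
    coordinates; smaller touches (e.g. fingertips) are discarded."""
    return [(x, y) for (x, y, area) in contacts if area > s_min]


# Two handle islands pass the filter; a small fingertip touch is rejected.
raw = [(10.0, 20.0, 400.0), (30.0, 20.0, 380.0), (55.0, 60.0, 12.0)]
print(filter_contacts(raw))  # → [(10.0, 20.0), (30.0, 20.0)]
```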
Further, the identification module is divided into the following three cases:
a, if the number of the coordinates returned at the same moment is the same as the number of the island structures in the handle A, calculating whether the distance between every two coordinates is within a preset distance range, if so, determining that the handle A operates on the infrared screen at the moment, and calculating the central position of the returned coordinates as the coordinate value of the handle A on the infrared screen at the moment;
b, if the number of the coordinates returned at the same moment is the same as the number of the island structures in the handle B, calculating whether the distance between every two coordinate values is within a preset distance range, if so, determining that the handle B operates on the infrared screen at the moment, and calculating the central position of the returned coordinates as the coordinate value of the handle B on the infrared screen at the moment;
and C, if the quantity of the coordinates returned at the same time is the sum of the quantity of the island structures in the handle A and the quantity of the island structures in the handle B, determining that the handle A and the handle B operate on the infrared screen at the same time, and respectively determining the coordinate values of the handle A and the handle B when the handle A and the handle B contact the infrared screen at the time through traversing the distance between every two returned coordinate values.
Taking fig. 3 and fig. 4 as an example, when the handle a is the structure shown in fig. 3 and the handle B is the structure shown in fig. 4, the interference term is eliminated through the formula (1):
(1) if only 1 effective contact area exists on the current identification screen, only 1 coordinate is returned, and the handle which is currently in contact with the identification screen is considered as B;
(2) if there are 2 effective contact areas on the current identification screen, then there are 2 coordinates to return, then it is also necessary to verify whether the distance between the two coordinate positions on the identification screen is within the preset range, and the specific verification formula is as follows:
√((x_1A − x_2A)² + (y_1A − y_2A)²) ≤ d ≤ 2R_1    (2)
where d is the distance threshold between region A and region B in fig. 3 and R_1 is the base radius of handle A; if the 2 returned coordinate values satisfy formula (2), the two contact regions are considered to be the two regions of handle A, and the current position coordinate C_A(x_c, y_c) of handle A can be calculated by formula (3):
x_c = (x_1A + x_2A)/2, y_c = (y_1A + y_2A)/2    (3)
(3) If 3 effective contact areas exist on the current identification screen, 3 coordinates are returned; by traversing the distances between every pair of coordinate values, the two contact areas whose distance lies within the preset range can be determined to be the two regions of handle A, and the centre of those two coordinates is then the position coordinate of handle A on the identification screen according to formula (3); the contact area represented by the remaining coordinate is regarded as handle B, and that coordinate is the position coordinate of the current handle B on the identification screen.
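The three cases above can be sketched in Python. The function names, the input format, and the single distance threshold `d` standing in for the preset range of formula (2) are illustrative assumptions:

```python
import math
from itertools import combinations


def midpoint(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)


def classify_infrared(points, d):
    """Classify area-filtered infrared contacts into handle positions.

    points: centre coordinates that already passed the S_min area filter.
    d:      assumed island-spacing threshold of handle A (design constant).
    Returns a dict with keys 'A' and/or 'B' mapped to screen coordinates.
    """
    result = {}
    if len(points) == 1:                          # case (1): only handle B
        result['B'] = points[0]
    elif len(points) == 2:                        # case (2): candidate handle A
        if math.dist(points[0], points[1]) <= d:  # formula (2) check
            result['A'] = midpoint(points[0], points[1])   # formula (3)
    elif len(points) == 3:                        # case (3): both handles
        for i, j in combinations(range(3), 2):
            if math.dist(points[i], points[j]) <= d:
                result['A'] = midpoint(points[i], points[j])
                result['B'] = points[3 - i - j]   # the remaining contact
                break
    return result
```

For example, `classify_infrared([(0.0, 0.0), (4.0, 0.0), (50.0, 50.0)], 5.0)` pairs the two nearby contacts as handle A at `(2.0, 0.0)` and leaves the far contact as handle B.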
Fig. 3 and 4 show the preferred schemes for handle A and handle B when an infrared screen is adopted, allowing the motion data of the two handles to be identified and calculated quickly. Of course, the present invention is not limited to the structures shown in fig. 3 and 4: handle B may also have several island structures at its bottom, i.e., several island contact bottom surfaces, in which case the verification of formula (2) is also required when determining whether a contact position belongs to handle B; likewise, handle A is not limited to 2 island structures.
In the embodiment of the present invention, the capacitive screen is a capacitive touch screen supporting multi-touch. Its biggest difference from the infrared screen is that it can only detect the coordinates of points and cannot return a contact area, so the handles cannot be distinguished by differences in contact area or contact blocks as described above. In addition, the capacitive screen has relatively strict requirements on the size and spacing of contact points: the contact area of each point can be neither too large nor too small and needs to be kept within 10.2 square millimetres, while the distance between points cannot be less than 45 mm. The infrared-screen method of distinguishing handles by effective contact area is therefore ineffective on a capacitive screen; furthermore, because each contact point must be small, a single contact point cannot support a handle standing on the screen. In the embodiment of the invention, when the identification screen is a capacitive screen, handle A and handle B are therefore distinguished by a point-array scheme: the bases of handle A and handle B are the same size, but the number and distribution of the contact-point structures on their bottoms differ.
As shown in fig. 5 and 6, when the recognition screen is a capacitive screen, the bottoms of the handle a and the handle B have different numbers of contact structures, and the coordinates of the contact points can be obtained through an SDK development kit of the capacitive screen; the recognition module recognizes the handle A and the handle B according to the returned contact coordinates on the capacitive screen, and recognizes and calculates the motion data of the handle A and the handle B on the capacitive screen in real time. Wherein, in order to guarantee that the handle can stand on the screen steadily, the contact point structure in handle A and handle B's bottom is no less than 3 to these contact point structures are at handle A and handle B's bottom evenly distributed respectively.
Specifically, the identification module traverses all contact points on the capacitive screen at the same moment and groups them, using the fact that the distance between any two contact points of the same handle is smaller than the diameter of the handle base; it then determines the contact positions of handle A and handle B on the capacitive screen from the grouping result and calculates the coordinate values of each handle at that moment.
Specifically, in the identification module, when the number of contact points belonging to the same group is the same as the number of contact point structures arranged at the bottom of the handle A, the contact points are identified as the handle A, and the coordinate of the central position of the group of contact points is calculated as the coordinate of the handle A on the capacitive screen; if the number of the contact points belonging to the same group is the same as the number of the contact point structures arranged at the bottom of the handle B, the contact points are identified as the handle B, and the coordinates of the central position of the group of the contact points are calculated as the coordinates of the handle B on the capacitive screen.
Taking fig. 5 and fig. 6 as an example: since a plane is determined by 3 points, which allows a handle to stand stably on the capacitive screen, handle A adopts a structure of 3 contact points and handle B a structure of 4 contact points; the bases of handle A and handle B are the same size, both of radius R. According to fig. 5, the coordinates of the 3 contact-point structures when handle A contacts the capacitive screen are denoted P_1A(x_1A, y_1A), P_2A(x_2A, y_2A) and P_3A(x_3A, y_3A). The centre of gravity C_A(x_cA, y_cA) of the triangle formed by these three contact points is the centre of the handle and is used as the handle's position point in the embodiment of the invention. As shown in fig. 6, the coordinates of the four contact points of handle B are denoted P_1B(x_1B, y_1B), P_2B(x_2B, y_2B), P_3B(x_3B, y_3B) and P_4B(x_4B, y_4B); these four points form a square whose centre of gravity, denoted C_B(x_cB, y_cB), is the central position of handle B. The specific method for distinguishing and positioning handle A and handle B is as follows:
(1) grouping of contacts
Due to the presence of the handle base (with a radius denoted R), all contact points belonging to the same handle are physically less distant d than the base diameter 2R, i.e. the condition of equation (4) is satisfied, so that the contact points can be screened and grouped by this equation:
d = √((x_i − x_j)² + (y_i − y_j)²) < 2R    (4)
when the distance between two contact points satisfies equation (4), the two contact points are considered to belong to the same group. By traversing all returned contact points within the capacitive screen, a fast grouping of contact points may be achieved.
(2) Differentiating handles according to number of contact points within the same group
Taking the handles a and B shown in fig. 5 and 6 as an example, if the number of contact points of the same group is 3, the handle a is considered; if the number of contact points is 4, it is considered as the handle B.
After handle A and handle B have been distinguished, the specific position of each handle must be located, which requires finding the handle's central position, denoted C(x_c, y_c) for handle A or B. From a geometric point of view, whether for handle A or handle B, the central position is exactly the centre of the geometric figure formed by the contact points, so it can be calculated as the centroid, as shown in formula (5):
x_c = (x_1 + x_2 + … + x_n)/n, y_c = (y_1 + y_2 + … + y_n)/n    (5)
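Steps (1) and (2) together with formulas (4) and (5) can be sketched as a short Python example. The greedy grouping strategy and all names are illustrative assumptions; it also relies on the two handle bases being physically separated by more than their diameter, as they are in practice:

```python
import math


def group_contacts(points, R):
    """Greedy grouping per formula (4): two points closer than the base
    diameter 2R are assumed to belong to the same handle."""
    groups = []
    for p in points:
        for g in groups:
            if any(math.dist(p, q) < 2 * R for q in g):
                g.append(p)
                break
        else:
            groups.append([p])
    return groups


def locate_handles(points, R):
    """Identify handles by group size (3 points -> A, 4 points -> B) and
    position each handle at the centroid of its group, per formula (5)."""
    handles = {}
    for g in group_contacts(points, R):
        cx = sum(x for x, _ in g) / len(g)
        cy = sum(y for _, y in g) / len(g)
        if len(g) == 3:
            handles['A'] = (cx, cy)
        elif len(g) == 4:
            handles['B'] = (cx, cy)
    return handles
```

With a triangle of three points near the origin and a square of four points far away, `locate_handles` returns the triangle centroid as handle A and the square centroid as handle B.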
similarly, fig. 5 and 6 are the optimal schemes of the handle a and the handle B when the capacitive screen is adopted in the present invention, and the motion data of the handle a and the handle B can be quickly identified and calculated. Of course, the present invention is not limited to the structures of the handles a and B shown in fig. 5 and 6.
The motion tracks of handle A and handle B can be formed by connecting, in time order, the real-time position coordinates recorded for each handle on the identification screen. Specifically, consecutive points may be joined by straight-line interpolation.
The handle speeds of handle A and handle B represent only how fast each handle moves, not the direction of motion; the calculation is shown in formula (6):
v_i = √((x_i − x_{i−1})² + (y_i − y_{i−1})²) / Δt    (6)
in the equation, i represents the acquired serial number, and from the beginning of the program, the 1 st contact point i is equal to 1, and so on.
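A minimal sketch of formula (6), assuming a handle track sampled at a fixed interval `dt` and supplied as a list of `(x, y)` tuples (names chosen for illustration):

```python
import math


def speeds(track, dt):
    """Per-sample speed per formula (6): straight-line distance between
    consecutive sampled positions divided by the sampling interval dt.
    Returns magnitudes only; direction is not represented."""
    return [math.dist(track[i - 1], track[i]) / dt
            for i in range(1, len(track))]


# A 3-4-5 step in 0.5 s gives speed 10.0; a repeated position gives 0.0.
print(speeds([(0.0, 0.0), (3.0, 4.0), (3.0, 4.0)], 0.5))  # → [10.0, 0.0]
```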
In the virtual reality interaction module, interaction can be divided into two situations: multiple single-point contacts of the handle, and continuous contact of the handle on the identification screen.
(1) The handle is contacted with the single point for multiple times;
for example, the handle is used for playing a land mouse, a ball and the like, namely, the handle is contacted with a single point for many times. In the embodiment of the invention, a bounding box detection mode is adopted to realize the method for operating between the handle and the virtual object. The specific idea is as follows:
as shown in FIG. 7, assuming that a game virtual object can be surrounded by a polygon (bounding box), it is now determined whether point p (x, y) is within the polygon, so long as a horizontal line is drawn horizontally along point p, if the intersection points of both ends of point p and the edge line on the polygon are odd, it indicates that p is within the polygon, otherwise p is outside the polygon.
Let the vertex set of the polygon (with n edges) be P_i = {P_1, P_2, …, P_n}; whether the ray from the test point intersects an edge of the polygon is judged by the mathematical model shown in formula (7):
(y_i > y) ≠ (y_{i+1} > y)  and  x < x_i + (y − y_i)(x_{i+1} − x_i)/(y_{i+1} − y_i)    (7)
as long as the position point of the handle is in the enclosure, the handle is considered to collide with the game object (such as hitting a ball or a mouse, etc.), and the interaction with the game object is completed.
(2) Continuous contact of handle on recognition screen
For example, games such as rowing and fishing require the handle to move a certain distance after touching the recognition screen, so the interaction method must calculate the duration of continuous effective contact and from it the speed. As shown in fig. 8, in a rowing game, when the handle strokes within the rowing area near the boat, the stroking distance and speed of the handle must be calculated in order to determine whether the boat should move forward and whether it should yaw.
Because the game checks for screen contact in a periodic cycle, if the handle is stroking across the screen, corresponding contact-point information must be returned within each acquisition interval Δt, with the contact point lying in the corresponding rowing area. Let the continuously collected point set be SP_i = {SP_1, SP_2, …, SP_n}; the stroking distance can then be calculated by formula (8):
D = Σ_{i=2}^{n} √((x_i − x_{i−1})² + (y_i − y_{i−1})²)    (8)
the stroke speed can be calculated by the following formula (9):
v = D / ((n − 1)Δt)    (9)
The running track of the boat is then controlled according to the stroking distance D and the stroking speed v.
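Formulas (8) and (9) can be sketched together in Python, assuming the stroked contact points arrive as a list of `(x, y)` samples taken every `dt` seconds (the averaging form of (9) is an assumption consistent with the surrounding text):

```python
import math


def stroke_metrics(points, dt):
    """Formulas (8)-(9): total stroked distance D over consecutively sampled
    contact points, and the mean stroking speed v over the elapsed time."""
    D = sum(math.dist(points[i - 1], points[i])
            for i in range(1, len(points)))
    v = D / ((len(points) - 1) * dt) if len(points) > 1 else 0.0
    return D, v


# Two 3-4-5 strokes sampled 0.1 s apart: D = 10.0 over 0.2 s, so v = 50.0.
print(stroke_metrics([(0.0, 0.0), (3.0, 4.0), (6.0, 8.0)], 0.1))
```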
In the embodiment of the invention, the evaluation module can determine the upper limb rehabilitation condition of the patient according to the movement tracks of the handle A and the handle B on the recognition screen, so as to adjust the next training.
Referring to fig. 9, the embodiment of the invention further provides a training method of the double-person interactive upper limb rehabilitation system based on the recognition screen, which comprises the following steps:
step 1, selecting a training environment and a training mode on an identification screen according to an upper limb movement function evaluation result of a patient;
step 2, generating a virtual training scene and a virtual target on the recognition screen according to the selected training environment and the selected training mode;
step 3, patient A and patient B operate the virtual target generated on the recognition screen according to the training requirements, the recognition screen identifies and calculates the motion data of handle A and handle B in real time from the coordinate values and the number of coordinates returned when each handle contacts the screen, and the virtual target, the virtual scene and the two handles form interaction according to this motion data, so that the two patients complete double-person interactive upper limb rehabilitation training;
and 4, the recognition screen evaluates the upper limb rehabilitation conditions of the patient A and the patient B respectively according to the motion data of the handle A and the handle B on the recognition screen.
In the training method of the double-person interactive upper limb rehabilitation system based on the identification screen, the system itself is the execution object of the steps. Specifically, step 1 is executed by the training mode selection module, step 2 by the training mode generation module, step 3 by the virtual reality interaction module and the recognition module, and step 4 by the evaluation module.
Similarly, the identification screen in the embodiments of the present invention may be an infrared screen, a resistive screen, a surface acoustic wave screen, a capacitive screen or the like, and each of these screen types can identify the operations performed on it by handle A and handle B.
When the identification screen is an infrared screen, the bottoms of the handle A and the handle B are provided with island structures with different numbers, and the areas of contact areas, which are used for being in contact with the identification screen, on the island structures are larger than a preset area; in step 3, the step of identifying and calculating the motion data of the handle a and the handle B on the identification screen in real time by the identification screen according to the coordinate values and the coordinate quantities returned when the handle a and the handle B are respectively contacted with the identification screen comprises:
step 301a, determining whether the touch area of the contact position on the infrared screen is larger than the preset area, if so, returning the coordinate values of the contact position on the infrared screen respectively; the coordinate value of the contact position on the infrared screen is the coordinate value of the contact area center on the infrared screen;
and step 302a, identifying and calculating the motion data of the handle A and the handle B in real time according to the returned coordinate quantity and the coordinate value of the contact position on the infrared screen.
Steps 301a and 302a are executed by the identification module. The identification of handle A and handle B and the determination of their motion data on the infrared screen have been described in detail above and are not repeated here.
When the identification screen is a capacitive screen, the bottoms of the handle A and the handle B are provided with contact structures with different numbers; in step 3, the step of identifying and calculating the motion data of the handle A and the handle B on the identification screen in real time by the identification screen according to the coordinate values and the coordinate quantity returned when the handle A and the handle B are respectively contacted with the identification screen comprises the following steps:
step 301b, traversing all contact points on the capacitive screen at the same moment to group the contact points according to whether the distance between all the contact points of the same handle is smaller than the radius of the base of the handle;
step 302B, determining the contact positions of the handle A and the handle B on the capacitive screen according to the grouping result; when the number of contact points belonging to the same group is the same as the number of contact point structures arranged at the bottom of the handle A, the contact points are identified as the handle A, and the coordinates of the central position of the group of contact points are calculated as the coordinates of the handle A on the capacitive screen; if the number of the contact points belonging to the same group is the same as the number of the contact point structures arranged at the bottom of the handle B, the contact points are identified as the handle B, and the coordinates of the central position of the group of the contact points are calculated as the coordinates of the handle B on the capacitive screen.
Similarly, steps 301b and 302b are executed by the identification module. The capacitive screen's identification of handles A and B and the determination of their motion data have also been described in detail above and are not repeated here.
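A comparable sketch of the capacitive-screen path (steps 301b/302b): contact points closer together than the handle-base radius are merged into one group, and a group is attributed to a handle when its size matches that handle's contact-structure count. The contact counts (2 for handle A, 3 for handle B) and the base radius are assumed for illustration only.

```python
# Hypothetical sketch of capacitive-screen grouping and identification
# (steps 301b/302b). CONTACTS_A / CONTACTS_B and BASE_RADIUS are assumptions.

CONTACTS_A = 2        # contact structures on the bottom of handle A
CONTACTS_B = 3        # contact structures on the bottom of handle B
BASE_RADIUS = 40.0    # radius of the handle base (mm)

def _dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def _centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def group_contacts(points):
    """Merge points into groups; two points closer than BASE_RADIUS are
    assumed to come from the same handle base."""
    groups = []
    for p in points:
        # find every existing group this point is within BASE_RADIUS of
        hits = [g for g in groups
                if any(_dist(p, q) < BASE_RADIUS for q in g)]
        merged = [p]
        for g in hits:          # merge all groups the new point links together
            merged.extend(g)
            groups.remove(g)
        groups.append(merged)
    return groups

def identify_capacitive(points):
    """Attribute each group to handle A or B by its contact-point count."""
    result = {}
    for g in group_contacts(points):
        if len(g) == CONTACTS_A:
            result['A'] = _centroid(g)
        elif len(g) == CONTACTS_B:
            result['B'] = _centroid(g)
    return result
```

Because the two handles carry different contact counts, group size alone disambiguates them even when both rest on the screen at the same instant.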
Similarly, the above interaction process has been described in detail in the preceding section and is not elaborated further here.
In summary, the invention exploits the different data returned when handles A and B, which have different bottom structures, contact the same identification screen, so that two patients, each holding one handle, can complete two-person interactive upper limb rehabilitation training on a single identification screen. This improves patients' motivation for rehabilitation training, enables immediate evaluation and feedback during upper limb rehabilitation training, allows automatic post-training analysis of the causes of upper limb movement dysfunction, and significantly improves the rehabilitation training effect for the upper limbs.
The above description covers only preferred embodiments of the present invention and is not intended to limit the invention; any modifications, equivalent substitutions, improvements and the like made within the spirit and principles of the present invention are intended to fall within its scope of protection.

Claims (10)

1. A double-person interactive upper limb rehabilitation system based on a recognition screen, characterized by comprising the recognition screen and an operating mechanism, wherein a training mode selection module, a training mode generation module, a recognition module, a virtual reality interaction module and an evaluation module are arranged in the recognition screen, wherein:
The training mode selection module is used for selecting a training mode on the recognition screen according to the upper limb movement function evaluation result of the patient;
the training mode generating module is used for generating a virtual training scene and a virtual target on the identification screen according to the selected training mode;
the operating mechanism comprises a handle A and a handle B with different bottom structures, is used for enabling the patient A to hold the handle A and the patient B to hold the handle B, and operates the virtual target generated on the recognition screen according to the training requirement to finish double interactive training;
the identification module is used for identifying and distinguishing the handle A and the handle B in real time according to coordinate values and coordinate quantities returned when the handle A and the handle B with different bottom structures are respectively contacted with the identification screen, and calculating motion data of the handle A and the handle B on the identification screen at the same time, wherein the motion data comprises one or more of real-time position information, motion speed and motion trail;
the virtual reality interaction module is used for acquiring motion data of a handle A and a handle B which are respectively operated by two persons on the recognition screen in real time, and enabling interaction among the virtual target, the virtual scene and the two handles to be formed according to the motion data of the handle A and the handle B on the recognition screen, so that two patients can finish double-person interaction upper limb rehabilitation training;
the evaluation module is used for respectively determining error areas and corresponding error times of the two patients according to the motion data of the handle A and the handle B and the motion trail of the virtual target; the upper limb rehabilitation training system is also used for respectively evaluating the upper limb rehabilitation training conditions of the patient A and the patient B according to the counted error area, the corresponding error times and the training completion condition; wherein the error area is an area where the patient does not operate in place on the identification screen.
2. The double-person interactive upper limb rehabilitation system based on the identification screen according to claim 1, wherein when the identification screen is an infrared screen, the bottoms of handle A and handle B are provided with different numbers of island structures, and the contact area of each island structure for contacting the identification screen is larger than a preset area;
the recognition module is specifically used for determining whether the touch area returned by the contact position on the infrared screen is larger than the preset area, and if so, returning the coordinate value of the contact position on the infrared screen respectively; the coordinate value of the contact position on the infrared screen is the coordinate value of the contact area center on the infrared screen; the recognition module is also used for counting the number of the returned coordinates in real time, determining the coordinate values of the handle A and/or the handle B on the infrared screen according to the counted number of the coordinates and the corresponding coordinate values, and respectively calculating the motion data of the handle A and the handle B in real time according to the coordinate values of the handle A and the handle B on the capacitive screen.
3. The double interactive upper limb rehabilitation system based on identification screen according to claim 2, wherein in the identification module,
if the number of the coordinates returned at the same moment is the same as the number of the island structures in the handle A, calculating whether the distance between every two coordinates is within a preset distance range, if so, determining that the handle A operates on the infrared screen at the moment, and calculating the central position of the returned coordinates as the coordinate value of the handle A on the infrared screen at the moment;
if the number of the coordinates returned at the same moment is the same as the number of the island structures in the handle B, calculating whether the distance between every two coordinate values is within a preset distance range, if so, determining that the handle B operates on the infrared screen at the moment, and calculating the central position of the returned coordinates as the coordinate value of the handle B on the infrared screen at the moment;
and if the quantity of the coordinates returned at the same time is the sum of the quantity of the island structures in the handle A and the quantity of the island structures in the handle B, determining that the handle A and the handle B operate on the infrared screen at the same time, and respectively determining the coordinate values of the handle A and the handle B when the handle A and the handle B contact the infrared screen at the time through traversing the distance between every two returned coordinate values.
4. The double interactive upper limb rehabilitation system based on identification screen of claim 1, wherein when the identification screen is a capacitive screen, the bottoms of the handle a and the handle B have different numbers of contact structures, wherein the distance between all the contact structures of the same handle is smaller than the radius of the handle base;
the identification module traverses all contact points on the capacitive screen at the same moment to group the contact points according to the returned contact coordinates on the capacitive screen and the principle that the distance between all the contact points of the same handle on the capacitive screen is smaller than the radius of the base of the handle; and then calculating coordinate values of the handle A and the handle B on the capacitive screen at the moment according to the grouping result, and respectively calculating motion data of the handle A and the handle B according to the coordinate values of the handle A and the handle B on the capacitive screen.
5. The double interactive recognition screen-based upper limb rehabilitation system according to claim 4, wherein in the recognition module,
when the number of the contact points belonging to the same group is the same as the number of the contact point structures arranged at the bottom of the handle A, identifying the contact points as the handle A, and calculating the coordinates of the central position of the group of the contact points as the coordinates of the handle A on the capacitive screen;
when the number of the contact points belonging to the same group is the same as the number of the contact point structures arranged at the bottom of the handle B, the contact points are identified as the handle B, and the coordinates of the central position of the group of the contact points are calculated as the coordinates of the handle B on the capacitive screen.
6. The double-person interactive upper limb rehabilitation system based on the identification screen as claimed in claim 1, wherein when two patients compete for interactive training, the evaluation module is specifically configured to respectively judge whether the two patients contact the virtual target according to the real-time position data of the two handles and the motion data of the virtual target; if the two patients are touched, scoring is carried out, and if the two patients are not touched, scoring is carried out or scoring is not carried out, and the scoring conditions of the two patients, the error areas of the two patients on the identification screen and the error times in the corresponding error areas are counted respectively; and finally, respectively evaluating the upper limb rehabilitation training conditions of the patient A and the patient B according to the counted error area, the corresponding error times and the training completion condition.
7. The double-person interactive upper limb rehabilitation system based on the identification screen as claimed in claim 1, wherein when two patients perform cooperative interactive training, the evaluation module is specifically configured to respectively judge whether the motion trajectory of the virtual target deviates from a preset motion trajectory according to the real-time position data of the two handles and the motion data of the virtual target; if the virtual target deviates, counting a fault area, wherein the fault area is a motion track area of the weak part of the operating handle on the identification screen when the virtual target deviates; and finally, respectively evaluating the upper limb rehabilitation training conditions of the patient A and the patient B according to the counted error area, the corresponding error times and the training completion condition.
8. A training method of a double-person interactive upper limb rehabilitation system based on an identification screen is characterized by comprising the following steps:
step 1, selecting a training mode on an identification screen according to an upper limb movement function evaluation result of a patient;
step 2, generating a virtual training scene and a virtual target on the recognition screen according to the selected training mode;
step 3, a patient A and a patient B respectively hold a handle A and a handle B with different bottom structures, a virtual target generated on an identification screen is operated according to training requirements, the identification screen identifies and distinguishes the handle A and the handle B in real time according to coordinate values and coordinate quantities returned when the handle A and the handle B are respectively contacted with the identification screen, motion data of the two handles on the identification screen is calculated, interaction is formed among the virtual target, a virtual scene and the two handles according to the motion data of the handle A and the handle B on the identification screen, and the two patients can finish double-person interactive upper limb rehabilitation training;
step 4, the identification screen respectively determines error areas and corresponding error times of two patients according to the motion data of the handle A and the handle B on the identification screen and the motion track of the virtual target; the upper limb rehabilitation training system is also used for respectively evaluating the upper limb rehabilitation training conditions of the patient A and the patient B according to the counted error area, the corresponding error times and the training completion condition; wherein the error area is an area where the patient does not operate in place on the identification screen.
9. The training method of the double-person interactive upper limb rehabilitation system based on the identification screen as claimed in claim 8, wherein when the identification screen is an infrared screen, the bottoms of the handle A and the handle B are provided with different numbers of island structures, and the contact area areas of the island structures for contacting with the identification screen are larger than a preset area; in step 3, the step of identifying and distinguishing the handle A and the handle B in real time by the identification screen according to the coordinate values and the coordinate quantity returned when the handle A and the handle B are respectively contacted with the identification screen, and calculating the motion data of the two handles on the identification screen comprises the following steps:
determining whether the touch area of the contact position on the infrared screen is larger than the preset area, if so, respectively returning the coordinate values of the contact position on the infrared screen; the coordinate value of the contact position on the infrared screen is the coordinate value of the contact area center on the infrared screen;
and counting the number of returned coordinates in real time, determining the coordinate values of handle A and/or handle B on the infrared screen according to the counted number of coordinates and the corresponding coordinate values, and calculating the motion data of handle A and handle B in real time according to their coordinate values on the infrared screen.
10. The training method of the double interactive upper limb rehabilitation system based on the identification screen as claimed in claim 8, wherein when the identification screen is a capacitive screen, the bottoms of the handle A and the handle B have different numbers of contact structures; in step 3, the step of identifying and distinguishing the handle A and the handle B in real time by the identification screen according to the coordinate values and the coordinate quantity returned when the handle A and the handle B are respectively contacted with the identification screen, and calculating the motion data of the two handles on the identification screen comprises the following steps:
traversing all contact points on the capacitive screen at the same moment to group the contact points according to the returned contact coordinates on the capacitive screen and the principle that the distance between all the contact points of the same handle on the capacitive screen is smaller than the radius of the base of the handle;
calculating coordinate values of the handle A and the handle B on the capacitive screen at the moment according to the grouping result, and respectively calculating motion data of the handle A and the handle B according to the coordinate values of the handle A and the handle B on the capacitive screen; when the number of contact points belonging to the same group is the same as the number of contact point structures arranged at the bottom of the handle A, the contact points are identified as the handle A, and the coordinates of the central position of the group of contact points are calculated as the coordinates of the handle A on the capacitive screen; if the number of the contact points belonging to the same group is the same as the number of the contact point structures arranged at the bottom of the handle B, the contact points are identified as the handle B, and the coordinates of the central position of the group of the contact points are calculated as the coordinates of the handle B on the capacitive screen.
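The evaluation logic of claims 6 and 7, per-frame hit detection against the virtual target with misses tallied per screen region as "error areas", might be sketched as below. The target radius, the grid used to define error regions, and the one-point-per-hit scoring are all illustrative assumptions not specified by the patent.

```python
# Hypothetical per-frame evaluation sketch (competitive mode, claim 6).
# TARGET_RADIUS, CELL, and the one-point-per-hit scoring are assumptions.

TARGET_RADIUS = 25.0   # distance at which the handle "contacts" the target (mm)
CELL = 100.0           # side length of an error-region grid cell (mm)

def evaluate_frame(handle_pos, target_pos, stats):
    """Score one frame for one handle and tally misses by screen region.

    stats is a dict like {'score': 0, 'errors': {}} updated in place;
    the error region of a miss is the grid cell containing the target.
    """
    dx = handle_pos[0] - target_pos[0]
    dy = handle_pos[1] - target_pos[1]
    if (dx * dx + dy * dy) ** 0.5 <= TARGET_RADIUS:
        stats['score'] += 1
        return True
    cell = (int(target_pos[0] // CELL), int(target_pos[1] // CELL))
    stats['errors'][cell] = stats['errors'].get(cell, 0) + 1
    return False
```

After a session, the accumulated `errors` map gives, per region of the screen, how often the patient failed to operate in place, which is the per-region error count the evaluation module reports.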
CN202010955057.2A 2020-09-11 2020-09-11 Double-person interactive upper limb rehabilitation system based on recognition screen and training method thereof Pending CN112076440A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010955057.2A CN112076440A (en) 2020-09-11 2020-09-11 Double-person interactive upper limb rehabilitation system based on recognition screen and training method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010955057.2A CN112076440A (en) 2020-09-11 2020-09-11 Double-person interactive upper limb rehabilitation system based on recognition screen and training method thereof

Publications (1)

Publication Number Publication Date
CN112076440A true CN112076440A (en) 2020-12-15

Family

ID=73737617

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010955057.2A Pending CN112076440A (en) 2020-09-11 2020-09-11 Double-person interactive upper limb rehabilitation system based on recognition screen and training method thereof

Country Status (1)

Country Link
CN (1) CN112076440A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113576409A (en) * 2021-07-19 2021-11-02 河南翔宇医疗设备股份有限公司 Training evaluation system and training evaluation method
CN114816130A (en) * 2022-06-29 2022-07-29 长沙朗源电子科技有限公司 Writing recognition method and system of electronic whiteboard, storage medium and electronic whiteboard
CN115129164A (en) * 2022-08-31 2022-09-30 江西科技学院 Interaction control method and system based on virtual reality and virtual reality equipment

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120013555A1 (en) * 2010-07-15 2012-01-19 Panasonic Corporation Touch screen system
CN102521521A (en) * 2011-12-23 2012-06-27 北京瑞信在线***技术有限公司 Method and device for identifying object
CN103853450A (en) * 2012-12-06 2014-06-11 柯尼卡美能达株式会社 Object operation apparatus and object operation control method
US20140267105A1 (en) * 2013-03-12 2014-09-18 Sharp Kabushiki Kaisha Drawing device, display method, and recording medium
CN206863719U (en) * 2017-05-08 2018-01-09 北京中嘉空间展示设计有限公司 Capacitance plate identifies interaction desk
CN109364436A (en) * 2018-10-10 2019-02-22 广州晓康医疗科技有限公司 A kind of two-person standing formula rehabilitation training of upper limbs system and its application method
CN109407960A (en) * 2018-11-05 2019-03-01 北京中海创达科技有限公司 A kind of Intellisense desktop assembly, cognitive method and sensory perceptual system
CN109589557A (en) * 2018-11-29 2019-04-09 广州晓康医疗科技有限公司 Based on reality environment tandem race rehabilitation training of upper limbs system and appraisal procedure
CN109589556A (en) * 2018-11-29 2019-04-09 广州晓康医疗科技有限公司 Based on the double collaboration rehabilitation training of upper limbs system of reality environment and appraisal procedure
CN210353736U (en) * 2019-07-02 2020-04-21 摩拓为(北京)科技有限公司 Intelligent interactive object recognition table and system


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113576409A (en) * 2021-07-19 2021-11-02 河南翔宇医疗设备股份有限公司 Training evaluation system and training evaluation method
CN113576409B (en) * 2021-07-19 2023-09-05 河南翔宇医疗设备股份有限公司 Training evaluation system and training evaluation method
CN114816130A (en) * 2022-06-29 2022-07-29 长沙朗源电子科技有限公司 Writing recognition method and system of electronic whiteboard, storage medium and electronic whiteboard
CN114816130B (en) * 2022-06-29 2022-09-20 长沙朗源电子科技有限公司 Writing recognition method and system of electronic whiteboard, storage medium and electronic whiteboard
CN115129164A (en) * 2022-08-31 2022-09-30 江西科技学院 Interaction control method and system based on virtual reality and virtual reality equipment

Similar Documents

Publication Publication Date Title
CN112076440A (en) Double-person interactive upper limb rehabilitation system based on recognition screen and training method thereof
Chen et al. Computer-assisted yoga training system
CN109350923B (en) Upper limb rehabilitation training system based on VR and multi-position sensors
Hughes et al. Notational analysis of sport: Systems for better coaching and performance in sport
WO2018070414A1 (en) Motion recognition device, motion recognition program, and motion recognition method
CN109589556B (en) Double-person cooperative upper limb rehabilitation training system based on virtual reality environment and evaluation method
US7404774B1 (en) Rule based body mechanics calculation
CN110428486B (en) Virtual interaction fitness method, electronic equipment and storage medium
WO2011009302A1 (en) Method for identifying actions of human body based on multiple trace points
CN101991949B (en) Computer based control method and system of motion of virtual table tennis
WO2014199387A1 (en) Personal digital trainer for physiotheraputic and rehabilitative video games
Charles et al. A participatory design framework for the gamification of rehabilitation systems
WO2014199385A1 (en) Rehabilitative posture and gesture recognition
JPWO2019130527A1 (en) Extraction program, extraction method and information processing equipment
CN109589557B (en) Upper limb rehabilitation training system and evaluation method based on virtual reality environment double competition
Pereira et al. Interpersonal coordination analysis of tennis players from different levels during official matches
Hegazy et al. Online detection and classification of in-corrected played strokes in table tennis using IR depth camera
CN113409651A (en) Live broadcast fitness method and system, electronic equipment and storage medium
Purwantiningsih et al. Visual analysis of body movement in serious games for healthcare
CN108579080A (en) The interaction realization method and system of entity racket and virtual ball under mixed reality environment
CN108269265A (en) Billiard ball batter's box assay method and its device based on deep learning
Guochen et al. Video analysis method of basketball training assistant based on deep learning theory during COVID-19 spread
JP6694333B2 (en) Rehabilitation support control device and computer program
Liu et al. An action recognition technology for badminton players using deep learning
Xipeng et al. Research on badminton teaching technology based on human pose estimation algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201215