CN110738192A - Human motion function auxiliary evaluation method, device, equipment, system and medium - Google Patents


Info

Publication number
CN110738192A
Authority
CN
China
Prior art keywords
motion
video
curve
evaluated
coordinates
Prior art date
Legal status
Pending
Application number
CN201911040330.2A
Other languages
Chinese (zh)
Inventor
张林
霍志敏
吴建宝
田野
范伟
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201911040330.2A
Publication of CN110738192A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/67 Focus control based on electronic image sensor signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a human body motion function auxiliary evaluation method, which includes: obtaining a motion video to be evaluated, where the motion video is obtained by shooting an object to be evaluated performing a specified action; detecting human body key points in each frame of image in the motion video to obtain human body joint coordinates; selecting the key point coordinates corresponding to the specified action from the human body joint coordinates in each frame of image; determining the motion feature of each frame of image according to the key point coordinates in each frame of image in the motion video; and generating a motion curve according to the motion features of the frames of images in the motion video, where the motion curve is used for human body motion function auxiliary evaluation processing.

Description

Human motion function auxiliary evaluation method, device, equipment, system and medium
Technical Field
The application relates to the technical field of video processing, in particular to a human motion function auxiliary evaluation method, device, equipment, system and storage medium.
Background
Parkinson's disease is a chronic degenerative disorder of the central nervous system that impairs patients' motor skills, speech and other abilities. Its symptoms include involuntary shaking of unilateral limbs, a hand motion resembling pill-rolling, muscle pain or body rigidity, dull facial expression, difficulty initiating movement, slowed speech, and so on. Most Parkinson patients are elderly; in recent years, however, the disease has trended younger, and Parkinson patients under 40 years old are no longer rare in clinical practice. Because Parkinson's disease still cannot be completely cured, and death is commonly caused by its complications, current research pays more attention to how to better monitor patients' physiological data, analyze those data, and provide the analysis results to patients, patients' families and medical staff.
At present, medical staff monitor Parkinson patients mainly by guiding them to perform specific actions according to the Movement Disorder Society Unified Parkinson's Disease Rating Scale (MDS-UPDRS), visually observing the patients' limb movements and giving motor function scores based on the observations. This process takes a doctor about 30 minutes, and the evaluation results are subjective, mainly because: 1. detailed movement patterns are difficult to capture with the naked eye; 2. different doctors have different experience and evaluation standards, so evaluations of the same patient vary.
Moreover, condition management requires Parkinson patients to return to the hospital for re-examination every few months so that their condition can be evaluated. Patients with limited mobility can hardly visit the hospital on time at such intervals, and doing so causes them great hardship; many patients give up hospital examinations because of the inconvenience, so their motor function cannot be monitored effectively and in time. Untimely monitoring and inaccurate monitoring results limit condition management and can delay treatment.
Certainly, besides Parkinson's disease, many nervous system diseases present similar problems. Therefore, the market urgently needs an intelligent human motor function assessment scheme that further improves monitoring efficiency and accuracy while simplifying user operation, so that users can conveniently monitor human motor function anytime and anywhere.
Disclosure of Invention
The embodiments of the application provide a human motion function auxiliary assessment method, device, equipment, system and storage medium, which can intelligently monitor the motor function condition of a patient and provide patients and medical personnel with reference data for auxiliary assessment of physical condition.
In view of the above, a first aspect of the application provides a human motor function auxiliary assessment method, which includes:
acquiring a motion video to be evaluated, wherein the motion video is obtained by shooting an object to be evaluated performing a specified action;
detecting human body key points in each frame of image in the motion video to obtain human body joint coordinates, and selecting the key point coordinates corresponding to the specified action from the human body joint coordinates in each frame of image;
determining the motion feature of each frame of image according to the key point coordinates in each frame of image in the motion video;
generating a motion curve according to the motion features of the frames of images in the motion video;
extracting curve features from the motion curve to obtain motion curve features;
and determining, through a preset evaluation algorithm, a motion function evaluation result corresponding to the object to be evaluated based on the motion curve features.
A second aspect of the application provides a human motion function auxiliary evaluation method, which includes:
acquiring a motion video to be evaluated, wherein the motion video is obtained by shooting an object to be evaluated performing a specified action;
detecting human body key points in each frame of image in the motion video to obtain human body joint coordinates, and selecting the key point coordinates corresponding to the specified action from the human body joint coordinates in each frame of image;
determining the motion feature of each frame of image according to the key point coordinates in each frame of image in the motion video;
and generating a motion curve according to the motion features of the frames of images in the motion video, wherein the motion curve is used for human motion function auxiliary evaluation processing.
A third aspect of the present application provides a human motion function auxiliary evaluation device, which includes:
a video shooting unit, configured to acquire a motion video to be evaluated, wherein the motion video is obtained by shooting an object to be evaluated performing a specified action;
a key point detection unit, configured to detect human body key points in each frame of image in the motion video to obtain human body joint coordinates, and to select the key point coordinates corresponding to the specified action from the human body joint coordinates in each frame of image;
a motion feature determining unit, configured to determine the motion feature of each frame of image according to the key point coordinates in each frame of image in the motion video;
a motion curve generating unit, configured to generate a motion curve according to the motion features of the frames of images in the motion video;
a curve feature extraction unit, configured to extract curve features from the motion curve to obtain motion curve features;
and a motion function evaluation unit, configured to determine, through a preset evaluation algorithm, a motion function evaluation result corresponding to the object to be evaluated based on the motion curve features.
A fourth aspect of the present application provides a human motion function auxiliary evaluation device, which includes:
a video shooting unit, configured to acquire a motion video to be evaluated, wherein the motion video is obtained by shooting an object to be evaluated performing a specified action;
a key point detection unit, configured to detect human body key points in each frame of image in the motion video to obtain human body joint coordinates, and to select the key point coordinates corresponding to the specified action from the human body joint coordinates in each frame of image;
a motion feature determining unit, configured to determine the motion feature of each frame of image according to the key point coordinates in each frame of image in the motion video;
and a motion curve generating unit, configured to generate a motion curve according to the motion features of the frames of images in the motion video.
the fifth aspect of the present application provides human motion function auxiliary evaluation systems, the systems including:
video capture device, auxiliary evaluation device and video analysis device,
the video shooting device is used for shooting an object to be evaluated to execute a specified action to obtain an action video to be evaluated;
the th auxiliary evaluation device is used for sending the action video to be evaluated to the video analysis device;
the video analysis equipment is used for detecting key points of a human body in each frame image in the motion video to obtain coordinates of the human body joint, selecting the coordinates of the key points corresponding to the designated action from the coordinates of the human body joint in each frame image, determining the motion characteristics of each frame image according to the coordinates of the key points in each frame image in the motion video, generating a motion curve according to the motion characteristics of each frame image in the motion video, extracting curve characteristics according to the motion curve to obtain the characteristics of the motion curve, determining the motion function evaluation result corresponding to the object to be evaluated according to the motion curve characteristics by a preset evaluation algorithm, and sending the motion curve and the motion function evaluation result corresponding to the object to be evaluated to the auxiliary evaluation equipment;
the th auxiliary evaluation device is further configured to display the motion curve and a motion function evaluation result corresponding to the object to be evaluated on an auxiliary evaluation interface.
A sixth aspect of the present application provides a human motion function auxiliary evaluation system, which includes:
a second video shooting device and a second video analysis device; wherein,
the second video shooting device is configured to shoot an object to be evaluated performing a specified action to obtain a motion video to be evaluated, and to send the motion video to be evaluated to the second video analysis device;
the second video analysis device is configured to detect human body key points in each frame of image in the motion video to obtain human body joint coordinates, select the key point coordinates corresponding to the specified action from the human body joint coordinates in each frame of image, determine the motion feature of each frame of image according to the key point coordinates in each frame of image in the motion video, generate a motion curve according to the motion features of the frames of images in the motion video, extract curve features from the motion curve to obtain motion curve features, determine, through a preset evaluation algorithm, a motion function evaluation result corresponding to the object to be evaluated based on the motion curve features, and send the motion curve and the motion function evaluation result corresponding to the object to be evaluated to the second video shooting device;
the second video shooting device is further configured to display the motion curve and the motion function evaluation result corresponding to the object to be evaluated.
A seventh aspect of the present application provides a human motion function auxiliary evaluation device, which includes a processor and a memory:
the memory is configured to store program code and transmit the program code to the processor;
the processor is configured to execute the steps of the human body motion function auxiliary evaluation method described above according to the instructions in the program code.
An eighth aspect of the present application provides a computer-readable storage medium configured to store program code for executing the steps of the above human body motion function auxiliary evaluation method.
According to the technical scheme, the embodiment of the application has the following advantages:
the embodiment of the application provides auxiliary assessment methods for human body movement functions, wherein in the method, a movement video of an object to be assessed is shot, then the movement video is analyzed to obtain movement characteristics of limbs of the object to be assessed in the video, a movement curve is generated, and the movement function condition of the object to be assessed is assessed based on the movement curve, so that the intelligent assessment of the movement function condition of the object to be assessed is realized, and reference data for assisting in assessing the body condition is provided for patients and medical care personnel.
Meanwhile, the auxiliary evaluation method for the human body motion function can analyze the motion condition of the object to be evaluated and automatically give an evaluation result for the object to be evaluated to refer to so that the object to be evaluated can evaluate the physical condition at home without going to a hospital for examination regularly. So, solved the puzzlement that needs regularly to go to hospital to inspect to the inconvenient patient of action, can also assist medical personnel to monitor patient's health.
Drawings
Fig. 1 is an application scene diagram of a human motion function auxiliary evaluation method provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of a human body movement function auxiliary assessment method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of image target detection provided in an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a target detection model provided in an embodiment of the present application;
fig. 5 is a schematic diagram illustrating a process of adjusting a target area in an image according to an embodiment of the present application;
fig. 6 is a schematic diagram illustrating another process of adjusting a target area in an image according to an embodiment of the present application;
FIG. 7 is a comparison graph of the effects of an image before and after adjustment provided by an embodiment of the present application;
FIG. 8a is a schematic diagram of body joints identified using a keypoint detection technique according to an embodiment of the present application;
FIG. 8b is a schematic diagram of hand joints identified by a keypoint detection technique according to an embodiment of the present application;
FIG. 9 is a schematic diagram of image quality detection provided by an embodiment of the present application;
FIG. 10 is a schematic structural diagram of a deep learning model for keypoint detection according to an embodiment of the present application;
fig. 11 is a schematic diagram of third-stage partial feature maps in a deep learning model for keypoint detection according to an embodiment of the present application;
FIG. 12 is a schematic diagram of calculating the convex hull area of selected keypoints according to an embodiment of the present application;
FIG. 13 is a graph illustrating the motion amplitude characteristic curves of four hand movements provided in an embodiment of the present application;
fig. 14 is a schematic flowchart of another human motor function auxiliary assessment method according to an embodiment of the present application;
fig. 15a is a schematic structural diagram of a human motor function auxiliary assessment system according to an embodiment of the present application;
FIG. 15b is a schematic diagram of an assessment report provided by an embodiment of the present application;
fig. 15c is a schematic flow chart illustrating the application of the human motion function auxiliary assessment method in a hospital consulting room scene according to an embodiment of the present application;
fig. 16 is a schematic structural diagram of another human motion function auxiliary evaluation system provided in an embodiment of the present application;
fig. 17 is a schematic flow chart illustrating the application of the human motion function auxiliary assessment method in a hospital consulting room according to an embodiment of the present application;
fig. 18 is a schematic flowchart of a human body movement function auxiliary assessment method according to an embodiment of the present application;
FIG. 19 is a schematic structural diagram of a human motor function auxiliary assessment device according to an embodiment of the present application;
fig. 20 is a schematic structural diagram of a server provided in an embodiment of the present application;
fig. 21 is a schematic structural diagram of a terminal device provided in an embodiment of the present application.
Detailed Description
For a better understanding of the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some embodiments of the present application, rather than all embodiments.
Furthermore, the terms "comprising" and "having" and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a series of steps or elements is not necessarily limited to the expressly listed steps or elements, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Aiming at the problems of the current scheme in which medical staff visually monitor a patient's motor function, such as strong subjectivity, poor reliability of the evaluation result, a time-consuming evaluation process, and the difficulty of having the patient repeat actions on site, the embodiments of the application provide a human motion function auxiliary evaluation method.
It should be noted that, besides evaluating the motor function of patients with neurological diseases such as Parkinson's disease and ataxia, the human motor function auxiliary assessment method provided in the embodiments of the present application may also be used in other scenarios requiring human motor function assessment; the specific application scenario of the method is not limited here.
It should be understood that the human motion function auxiliary evaluation method provided by the embodiments of the present application may be applied independently to a device with data processing capability, such as a terminal device or a server, or may be implemented by a terminal device and a server cooperating with each other. The terminal device may be a computer, a smart phone, a Personal Digital Assistant (PDA), or the like; the server may specifically be an application server or a Web server, and in actual deployment it may be an independent server or a cluster server.
In order to facilitate understanding of the technical solution provided by the embodiment of the present application, an application scenario in which the terminal device and the server cooperate with each other to implement the human motion function auxiliary evaluation method provided by the embodiment of the present application is described below.
Referring to fig. 1, fig. 1 is an application scenario diagram of a human motion function auxiliary evaluation method provided in an embodiment of the present application. As shown in fig. 1, the application scenario includes a terminal device 101 and a server 102.
The terminal device 101 is used for shooting a motion video of a patient and uploading the motion video to the server 102; the server 102 is configured to execute the human body motion function auxiliary evaluation method provided by the embodiment of the application, and determine a motion function evaluation result of the patient, so as to automatically evaluate a motion function condition of the patient and provide reference data for monitoring a physical condition for the patient and a medical staff.
In a specific application, the terminal device 101 shoots a motion video of a patient performing a specified action and uploads it to the server 102. After receiving the motion video, the server 102 determines the motion feature of each frame of image according to the key point coordinates in each frame of image in the video, generates a motion curve according to the motion features of the frames, extracts curve features from the motion curve to obtain motion curve features, and then determines, through a preset evaluation algorithm, the motion function evaluation result corresponding to the object to be evaluated according to the motion curve features.
The following describes a method for assisting in evaluating human motor function provided by the present application by way of example.
Referring to fig. 2, fig. 2 is a schematic flow chart of a human motion function auxiliary evaluation method provided by an embodiment of the present application. The method is suitable for evaluating human motion function. For convenience of description, the following embodiments take a server as the execution subject. As shown in fig. 2, the method includes the following steps:
step S201: and acquiring a motion video to be evaluated, wherein the motion video is obtained by shooting an object to be evaluated and executing a specified action.
It can be understood that the server needs to acquire a motion video to be evaluated, where the motion video refers to a video obtained by shooting an object to be evaluated and executing a specified action; the object to be evaluated may be a patient requiring motor function evaluation, for example, a parkinson patient or the like.
In practical applications, for example, for performing motor function assessment on a parkinson patient, the patient is required to perform a specified action according to the MDS-UPDRS standard, for example, a finger, a fist, a toe clap, etc., based on which the motor function of the patient is assessed, for each executed actions, motor videos are taken from the patient to be assessed and sent to the server, so that the server analyzes the videos after receiving the motor videos.
, before starting to shoot the motion video of the object to be evaluated, the shooting device can position the motion part of the object to be evaluated by using a target detection algorithm and a position correction algorithm according to the shooting requirements, then automatically calculate the optimal shooting angle of the corresponding part, automatically adjust the focal length of the camera to enable the motion limb to be in the center of the picture and occupy a larger picture, and then start shooting, so as to obtain high-quality motion video for motion video analysis of steps.
The target detection algorithm is described below.
In a specific implementation, a target area can be obtained by performing target area detection on the initially acquired image through a target detection model, where the target area is the position, in the image, of the detection part corresponding to the specified action.
Before shooting the motion video, the shooting device can first take an image of the object to be evaluated and input it into the target detection model to locate the target area in the image; the target area can be understood as the position, in the image, of the moving part of the object to be evaluated that performs the specified action.
In a specific implementation, the initially acquired image and a detection target (for example, a hand) are input into the target detection model, which outputs the position coordinates of the target area in the image. For ease of understanding, detecting the hand of the object to be evaluated in the image is taken as an example, as shown in fig. 3.
Referring to fig. 3, fig. 3 is a schematic diagram of detecting a hand region of a person in an image. In the figure, the initial image is input into the target detection model, which detects and outputs the target area ((x0, y0), (x1, y1)); here (x0, y0) is the coordinate of the upper-left corner of the box, i.e. the distances of that point from the left edge and the upper edge of the image are x0 and y0, and (x1, y1) is the coordinate of the lower-right corner of the box, i.e. the distances of that point from the left edge and the upper edge of the image are x1 and y1.
The target detection model is a model trained in advance with target area detection capability; it takes the image to be detected and the detection target as input and the coordinates of the target area as output.
Referring to fig. 4, fig. 4 is a schematic structural diagram of the target detection model. The structure of the target detection model is described below with reference to fig. 4.
The target detection model can be divided into two parts: a base network block and a multi-scale feature block. The base network block extracts features from the original image; a common deep convolutional neural network, such as a VGG or ResNet feature extraction network, can be chosen for it. The multi-scale feature block mainly reduces the height and width of the feature map input from the previous layer (for example, halving them), so that the network model can extract feature maps and anchor boxes of different sizes, improving the network's detection of target objects of different sizes.
For example, to detect a larger object in the image, the height and width of the feature map provided by the previous layer can be reduced (e.g., halved) in the multi-scale feature block. This enlarges the receptive field that each cell of the feature map has on the input image, so that fewer anchor boxes are generated based on the feature map while each cell's receptive field is larger, making the network better suited to detecting larger objects in the image.
In practical applications, the target detection model can be designed as required: the base network block is not limited to the VGG and ResNet networks given above and may be another deep learning network with feature extraction capability, and the multi-scale feature block can adjust its network parameters according to the size of the actually detected target so as to obtain a higher-quality shooting effect; no limitation is placed on the specific structure of the model. A toy illustration of the down-sampling step is given below.
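The following sketch illustrates only the halving operation described above, using 2x2 max pooling on a NumPy array; the pooling choice and shapes are assumptions for illustration, not the patent's actual network layers.

```python
import numpy as np

# Halve a feature map's height and width with 2x2 max pooling. Each output
# cell then covers a larger region of the input, which is why halving suits
# the detection of larger targets.
def halve(fmap):
    h, w = fmap.shape[0] // 2 * 2, fmap.shape[1] // 2 * 2  # trim odd edges
    f = fmap[:h, :w]
    return f.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fmap = np.arange(36, dtype=float).reshape(6, 6)
print(halve(fmap).shape)  # (3, 3): height and width halved
```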
After the position of the target area has been obtained by the target detection algorithm, a position correction algorithm can then be used to calculate the position deviation between the target area and the center of the image picture, so that the shooting angle is adjusted and a high-quality motion video can be obtained for subsequent analysis.
The position correction algorithm is described below.
First, the horizontal position deviation between the target area detected by the target detection model and the center of the image picture can be determined, and the shooting angle of the shooting device is then adjusted according to this deviation so that the detection part in the image lies at the center of the picture. Furthermore, the proportion of the target area in the image picture can be determined, and the focal length of the shooting device adjusted accordingly so that the proportion of the detection part in the picture falls within a preset range.
For ease of understanding, the process of calculating the position deviation and then adjusting the shooting angle according to it is described taking a camera shooting a target person as an example. Referring to fig. 5, fig. 5 is a schematic diagram of a process of adjusting the target area in an image according to an embodiment of the present application.
In fig. 5, 501 denotes the shot picture before camera position adjustment, where the circle denotes the image center, the solid rectangular box denotes the position of the target person locked by the target detection algorithm, and the diamond in the middle of the box denotes the position center of the target person. As can be seen at 501, the position of the target person is offset from the center of the image by a distance d1 laterally and d2 longitudinally. Therefore, the camera needs to be moved laterally by d1 and longitudinally by d2 to obtain a new shot picture, as shown at 502; 502 denotes the shot picture after the position adjustment, where the diamond in the middle of the solid rectangular box overlaps the circle, that is, the target person is located at the center of the picture.
With the camera shooting angle obtained above, the target person is already located at the center of the picture. The next step is to adjust the size of the target person in the frame by moving the device along its Z axis according to the proportion of the target person in the shot picture: when the target person's picture is too large (going out of frame), the device is moved along the Z axis away from the patient and the camera focal length adjusted; when the patient's proportion in the picture is too small, the device is moved along the Z axis toward the patient and the focal length adjusted.
The process of calculating the proportion and then moving and adjusting the camera is described on the basis of fig. 5. Referring to fig. 6, fig. 6 is a schematic diagram of another process of adjusting the target area in an image according to an embodiment of the present application.
As shown in fig. 6, 601 shows the situation where the target person is too far from the camera and appears too small; the device needs to be moved along the Z axis to bring the camera close to the target person and the focal length adjusted, obtaining the clear, enlarged picture of the target person shown at 602.
In fig. 6, Dw and Dh denote the distances from the target person to the two sides of the image, and the width and height of the image are W and H, respectively. Assuming that the movement distance required to bring the device close to the target person is M (cm), then:
M = 0.8 · Dw · α, if Dw < Dh (1)
or:
M = 0.8 · Dh · α, if Dh < Dw (2)
where α is the coefficient between the device's movement distance along the Z axis and the picture adjustment distance, which can be determined experimentally.
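The following is a minimal sketch of the two corrections just described, the pan offsets (d1, d2) from fig. 5 and the Z-axis move M from equations (1) and (2); the function name, the margin computation and the α value are illustrative assumptions, not from the patent.

```python
def correction_offsets(x0, y0, x1, y1, W, H, alpha=0.5):
    """Return (d1, d2, M) for a detected box ((x0, y0), (x1, y1)) in a WxH image."""
    # Pan correction: offset of the box centre from the picture centre (fig. 5).
    d1 = (x0 + x1) / 2.0 - W / 2.0   # lateral offset
    d2 = (y0 + y1) / 2.0 - H / 2.0   # longitudinal offset
    # Z-axis move: Dw, Dh taken as the target's margins to the image sides,
    # then M = 0.8 * min(Dw, Dh) * alpha, matching equations (1) and (2).
    Dw = min(x0, W - x1)
    Dh = min(y0, H - y1)
    M = 0.8 * min(Dw, Dh) * alpha
    return d1, d2, M

print(correction_offsets(100, 80, 300, 400, 640, 480))
```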
In summary, after the target detection algorithm and the camera posture correction algorithm have been run, the position of the target area can be acquired, the offset distance calculated, and the shooting angle adjusted automatically, so that the target area lies as close to the center of the shot picture as possible. Meanwhile, the device can move the camera along the Z axis to enlarge or shrink the target patient's picture, adjusting the focal length and improving shooting clarity. Fig. 7 shows the effect before and after the initial image of fig. 3 is adjusted by the above processing.
It should be understood that the above is only an exemplary illustration of the process of adjusting the position of the target area in an image. In practical applications, the shooting device may be another device with a shooting function, such as a camera or a mobile phone, and the initially acquired image may have other problems requiring adjustment; for different shooting conditions, corresponding adjustments can be made by analogy with the above process, and other cases are not specifically limited here.
Furthermore, the quality of the initial image can be checked to improve the quality of the shot video. Because the subsequent motion video analysis needs to comprehensively analyze the motion of the object to be evaluated, if the object moves out of frame or some body joints are not captured during the motion, the accuracy of the subsequent video analysis is severely affected.
Therefore, by checking the initial image quality, the shooting angle and focal length can be adjusted in time when the image quality is unqualified, and the shot retaken until a qualified image is obtained. On this basis, the motion video subsequently shot of the object to be evaluated is used for analysis, so that a more accurate evaluation result can be obtained.
The above-described image quality detection is described below.
First, human body key point detection is performed on the initially acquired image to obtain human body joint coordinates, the number of detected human body joints is counted from those coordinates, and the ratio of that number to the number of joints to be detected for the specified action is determined; when the ratio is greater than the joint detectable rate threshold corresponding to the specified action, the quality of the initially acquired image is judged qualified and a prompt to start shooting the motion video is given.
The human body key point detection process can be understood as automatically identifying the positions of the human body joints in the initial image. A joint position is determined by two pixel coordinates, namely the distances of the joint's position in the image from the left edge and the upper edge, recorded as (x, y).
The human body key point detection can identify the body joints and the hand joints of the person in the image. For ease of understanding, refer to fig. 8a and 8b, where fig. 8a is a schematic diagram illustrating a human body joint being identified by the human body key point detection technology provided in the embodiment of the present application; fig. 8b is a schematic diagram of human hand joints identified by the human key point detection technology provided in the embodiment of the present application.
Specifically, in fig. 8a, reference numerals 0-24 correspond to 25 joints of the human body, respectively: nose, neck, right shoulder, right elbow, right wrist, left shoulder, left elbow, left wrist, hip center, right hip, right knee, right ankle, left hip, left knee, left ankle, right eye, left eye, right ear, left ear, left big toe, left small toe, left heel, right big toe, right small toe, right heel.
In fig. 8b, reference numerals 0-20 correspond to the 21 joints of the hand, respectively: the lunate bone (0), the trapezium bone (1), three movable joints of the thumb (2-4), four movable joints of the index finger (5-8), four movable joints of the middle finger (9-12), four movable joints of the ring finger (13-16), and four movable joints of the little finger (17-20). The figure is a schematic diagram of the right hand; the left hand corresponds symmetrically.
In practical applications, after the human body key point detection technology has been used to detect the human body joints in the initial image, the number of detected joints is counted, the ratio of that number to the number of joints to be detected for the specified action is calculated, and whether the initial image is qualified is judged by comparing the ratio with a threshold.
Specifically, assume the number of detected joints is J_detect and the number of joints to be detected on the moving limb is J_target; the joint detectable rate is then J_rate = J_detect / J_target × 100%. Assume the joint detectable rate threshold is set to J_0. When J_rate > J_0, the image quality is judged qualified and the motion video recording stage of the next step can be entered; otherwise, the shooting angle of the device needs to be adjusted until the joint detectable rate is greater than J_0. Here J_target depends on the shot: when hands are shot, the number of joints to be detected per hand is 21; when a body motion video is shot, the number of joints to be detected is 25, referring to fig. 8a and 8b.
For ease of understanding, referring to fig. 9, fig. 9 is a schematic diagram of image quality detection provided by an embodiment of the present application. The image quality detection process is described below taking the shooting of a hand as an example.
Key point detection is performed on the initial image. As shown at 901 in fig. 9, the number of joints detected in 901 is J_detect = 10 while the number of joints to be detected is J_target = 21, so the joint detectable rate is J_rate = 10/21 × 100% ≈ 48%. Assuming the joint detectable rate threshold J_0 = 95%, then J_rate < J_0 and the image is unqualified, so the shooting angle is adjusted in time and the image re-shot and re-detected. As shown at 902 in fig. 9, the image shot after adjustment has a joint detectable rate J_rate = 100% > J_0, so the image is qualified and a prompt to start shooting the motion video is given.
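A small sketch of the qualification check in the example above; the function name and return shape are illustrative assumptions.

```python
def image_quality_ok(j_detect, j_target, j0=0.95):
    """Qualified when the joint detectable rate J_rate exceeds the threshold J_0."""
    j_rate = j_detect / j_target          # J_rate = J_detect / J_target
    return j_rate, j_rate > j0

print(image_quality_ok(10, 21))  # (0.476..., False): re-shoot, as at 901
print(image_quality_ok(21, 21))  # (1.0, True): start recording, as at 902
```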
The above describes the acquisition of the motion video to be evaluated in fig. 2 and the shooting process of the motion video. Next, referring again to fig. 2, after the motion video to be evaluated has been acquired in step S201, step S202 is performed.
Step S202: detect human body key points in each frame of image in the motion video to obtain human body joint coordinates, and select the key point coordinates corresponding to the specified action from the human body joint coordinates in each frame of image.
After acquiring the motion video, the server needs to analyze each frame of the video frame by frame. In a specific implementation, the server can detect the human body joint coordinates in the images using the human body key point technology and select different key points from the human body joint coordinates of each frame of image.
It is understood that the relevant body joint points differ for different actions; here, body joint points are understood as body skeletal joint points. Therefore, the human joint points to attend to also differ. For example, when performing a finger-tapping action, the index fingertip (point 8) and the thumb tip (point 4) in fig. 8b may be selected as the key points.
The human body key point detection process may be understood as follows: an end-to-end deep learning model recognizes the input image and outputs response maps of the body joint positions in the image, and the position of the maximum value in a response map is the predicted position of the corresponding human body joint.
In a specific implementation, the image to be detected can be input into a pre-trained deep learning model. The first network part of the model performs feature extraction on the image to be detected to form a primary feature image that is input into the second network part; the second network part performs multi-level, multi-dimensional feature extraction according to the image to be detected and the primary feature image to obtain the body joint position response maps output by its last layer, and the position coordinates of the maximum value in each body joint position response map are taken as the detected human body joint coordinates.
For convenience of understanding, referring to fig. 10, fig. 10 is a schematic structural diagram of a human body key point detection model provided in an embodiment of the present application.
The deep learning model mentioned above is divided into two parts. The first network part corresponds to the first stage (stage1) in fig. 10 and performs feature extraction on the input image to form the primary feature image input into the second network. The second network part corresponds to the second stage (stage2), the third stage (stage3) and the subsequent stages in fig. 10; it performs multi-level, multi-dimensional feature extraction based on the input original image and the primary feature image output by the first network part, outputs response maps of the body joint positions in the image at the last stage, and the position coordinates of the maximum value in each response map are taken as the detected body joint coordinates.
The process by which the model realizes key point detection is described here taking as an example a half-body model (predicting the 9 body joints of the upper half of the body) that receives RGB images as input.
The first stage is a basic convolutional network (the convs boxes) that predicts the response of each component directly from the color image. For example, the half-body model has 9 body joints and additionally contains a background response, so the first stage yields 10 response maps in total, each of size 46 × 46.
The second stage also predicts each joint's response map from the color image, but concatenation layers (concat blocks) are added between two convolutional networks (convs blocks); this process combines, at the channel level, the feature data of the following three aspects:
1. The convolutional network in the second stage is responsible for extracting the texture features of the image, as shown in the convs box of fig. 10, stage2; the output response map of this stage has size 46 × 46 × 32.
2. The second stage has two convolutional networks: the first extracts the texture features, which are combined through the concatenation layer with the spatial features of the first stage; the scale of the first stage's spatial features is 46 × 46 × 10.
3. To constrain each stage's response toward the center of the image, starting from stage2 a center-constraint feature, which is a Gaussian-function template, is added at each stage to pull the response toward the image center.
In the third stage, the original image is no longer used as input; instead, feature maps of depth 128 are taken from a convolutional network in the second stage as input, and the concatenation layer combines three features: the texture features, the spatial features and the center-constraint feature. Part of the feature maps obtained in the third stage are shown in fig. 11, with the input original image on the left.
Subsequent stages: the structure of the fourth stage is identical to that of the third. When a more complex (deeper) network is designed, the third-stage structure only needs to be repeated; the fourth and subsequent stages all share it. Likewise, when different numbers of human key points (joint points) need to be predicted (for example, a whole-body model or a hand model), only the number of response maps in each stage needs adjusting: to predict the 25-key-point whole-body model, the response map output is changed to 25 layers; to predict the 21-key-point hand model, the number of response maps is changed to 21.
It should be noted that the above is only an exemplary description of a deep learning model structure for detecting human body key points; in an actual application process, the specific parameters and network structure of the model may be set as required to detect body joints in images, which is not limited here.
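The final decoding step above, taking each joint's coordinate as the argmax of its response map, can be sketched as follows; the map shapes (e.g. 21 maps of 46 × 46 for a hand) and the function name are assumptions for illustration.

```python
import numpy as np

def decode_keypoints(response_maps):
    """response_maps: array of shape (num_joints, H, W) -> list of (x, y)."""
    coords = []
    for rmap in response_maps:
        y, x = np.unravel_index(np.argmax(rmap), rmap.shape)
        coords.append((int(x), int(y)))  # x: distance from left edge, y: from top
    return coords

maps = np.random.rand(21, 46, 46)  # e.g. 21 hand-joint response maps
print(decode_keypoints(maps)[:3])
```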
Step S203: determine the motion feature of each frame of image according to the key point coordinates in each frame of image in the motion video.
After determining the key point coordinates in each frame of image, the server needs to determine the motion feature of each frame from those coordinates; the motion feature can be understood as the body form corresponding to the performed action.
Since different test actions correspond to different motion characteristics, different methods are needed to extract the motion information, which can be represented by feature values, from each frame of image in the motion video.
In a specific implementation, the target extraction algorithm corresponding to the specified action may be selected from several preset extraction algorithms, which include: a distance-class extraction algorithm, an area-class extraction algorithm and an angle-class extraction algorithm.
When the target extraction algorithm is the distance-class extraction algorithm, for each frame of image in the motion video, a coordinate distance is determined from the key point coordinates in that frame as the motion feature of the frame.
Assume the selected key point coordinates are (x0, y0) and (x1, y1); the distance value dist can then be expressed as:
dist = sqrt((x1 - x0)^2 + (y1 - y0)^2) (3)
When the target extraction algorithm is the area-class extraction algorithm, for each frame of image in the motion video, a convex hull area is determined from the key point coordinates in that frame as the motion feature of the frame.
For the area class, the area of the convex hull of the selected key points usually needs to be calculated, as shown in fig. 12; fig. 12 is a schematic diagram of calculating the convex hull area of selected key points provided by an embodiment of the present application.
In fig. 12, the dots represent the set of all key points, the polygonal region represents the convex hull enclosed by the key points, and the area of the convex hull region is taken as the feature value of the single image. Assuming the key point set is X, the convex hull S is the intersection of all convex sets containing X:
S = ∩ {K : K ⊇ X, K is convex} (4)
When the target extraction algorithm is the angle-class extraction algorithm, for each frame of image in the motion video, a joint angle is determined from the key point coordinates in that frame as the motion feature of the frame.
Assume the selected key point coordinates are (x0, y0) and (x1, y1); the angle value ang can then be expressed as:
ang = atan2(y1 - y0, x1 - x0) (5)
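The three per-frame extractors of equations (3)-(5) can be sketched as follows; the function names are illustrative, and the convex hull area is computed here with SciPy (in 2-D, ConvexHull.volume is the enclosed area).

```python
import math
import numpy as np
from scipy.spatial import ConvexHull

def dist_feature(p0, p1):
    # Equation (3): Euclidean distance between two key points.
    return math.hypot(p1[0] - p0[0], p1[1] - p0[1])

def area_feature(points):
    # Equation (4): area of the convex hull of the selected key points.
    return ConvexHull(np.asarray(points)).volume

def angle_feature(p0, p1):
    # Equation (5): joint angle between two key points.
    return math.atan2(p1[1] - p0[1], p1[0] - p0[0])

print(dist_feature((0, 0), (3, 4)))                    # 5.0
print(area_feature([(0, 0), (1, 0), (1, 1), (0, 1)]))  # 1.0
print(angle_feature((0, 0), (1, 1)))                   # 0.785... (pi/4)
```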
Step S204: generate a motion curve according to the motion features of each frame of image in the motion video.
It should be noted that the extracted features need to be normalized, because the shooting distance affects their scale. For example, when the distance-class features of a patient are extracted, if the camera is far from the patient the calculated inter-joint distances are small, and if the camera is close to the patient the calculated inter-joint distances are large.
In a specific implementation, a normalization coefficient can be determined from the motion features of the frames of images in the motion video, and the motion features of each frame are then normalized with this coefficient to obtain normalized motion features.
In practical applications, the feature value of each frame of image is divided by the video's normalization coefficient to obtain the normalized motion feature. The normalization coefficient may be the size of a specific object in the video, such as the median of the subject's face-area values over all image frames of the video; other parameters may also be chosen as the video's normalization coefficient, which is not limited here.
It should be noted that the motion feature of each frame of image is a discrete value; after the motion features of all frames in the video sequence have been obtained, the discrete data features can be connected into a continuous waveform by interpolation.
In addition, the training data of the deep learning model for human key point detection contains a certain amount of annotation error, so the key points obtained from the images by the pose estimation algorithm naturally carry a certain systematic error, which makes the feature waveform obtained by interpolation very unsmooth.
In a specific implementation, a one-dimensional Gaussian filter can be used to denoise the feature waveform, expressed mathematically as:
G(r) = (1 / (sqrt(2π)·σ)) · exp(-r^2 / (2σ^2)) (6)
where r denotes the blur radius and σ denotes the standard deviation of the normal distribution. The filter is used as a sliding window rolled over the feature waveform to perform Gaussian smoothing.
Combining the above steps, the server can generate a motion feature sequence from the normalized motion features of the frames of images in the motion video, and perform interpolation and denoising on the sequence to generate the motion curve, as sketched below.
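A minimal sketch of this curve-generation pipeline: normalization, interpolation, then Gaussian smoothing. The normalization coefficient (here a median face-area value) and all parameter values are assumptions.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.ndimage import gaussian_filter1d

def motion_curve(features, norm_coeff, upsample=4, sigma=2.0):
    feats = np.asarray(features, dtype=float) / norm_coeff   # normalization
    t = np.arange(len(feats))
    t_dense = np.linspace(0, len(feats) - 1, len(feats) * upsample)
    curve = interp1d(t, feats, kind='cubic')(t_dense)        # interpolation
    return gaussian_filter1d(curve, sigma=sigma)             # denoising, eq. (6)

raw = [10.0, 12.5, 9.8, 14.1, 11.3, 13.0, 10.2, 12.8]       # per-frame features
print(motion_curve(raw, norm_coeff=11.0)[:5])
```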
Step S205: extract curve features from the motion curve to obtain the motion curve features.
It can be understood that after a relatively smooth feature waveform has been obtained, because the lengths of the video segments are not uniform and training data are limited, using the waveform directly as the final motion feature would make its dimensionality too high; the waveform therefore needs to be reduced to lower-dimensional features of clinical value.
When waveform amplitude features are extracted from the motion curve, they include at least one of the maximum, minimum, median and standard deviation of the motion curve's amplitude, and the temporal variation of the amplitude.
In a specific implementation, the process of extracting amplitude-related features from the motion curve can be understood as finding the local maxima and minima in the waveform and taking the difference between each local maximum and minimum as the amplitude of one waveform segment.
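A sketch of that amplitude extraction, using SciPy's peak finder to locate the local maxima and minima; pairing each peak with a valley in order is an illustrative simplification.

```python
import numpy as np
from scipy.signal import find_peaks

def amplitude_features(curve):
    curve = np.asarray(curve)
    peaks, _ = find_peaks(curve)       # local maxima
    valleys, _ = find_peaks(-curve)    # local minima
    n = min(len(peaks), len(valleys))
    amps = np.abs(curve[peaks[:n]] - curve[valleys[:n]])  # per-segment amplitudes
    return {'max': amps.max(), 'min': amps.min(),
            'median': float(np.median(amps)), 'std': amps.std()}

t = np.linspace(0, 4 * np.pi, 200)
print(amplitude_features(np.sin(t) * np.linspace(1.0, 0.5, 200)))
```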
When waveform frequency features are extracted from the motion curve, they include at least one of the maximum, minimum, median and standard deviation of the motion curve's frequency, and the temporal variation of the frequency.
Specifically, to make the whole system robust, the process of extracting waveform frequency features from the motion curve needs to find the peaks and valleys of the curve, and at the same time the frequency of each small waveform segment is estimated using the Fourier transform.
The frequency of a segment is then:
argmax_j |S_j|, j ∈ {0, 1, …, t_f - 1} (7)
where S_j denotes the j-th component of the segment's Fourier spectrum. Similarly to the extracted amplitude features, the frequency-related feature values also include the maximum, minimum, median and standard deviation of the frequency, and the temporal variation of the frequency (e.g., the difference between the average frequencies of the first and second halves, and the overall trend of the frequency).
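A sketch of the per-segment estimate in equation (7): take the Fourier transform of a short segment and read off the dominant bin. The sampling rate (the video frame rate) is an assumed parameter.

```python
import numpy as np

def segment_frequency(segment, fps=30.0):
    spectrum = np.abs(np.fft.rfft(segment - np.mean(segment)))  # |S_j|
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / fps)
    return freqs[np.argmax(spectrum)]  # argmax_j |S_j|

t = np.arange(60) / 30.0                                # 2 s of frames at 30 fps
print(segment_frequency(np.sin(2 * np.pi * 3.0 * t)))   # ~3.0 Hz
```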
When waveform anomaly features are extracted from the motion curve, they include the number of data interruptions in the motion curve.
In a specific implementation, the process of extracting waveform anomaly features from the motion curve mainly serves to find clinically meaningful anomaly features such as the number of pauses and the number of freezes. Since there is no clinical-level criterion for judging pauses and freezes, the anomaly judgment is usually made on the basis of information such as the amplitude and frequency of each small waveform segment; for example, when both the amplitude and the frequency of a segment are 0, the current state is judged to be a pause.
Based on the obtained waveform amplitude features, waveform frequency features and waveform anomaly features, the features are weighted, and the weighted features are taken as the motion curve features.
In a specific implementation, the waveform amplitude features, waveform frequency features and waveform anomaly features are multiplied by their corresponding weighting coefficients and then summed to obtain the motion curve features. The weighting coefficients corresponding to the waveform amplitude features, waveform frequency features and waveform anomaly features sum to 1, and their specific values may be obtained through experiments or preset.
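One plausible reading of this weighting is sketched below: each feature group is scaled by its coefficient and the groups are concatenated (a literal weighted sum would require the three groups to have equal length); the weight values are placeholders, not values from the patent:

```python
import numpy as np

def fuse_curve_features(amp_feat, freq_feat, anomaly_feat, weights=(0.4, 0.4, 0.2)):
    """Weighted combination of the three feature groups; the weights are
    illustrative and sum to 1, as the text requires."""
    w_amp, w_freq, w_anom = weights
    assert abs(w_amp + w_freq + w_anom - 1.0) < 1e-9
    return np.concatenate([
        w_amp * np.asarray(amp_feat, dtype=float),
        w_freq * np.asarray(freq_feat, dtype=float),
        w_anom * np.asarray(anomaly_feat, dtype=float),
    ])
```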
It can be understood that motion videos shot while performing different actions yield corresponding series of motion curve features, and that different motion feature curves, such as motion amplitude feature curves, can be obtained by extracting different motion curve features according to the above contents.
For ease of understanding, referring to fig. 13, fig. 13 is a schematic diagram of the motion amplitude feature curves of four hand actions provided by the present application. In fig. 13, the amplitude-variation motion feature curves corresponding to four actions, i.e., finger-to-finger action, fist making, horizontal alternation and vertical alternation of the hand, are shown from top to bottom respectively.
Step S206: and determining a motion function evaluation result corresponding to the object to be evaluated based on the motion curve characteristic through a preset evaluation algorithm.
After the above steps are executed, the server may evaluate the motion function of the object to be evaluated based on the obtained motion curve characteristics, and obtain a corresponding evaluation result.
In a specific implementation, the preset rule-based evaluation algorithm and the machine-learning-based evaluation algorithm may each be used to perform evaluation based on the motion curve features, obtaining a motion function evaluation result from each algorithm; the motion function evaluation result with the largest occurrence frequency is then selected from these results as the function evaluation result corresponding to the object to be evaluated.
In practical applications, the input of the evaluation algorithms, namely the above rule-based evaluation algorithm and machine-learning-based evaluation algorithm, is the sequence of feature values derived from the processed feature waveform, and the output is a discrete score from 0 to 4.
The rule-based evaluation algorithm is mainly based on evaluation indicators in the MDS-UPDRS, such as the number of pauses and freezes and the change trends of amplitude and frequency, and obtains a score through conditional judgments on these features.
The learning-based algorithm includes various machine learning models, such as Logistic Regression, Random Forest, GBDT and Support Vector Machine (SVM). In prior training, the features derived from the motion curve and the corresponding scores are passed to the machine learning model so that it can learn the corresponding parameters.
Finally, majority voting (majority vote) is performed on the output results of the rule-based and learning-based algorithms, and the result with the most votes is taken as the output of the whole algorithm, i.e., the motion function evaluation result of the object to be evaluated.
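The two-branch scoring with majority voting might look like the following sketch; the training data here are random placeholders and the rule-based score is a stub, not the patent's actual rules:

```python
from collections import Counter

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

def majority_vote(scores):
    """Return the most frequent score among the individual algorithm outputs."""
    return Counter(scores).most_common(1)[0][0]

# Hypothetical training data: rows of motion-curve features, 0-4 scores
X_train = np.random.rand(200, 12)
y_train = np.random.randint(0, 5, size=200)

models = [LogisticRegression(max_iter=1000), RandomForestClassifier(n_estimators=100)]
for m in models:
    m.fit(X_train, y_train)

x = np.random.rand(1, 12)                       # features of one motion curve
rule_score = 2                                  # placeholder rule-based judgment
votes = [rule_score] + [int(m.predict(x)[0]) for m in models]
final_score = majority_vote(votes)              # result with the most votes
```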
According to the human motion function auxiliary evaluation method described above, the motion video is shot and analyzed, and the motion function of the object to be evaluated is evaluated based on the motion curve obtained from the video analysis, thereby achieving the aim of automatically evaluating human motion function.
In the process of shooting the motion video, a target detection algorithm and a position correction algorithm are adopted to adjust the shooting angle, so that a high-quality motion video is obtained, which facilitates subsequent analysis of the video and improves evaluation accuracy. In the process of analyzing the motion video, key point detection technology, feature extraction algorithms and evaluation algorithms are adopted, breaking the limitations of visual observation in traditional methods. In the process of evaluating the motion function, rule-based and learning-based algorithms are applied to the motion curve obtained from the video analysis, which avoids the subjectivity of the medical staff's judgment in traditional methods and yields a more objective evaluation result; this can provide valuable reference data for patients and doctors and assist doctors in monitoring patients' physical condition.
It should be noted that another human motion function auxiliary evaluation method is provided in the embodiment of the present application. Referring to fig. 14, fig. 14 is a schematic flow chart of another human motion function auxiliary evaluation method provided by the embodiment of the present application. For convenience of description, the following embodiment is described with a server as the execution subject. As shown in fig. 14, the method includes the following steps:
Step S1401: acquiring a motion video to be evaluated, wherein the motion video is obtained by shooting an object to be evaluated performing a specified action.
Step S1402: detecting human body key points in each frame image in the motion video to obtain human body joint coordinates, and selecting the key point coordinates corresponding to the specified action from the human body joint coordinates in each frame image.
Step S1403: determining the motion characteristics of each frame image according to the key point coordinates in each frame image in the motion video.
Step S1404: generating a motion curve according to the motion characteristics of each frame image in the motion video, wherein the motion curve is used for human motion function auxiliary evaluation processing.
For the specific operation of steps S1401 to S1404, reference may be made to steps S201 to S204 above, which are not repeated here. It should be noted that the motion curve obtained in step S1404 can be used directly for human motion function auxiliary evaluation processing, and the subsequent processing is not limited to steps S205-S206 above, which is not limited here.
According to the human motion function auxiliary evaluation method provided by this embodiment, the motion video of the object to be evaluated is shot and then analyzed to obtain the motion characteristics of the limbs of the object to be evaluated in the video, a motion curve is generated, and the motion function of the object to be evaluated is evaluated based on the motion curve, thereby realizing intelligent evaluation of the object's motion function and providing patients and medical staff with reference data for auxiliary evaluation of physical condition.
For the human motion function auxiliary evaluation method described above, the embodiment of the present application provides a human motion function auxiliary evaluation system, so that the method can be applied and implemented in practice.
Referring to fig. 15a, fig. 15a is a schematic structural diagram of a human motion function auxiliary evaluation system provided by an embodiment of the present application. The system includes a first video shooting device 1501, a first auxiliary evaluation device 1502 and a first video analysis device 1503;
the first video shooting device 1501 is used for shooting an object to be evaluated performing a specified action to obtain a motion video to be evaluated, and the first auxiliary evaluation device 1502 is used for sending the motion video to be evaluated to the first video analysis device;
the first video analysis device 1503 is configured to perform human body key point detection on each frame image in the motion video to obtain human body joint coordinates, select the key point coordinates corresponding to the specified action from the human body joint coordinates in each frame image, determine the motion characteristics of each frame image according to the key point coordinates in each frame image in the motion video, generate a motion curve according to the motion characteristics of each frame image in the motion video, perform curve feature extraction on the motion curve to obtain motion curve features, determine the motion function evaluation result corresponding to the object to be evaluated based on the motion curve features through a preset evaluation algorithm, and send the motion curve and the motion function evaluation result corresponding to the object to be evaluated to the first auxiliary evaluation device.
The first auxiliary evaluation device 1502 is further configured to display the motion curve and the motion function evaluation result corresponding to the object to be evaluated on an auxiliary evaluation interface.
It is understood that, after the first video analysis device 1503 obtains the evaluation result, the evaluation result can be sent to the first auxiliary evaluation device 1502 so that it can be viewed there.
In a specific implementation, the first auxiliary evaluation device 1502 is further configured to display the auxiliary evaluation interface, which bears a motion video viewing control, a motion feature curve viewing control, a history evaluation record viewing control, a score and remark submission control, and an evaluation report viewing control. When the motion video viewing control is triggered, a motion video selection list is displayed and the selected target motion video is shown; when the motion feature curve viewing control is triggered, a motion feature curve selection list is displayed and the selected target motion feature curve is shown; when the score and remark submission control is triggered, the currently input score and remark contents are acquired and stored; and when the evaluation report viewing control is triggered, a motion function evaluation report is displayed.
For ease of understanding, the auxiliary assessment interface described above is shown at 1500 in FIG. 15 a. In the auxiliary evaluation interface 1500, 6 modules are included: a video playing module 1504, a feature displaying module 1505, a reference video playing module 1506, a reference feature displaying module 1507, a patient record inquiring module 1508, and a present scoring module 1509.
The video playing module 1504 may be used to select a motion video to be played, which is played by double-clicking its file name. The file name shown in the figure (an ".mp4" file) is merely an exemplary illustration; in specific use, the video file format may take other forms, which is not limited here.
It should be noted that either the original video, i.e., the motion video to be evaluated, or a reference video may be selected. When the original video is selected, double-clicking the video file automatically plays the corresponding video; videos that have already been analyzed and processed can also be selected, and these carry the joint detection results, which makes it convenient to observe joint shaking.
The feature display module 1505 can select different types of features of original videos, such as motion amplitude variation, motion frequency variation, and the like, so as to facilitate observation of motion conditions of an object to be evaluated from different feature dimensions.
The reference video playing module 1506 plays the reference video selected by the video playing module 1504.
The reference feature display module 1507 relies on a standard library provided by the video analysis database deployed in the video analysis device, which contains data for different evaluation results. Using the similar-feature-map search function, the features of the analyzed video serve as input, and similar feature maps from the standard library are retrieved and displayed sorted by relevance. For a retrieved similar feature map, the user can click "view video" to automatically play the corresponding video content in the left column, for comparison and reference against the original video and features; clicking at the reference video likewise plays the corresponding video file and displays the corresponding waveform on the right side.
The patient record query module 1508 may automatically retrieve the same type of videos and assessment results for the patient, and display them in chronological order for reference by the physician.
The present scoring module 1509 includes a scoring option, an evaluation box, and three buttons for submitting evaluation and remarks, viewing reports, and printing reports, wherein the form of the reports may be as shown in fig. 15 b.
It should be noted that the auxiliary evaluation interface 1500 and the evaluation report are shown as an example, and in practical application, the adaptive adjustment may be performed according to different situations, which is not limited herein.
In practical applications, the system provided by the above embodiment can be used in different application scenarios, for example, the system shown in fig. 15a can be used in a hospital consulting room scenario.
In the application scenario of a hospital consulting room, the first video shooting device 1501 may be a device for shooting motion video provided in the consulting room, such as a camera or video camera; the first auxiliary evaluation device 1502 may be a terminal device used by a doctor, such as a desktop computer or notebook; and the first video analysis device 1503 may be a server for video analysis.
For convenience of understanding, referring to fig. 15c, fig. 15c is a schematic flow chart of applying the human motion function auxiliary evaluation method provided by the embodiment of the present application in a hospital consulting room scenario. The evaluation of the motor function of a Parkinson patient in a hospital consulting room is described below, using a camera, a desktop computer and a server as the devices in the system shown in fig. 15a.
In the practical application process, a camera shoots a motion video of the Parkinson patient performing the specified action, and the motion video is then uploaded to a desktop computer used by the medical staff. After receiving the motion video, the computer establishes a connection with a server using the deployed video analysis software and uploads the motion video to the server for analysis. The server then analyzes the motion video using video analysis technology to obtain a motion curve, scores the motion video based on the motion curve and a scoring algorithm, and sends the motion curve and scoring result back to the computer; the medical staff confirm the server's score with the help of the auxiliary evaluation interface, obtain an evaluation report, and deliver the report to the patient.
It should be noted that the human motion function auxiliary evaluation system of fig. 15a can be applied to different scenarios according to different requirements, in addition to the hospital consulting room scenario. Moreover, the devices described above for use in a hospital consulting room are merely exemplary; in practical applications they may be determined according to the specific situation, which is not limited here.
According to the human motion function auxiliary evaluation system provided by this embodiment, the first video shooting device shoots the motion video of the object to be evaluated, the first auxiliary evaluation device sends the motion video to the first video analysis device, the first video analysis device obtains an evaluation result using the human motion function auxiliary evaluation method provided by the embodiments of the present application, and the evaluation result is displayed on the first auxiliary evaluation device.
For the human motion function auxiliary evaluation method provided by the above embodiments, the present application also provides another human motion function auxiliary evaluation system.
Referring to fig. 16, fig. 16 is a schematic structural diagram of another human motion function auxiliary evaluation system provided by the embodiments of the present application. The system includes a second video shooting device 1601 and a second video analysis device 1602;
the second video shooting device 1601 is configured to shoot an object to be evaluated performing a specified action to obtain a motion video to be evaluated, and send the motion video to be evaluated to the second video analysis device;
the second video analysis device 1602 is configured to perform human body key point detection on each frame image in the motion video to obtain human body joint coordinates, select the key point coordinates corresponding to the specified action from the human body joint coordinates in each frame image, determine the motion characteristics of each frame image according to the key point coordinates in each frame image in the motion video, generate a motion curve according to the motion characteristics of each frame image in the motion video, perform curve feature extraction on the motion curve to obtain motion curve features, determine the motion function evaluation result corresponding to the object to be evaluated based on the motion curve features through a preset evaluation algorithm, and send the motion curve and the motion function evaluation result corresponding to the object to be evaluated to the second video shooting device;
the second video capturing device 1601 is further configured to show the motion curve and a motion function evaluation result corresponding to the object to be evaluated.
Optionally, the second video analysis device 1602 is further configured to:
and carrying out abnormity analysis according to the motion function evaluation result corresponding to the object to be evaluated and the historical motion evaluation result corresponding to the object to be evaluated, and sending abnormity prompt information to the second video shooting device when abnormity is found.
The above anomaly analysis may be understood as a comparative analysis of the current evaluation result and the historical evaluation results; when a significant difference is found, it may indicate that the patient's condition has worsened, in which case an anomaly prompt message is sent to the second video shooting device to indicate that the evaluation result of the object to be evaluated is abnormal. In practice, different abnormal situations may occur, which is not limited here.
Optionally, the second video analysis device 1602 is further configured to: and inputting the motion function evaluation result corresponding to the object to be evaluated into a local health management system, and uploading data to a remote management system at regular time through the local health management system.
It is understood that each time the second video analysis device obtains an evaluation result, the result can be stored in the local health management system, which records all of the patient's motor function evaluation data; the data in the system can be uploaded to the remote management system at regular intervals. The remote management system can be understood as a system providing reference data to medical staff so that they can monitor the patient's physical condition remotely.
In practical applications, the system provided by the above embodiment can be used in different application scenarios, for example, the system shown in fig. 16 can be applied in a home environment.
In the application scenario of a home environment, the second video shooting device 1601 may be a device with a shooting function, such as a tablet computer or mobile phone, and the second video analysis device 1602 may be a server for video analysis.
For convenience of understanding, referring to fig. 17, fig. 17 is a schematic flow chart of applying the human motion function auxiliary evaluation method provided by the embodiment of the present application in a home environment. In the following, a mobile phone and a server are taken as the devices in the system shown in fig. 16, and the evaluation of the motor function of a Parkinson patient in a home environment is described as an example.
The server scores the motion function of the patient in the motion video based on the motion curve and a scoring algorithm, and sends the motion curve and the scoring result to the mobile phone, so that the patient can view the evaluation result at home. After obtaining the scoring result, the server can also perform anomaly analysis between the current result and the patient's historical scoring results. When the result is normal, the scoring result is recorded in the local management system, and the evaluation data is sent at regular intervals to the remote inquiry system used by the patient's doctor; when the analysis result is abnormal, the patient can contact the doctor through the remote inquiry system so that the doctor can provide timely help with the patient's condition.
It should be noted that the human motion function auxiliary evaluation system of fig. 16 can be applied to different scenarios according to different requirements, in addition to the home environment described above. Moreover, the devices described above for use in a home environment are merely exemplary and may be determined according to the specific situation in actual application, which is not limited here.
According to the human motion function auxiliary evaluation system provided by the above embodiment, the object to be evaluated can use the second video shooting device to shoot a motion video, and the second video analysis device analyzes and evaluates the motion video to obtain an evaluation result.
It should be noted that the human motion function auxiliary evaluation method provided by the present application can be applied in the above systems, and can also be deployed independently on a terminal device, such as a tablet computer or mobile phone; the method can run on the terminal device as software capable of operating offline, so as to implement the auxiliary evaluation function.
Referring to fig. 18, fig. 18 is a schematic flow chart of a human motion function auxiliary evaluation method provided in the embodiment of the present application; in fig. 18, an intelligent device is taken as an example to evaluate the motion function of an object to be evaluated.
Specifically, the patient uses a mobile phone to shoot an initial image, the initial image is checked using the key point detection technology, and when the image quality is judged to be unqualified, the shooting angle is adjusted until the image is qualified. The mobile phone is then used to shoot a motion video of the specified action being performed, and the motion video is analyzed using the key point detection technology to generate a motion curve. The motion function of the object in the motion video is evaluated based on the different features extracted from the motion curve and a scoring algorithm, and the evaluation result is obtained and displayed for the patient to view.
In summary, the method for auxiliary evaluation of human motion function provided in the embodiment of the present application is not only suitable for the above-mentioned scenario, but also can be applied to other scenarios requiring auxiliary evaluation of human motion function, and is not limited herein.
For the human motion function auxiliary evaluation method provided by the embodiments of the present application, the present application also provides a corresponding human motion function auxiliary evaluation apparatus, so that the method can be applied and implemented in practice.
Referring to fig. 19, fig. 19 is a schematic structural diagram of a human motion function auxiliary evaluation apparatus 1900 provided in the embodiment of the present application, and the apparatus includes:
a video shooting unit 1901, configured to obtain a motion video to be evaluated, where the motion video is obtained by shooting an object to be evaluated performing a specified action.
A key point detecting unit 1902, configured to perform human body key point detection on each frame image in the motion video to obtain human body joint coordinates, and select the key point coordinates corresponding to the specified action from the human body joint coordinates in each frame image.
A motion characteristic determining unit 1903, configured to determine the motion characteristics of each frame image according to the key point coordinates in each frame image in the motion video.
A motion curve generating unit 1904, configured to generate a motion curve according to the motion characteristics of each frame image in the motion video, where the motion curve is used to perform human motion function auxiliary evaluation processing.
Optionally, the apparatus 1900 further includes:
a curve feature extraction unit 1905, configured to perform curve feature extraction according to the motion curve to obtain a motion curve feature.
A motion function evaluation unit 1906, configured to determine, by a preset evaluation algorithm, a motion function evaluation result corresponding to the object to be evaluated based on the motion curve feature.
Optionally, the motion characteristic determining unit 1903 is specifically configured to:
selecting a target extraction algorithm corresponding to the specified action from a plurality of preset extraction algorithms;
when the target extraction algorithm is a distance-class extraction algorithm, for each frame image in the motion video, determining a coordinate distance according to the key point coordinates in the frame image as the motion characteristic of that frame image;
when the target extraction algorithm is an area-class extraction algorithm, for each frame image in the motion video, determining the convex hull area according to the key point coordinates in the frame image as the motion characteristic of that frame image;
when the target extraction algorithm is an angle-class extraction algorithm, for each frame image in the motion video, determining a joint angle according to the key point coordinates in the frame image as the motion characteristic of that frame image.
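For illustration, the three classes of extraction algorithms described above could be realized as follows, assuming 2-D key point coordinates; SciPy's convex hull is used for the area class:

```python
import numpy as np
from scipy.spatial import ConvexHull

def distance_feature(p1, p2):
    """Distance-class feature: Euclidean distance between two key points."""
    return float(np.linalg.norm(np.asarray(p1) - np.asarray(p2)))

def area_feature(points):
    """Area-class feature: area of the convex hull of the key points."""
    return float(ConvexHull(np.asarray(points)).volume)  # in 2-D, .volume is the area

def angle_feature(a, b, c):
    """Angle-class feature: joint angle at vertex b formed by points a-b-c, in degrees."""
    u = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
```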
Optionally, the motion curve generating unit 1904 is specifically configured to:
determining a normalization coefficient according to the motion characteristics of each frame image in the motion video;
normalizing the motion characteristics of each frame image in the motion video according to the normalization coefficient to obtain normalized motion characteristics;
generating a motion characteristic sequence according to the normalized motion characteristics corresponding to each frame image in the motion video, and performing interpolation and noise reduction processing on the motion characteristic sequence to generate the motion curve.
Optionally, the curve feature extraction unit 1905 is specifically configured to:
extracting waveform amplitude features from the motion curve, where the waveform amplitude features comprise at least one of the maximum value, minimum value, median, standard deviation and amplitude time-series variation features of the motion curve amplitude;
extracting waveform frequency features from the motion curve, where the waveform frequency features comprise at least one of the maximum value, minimum value, median, standard deviation and frequency time-series variation features of the motion curve frequency;
extracting waveform anomaly features from the motion curve, where the waveform anomaly features comprise the number of data interruptions of the motion curve;
and weighting the waveform amplitude features, the waveform frequency features and the waveform anomaly features to obtain the motion curve features.
Optionally, the motion function evaluating unit 1906 is specifically configured to:
evaluating based on the motion curve characteristics respectively through a preset rule-based evaluation algorithm and a machine learning-based evaluation algorithm to obtain a motion function evaluation result evaluated by each algorithm;
and selecting the motion function evaluation result with the largest occurrence frequency from the motion function evaluation results evaluated by each algorithm as the function evaluation result corresponding to the object to be evaluated.
Optionally, the apparatus 1900 further includes:
the position calibration unit is specifically configured to:
carrying out target area detection on the initially acquired image through a target detection model to obtain a target area, wherein the target area is the position of a detection part corresponding to the specified action in the image;
determining the horizontal position deviation of the target area and the center of an image picture, and controlling and adjusting the shooting angle of shooting equipment according to the horizontal position deviation so that the detection part is positioned at the center of the shooting picture;
and determining the proportion of the target area in the image picture, and controlling and adjusting the focal length of the shooting equipment according to the proportion so that the proportion of the detection part in the shooting picture falls into a preset range.
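A rough sketch of how the deviation and proportion checks just described might drive camera adjustments; the bounding-box format and the target proportion range are assumptions for illustration:

```python
def calibration_commands(bbox, frame_w, frame_h, target_ratio=(0.3, 0.6)):
    """From a detected target box (x, y, w, h), derive the horizontal offset from
    the frame centre (to steer the camera) and a zoom command from the area ratio."""
    x, y, w, h = bbox
    box_cx = x + w / 2.0
    # Horizontal deviation normalized to [-1, 1]: negative = pan left, positive = pan right
    offset = (box_cx - frame_w / 2.0) / (frame_w / 2.0)
    ratio = (w * h) / float(frame_w * frame_h)   # proportion of the frame occupied
    zoom = 0
    if ratio < target_ratio[0]:
        zoom = +1   # zoom in until the detected part occupies enough of the picture
    elif ratio > target_ratio[1]:
        zoom = -1   # zoom out
    return offset, zoom
```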
Optionally, the apparatus 1900 further includes:
the image quality monitoring unit is specifically configured to:
detecting key points of a human body on an initial image acquired initially to obtain human body joint coordinates, and counting according to the human body joint coordinates in the initial image to obtain the number of human body joints;
determining the ratio of the number of the human body joints to the number of joints to be detected corresponding to the specified action;
and when the ratio is larger than the joint detectable rate threshold corresponding to the specified action, determining that the quality of the initially acquired image is qualified and prompting that shooting of the motion video can begin.
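The quality gate can be sketched as a simple ratio test; the joint names and the 0.8 threshold below are hypothetical examples, not values from the patent:

```python
def image_quality_ok(detected_joints, required_joints, detectable_rate_threshold=0.8):
    """Quality gate: the ratio of detected joints to the joints required for the
    specified action must exceed the action's detectable-rate threshold."""
    required = set(required_joints)
    found = required & set(detected_joints)
    ratio = len(found) / len(required) if required else 0.0
    return ratio > detectable_rate_threshold

# e.g. a hypothetical hand action requiring the wrist and five fingertip joints
required = ["wrist", "thumb_tip", "index_tip", "middle_tip", "ring_tip", "pinky_tip"]
detected = ["wrist", "thumb_tip", "index_tip", "middle_tip", "ring_tip"]
print(image_quality_ok(detected, required))  # 5/6 ≈ 0.83 > 0.8 → True
```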
Optionally, the key point detecting unit 1902 is specifically configured to:
inputting an image to be detected into a pre-trained deep learning model, performing feature extraction on the image to be detected through a first network part in the deep learning model to form a primary feature map that is input into a second network part, and performing multi-level, multi-dimensional feature extraction through the second network part according to the image to be detected and the primary feature map to obtain the body joint position response maps output by the last layer of the second network part;
and taking the position coordinates of the maximum value in each body joint position response map as the detected body joint coordinates.
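Taking the per-joint maximum of the response maps reduces to an argmax over each heatmap; a minimal NumPy sketch, assuming heatmaps shaped (joints, height, width):

```python
import numpy as np

def heatmaps_to_coordinates(heatmaps):
    """Convert joint position response maps (J, H, W) to (J, 2) pixel coordinates
    by taking the location of each map's maximum response."""
    joints, h, w = heatmaps.shape
    flat_idx = heatmaps.reshape(joints, -1).argmax(axis=1)
    ys, xs = np.unravel_index(flat_idx, (h, w))
    return np.stack([xs, ys], axis=1)  # (x, y) per joint

# Example: 17 joints on a 64x48 response map
coords = heatmaps_to_coordinates(np.random.rand(17, 64, 48))
```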
The human motion function auxiliary evaluation apparatus shoots the motion video of the object to be evaluated, analyzes the motion video to obtain the motion characteristics of the limbs of the object to be evaluated in the video, generates a motion curve, and evaluates the motion function of the object to be evaluated based on the motion curve, thereby realizing intelligent evaluation of the object's motion function and providing patients and medical staff with reference data for auxiliary evaluation of physical condition.
Meanwhile, the human motion function auxiliary evaluation apparatus can analyze the motion condition of the object to be evaluated and automatically give an evaluation result for reference, so that the object to be evaluated can assess their physical condition at home without going to a hospital for regular examinations. This relieves patients with limited mobility of the burden of regular hospital visits and also assists medical staff in monitoring patients' physical condition.
A server and a terminal device for human motion function auxiliary evaluation are further provided in the embodiments of the present application; they are described below from the perspective of hardware implementation.
Referring to fig. 20, fig. 20 is a schematic structural diagram of a server provided in this embodiment. The server 2000 may vary considerably in configuration or performance, and may include one or more central processing units (CPUs) 2022 (e.g., one or more processors), memory 2032, and one or more storage media 2030 (e.g., one or more mass storage devices) storing application programs 2042 or data 2044. The memory 2032 and storage media 2030 may provide transient or persistent storage. The programs stored in the storage media 2030 may include one or more modules (not shown), each of which may include a series of instruction operations for the server. Further, the central processing unit 2022 may be configured to communicate with the storage media 2030 and execute the series of instruction operations in the storage media 2030 on the server 2000.
The server 2000 may also include one or more power supplies 2026, one or more wired or wireless network interfaces 2050, one or more input/output interfaces 2058, and/or one or more operating systems 2041, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
The steps performed by the server in the above embodiment may be based on the server structure shown in fig. 20.
The CPU 2022 is configured to perform the following steps:
acquiring a motion video to be evaluated, wherein the motion video is obtained by shooting an object to be evaluated performing a specified action;
detecting human body key points in each frame image in the motion video to obtain human body joint coordinates, and selecting the key point coordinates corresponding to the specified action from the human body joint coordinates in each frame image;
determining the motion characteristics of each frame image according to the key point coordinates in each frame image in the motion video;
and generating a motion curve according to the motion characteristics of each frame image in the motion video, wherein the motion curve is used for human motion function auxiliary evaluation processing.
Optionally, the CPU 2022 may further perform the steps of:
extracting curve characteristics according to the motion curve to obtain motion curve characteristics;
and determining a motion function evaluation result corresponding to the object to be evaluated based on the motion curve characteristic through a preset evaluation algorithm.
Optionally, the CPU 2022 may further execute method steps of any specific implementation manner of the human motion function auxiliary evaluation method in this embodiment of the present application.
Referring to fig. 21, fig. 21 is a schematic structural diagram of a terminal device provided in the embodiment of the present application. For convenience of description, only the parts related to the embodiment of the present application are shown; for undisclosed technical details, refer to the method part of the embodiment. The terminal device may be any terminal device including a smart phone, tablet computer, Personal Digital Assistant (PDA), and the like; the terminal being a tablet computer is taken as an example:
fig. 21 is a block diagram illustrating a partial structure of a tablet computer related to a terminal provided in an embodiment of the present application. Referring to fig. 21, the tablet computer includes: radio Frequency (RF) circuit 2110, memory 2120, input unit 2130, display 2140, sensor 2150, audio circuit 2160, wireless fidelity (WiFi) module 2170, processor 2180, and power source 2190. Those skilled in the art will appreciate that the tablet configuration shown in fig. 21 is not intended to be a limitation of a tablet and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the tablet pc in detail with reference to fig. 21:
the memory 2120 may be used to store software programs and modules, and the processor 2180 may execute various functional applications and data processing of the tablet computer by running the software programs and modules stored in the memory 2120. the memory 2120 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, application programs required for at least functions (such as a sound playing function, an image playing function, etc.), etc., the data storage area may store data created according to the use of the cellular phone (such as audio data, a phonebook, etc.), etc.
The processor 2180 is the control center of the tablet computer; it connects the various parts of the whole tablet computer using various interfaces and lines, and performs the various functions of the tablet computer and processes data by running or executing the software programs and/or modules stored in the memory 2120 and calling the data stored in the memory 2120, thereby monitoring the tablet computer as a whole. Optionally, the processor 2180 may include one or more processing units; preferably, the processor 2180 may integrate an application processor, which mainly handles the operating system, user interface and application programs, and a modem processor, which mainly handles wireless communication. It is understood that the modem processor may not be integrated into the processor 2180.
In this embodiment, the processor 2180 is configured to execute the steps of any implementation manners of the human motion function auxiliary evaluation method provided in the embodiment of the present application.
The embodiment of the present application further provides a computer-readable storage medium for storing program code, where the program code is used to execute any implementation of the human motion function auxiliary evaluation methods described in the foregoing embodiments.
The embodiment of the present application further provides a computer program product including instructions which, when run on a computer, cause the computer to execute any implementation of the human motion function auxiliary evaluation methods described in the foregoing embodiments.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
For example, the above-described apparatus embodiments are merely illustrative. The division into units is a logical functional division and may be realized in other ways in practice; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the couplings, direct couplings or communication connections shown or discussed between components may be implemented through interfaces, and indirect couplings or communication connections between units or devices may be electrical, mechanical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (16)

1. A human motion function auxiliary evaluation method, characterized by comprising:
acquiring a motion video to be evaluated, wherein the motion video is obtained by shooting an object to be evaluated performing a specified action;
detecting human body key points in each frame image in the motion video to obtain human body joint coordinates, and selecting the key point coordinates corresponding to the specified action from the human body joint coordinates in each frame image;
determining the motion characteristics of each frame image according to the key point coordinates in each frame image in the motion video;
generating a motion curve according to the motion characteristics of each frame image in the motion video;
extracting curve characteristics from the motion curve to obtain motion curve characteristics;
and determining a motion function evaluation result corresponding to the object to be evaluated based on the motion curve characteristics through a preset evaluation algorithm.
2. The method of claim 1, wherein said determining motion characteristics of each frame image according to said keypoint coordinates of each frame image in said motion video comprises:
selecting a target extraction algorithm corresponding to the specified action from a plurality of preset extraction algorithms;
when the target extraction algorithm is a distance-class extraction algorithm, for each frame image in the motion video, determining a coordinate distance according to the key point coordinates in the frame image as the motion characteristic of that frame image;
when the target extraction algorithm is an area-class extraction algorithm, for each frame image in the motion video, determining the convex hull area according to the key point coordinates in the frame image as the motion characteristic of that frame image;
when the target extraction algorithm is an angle-class extraction algorithm, for each frame image in the motion video, determining a joint angle according to the key point coordinates in the frame image as the motion characteristic of that frame image.
3. The method of claim 1, wherein generating a motion curve according to the motion characteristics of each frame image in the motion video comprises:
determining a normalization coefficient according to the motion characteristics of each frame image in the motion video;
normalizing the motion characteristics of each frame image in the motion video according to the normalization coefficient to obtain normalized motion characteristics;
generating a motion characteristic sequence according to the normalized motion characteristics corresponding to each frame image in the motion video, and performing interpolation and noise reduction processing on the motion characteristic sequence to generate the motion curve.
4. The method of claim 1, wherein the performing curve feature extraction according to the motion curve to obtain motion curve features comprises:
extracting waveform amplitude characteristics from the motion curve, wherein the waveform amplitude characteristics comprise at least one of the maximum value, minimum value, median, standard deviation and amplitude time-series variation characteristics of the motion curve amplitude;
extracting waveform frequency characteristics from the motion curve, wherein the waveform frequency characteristics comprise at least one of the maximum value, minimum value, median, standard deviation and frequency time-series variation characteristics of the motion curve frequency;
extracting waveform anomaly characteristics from the motion curve, wherein the waveform anomaly characteristics comprise the number of data interruptions of the motion curve;
and weighting the waveform amplitude characteristics, the waveform frequency characteristics and the waveform anomaly characteristics to obtain the motion curve characteristics.
5. The method according to claim 1, wherein the determining, by a preset evaluation algorithm, a motion function evaluation result corresponding to the object to be evaluated based on the motion curve feature comprises:
evaluating based on the motion curve characteristics respectively through a preset rule-based evaluation algorithm and a machine learning-based evaluation algorithm to obtain a motion function evaluation result evaluated by each algorithm;
and selecting the motion function evaluation result with the largest occurrence frequency from the motion function evaluation results evaluated by each algorithm as the function evaluation result corresponding to the object to be evaluated.
6. The method of claim 1, further comprising:
carrying out target area detection on the initially acquired image through a target detection model to obtain a target area, wherein the target area is the position of a detection part corresponding to the specified action in the image;
determining the horizontal position deviation of the target area and the center of an image picture, and controlling and adjusting the shooting angle of shooting equipment according to the horizontal position deviation so that the detection part is positioned at the center of the shooting picture;
and determining the proportion of the target area in the image picture, and controlling and adjusting the focal length of the shooting equipment according to the proportion so that the proportion of the detection part in the shooting picture falls into a preset range.
7. The method of claim 6, further comprising:
detecting key points of a human body on an initial image acquired initially to obtain human body joint coordinates, and counting according to the human body joint coordinates in the initial image to obtain the number of human body joints;
determining the ratio of the number of the human body joints to the number of joints to be detected corresponding to the specified action;
and when the ratio is larger than the joint detectable rate threshold corresponding to the specified action, determining that the quality of the initially acquired image is qualified and prompting that shooting of the motion video can begin.
8. The method according to claim 1 or 7, wherein the performing human body key point detection to obtain human body joint coordinates comprises:
inputting an image to be detected into a pre-trained deep learning model, performing feature extraction on the image to be detected through a first network part in the deep learning model to form a primary feature map that is input into a second network part, and performing multi-level, multi-dimensional feature extraction through the second network part according to the image to be detected and the primary feature map to obtain the body joint position response maps output by the last layer of the second network part;
and taking the position coordinates of the maximum value in each body joint position response map as the detected body joint coordinates.
9. A human motion function auxiliary evaluation method, characterized in that the method comprises:
acquiring a motion video to be evaluated, wherein the motion video is obtained by shooting an object to be evaluated performing a specified action;
detecting human body key points in each frame image in the motion video to obtain human body joint coordinates, and selecting the key point coordinates corresponding to the specified action from the human body joint coordinates in each frame image;
determining the motion characteristics of each frame image according to the key point coordinates in each frame image in the motion video;
and generating a motion curve according to the motion characteristics of each frame image in the motion video, wherein the motion curve is used for human motion function auxiliary evaluation processing.
10. A human motion function auxiliary evaluation system, characterized in that the system comprises:
a first video shooting device, a first auxiliary evaluation device and a first video analysis device; wherein,
the first video shooting device is used for shooting an object to be evaluated performing a specified action to obtain a motion video to be evaluated;
the first auxiliary evaluation device is used for sending the motion video to be evaluated to the first video analysis device;
the first video analysis device is used for detecting human body key points in each frame image in the motion video to obtain human body joint coordinates, selecting the key point coordinates corresponding to the specified action from the human body joint coordinates in each frame image, determining the motion characteristics of each frame image according to the key point coordinates in each frame image in the motion video, generating a motion curve according to the motion characteristics of each frame image in the motion video, extracting curve characteristics from the motion curve to obtain motion curve characteristics, determining the motion function evaluation result corresponding to the object to be evaluated based on the motion curve characteristics through a preset evaluation algorithm, and sending the motion curve and the motion function evaluation result corresponding to the object to be evaluated to the first auxiliary evaluation device;
the first auxiliary evaluation device is further configured to display the motion curve and the motion function evaluation result corresponding to the object to be evaluated on an auxiliary evaluation interface.
11. The system according to claim 10, wherein the first auxiliary evaluation device is further configured to display the auxiliary evaluation interface, the auxiliary evaluation interface carrying a motion video viewing control, a motion feature curve viewing control, a history evaluation record viewing control, a score and remark submission control, and an evaluation report viewing control;
when the motion video viewing control is triggered, displaying a motion video selection list, and displaying the selected target motion video;
when the motion characteristic curve viewing control is triggered, displaying a motion characteristic curve selection list, and displaying the selected target motion characteristic curve;
when the grading and remark submitting control is triggered, obtaining and storing currently input grading and remark contents;
when the assessment report viewing control is triggered, a motor function assessment report is displayed.
12. A human motion function auxiliary evaluation system, characterized in that the system comprises:
a second video shooting device and a second video analysis device; wherein,
the second video shooting device is used for shooting an object to be evaluated performing a specified action to obtain a motion video to be evaluated, and sending the motion video to be evaluated to the second video analysis device;
the second video analysis device is used for detecting human body key points in each frame image in the motion video to obtain human body joint coordinates, selecting the key point coordinates corresponding to the specified action from the human body joint coordinates in each frame image, determining the motion characteristics of each frame image according to the key point coordinates in each frame image in the motion video, generating a motion curve according to the motion characteristics of each frame image in the motion video, extracting curve characteristics from the motion curve to obtain motion curve characteristics, determining the motion function evaluation result corresponding to the object to be evaluated based on the motion curve characteristics through a preset evaluation algorithm, and sending the motion curve and the motion function evaluation result corresponding to the object to be evaluated to the second video shooting device;
the second video shooting device is further used for displaying the motion curve and a motion function evaluation result corresponding to the object to be evaluated.
13. The system of claim 12, wherein the second video analysis device is further configured to:
performing anomaly analysis according to the motion function evaluation result corresponding to the object to be evaluated and the historical motion evaluation results corresponding to the object to be evaluated, and sending anomaly prompt information to the second video shooting device when an anomaly is found.
14. The system of claim 13, wherein the second video analysis device is further configured to: and inputting the motion function evaluation result corresponding to the object to be evaluated into a local health management system, and uploading data to a remote management system at regular time through the local health management system.
15. An apparatus, comprising:
a processor and a memory;
wherein the memory is for storing a computer program;
the processor is configured to execute the computer program to implement the method of any of claims 1-9.
16. A computer-readable storage medium, characterized in that the computer-readable storage medium is used for storing a computer program, the computer program being used for performing the method of any one of claims 1 to 9.
CN201911040330.2A 2019-10-29 2019-10-29 Human motion function auxiliary evaluation method, device, equipment, system and medium Pending CN110738192A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911040330.2A CN110738192A (en) 2019-10-29 2019-10-29 Human motion function auxiliary evaluation method, device, equipment, system and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911040330.2A CN110738192A (en) 2019-10-29 2019-10-29 Human motion function auxiliary evaluation method, device, equipment, system and medium

Publications (1)

Publication Number Publication Date
CN110738192A true CN110738192A (en) 2020-01-31

Family

ID=69270247

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911040330.2A Pending CN110738192A (en) 2019-10-29 2019-10-29 Human motion function auxiliary evaluation method, device, equipment, system and medium

Country Status (1)

Country Link
CN (1) CN110738192A (en)

Cited By (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111048205A (en) * 2019-12-17 2020-04-21 创新工场(北京)企业管理股份有限公司 Method and device for assessing symptoms of Parkinson's disease
WO2021155632A1 (en) * 2020-02-03 2021-08-12 北京市商汤科技开发有限公司 Image processing method and apparatus, and electronic device and storage medium
CN111310616A (en) * 2020-02-03 2020-06-19 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN111310616B (en) * 2020-02-03 2023-11-28 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
US11417078B2 (en) 2020-02-03 2022-08-16 Beijing Sensetime Technology Development Co., Ltd. Image processing method and apparatus, and storage medium
CN111460976A (en) * 2020-03-30 2020-07-28 上海交通大学 Data-driven real-time hand motion evaluation method based on RGB video
CN111460976B (en) * 2020-03-30 2023-06-06 上海交通大学 Data-driven real-time hand motion assessment method based on RGB video
CN111488824A (en) * 2020-04-09 2020-08-04 北京百度网讯科技有限公司 Motion prompting method and device, electronic equipment and storage medium
CN111488824B (en) * 2020-04-09 2023-08-08 北京百度网讯科技有限公司 Motion prompting method, device, electronic equipment and storage medium
CN112541382A (en) * 2020-04-13 2021-03-23 深圳优地科技有限公司 Method and system for assisting movement and identification terminal equipment
CN111539352A (en) * 2020-04-27 2020-08-14 支付宝(杭州)信息技术有限公司 Method and system for judging human body joint motion direction
CN113902084A (en) * 2020-07-06 2022-01-07 阿里体育有限公司 Motion counting method and device, electronic equipment and computer storage medium
WO2022022551A1 (en) * 2020-07-29 2022-02-03 清华大学 Method and device for analyzing video for evaluating movement disorder having privacy protection function
CN111938658A (en) * 2020-08-10 2020-11-17 陈雪丽 Joint mobility monitoring system and method for hand, wrist and forearm
CN111938658B (en) * 2020-08-10 2023-09-01 陈雪丽 Joint activity monitoring system and method for hand, wrist and forearm
CN114189509A (en) * 2020-08-24 2022-03-15 株式会社爱克萨威泽资 Information processing method, information processing apparatus, and recording medium
CN111985448A (en) * 2020-09-02 2020-11-24 深圳壹账通智能科技有限公司 Vehicle image recognition method and device, computer equipment and readable storage medium
WO2022088176A1 (en) * 2020-10-29 2022-05-05 Hong Kong Applied Science and Technology Research Institute Company Limited Actional-structural self-attention graph convolutional network for action recognition
CN112543936B (en) * 2020-10-29 2021-09-28 香港应用科技研究院有限公司 Motion structure self-attention-drawing convolution network model for motion recognition
CN112543936A (en) * 2020-10-29 2021-03-23 香港应用科技研究院有限公司 Motion structure self-attention-seeking convolutional network for motion recognition
CN112418153A (en) * 2020-12-04 2021-02-26 上海商汤科技开发有限公司 Image processing method, image processing device, electronic equipment and computer storage medium
CN112418153B (en) * 2020-12-04 2024-06-11 上海商汤科技开发有限公司 Image processing method, device, electronic equipment and computer storage medium
CN112528957A (en) * 2020-12-28 2021-03-19 北京万觉科技有限公司 Human motion basic information detection method and system and electronic equipment
CN112741620A (en) * 2020-12-30 2021-05-04 华南理工大学 Cervical spondylosis evaluation device based on limb movement
CN113095157A (en) * 2021-03-23 2021-07-09 深圳市创乐慧科技有限公司 Image shooting method and device based on artificial intelligence and related products
CN115188063A (en) * 2021-04-06 2022-10-14 广州视源电子科技股份有限公司 Running posture analysis method and device based on running machine, running machine and storage medium
CN112998700B (en) * 2021-05-26 2021-09-24 北京欧应信息技术有限公司 Apparatus, system and method for assisting assessment of a motor function of an object
CN112998700A (en) * 2021-05-26 2021-06-22 北京欧应信息技术有限公司 Apparatus, system and method for assisting assessment of a motor function of an object
CN113397503A (en) * 2021-06-16 2021-09-17 苏州景昱医疗器械有限公司 Control method of household medical equipment and related device
WO2022262495A1 (en) * 2021-06-16 2022-12-22 苏州景昱医疗器械有限公司 Control method and related apparatus for household medical device
WO2023025051A1 (en) * 2021-08-23 2023-03-02 港大科桥有限公司 Video action detection method based on end-to-end framework, and electronic device
CN114267086A (en) * 2021-12-30 2022-04-01 西南石油大学 Execution quality evaluation method for complex continuous motion in motion
CN115063723B (en) * 2022-06-20 2023-10-24 无锡慧眼人工智能科技有限公司 Movement type obstacle defect recognition method based on human body posture estimation
CN115063723A (en) * 2022-06-20 2022-09-16 无锡慧眼人工智能科技有限公司 Method for identifying defects of movement type obstacles based on human body posture estimation
CN115205740A (en) * 2022-07-08 2022-10-18 温州医科大学 Body-building exercise auxiliary teaching method and system
CN116309699A (en) * 2023-02-01 2023-06-23 中国科学院自动化研究所 Method, device and equipment for determining associated reaction degree of target object
CN116309699B (en) * 2023-02-01 2023-11-17 中国科学院自动化研究所 Method, device and equipment for determining associated reaction degree of target object
CN116110584B (en) * 2023-02-23 2023-09-22 江苏万顶惠康健康科技服务有限公司 Human health risk assessment early warning system
CN116110584A (en) * 2023-02-23 2023-05-12 江苏万顶惠康健康科技服务有限公司 Human health risk assessment early warning system

Similar Documents

Publication Publication Date Title
CN110738192A (en) Human motion function auxiliary evaluation method, device, equipment, system and medium
Liao et al. A review of computational approaches for evaluation of rehabilitation exercises
US11763603B2 (en) Physical activity quantification and monitoring
Wang et al. Human posture recognition based on images captured by the kinect sensor
US11759126B2 (en) Scoring metric for physical activity performance and tracking
US11663845B2 (en) Method and apparatus for privacy protected assessment of movement disorder video recordings
Loureiro et al. Using a skeleton gait energy image for pathological gait classification
Williams et al. Assessment of physical rehabilitation movements through dimensionality reduction and statistical modeling
Jianwattanapaisarn et al. Emotional characteristic analysis of human gait while real-time movie viewing
CN112741620A (en) Cervical spondylosis evaluation device based on limb movement
CN116543455A (en) Method, equipment and medium for establishing parkinsonism gait damage assessment model and using same
Kastaniotis et al. Using kinect for assesing the state of Multiple Sclerosis patients
US12009083B2 (en) Remote physical therapy and assessment of patients
Lin et al. A Feasible Fall Evaluation System via Artificial Intelligence Gesture Detection of Gait and Balance for Sub-Healthy Community-Dwelling Older Adults in Taiwan
Rumambi et al. Motion Detection Application to Measure Straight Leg Raise ROM Using MediaPipe Pose
Abraham et al. Ensemble of shape functions and support vector machines for the estimation of discrete arm muscle activation from external biceps 3D point clouds
US20240008803A1 (en) Body fluid volume estimation device, body fluid volume estimation method, and non-transitory computer-readable medium
Moreau et al. A motion recognition technique based on linear matrix representation to improve Parkinson’s disease treatments
Jaleel et al. Body motion detection and tracking using a Kinect sensor
Amprimo et al. Deep learning for hand tracking in Parkinson’s disease video-based assessment: Current and future perspectives
CN116958755A (en) Clinical teaching student transient behavior model evaluation system based on deep learning algorithm
CN117576722A (en) Rehabilitation training method, device, equipment and medium based on image recognition
Ramirez using Human Skeleton Features. In: 11th
Kim et al. TULIP: Multi-camera 3D Precision Assessment of Parkinson's Disease
Boukhennoufa Wearable sensor-based rehabilitation exercise assessment for post-stroke rehabilitation

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40021372
Country of ref document: HK

SE01 Entry into force of request for substantive examination