CN113327247B - Facial nerve function assessment method, device, computer equipment and storage medium

Facial nerve function assessment method, device, computer equipment and storage medium

Info

Publication number
CN113327247B
CN113327247B (application CN202110793517.0A)
Authority
CN
China
Prior art keywords
facial
image
tooth
face
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110793517.0A
Other languages
Chinese (zh)
Other versions
CN113327247A (en)
Inventor
蒋晟 (Jiang Sheng)
吴剑煌 (Wu Jianhuang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS
Priority application: CN202110793517.0A (granted as CN113327247B)
PCT filing: PCT/CN2021/113659 (WO2023284067A1)
Publication of CN113327247A
Application granted
Publication of CN113327247B
Legal status: Active

Classifications

    • G06T 7/0012: Image analysis; inspection of images, e.g. flaw detection; biomedical image inspection
    • G06F 18/2415: Pattern recognition; classification techniques based on parametric or probabilistic models, e.g. likelihood ratio, or false acceptance rate versus false rejection rate
    • G06N 3/04: Computing arrangements based on biological models; neural networks; architecture, e.g. interconnection topology
    • G06N 3/08: Computing arrangements based on biological models; neural networks; learning methods
    • G06V 10/25: Image or video recognition or understanding; image preprocessing; determination of region of interest [ROI] or volume of interest [VOI]
    • G06V 40/168: Recognition of human faces, e.g. facial parts, sketches or expressions; feature extraction; face representation
    • G06T 2207/10016: Image acquisition modality; video; image sequence
    • G06T 2207/20081: Special algorithmic details; training; learning
    • G06T 2207/20104: Special algorithmic details; interactive definition of region of interest [ROI]
    • G06T 2207/30201: Subject of image; human being; face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Human Computer Interaction (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

Embodiments of the invention disclose a facial nerve function assessment method and device, computer equipment, and a storage medium. The method comprises the following steps: acquiring a facial video image of a user completing a preset facial action; preprocessing the facial video image to obtain a facial image that meets a preset amplitude requirement when the user completes the preset facial action; and inputting the facial image into a trained deep learning network model to obtain the user's facial nerve function grade. In the technical scheme provided by the embodiments, the facial video of the user performing the preset facial actions is acquired automatically, and the user's facial nerve function grade is evaluated automatically from that video. This avoids the subjectivity and bias of manual evaluation and unifies the evaluation standard, so the evaluation result is more objective and accurate, a better-suited rehabilitation training prescription can be given afterwards, and the workload of the rehabilitation doctor is greatly reduced.

Description

Facial nerve function assessment method, device, computer equipment and storage medium
Technical Field
Embodiments of the invention relate to the technical field of medical detection, and in particular to a facial nerve function assessment method and device, computer equipment, and a storage medium.
Background
Facial paralysis, also known as peripheral facial paralysis, usually occurs on one side of the face (bilateral facial paralysis is extremely rare). It is a common, frequently occurring disease whose etiology is generally traumatic, iatrogenic, infectious, congenital, toxic, and so on, and it is seen across age groups and sexes. With the fast pace of modern work and life, work pressure on young people keeps rising, and the age of onset is trending younger. Facial paralysis impairs the functions of the eyes and mouth, and because of the visible deformity, patients often develop low self-esteem, social withdrawal, depression, and similar problems. It thus has serious negative effects on patients' physical and mental health and greatly disrupts their daily life, work, and study.
During diagnosis and treatment, a rehabilitation doctor first needs to evaluate the facial nerve function of a patient with facial paralysis, usually by asking the patient to make a series of specific facial actions, observing the asymmetry and degree of movement of the patient's face, and grading the facial nerve function against a scale. The doctor then formulates a targeted facial paralysis rehabilitation training prescription based on the evaluation result and tracks the patient's subsequent recovery through repeated evaluation. An objective and accurate evaluation of the patient's facial nerve function is therefore very important. In practice, however, when different doctors manually evaluate the facial nerve function of the same patient against the same standard, their results often disagree for subjective reasons, so the rehabilitation training prescriptions formulated from those results deviate accordingly. Manual evaluation is also time-consuming and labor-intensive and adds to the rehabilitation doctor's workload.
Disclosure of Invention
Embodiments of the invention provide a facial nerve function assessment method and device, computer equipment, and a storage medium, which improve the accuracy of facial nerve function assessment and greatly reduce the workload of rehabilitation doctors.
In a first aspect, an embodiment of the present invention provides a facial nerve function assessment method, including:
acquiring a face video image of a user completing a preset face action;
preprocessing the face video image to obtain a face image that meets a preset amplitude requirement when the user completes the preset face action; and
inputting the face image into a trained deep learning network model to obtain the facial nerve function grade of the user.
In a second aspect, an embodiment of the present invention further provides a facial nerve function assessment device, including:
a facial video image acquisition module, used for acquiring a face video image of a user completing a preset face action;
a facial image obtaining module, used for preprocessing the face video image to obtain a face image that meets the preset amplitude requirement when the user completes the preset face action; and
a facial nerve function grade obtaining module, used for inputting the face image into the trained deep learning network model to obtain the facial nerve function grade of the user.
In a third aspect, an embodiment of the present invention further provides a computer apparatus, including:
One or more processors;
a memory for storing one or more programs;
The one or more programs, when executed by the one or more processors, cause the one or more processors to implement the facial nerve function assessment method provided by any embodiment of the present invention.
In a fourth aspect, an embodiment of the present invention further provides a computer readable storage medium having stored thereon a computer program, which when executed by a processor, implements the facial nerve function assessment method provided by any embodiment of the present invention.
An embodiment of the invention provides a facial nerve function assessment method that first acquires a facial video image of the user completing a preset facial action, then preprocesses the video to obtain a facial image that meets a preset amplitude requirement for that action, and finally inputs the facial image into a trained deep learning network model to obtain the user's facial nerve function grade. Because the method automatically acquires the facial video of the user completing the preset facial action and automatically evaluates the user's facial nerve function grade from it, the subjectivity and bias of manual evaluation are avoided and the evaluation standard is unified; the evaluation result is more objective and accurate, a better-suited rehabilitation training prescription can be given afterwards, and the rehabilitation doctor's workload is greatly reduced.
Drawings
FIG. 1 is a flowchart of a facial nerve function assessment method according to Embodiment 1 of the present invention;
FIG. 2 is a distribution diagram of exemplary facial feature points according to Embodiment 1 of the present invention;
FIG. 3 is a flowchart of a facial nerve function assessment method according to Embodiment 2 of the present invention;
FIG. 4 is a schematic structural diagram of a facial nerve function assessment device according to Embodiment 3 of the present invention;
FIG. 5 is a schematic structural diagram of a computer device according to Embodiment 4 of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present invention are shown in the drawings.
Before discussing exemplary embodiments in more detail, it should be mentioned that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart depicts steps as a sequential process, many of the steps may be implemented in parallel, concurrently, or with other steps. Furthermore, the order of the steps may be rearranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figures. The processes may correspond to methods, functions, procedures, subroutines, and the like.
Embodiment 1
FIG. 1 is a flowchart of a facial nerve function assessment method according to Embodiment 1 of the present invention. This embodiment is applicable to evaluating the facial nerve function grade of a patient with facial paralysis. The method can be executed by the facial nerve function assessment device provided by the embodiments of the invention; the device can be implemented in hardware and/or software and can generally be integrated in computer equipment. As shown in FIG. 1, the method specifically comprises the following steps:
S11, acquiring a face video image of the user completing the preset face action.
Specifically, the scheme of this embodiment can be implemented as facial nerve function assessment system software installed on a mobile phone. In use, the user is prompted to hold the phone, keep the face at a suitable distance from it under adequate lighting, and face the front camera squarely while performing the preset facial actions; a demonstration of each action can be shown so the user can imitate it. The preset facial actions may include staying calm, lifting the eyebrows, closing the eyes, showing the teeth, and the like, and the user may be prompted to hold each action (other than the calm state) for 3 seconds and repeat it 5 times, with a 3-second interval between repetitions. While the user performs the preset facial actions, the phone captures a color facial video through the front camera and saves it for later use; the resolution of the facial video image may be 600×800.
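To make the capture protocol concrete, the following is a minimal sketch under stated assumptions: it uses OpenCV, which the patent does not prescribe, and the function name, frame rate, and file naming are purely illustrative.

```python
import cv2

ACTIONS = ["calm", "raise_eyebrows", "close_eyes", "show_teeth"]  # preset facial actions

def capture_action(seconds=3, fps=30, path="action.avi"):
    """Record one preset facial action from the front camera (hypothetical helper)."""
    cap = cv2.VideoCapture(0)
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"XVID"), fps, (600, 800))
    for _ in range(seconds * fps):
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.resize(frame, (600, 800))  # match the 600x800 resolution mentioned above
        writer.write(frame)
    writer.release()
    cap.release()

capture_action(path="calm.avi")          # the calm state is captured once
for name in ACTIONS[1:]:
    for rep in range(5):                 # each remaining action is repeated 5 times
        capture_action(path=f"{name}_{rep}.avi")
```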
S12, preprocessing the face video image to obtain a face image which meets the requirement of preset amplitude when the user finishes the preset face action.
Specifically, the evaluation first preprocesses the acquired facial video image to obtain facial images that meet the requirement, so that they can serve as input to the subsequent deep learning network model. A facial image meeting the requirement can specifically be the image of largest amplitude for each preset facial action (eyebrow lifting, eye closing, tooth showing, and so on), i.e. the preset amplitude requirement is maximum amplitude. Alternatively, the preset amplitude requirement can be exceeding a preset amplitude threshold, in which case one facial image exceeding the threshold can be selected at random from each completion of a preset facial action.
Optionally, preprocessing the facial video image to obtain a facial image meeting the preset amplitude requirement when the user completes the preset facial action includes: acquiring the two-dimensional coordinates of the facial feature points on each frame of the facial video image; and determining, from those two-dimensional coordinates, the facial image that meets the preset amplitude requirement in each completion of the preset facial action by the user.
Specifically, the two-dimensional coordinates of 68 facial feature points can be obtained for each frame of the facial video image. The distribution of the 68 feature points is shown in FIG. 2, with the upper-left corner of the image as the coordinate origin, rightward as the positive direction of the horizontal axis, and downward as the positive direction of the vertical axis. The facial feature points are distributed over the facial organs and the facial contour, so the motion state of each facial organ can be described by them. While the user performs a preset facial action, the positions of the feature points change from frame to frame, so the facial image meeting the preset amplitude requirement in each completion of the action can be determined from the two-dimensional coordinates of the feature points (specifically, from their changes). Further optionally, before acquiring the two-dimensional coordinates of the facial feature points on each frame, the method also includes: performing face alignment on each frame of image to find the region of interest; and cropping each frame according to the region of interest and a preset resolution. Specifically, the preprocessing can first perform face alignment on each frame of the facial video image, find the region of interest (i.e. the facial region of the facial paralysis patient), and uniformly crop each frame to a resolution of 250×250 according to that region. The size of the cropped region of interest should be kept consistent so that the motion amplitudes of the preset facial actions can be calculated from the feature points; correspondingly, the upper-left corner of the cropped image is taken as the coordinate origin, rightward as the positive horizontal axis, and downward as the positive vertical axis.
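The patent does not name a landmark detector, but the 68-point layout it describes matches the widely used dlib shape predictor, so this preprocessing step might be sketched as follows; the model file, the crop strategy, and the coordinate rescaling are assumptions.

```python
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
# 68-point model assumed; shape_predictor_68_face_landmarks.dat must be downloaded.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def align_and_landmark(frame, size=250):
    """Return a 250x250 face crop and the 68 landmark (x, y) pairs inside it."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if not faces:
        return None, None
    roi = faces[0]  # region of interest: the detected face box
    t, l = max(roi.top(), 0), max(roi.left(), 0)
    crop = cv2.resize(frame[t:roi.bottom(), l:roi.right()], (size, size))
    shape = predictor(gray, roi)
    # Rescale landmarks into the crop; origin is the crop's upper-left corner,
    # x grows rightward and y grows downward, as described above.
    sx, sy = size / roi.width(), size / roi.height()
    pts = [((p.x - l) * sx, (p.y - t) * sy) for p in shape.parts()]
    return crop, pts
```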
Further optionally, the preset facial actions include an eyebrow-lifting action, an eye-closing action, and a tooth-showing action, and the facial feature points include eyebrow-lifting feature points, eye-closing feature points, and tooth-showing feature points. Correspondingly, determining, from the two-dimensional coordinates, the facial image meeting the preset amplitude requirement in each completion of a preset facial action includes: determining, from the two-dimensional coordinates of the eyebrow-lifting feature points, the eyebrow-lifting facial image corresponding to the maximum healthy-side eyebrow-lifting feature value in each completion of the eyebrow-lifting action; determining, from the two-dimensional coordinates of the eye-closing feature points, the eye-closing facial image corresponding to the maximum healthy-side eye-closing feature value in each completion of the eye-closing action; and determining, from the two-dimensional coordinates of the tooth-showing feature points, the tooth-showing facial image corresponding to the maximum healthy-side tooth-showing feature value in each completion of the tooth-showing action.
Specifically, after the two-dimensional coordinates of the facial feature points on each frame are obtained, the feature values for the eyebrow-lifting, eye-closing, and tooth-showing actions can be calculated separately, so as to identify the frame with the largest motion amplitude for each action and save the facial image at that moment. For the eyebrow-lifting motion, the eyebrow-lifting facial image corresponding to the maximum healthy-side eyebrow-lifting feature value during one completion of the action can be determined from the two-dimensional coordinates of the eyebrow-lifting feature points. As shown in FIG. 2, the eyebrow-lifting feature points can include the left points (17, 18, 19, 20, 21) and the right points (22, 23, 24, 25, 26); the left eyebrow-lifting feature value can be BrowL = (y17 + y18 + y19 + y20 + y21)/5 and the right one BrowR = (y22 + y23 + y24 + y25 + y26)/5, where y denotes the ordinate of the corresponding feature point. If the patient has right-sided facial paralysis, the healthy-side eyebrow-lifting feature value is BrowH = BrowL; if left-sided, BrowH = BrowR. One eyebrow-lifting action is defined as starting from the calm state, lifting the eyebrows, and returning to calm. While the user completes one such action, the maximum BrowHmax of the healthy-side feature values over the frames is calculated, and the system captures the image at that moment and saves it as the eyebrow-lifting facial image; the peak-amplitude image of each of the 5 eyebrow-lifting repetitions can be saved.
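Following the definitions above (and assuming the landmark list produced by the earlier preprocessing sketch), the peak-frame selection for the eyebrow-lifting action might look like this; the docstring flags one point the translated text leaves open.

```python
def brow_feature(pts, side):
    """Mean ordinate of the five eyebrow landmarks: 17-21 (left) or 22-26 (right)."""
    idx = range(17, 22) if side == "left" else range(22, 27)
    return sum(pts[i][1] for i in idx) / 5.0

def peak_brow_frame(frames_pts, paralysis_side):
    """Index of the frame where the healthy-side value BrowH peaks in one action.

    With a downward-pointing y-axis a raised brow has a smaller ordinate, so the
    extremum may in fact be a minimum; the translated text says maximum, and
    that convention is followed here as written (assumption)."""
    healthy = "left" if paralysis_side == "right" else "right"  # BrowH = BrowL or BrowR
    values = [brow_feature(p, healthy) for p in frames_pts]     # one value per frame
    return max(range(len(values)), key=values.__getitem__)      # frame of BrowHmax
```

The eye-closing and tooth-showing actions below reuse exactly this select-the-peak pattern with their own feature definitions, so they are not repeated in code.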
For the eye-closing motion, the eye-closing facial image corresponding to the maximum healthy-side eye-closing feature value during one completion of the action can be determined from the two-dimensional coordinates of the eye-closing feature points. As shown in FIG. 2, the eye-closing feature points can include the left points (37, 38, 40, 41) and the right points (43, 44, 46, 47); the left eye-closing feature value can be EyeL = [(y41 + y40) − (y37 + y38)]/2 and the right one EyeR = [(y47 + y46) − (y43 + y44)]/2, where y denotes the ordinate of the corresponding feature point. If the patient has right-sided facial paralysis, the healthy-side eye-closing feature value is EyeH = EyeL; if left-sided, EyeH = EyeR. One eye-closing action is defined as starting from the calm state, closing the eyes, and returning to calm. While the user completes one such action, the maximum EyeHmax of the healthy-side feature values over the frames is calculated, and the system captures the image at that moment and saves it as the eye-closing facial image; the peak-amplitude image of each of the 5 eye-closing repetitions can be saved.
For the tooth-showing action, the tooth-showing facial image corresponding to the maximum healthy-side tooth-showing feature value during one completion of the action can be determined from the two-dimensional coordinates of the tooth-showing feature points. As shown in FIG. 2, the tooth-showing feature points can include the left points (48, 49, 59) and the right points (53, 54, 55); the left tooth-showing feature value MouthL and the right tooth-showing feature value MouthR are computed analogously from the two-dimensional coordinates (both x and y) of these points, where y denotes the ordinate and x the abscissa of the corresponding feature point. If the patient has right-sided facial paralysis, the healthy-side tooth-showing feature value is MouthH = MouthL; if left-sided, MouthH = MouthR. One tooth-showing action is defined as starting from the calm state, showing the teeth, and returning to calm. While the user completes one such action, the maximum MouthHmax of the healthy-side feature values over the frames is calculated, and the system captures the image at that moment and saves it as the tooth-showing facial image; the peak-amplitude image of each of the 5 tooth-showing repetitions can be saved.
Further optionally, after preprocessing the facial video image, the method also includes obtaining a calm facial image of the user, and the tooth-showing feature points include a first feature point. Correspondingly, determining, from the two-dimensional coordinates of the tooth-showing feature points, the tooth-showing facial image corresponding to the maximum healthy-side tooth-showing feature value in each completion of the tooth-showing action includes: comparing the first ordinate of the first feature point in the tooth-showing facial image with the second ordinate of the first feature point in the calm facial image; and if the difference between the first ordinate and the second ordinate is greater than a preset threshold, determining that the content shown in the tooth-showing facial image is not a tooth-showing action.
Specifically, a calm facial image of the user in the calm state can be captured from the facial video image, and whether the action currently performed by the user is a tooth-showing action can be judged from the first of the tooth-showing feature points; as shown in FIG. 2, the first feature point can include the left first feature point (48) and the right first feature point (54). If the patient has right-sided facial paralysis, the ordinate y48 of the left first feature point in the tooth-showing facial image corresponding to the maximum healthy-side tooth-showing feature value is first determined, and the ordinate yrest-48 of the left first feature point in the calm facial image is obtained; if y48 − yrest-48 > γ, where γ denotes the preset threshold, the content shown in the currently determined tooth-showing facial image can be judged not to be a tooth-showing action, i.e. the user may not actually be showing the teeth (and may merely be opening the jaw), and the user can then be prompted to redo the action. Similarly, if the patient has left-sided facial paralysis, the ordinate y54 of the right first feature point in the tooth-showing facial image corresponding to the maximum healthy-side tooth-showing feature value is determined, the ordinate yrest-54 of the right first feature point in the calm facial image is obtained, and if y54 − yrest-54 > γ, the content shown in the currently determined tooth-showing facial image is judged not to be a tooth-showing action and the action needs to be redone.
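A sketch of this jaw-opening check, with the threshold γ left as an assumed constant since the patent does not give a value:

```python
GAMMA = 15.0  # preset threshold γ in pixels; the value is an assumption

def is_tooth_showing(show_pts, calm_pts, paralysis_side, gamma=GAMMA):
    """Return False when the healthy-side mouth corner dropped too far versus calm.

    Point 48 is the left first feature point, 54 the right one; a large downward
    shift (y grows downward) suggests the user merely opened the jaw."""
    i = 48 if paralysis_side == "right" else 54
    return (show_pts[i][1] - calm_pts[i][1]) <= gamma
```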
S13, inputting the facial image into the trained deep learning network model to obtain the facial nerve function grade of the user.
Specifically, after a facial image meeting the preset amplitude requirement is obtained, it can be input into the trained deep learning network model so that the model outputs a prediction of the user's facial nerve function grade. The facial images may include an eyebrow-lifting facial image, an eye-closing facial image, a tooth-showing facial image, and the like; the different facial images of the same user can each be input into the deep learning network model to obtain their respective grade predictions, and the arithmetic mean of those predictions can be taken as the finally determined facial nerve function grade of the user. The output grade can be divided into 6 levels, from grade 1 to grade 6, representing normal, mild dysfunction, moderate dysfunction, and so on, down to complete loss of function.
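The per-image predictions are then merged by a plain arithmetic mean; a one-function sketch:

```python
def final_grade(predictions):
    """Average the per-image grade predictions and clamp to the 6-grade scale."""
    mean = sum(predictions) / len(predictions)
    return max(1, min(6, round(mean)))

print(final_grade([3, 4, 3]))  # e.g. brow/eye/mouth predictions -> grade 3
```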
In the technical scheme provided by this embodiment, the facial video image of the user completing a preset facial action is first acquired, the video is preprocessed to obtain a facial image meeting the preset amplitude requirement for that action, and the facial image is then input into the trained deep learning network model to obtain the user's facial nerve function grade. Because the facial video of the user's preset facial actions is acquired automatically and the user's facial nerve function grade is evaluated from it automatically, the subjectivity and bias of manual evaluation are avoided and the evaluation standard is unified; the result is more objective and accurate, a better-suited rehabilitation training prescription can be given afterwards, and the rehabilitation doctor's workload is greatly reduced.
Embodiment 2
FIG. 3 is a flowchart of a facial nerve function assessment method according to Embodiment 2 of the present invention. This embodiment further refines the technical scheme above. Optionally, the facial images include an eyebrow-lifting facial image, an eye-closing facial image, and a tooth-showing facial image, and the deep learning network model is a two-way network model. Correspondingly, inputting the facial image into the trained deep learning network model to obtain the user's facial nerve function grade includes: inputting two facial images of different types into the trained two-way network model to obtain the facial nerve function grade. As shown in FIG. 3, the method specifically comprises the following steps:
S31, acquiring a face video image of the user completing the preset face action.
S32, preprocessing the facial video image to obtain a facial image meeting the preset amplitude requirement when the user completes the preset facial action; the facial images include an eyebrow-lifting facial image, an eye-closing facial image, and a tooth-showing facial image.
S33, inputting two facial images of different types into the trained two-way network model to obtain the facial nerve function grade.
Specifically, since facial nerve function assessment requires the patient to make different facial actions (eyebrow lifting, eye closing, and tooth showing), a two-way network can be used when designing the deep learning network model: two different branches learn different feature information along different paths, which facilitates the subsequent classification. For example, VGG19 and ResNet can serve as the two feature extraction branches, although this embodiment is not limited to these. After the two kinds of feature information are learned, they are combined, a global average pooling (GAP) step is applied, and a softmax classifier then outputs the final facial nerve function grade.
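A hedged PyTorch sketch of such a two-way network is shown below. The patent names only VGG19 and ResNet as candidate branches plus a GAP-and-softmax head; the ResNet variant, the per-branch pooling order, and the layer sizes are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

class TwoWayNet(nn.Module):
    """Two branches extract different features; their pooled vectors are fused
    and classified into 6 facial nerve function grades."""
    def __init__(self, num_grades=6):
        super().__init__()
        self.branch_a = models.vgg19(weights=None).features        # -> (512, h, w)
        self.branch_b = nn.Sequential(                             # ResNet50 trunk -> (2048, h, w)
            *list(models.resnet50(weights=None).children())[:-2])
        self.gap = nn.AdaptiveAvgPool2d(1)                         # global average pooling
        self.head = nn.Linear(512 + 2048, num_grades)

    def forward(self, img_a, img_b):
        fa = self.gap(self.branch_a(img_a)).flatten(1)
        fb = self.gap(self.branch_b(img_b)).flatten(1)
        return self.head(torch.cat([fa, fb], dim=1))  # softmax applied at inference

model = TwoWayNet()
a = torch.randn(1, 3, 250, 250)  # e.g. an eyebrow-lifting image
b = torch.randn(1, 3, 250, 250)  # e.g. an eye-closing image
print(model(a, b).shape)         # torch.Size([1, 6])
```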
Optionally, before inputting two facial images of different types into the trained two-way network model to obtain the facial nerve function grade, the method also includes: acquiring the facial images of a number of facial paralysis patients, with the facial nerve function grades obtained by corresponding manual evaluation as label data; and training the two-way network model on those facial images and label data.
Specifically, before actual use, the required two-way network model can first be constructed and trained. A large number of eyebrow-lifting, eye-closing, and tooth-showing facial images of facial paralysis patients are saved using the preprocessing method above; doctors evaluate the facial nerve function of these patients manually in advance, and the evaluated grades serve as label data. The saved images and labels then form a training set: the two-way network model takes as input two images of different types (from eyebrow lifting, eye closing, and tooth showing) under the same facial nerve function grade, is trained with a supervised deep learning method in that input order, and uses the doctor-evaluated grade of the corresponding patient as the label, i.e. the output is the facial nerve function grade. After training, the trained two-way network model is obtained.
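A correspondingly hedged supervised training loop over (image pair, doctor-assigned grade) samples, with random tensors standing in for the stored patient images:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

model = TwoWayNet()  # the class from the two-way network sketch above

# Toy stand-ins for saved peak images of two action types and doctor labels.
imgs_a = torch.randn(32, 3, 250, 250)
imgs_b = torch.randn(32, 3, 250, 250)
labels = torch.randint(0, 6, (32,))  # grades 1-6 stored as class indices 0-5
loader = DataLoader(TensorDataset(imgs_a, imgs_b, labels), batch_size=8, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()  # supervised learning on labelled grades

model.train()
for epoch in range(2):  # illustrative only; real training needs far more data
    for xa, xb, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(xa, xb), y)
        loss.backward()
        optimizer.step()
```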
Once the trained two-way network model is obtained, it can be used to evaluate patients in real time. After the patient performs all evaluation actions, for example 5 eyebrow lifts, 5 eye closures, and 5 tooth-showing actions, the system can randomly divide the 5 saved peak images of each action into 5 groups, each containing one eyebrow-lifting, one eye-closing, and one tooth-showing facial image. From each of the 5 groups the system then randomly selects 2 images of different types (two of eyebrow lifting, eye closing, and tooth showing), forming 5 new pairs as input to the two-way network model. Finally, the arithmetic mean of the 5 outputs obtained from the 5 inputs is taken and rounded, giving the finally predicted facial nerve function grade.
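The grouping-and-averaging inference described above might be sketched as follows, again assuming the TwoWayNet class from earlier; the dictionary layout of the saved images is an assumption.

```python
import random
import torch

ACTION_KEYS = ["brow", "eye", "mouth"]

def predict_grade(saved, model):
    """saved maps each action to its 5 peak-amplitude image tensors (3x250x250).

    Builds 5 groups, draws two different action types per group, and averages
    and rounds the 5 predicted grades."""
    model.eval()
    grades = []
    with torch.no_grad():
        for i in range(5):
            kind_a, kind_b = random.sample(ACTION_KEYS, 2)     # two different types
            xa = saved[kind_a][i].unsqueeze(0)
            xb = saved[kind_b][i].unsqueeze(0)
            grades.append(model(xa, xb).argmax(1).item() + 1)  # classes 0-5 -> grades 1-6
    return round(sum(grades) / len(grades))
```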
In the technical scheme provided by this embodiment, the user's facial nerve function grade is predicted with facial images of different types and a two-way network model, so the patient's actual condition is considered from multiple kinds of features and the accuracy of the evaluation result is further improved.
Embodiment 3
FIG. 4 is a schematic structural diagram of a facial nerve function assessment device according to Embodiment 3 of the present invention. The device can be implemented in hardware and/or software and can generally be integrated in computer equipment to execute the facial nerve function assessment method provided by any embodiment of the invention. As shown in FIG. 4, the device includes:
a facial video image acquisition module 41, configured to acquire a facial video image of the user completing a preset facial action;
a facial image obtaining module 42, configured to preprocess the facial video image to obtain a facial image that meets the preset amplitude requirement when the user completes the preset facial action; and
a facial nerve function grade obtaining module 43, configured to input the facial image into the trained deep learning network model to obtain the user's facial nerve function grade.
In the technical scheme provided by this embodiment, the facial video image of the user completing a preset facial action is first acquired, the video is preprocessed to obtain a facial image meeting the preset amplitude requirement for that action, and the facial image is then input into the trained deep learning network model to obtain the user's facial nerve function grade. Because the facial video of the user's preset facial actions is acquired automatically and the user's facial nerve function grade is evaluated from it automatically, the subjectivity and bias of manual evaluation are avoided and the evaluation standard is unified; the result is more objective and accurate, a better-suited rehabilitation training prescription can be given afterwards, and the rehabilitation doctor's workload is greatly reduced.
On the basis of the above technical solution, optionally, the facial image obtaining module 42 includes:
a feature point coordinate acquisition submodule, configured to acquire the two-dimensional coordinates of the facial feature points on each frame of image in the facial video image; and
a facial image determining submodule, configured to determine, according to the two-dimensional coordinates, the facial image meeting the preset amplitude requirement in each completion of the preset facial action by the user.
On the basis of the technical scheme above, optionally, the preset facial actions include an eyebrow-lifting action, an eye-closing action, and a tooth-showing action, and the facial feature points include eyebrow-lifting feature points, eye-closing feature points, and tooth-showing feature points;
accordingly, the facial image determination submodule includes:
an eyebrow-lifting facial image determining unit, configured to determine, according to the two-dimensional coordinates of the eyebrow-lifting feature points, the eyebrow-lifting facial image corresponding to the maximum healthy-side eyebrow-lifting feature value in each completion of the eyebrow-lifting action;
an eye-closing facial image determining unit, configured to determine, according to the two-dimensional coordinates of the eye-closing feature points, the eye-closing facial image corresponding to the maximum healthy-side eye-closing feature value in each completion of the eye-closing action; and
a tooth-showing facial image determining unit, configured to determine, according to the two-dimensional coordinates of the tooth-showing feature points, the tooth-showing facial image corresponding to the maximum healthy-side tooth-showing feature value in each completion of the tooth-showing action.
On the basis of the technical scheme above, optionally, the facial image obtaining module 42 is further configured to obtain a calm facial image of the user after preprocessing the facial video image, and the tooth-showing feature points include a first feature point;
correspondingly, the tooth-showing facial image determining unit includes:
a coordinate comparison subunit, configured to compare the first ordinate of the first feature point in the tooth-showing facial image with the second ordinate of the first feature point in the calm facial image; and
a tooth-showing action judging subunit, configured to determine that the content shown in the tooth-showing facial image is not a tooth-showing action if the difference between the first ordinate and the second ordinate is greater than the preset threshold.
On the basis of the above technical solution, optionally, the facial image obtaining module 42 further includes:
a face alignment submodule, configured to perform face alignment on each frame of image, before the two-dimensional coordinates of the facial feature points on each frame of the facial video image are acquired, so as to find a region of interest; and
an image cropping submodule, configured to crop each frame of image according to the region of interest and a preset resolution.
On the basis of the technical scheme above, optionally, the facial images include an eyebrow-lifting facial image, an eye-closing facial image, and a tooth-showing facial image, and the deep learning network model is a two-way network model;
correspondingly, the facial nerve function grade obtaining module 43 is specifically configured to:
input two facial images of different types into the trained two-way network model to obtain the facial nerve function grade.
On the basis of the above technical solution, optionally, the facial nerve function assessment device further includes:
a training data acquisition module, configured to acquire the facial images of a number of facial paralysis patients before two facial images of different types are input into the trained two-way network model to obtain the facial nerve function grade, with the facial nerve function grades obtained by corresponding manual evaluation as label data; and
and the model training module is used for training the two-way network model according to the facial images of the facial paralysis patients and the label data.
The facial nerve function assessment device provided by the embodiment of the invention can execute the facial nerve function assessment method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
It should be noted that, in the above embodiment of the facial nerve function assessment device, each unit and module included is divided according to the functional logic only, but is not limited to the above division, as long as the corresponding function can be realized; in addition, the specific names of the functional units are also only for distinguishing from each other, and are not used to limit the protection scope of the present invention.
Embodiment 4
FIG. 5 is a schematic structural diagram of a computer device according to Embodiment 4 of the present invention, showing a block diagram of an exemplary computer device suitable for implementing embodiments of the invention. The computer device shown in FIG. 5 is merely an example and should not impose any limitation on the functionality or scope of use of embodiments of the invention. As shown in FIG. 5, the computer device includes a processor 51, a memory 52, an input device 53, and an output device 54; the number of processors 51 in the computer device may be one or more (one processor 51 is taken as an example in FIG. 5), and the processor 51, memory 52, input device 53, and output device 54 may be connected by a bus or by other means (a bus connection is taken as an example in FIG. 5).
The memory 52 is a computer-readable storage medium that can be used to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the facial nerve function assessment method in the embodiments of the present invention (e.g., the facial video image acquisition module 41, the facial image obtaining module 42, and the facial nerve function grade obtaining module 43 in the facial nerve function assessment device). By running the software programs, instructions, and modules stored in the memory 52, the processor 51 executes the various functional applications and data processing of the computer device, i.e. implements the facial nerve function assessment method described above.
The memory 52 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, at least one application program required for functions; the storage data area may store data created according to the use of the computer device, etc. In addition, memory 52 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, memory 52 may further comprise memory remotely located from processor 51, which may be connected to the computer device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 53 can be used to acquire the facial video image of the user completing a preset facial action and to generate key-signal inputs related to user settings and function control of the computer device. The output device 54 includes a display screen or the like that can present the final facial nerve function grade prediction to the user.
Embodiment 5
Embodiment 5 of the present invention also provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform a facial nerve function assessment method comprising:
acquiring a face video image of a user completing a preset face action;
preprocessing the face video image to obtain a face image that meets the preset amplitude requirement when the user completes the preset face action; and
inputting the face image into a trained deep learning network model to obtain the facial nerve function grade of the user.
The storage medium may be any of various types of memory devices or storage devices. The term "storage medium" is intended to include: installation media such as CD-ROMs, floppy disks, or tape devices; computer system memory or random access memory such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, and the like; non-volatile memory such as flash memory or magnetic media (e.g., a hard disk), or optical storage; and registers or other similar types of memory elements. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in the computer system in which the program is executed, or in a different, second computer system connected to the first computer system through a network (such as the Internet); the second computer system may then provide the program instructions to the first computer for execution. The term "storage medium" may include two or more storage media residing in different locations (e.g., in different computer systems connected by a network). The storage medium may store program instructions (e.g., embodied as a computer program) executable by one or more processors.
Of course, the storage medium containing the computer executable instructions provided in the embodiments of the present invention is not limited to the above-described method operations, and may also perform the related operations in the facial nerve function assessment method provided in any embodiment of the present invention.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
From the above description of the embodiments, those skilled in the art will clearly understand that the present invention may be implemented by means of software plus necessary general-purpose hardware, or of course by hardware, though in many cases the former is the preferred implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied as a software product. That product may be stored in a computer-readable storage medium such as a floppy disk, read-only memory (ROM), random access memory (RAM), flash memory (FLASH), hard disk, or optical disk of a computer, and includes several instructions for causing a computer device (a personal computer, a server, a network device, or the like) to execute the methods of the embodiments of the present invention.
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will understand that the invention is not limited to the particular embodiments described here, and that various obvious changes, rearrangements, and substitutions can be made without departing from the scope of the invention. Therefore, while the invention has been described in some detail through the above embodiments, it is not limited to them and may include other equivalent embodiments without departing from its concept; its scope is determined by the appended claims.

Claims (7)

1. A method for evaluating facial nerve function, comprising:
acquiring a face video image of a user completing a preset face action;
preprocessing the face video image to obtain a face image that meets a preset amplitude requirement when the user completes the preset face action;
inputting the facial image into a trained deep learning network model to obtain the facial nerve function grade of the user;
wherein the preprocessing the face video image to obtain a face image meeting the preset amplitude requirement when the user completes the preset face action comprises:
acquiring two-dimensional coordinates of facial feature points on each frame of image in the face video image, wherein the facial feature points are distributed over the facial organs and the facial contour; and
determining, according to the two-dimensional coordinates, the face image meeting the preset amplitude requirement in each completion of the preset face action by the user;
wherein the facial feature points comprise eyebrow-lifting feature points, eye-closing feature points and tooth-showing feature points;
correspondingly, the determining, according to the two-dimensional coordinates, the face image meeting the preset amplitude requirement in each completion of the preset face action comprises:
determining, according to the two-dimensional coordinates of the eyebrow-lifting feature points, an eyebrow-lifting face image corresponding to the maximum healthy-side eyebrow-lifting feature value in each completion of the eyebrow-lifting action;
determining, according to the two-dimensional coordinates of the eye-closing feature points, an eye-closing face image corresponding to the maximum healthy-side eye-closing feature value in each completion of the eye-closing action; and
determining, according to the two-dimensional coordinates of the tooth-showing feature points, a tooth-showing face image corresponding to the maximum healthy-side tooth-showing feature value in each completion of the tooth-showing action;
wherein, after the preprocessing of the face video image, the method further comprises obtaining a calm face image of the user, and the tooth-showing feature points comprise a first feature point;
correspondingly, the determining, according to the two-dimensional coordinates of the tooth-showing feature points, of the tooth-showing face image corresponding to the maximum healthy-side tooth-showing feature value in each completion of the tooth-showing action comprises:
comparing a first ordinate of the first feature point in the tooth-showing face image with a second ordinate of the first feature point in the calm face image; and
if the difference between the first ordinate and the second ordinate is greater than a preset threshold, determining that the content shown in the tooth-showing face image is not a tooth-showing action;
and the determining, according to the two-dimensional coordinates of the tooth-showing feature points, of the tooth-showing face image corresponding to the maximum healthy-side tooth-showing feature value in each completion of the tooth-showing action further comprises:
capturing a calm face image of the user in a calm state from the face video image, and judging, through the first feature point among the tooth-showing feature points, whether the action currently performed by the user is a tooth-showing action.
2. The facial nerve function assessment method according to claim 1, further comprising, before the acquiring of the two-dimensional coordinates of the facial feature points on each frame of image in the face video image:
performing face alignment on each frame of image to find a region of interest; and
cropping each frame of image according to the region of interest and a preset resolution.
3. The facial nerve function assessment method according to claim 1, wherein the face image comprises an eyebrow-lifting face image, an eye-closing face image and a tooth-showing face image, and the deep learning network model is a two-way network model;
correspondingly, the inputting of the face image into the trained deep learning network model to obtain the facial nerve function grade of the user comprises:
inputting two face images of different types into the trained two-way network model to obtain the facial nerve function grade.
4. The facial nerve function assessment method according to claim 3, further comprising, before the inputting of two face images of different types into the trained two-way network model to obtain the facial nerve function grade:
acquiring the face images of a plurality of facial paralysis patients, with the facial nerve function grades obtained by corresponding manual evaluation as label data; and
training the two-way network model according to the face images of the plurality of facial paralysis patients and the label data.
5. A facial nerve function evaluation device, comprising:
a facial video image acquisition module, used for acquiring a face video image of a user completing a preset face action;
a facial image obtaining module, used for preprocessing the face video image to obtain a face image that meets a preset amplitude requirement when the user completes the preset face action; and
a facial nerve function grade obtaining module, used for inputting the face image into the trained deep learning network model to obtain the facial nerve function grade of the user;
the face image obtaining module includes:
The feature point coordinate acquisition sub-module is used for acquiring the two-dimensional coordinates of the facial feature points on each frame of image in the facial video image; wherein the facial feature points are distributed relative to each facial organ and facial contour;
The facial image determining submodule is used for determining facial images meeting the preset amplitude requirement in the process of completing the preset facial action every time of the user according to the two-dimensional coordinates;
the facial feature points comprise eyebrow lifting feature points, eye closing feature points and tooth indicating feature points;
Accordingly, the facial image determination submodule includes:
The eyebrow lifting facial image determining unit is used for determining an eyebrow lifting facial image corresponding to the eyebrow lifting feature value on the healthy side in the process of completing the eyebrow lifting action every time according to the two-dimensional coordinates of the eyebrow lifting feature points;
an eye-closing face image determining unit, configured to determine an eye-closing face image corresponding to a case where the eye-closing feature value of the healthy side is maximum in the process of completing the eye-closing action every time by the user according to the two-dimensional coordinates of the eye-closing feature points; and
The tooth indicating face image determining unit is used for determining a tooth indicating face image corresponding to the maximum tooth indicating characteristic value of the healthy side in the process of completing the tooth indicating action every time according to the two-dimensional coordinates of the tooth indicating characteristic points;
The facial image acquisition module is further configured to: after the preprocessing of the face video image, further comprising: obtaining a calm face image of a user; the tooth-showing feature points comprise first feature points;
Correspondingly, the tooth flank image determination unit comprises:
A coordinate comparison subunit for comparing a first ordinate of the first feature point in the tooth-showing face image with a second ordinate of the first feature point in the calm face image;
a tooth indicating action judging subunit, configured to determine that the content displayed by the tooth indicating face image is not tooth indicating action if the difference between the first ordinate and the second ordinate is greater than a preset threshold;
the tooth flank image determination unit is further configured to:
And intercepting a calm face image of the user in a calm state from the face video image, and judging whether the action currently performed by the user is a tooth-showing action or not through a first characteristic point in the tooth-showing characteristic points.
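The three determining units in claim 5 share one pattern: scan the frames of each repetition of an action and keep the one where the healthy-side feature value peaks. A minimal sketch, with a hypothetical eyebrow-lifting feature (vertical brow-to-eye distance at illustrative 68-point landmark indices; neither the indices nor the feature definition is taken from the patent):

```python
import numpy as np

def select_peak_frame(frames: list, landmarks_per_frame: list, feature_fn) -> np.ndarray:
    """Return the frame whose healthy-side feature value is largest,
    i.e., the frame captured at the peak of the facial action."""
    values = [feature_fn(pts) for pts in landmarks_per_frame]
    return frames[int(np.argmax(values))]

def eyebrow_lift_value(pts: np.ndarray, brow_idx: int = 19, eye_idx: int = 37) -> float:
    """Hypothetical eyebrow-lifting feature: how far the brow landmark sits
    above the eye landmark (image y grows downward, hence eye minus brow)."""
    return float(pts[eye_idx, 1] - pts[brow_idx, 1])
```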
6. A computer device, comprising:
one or more processors; and
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the facial nerve function assessment method according to any one of claims 1-4.
7. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the facial nerve function assessment method according to any one of claims 1-4.
CN202110793517.0A 2021-07-14 2021-07-14 Facial nerve function assessment method, device, computer equipment and storage medium Active CN113327247B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110793517.0A CN113327247B (en) 2021-07-14 2021-07-14 Facial nerve function assessment method, device, computer equipment and storage medium
PCT/CN2021/113659 WO2023284067A1 (en) 2021-07-14 2021-08-20 Facial nerve function evaluation method and apparatus, and computer device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110793517.0A CN113327247B (en) 2021-07-14 2021-07-14 Facial nerve function assessment method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113327247A (en) 2021-08-31
CN113327247B (en) 2024-06-18

Family

ID=77426265

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110793517.0A Active CN113327247B (en) 2021-07-14 2021-07-14 Facial nerve function assessment method, device, computer equipment and storage medium

Country Status (2)

Country Link
CN (1) CN113327247B (en)
WO (1) WO2023284067A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109508644A (en) * 2018-10-19 2019-03-22 陕西大智慧医疗科技股份有限公司 Facial paralysis grade assessment system based on the analysis of deep video data
CN109659006A (en) * 2018-12-10 2019-04-19 深圳先进技术研究院 Facial muscle training method, device and electronic equipment

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101634730B1 (en) * 2014-08-01 2016-07-04 상지대학교산학협력단 Facial Nerve Palsy Grading Apparatus and Method
CN107713984B (en) * 2017-02-07 2024-04-09 王俊 Objective assessment method for facial paralysis
CN109686418A (en) * 2018-12-14 2019-04-26 深圳先进技术研究院 Facial paralysis degree evaluation method, apparatus, electronic equipment and storage medium
CN110084259B (en) * 2019-01-10 2022-09-20 谢飞 Facial paralysis grading comprehensive evaluation system combining facial texture and optical flow characteristics
CN109934173B (en) * 2019-03-14 2023-11-21 腾讯科技(深圳)有限公司 Expression recognition method and device and electronic equipment
CN111126180B (en) * 2019-12-06 2022-08-05 四川大学 Facial paralysis severity automatic detection system based on computer vision
CN113033359B (en) * 2021-03-12 2023-02-24 西北大学 Self-supervision-based pre-training and facial paralysis grading modeling and grading method and system
CN113053517B (en) * 2021-03-29 2023-03-07 深圳大学 Facial paralysis grade evaluation method based on dynamic region quantitative indexes

Also Published As

Publication number Publication date
WO2023284067A1 (en) 2023-01-19
CN113327247A (en) 2021-08-31

Similar Documents

Publication Publication Date Title
CN110210302B (en) Multi-target tracking method, device, computer equipment and storage medium
US20220415087A1 (en) Method, Device, Electronic Equipment and Storage Medium for Positioning Macular Center in Fundus Images
Grafsgaard et al. Automatically recognizing facial expression: Predicting engagement and frustration
Hu et al. Research on abnormal behavior detection of online examination based on image information
CN110781976B (en) Extension method of training image, training method and related device
US11945125B2 (en) Auxiliary photographing device for dyskinesia analysis, and control method and apparatus for auxiliary photographing device for dyskinesia analysis
CN101604382A (en) A kind of learning fatigue recognition interference method based on human facial expression recognition
CN115205764B (en) Online learning concentration monitoring method, system and medium based on machine vision
CN112101123B (en) Attention detection method and device
CN109086676A (en) A kind of attention of student analysis system and its determination method
CN113139439B (en) Online learning concentration evaluation method and device based on face recognition
CN111325144A (en) Behavior detection method and apparatus, computer device and computer-readable storage medium
CN112989947A (en) Method and device for estimating three-dimensional coordinates of human body key points
CN110298569A (en) Learning evaluation method and device based on eye movement identification
CN115546692A (en) Remote education data acquisition and analysis method, equipment and computer storage medium
CN114343577A (en) Cognitive function evaluation method, terminal device, and computer-readable storage medium
CN113327247B (en) Facial nerve function assessment method, device, computer equipment and storage medium
CN110263678A (en) A kind of face direction determination process and system
CN113095260A (en) Intelligent student self-learning state monitoring method
CN116341983A (en) Concentration evaluation and early warning method, system, electronic equipment and medium
CN111588345A (en) Eye disease detection method, AR glasses and readable storage medium
CN114333063A (en) Martial art action correction method and device based on human body posture estimation
CN112686851A (en) Image detection method, device and storage medium
Mehrubeoglu et al. Capturing reading patterns through a real-time smart camera iris tracking system
CN114038045A (en) Cross-modal face recognition model construction method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant