CN109659006B - Facial muscle training method and device and electronic equipment - Google Patents

Facial muscle training method and device and electronic equipment

Info

Publication number
CN109659006B
Authority
CN
China
Prior art keywords
current
point group
coordinate difference
difference value
action completion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811506296.9A
Other languages
Chinese (zh)
Other versions
CN109659006A (en)
Inventor
蒋晟
吴剑煌
胡庆茂
王玲
钟卫正
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN201811506296.9A priority Critical patent/CN109659006B/en
Publication of CN109659006A publication Critical patent/CN109659006A/en
Priority to PCT/CN2019/124202 priority patent/WO2020119665A1/en
Application granted granted Critical
Publication of CN109659006B publication Critical patent/CN109659006B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/30 ICT specially adapted for therapies or health-improving plans relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Biophysics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Epidemiology (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the invention provide a facial muscle training method, a facial muscle training device, and electronic equipment, relating to the field of virtual rehabilitation training. The method comprises: acquiring at least one feature point group of a target face, where each feature point group comprises two feature points; calculating a current coordinate difference corresponding to the at least one feature point group, where the current coordinate difference is the coordinate difference of the at least one feature point group in the current frame picture; generating a current action completion degree according to the current coordinate difference, an initial coordinate difference, and a preset action completion value, where the initial coordinate difference represents the coordinate difference of the at least one feature point group in an initial state; and, when the current action completion degree is greater than a preset action completion degree threshold, determining that the current training action is completed. The method, device, and electronic equipment can feed back whether the current training action is completed while a user performs facial muscle training, thereby ensuring the quality of the training.

Description

Facial muscle training method and device and electronic equipment
Technical Field
The invention relates to the field of virtual rehabilitation training, in particular to a facial muscle training method and device and electronic equipment.
Background
Facial paralysis can skew a patient's eyes and mouth, impair the expression of normal facial expressions, and even affect the patient's appearance and demeanor; it has a great negative effect on the patient's mental health and hinders social interaction. China has a large number of facial paralysis patients who are seriously harmed by the condition; its incidence is rising year by year and, with the increasing social and work pressure on young people, the affected population is trending younger.
If facial paralysis is discovered as early as possible and treated in time, the patient can recover completely. Facial muscle function rehabilitation training is generally active rehabilitation training in which the patient performs strength exercises on the facial eyes, forehead, mouth, nose, and so on; the patient needs to persist in a certain amount of such training every day, performing actions such as raising the eyebrows, frowning, closing the eyes, shrugging the nose, showing the teeth, and puckering the lips.
Disclosure of Invention
The invention aims to provide a facial muscle training method, a facial muscle training device and electronic equipment, which can feed back whether the current training action is finished or not when a user performs facial muscle training, and ensure the facial muscle training quality.
In order to achieve the above purpose, the embodiment of the present invention adopts the following technical solutions:
in a first aspect, an embodiment of the present invention provides a facial muscle training method, where the method includes: acquiring at least one feature point group of a target face, wherein each feature point group comprises two feature points; calculating a current coordinate difference value corresponding to the at least one characteristic point group, wherein the current coordinate difference value is a coordinate difference value of the at least one characteristic point group in a current frame picture; generating a current action completion degree according to the current coordinate difference value, an initial coordinate difference value and a preset action completion value, wherein the initial coordinate difference value represents a coordinate difference value of the at least one characteristic point group in an initial state; and when the current action completion degree is greater than a preset action completion degree threshold value, determining that the current training action is completed.
In a second aspect, an embodiment of the present invention provides a facial muscle training apparatus, including: a feature point group extraction module, configured to acquire at least one feature point group of a target face, where each feature point group comprises two feature points; a coordinate difference calculation module, configured to calculate a current coordinate difference corresponding to the at least one feature point group, where the current coordinate difference is the coordinate difference of the at least one feature point group in the current frame picture; an action completion degree calculation module, configured to generate a current action completion degree according to the current coordinate difference, an initial coordinate difference, and a preset action completion value, where the initial coordinate difference represents the coordinate difference of the at least one feature point group in an initial state; and a judging module, configured to judge whether the current action completion degree is greater than a preset action completion degree threshold, where, when the current action completion degree is greater than the preset action completion degree threshold, it is determined that the current training action is completed.
In a third aspect, an embodiment of the present invention provides an electronic device, which includes a memory configured to store one or more programs, and a processor; the one or more programs, when executed by the processor, implement the facial muscle training method described above.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the facial muscle training method described above.
Compared with the prior art, in the facial muscle training method, device, and electronic equipment provided by the embodiments of the present invention, after the current coordinate difference corresponding to the at least one feature point group in the current frame picture is calculated from the at least one feature point group of the target face, the current action completion degree is generated from the current coordinate difference, the initial coordinate difference, and the preset action completion value, and whether the user has completed the current training action is then judged according to the current action completion degree.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
FIG. 1 is a schematic block diagram of an electronic device provided by an embodiment of the invention;
FIG. 2 shows a schematic flow diagram of a facial muscle training method provided by embodiments of the present invention;
FIG. 3 is a diagram of a distribution model of a face feature point set;
FIG. 4 is a schematic flow chart of the substeps of S300 in FIG. 2;
FIG. 5 is a schematic flow chart of the substeps of S500 in FIG. 2;
FIG. 6 is a schematic flow chart of the substeps of S400 in FIG. 2;
FIG. 7 is a schematic diagram of a polygon formed by the face anchor point set;
FIG. 8 is another schematic diagram of a set of facial anchor points forming a polygon;
FIG. 9 is a schematic flow chart of the substeps of S420 of FIG. 6;
FIG. 10 is a schematic flow chart of a facial muscle training method provided by an embodiment of the present invention;
FIG. 11 is a schematic block diagram of a facial muscle training apparatus provided in accordance with an embodiment of the present invention;
FIG. 12 is a schematic block diagram of a coordinate difference calculation module of a facial muscle training apparatus according to an embodiment of the present invention;
FIG. 13 is a schematic block diagram of an action-completion calculating module of a facial muscle training apparatus according to an embodiment of the present invention;
FIG. 14 is a schematic block diagram of a preset action completion value updating module of the facial muscle training apparatus according to the embodiment of the present invention;
fig. 15 is a schematic structural diagram illustrating an action completion value updating unit of a facial muscle training apparatus according to an embodiment of the present invention.
In the figure: 10-an electronic device; 110-a memory; 120-a processor; 130-a memory controller; 140-peripheral interfaces; 150-a radio frequency unit; 160-communication bus/signal line; 170-a camera unit; 180-a display unit; 200-facial muscle training device; 210-a feature point group extraction module; 220-picture resolution adjustment module; 230-coordinate difference calculation module; 231-sub-coordinate difference value calculating unit; 232-current coordinate difference value calculating unit; 240-preset action completion value updating module; 241-a polygon area calculation unit; 242-action completion value update unit; 2421-quotient value calculation subunit; 2422-done value update subunit; 250-action completion calculation module; 251-an action completion value calculation unit; 252-action completion degree calculation unit; 260-judging module.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present invention, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Some embodiments of the invention are described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
The current prior-art approach to facial muscle function rehabilitation training mainly adopts traditional mirror therapy: a mirror is placed in front of the patient, and the patient observes the state of his or her own face in the mirror, noting the specific details of each facial action, thereby obtaining feedback on the training and completing the rehabilitation training of facial muscle function.
Facial muscle training can effectively promote the recovery of facial muscle motor function and improve the rehabilitation effect for facial paralysis. Although traditional mirror therapy is simple to operate and very convenient for a patient to carry out, the mirror cannot feed back to the patient whether the degree of execution of a training action meets the requirements of rehabilitation training; moreover, with mirror therapy there is no interaction between the mirror and the patient, so the training process is monotonous, the patient easily loses interest in the rehabilitation training, and the rehabilitation effect is poor.
To address these defects in the prior art, the improvement provided by the embodiments of the present invention is as follows: after the current coordinate difference corresponding to the at least one feature point group in the current frame picture is calculated from the at least one feature point group of the target face, the current action completion degree is generated from the current coordinate difference, the initial coordinate difference, and a preset action completion value, and whether the user has completed the current training action is then judged according to the current action completion degree.
Referring to fig. 1, fig. 1 shows a schematic structural diagram of an electronic device 10 according to an embodiment of the present invention, in the embodiment of the present invention, the electronic device 10 may be, but is not limited to, a smart phone, a Personal Computer (PC), a tablet computer, a laptop portable computer, a Personal Digital Assistant (PDA), and the like. The electronic device 10 includes a memory 110, a memory controller 130, one or more processors 120 (only one shown), a peripheral interface 140, a radio frequency unit 150, a camera unit 170, a display unit 180, and the like. These components communicate with each other via one or more communication buses/signal lines 160.
The memory 110 can be used for storing software programs and modules, such as program instructions/modules corresponding to the facial muscle training device 200 provided in the embodiment of the present invention, and the processor 120 executes various functional applications and image processing, such as the facial muscle training method provided in the embodiment of the present invention, by running the software programs and modules stored in the memory 110.
The Memory 110 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
The processor 120 may be an integrated circuit chip having signal processing capabilities. The Processor 120 may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), a voice Processor, a video Processor, and the like; but may also be a digital signal processor, an application specific integrated circuit, a field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor or the processor 120 may be any conventional processor or the like.
The peripheral interface 140 couples various input/output devices to the processor 120 as well as to the memory 110. In some embodiments, peripheral interface 140, processor 120, and memory controller 130 may be implemented in a single chip. In other embodiments of the present invention, they may be implemented by separate chips.
The rf unit 150 is used for receiving and transmitting electromagnetic waves, and implementing interconversion between the electromagnetic waves and electrical signals, so as to communicate with a communication network or other devices.
The camera unit 170 is used to take pictures so that the processor 120 processes the taken pictures.
The display unit 180 is configured to provide a graphical output interface for a user, and display image information for the user to perform facial muscle training.
It will be appreciated that the configuration shown in FIG. 1 is merely illustrative and that electronic device 10 may include more or fewer components than shown in FIG. 1 or may have a different configuration than shown in FIG. 1. The components shown in fig. 1 may be implemented in hardware, software, or a combination thereof.
For example, with respect to the electronic device 10, some of the units or devices included therein may exist as separate devices. For example, in some other embodiments of the present invention, the electronic device 10 may also be implemented by a scheme that the electronic device 10 does not include the image capturing unit 170, and the electronic device 10 establishes communication with an image capturing device, where the image capturing device is used to capture a picture, for example, a picture of a patient, and then sends the captured picture to the electronic device 10 through a wired or wireless network to implement the facial muscle training method provided by the embodiments of the present invention.
Optionally, in some other embodiments of the embodiment of the present invention, the electronic device 10 may also be implemented in a manner that the electronic device 10 does not include the display unit 180, and the electronic device 10 establishes communication with a display device, and the image information during training is sent to the display device by the electronic device 10 in a wired or wireless network manner, so that the user can complete facial muscle training by referring to the image information.
Referring to fig. 2, fig. 2 is a schematic flow chart of a facial muscle training method according to an embodiment of the present invention, in which the facial muscle training method includes the following steps:
s100, at least one feature point group of the target face is obtained.
When facial muscle training is performed, the electronic device 10 determines at least one feature point group according to the user's scene selection information. Each feature point group contains two feature points, and the information of the two feature points contained in each feature point group is pre-configured in the electronic device 10; therefore, when the electronic device 10 determines the at least one feature point group from the user's scene selection information, it also determines all the feature points participating in the facial muscle training.
The user's scene selection information represents the training scene corresponding to the at least one feature point group. A plurality of training scenes, together with the correspondence between each training scene and its feature point groups, are preset in the electronic device 10. When the electronic device 10 receives the training scene selected by the user, it takes that training scene as the user's scene selection information, and determines the corresponding feature point groups according to the selected training scene and the preset correspondence.
For example, assume that six training scenes are preset in the electronic device 10, namely raising the eyebrows, frowning, closing the eyes, shrugging the nose, showing the teeth, and puckering the lips, and that the preset correspondence between each training scene and its feature point groups is as follows: raising the eyebrows corresponds to feature point group 1; frowning corresponds to feature point groups 2 and 3; closing the eyes corresponds to feature point groups 4, 5, and 6; shrugging the nose corresponds to feature point group 7; showing the teeth corresponds to feature point groups 8 and 9; and puckering the lips corresponds to feature point groups 10, 11, and 12. When the scene selection information received by the electronic device 10 is eyebrow raising, the at least one feature point group determined from the selection and the preset correspondence is feature point group 1; when the scene selection information is eye closing, the determined feature point groups are feature point group 4, feature point group 5, and feature point group 6. A minimal sketch of this correspondence table is given below.
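The sketch, in Python, assumes a plain dictionary as the lookup structure; the scene names and group numbers mirror the example above, while the data layout itself is only an illustrative assumption rather than the patent's prescribed implementation.

```python
# Hypothetical mapping from training scene to feature point groups,
# mirroring the example correspondence described in the text.
TRAINING_SCENES = {
    "raise_eyebrows": [1],          # feature point group 1
    "frown":          [2, 3],       # feature point groups 2 and 3
    "close_eyes":     [4, 5, 6],    # feature point groups 4, 5 and 6
    "shrug_nose":     [7],
    "show_teeth":     [8, 9],
    "pucker_lips":    [10, 11, 12],
}

def groups_for_scene(scene_selection: str) -> list[int]:
    """Return the feature point groups for the user's scene selection."""
    return TRAINING_SCENES[scene_selection]
```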
And S300, calculating a current coordinate difference value corresponding to at least one characteristic point group.
When the user performs facial muscle training, the electronic device 10 processes the captured pictures of the user to determine whether the user has completed the action of the selected training scene. Therefore, after acquiring the at least one feature point group, the electronic device 10 calculates the current coordinate difference corresponding to the at least one feature point group from the current coordinate values, in the current frame picture, of all the feature points contained in each feature point group. A coordinate system is established in the current frame picture, and each feature point has a corresponding current coordinate value in that coordinate system.
The current coordinate value of each feature point in the current frame picture may be obtained using a feature point set model preset in the electronic device 10. For example, please refer to fig. 3, a schematic diagram of a distribution model of a face feature point set; the distribution of all the feature points contained in the face model feature point set can be obtained from the Dlib open source library. After acquiring the at least one feature point group of the target face, the electronic device 10, combining the face feature point set distribution model with the Dlib open source library, can obtain the current coordinate value of each feature point in the current frame picture and then calculate the current coordinate difference corresponding to the at least one feature point group.
For example, taking the nose-shrugging action trained on the left side of the face, assume that in the face model shown in fig. 3 one feature point group corresponding to this action contains two feature points, feature point 31 and feature point 27. If the current coordinate value of feature point 31 in the current frame picture is D_31(x_31, y_31) and that of feature point 27 is D_27(x_27, y_27), the current coordinate difference can be calculated as Δ_27-31 = y_27 − y_31.
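As a rough illustration of this step, the sketch below uses Dlib's standard 68-point landmark model (the open source library mentioned above) to read the two feature points of one group and form the vertical coordinate difference. The model file name and the BGR frame format (as produced by OpenCV capture) are assumptions of this sketch, not requirements of the method.

```python
import cv2
import dlib

# Face detector and the standard Dlib 68-point landmark predictor;
# the model file path is an assumption of this sketch.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def coordinate_difference(frame, point_a: int, point_b: int) -> int:
    """Vertical coordinate difference of one feature point group,
    e.g. Delta_27-31 = y_27 - y_31 for the nose-shrug example above."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if not faces:
        raise ValueError("no face detected in the current frame")
    shape = predictor(gray, faces[0])
    return shape.part(point_a).y - shape.part(point_b).y
```

For the example above, coordinate_difference(frame, 27, 31) yields Δ_27-31 for the current frame picture.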
Optionally, in some application scenarios of the embodiment of the present invention, the acquired at least one feature point group includes at least two feature point groups; for example, in the above example, frowning corresponds to feature point groups 2 and 3, and closing the eyes corresponds to feature point groups 4, 5, and 6. Therefore, as an embodiment, please refer to fig. 4, a schematic flow chart of the sub-steps of S300 in fig. 2; in the embodiment of the present invention, S300 includes the following sub-steps:
and S310, respectively calculating the coordinate difference value corresponding to each characteristic point group in the at least two characteristic point groups.
When the electronic device 10 acquires at least two feature point groups of the target face according to the user's scene selection information, the coordinate difference corresponding to each of the at least two feature point groups is calculated separately. For example, in the above example, when frowning is trained, the determined feature point groups include feature point group 2 and feature point group 3; the coordinate difference Δ_2 corresponding to feature point group 2 and the coordinate difference Δ_3 corresponding to feature point group 3 are then calculated first.
And S320, generating a current coordinate difference value according to the respective corresponding coordinate difference values of all the feature point groups.
In the above example, when the determined feature point groups include feature point group 2 and feature point group 3, after the coordinate differences Δ_2 and Δ_3 corresponding to the two groups are respectively calculated, the electronic device 10 further generates the current coordinate difference from Δ_2 and Δ_3.
Alternatively, as an implementation manner, the current coordinate difference may be generated by taking the arithmetic mean of the coordinate differences corresponding to all the feature point groups; in the above example, the current coordinate difference is then Δ_t = (Δ_2 + Δ_3) / 2. It should be noted that in some other embodiments of the present invention, the current coordinate difference may also be generated by taking the geometric mean; in the above example, the current coordinate difference would then be Δ_t = √(Δ_2 · Δ_3).
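A small sketch of both combination rules follows, under the assumption that the per-group differences are already available as a list (the geometric mean additionally assumes that all the differences are positive):

```python
import math

def current_coordinate_difference(deltas: list[float],
                                  geometric: bool = False) -> float:
    """Combine per-group coordinate differences into the current
    coordinate difference, by arithmetic mean (default) or, assuming
    all differences are positive, by geometric mean."""
    if geometric:
        return math.prod(deltas) ** (1.0 / len(deltas))
    return sum(deltas) / len(deltas)
```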
Alternatively, when the data size of the current frame picture obtained by the electronic device 10 is large, the efficiency of the electronic device 10 in calculating the current coordinate difference value may be reduced. Therefore, as an embodiment, before performing S300, the facial muscle training method further includes:
and S200, reducing the resolution of the current frame picture.
Before calculating the current coordinate difference of the at least one feature point group in the current frame picture, the electronic device 10 first reduces the resolution of the current frame picture, thereby reducing its data size, so that the reduced-resolution current frame picture is used in S300 to calculate the current coordinate difference corresponding to the at least one feature point group, improving the operation speed of the electronic device 10.
Optionally, as an embodiment, the manner in which the electronic device 10 reduces the resolution of the current frame picture may adopt: the pixel size of the current frame picture in the width direction and the height direction is reduced by half so as to reduce the resolution of the current frame picture.
Optionally, as an embodiment, when the electronic device 10 performs facial muscle training on the user, the speed of processing the continuous multiple frames of pictures by the electronic device 10 may be increased by taking only one frame per two continuous frames of pictures for image processing and abandoning processing of another frame of picture.
Based on the above design, the facial muscle training method provided in the embodiment of the present invention reduces the resolution of the current frame picture, so that the reduced-resolution picture is used to calculate the current coordinate difference corresponding to the at least one feature point group in the current frame picture; this reduces the amount of data calculation during facial muscle training and increases the picture processing speed.
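A sketch of both speed-ups with OpenCV is given below; the use of cv2.resize for halving and a generator-driven capture loop for frame skipping are plausible implementations assumed here, not ones prescribed by the method.

```python
import cv2

def downscale(frame):
    """Halve the frame's width and height (S200)."""
    h, w = frame.shape[:2]
    return cv2.resize(frame, (w // 2, h // 2), interpolation=cv2.INTER_AREA)

def frames_to_process(capture):
    """Yield every second captured frame, already downscaled, so that
    only one of each two consecutive frames is processed."""
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % 2 == 0:
            yield downscale(frame)
        index += 1
```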
Referring to fig. 2, in step S500, a current motion completion degree is generated according to the current coordinate difference value, the initial coordinate difference value, and a preset motion completion value.
Before performing facial muscle training, the electronic device 10 needs to determine an initial coordinate difference, which represents the coordinate difference of the at least one feature point group in an initial state. The initial state can be understood as the user's face state before the training action is performed; for example, if the current training content is raising the eyebrows, the initial state is the user's face state before the eyebrows are raised, generally the face state under a natural expression.
Optionally, as an embodiment, the initial coordinate difference is a coordinate difference of the at least one feature point group in a preset frame picture. That is, before the user performs facial muscle training using the electronic device 10, the electronic device 10 obtains a preset frame picture as the face state of the user in the initial state, and then the electronic device 10 calculates the coordinate difference of the at least one feature point group in the preset frame picture as the initial coordinate difference.
Also, optionally, as an embodiment, each time facial muscle training is performed, the electronic device 10 may capture a new preset frame picture for calculating the initial coordinate difference. For example, when the electronic device 10 is used for eyebrow-raising training in one cycle, it uses a first preset frame picture to calculate the initial coordinate difference for that training; when it is used for nose-shrugging training in another cycle, it uses a second preset frame picture to calculate the initial coordinate difference for that training.
It should be noted that, in some other embodiments of the present invention, a fixed value preset in the electronic device 10 may also be used as the initial coordinate difference; in that case, for the same training scene, such as nose shrugging trained over multiple cycles, the initial coordinate difference is the same in all training cycles. Moreover, for different training scenes, such as nose shrugging and teeth showing, different initial coordinate differences may be set, depending on the initial coordinate differences the user sets for the different training scenes.
After obtaining the current coordinate difference, the electronic device 10 calculates and generates a current action completion degree according to the current coordinate difference, the initial coordinate difference and a preset action completion value, where the current action completion degree represents a completion degree of the user on the current facial muscle training action.
Optionally, as an implementation manner, please refer to fig. 5, fig. 5 is a schematic flowchart of the sub-steps of S500 in fig. 2, in an embodiment of the present invention, S500 includes the following sub-steps:
and S510, generating a current action completion value according to the current coordinate difference value and the initial coordinate difference value.
Optionally, as an embodiment, the absolute difference between the current coordinate difference and the initial coordinate difference is calculated as the current action completion value, that is, D_t = |Δ_t − Δ_0|, where D_t is the current action completion value, Δ_t is the current coordinate difference, and Δ_0 is the initial coordinate difference.
It should be noted that, in some other implementations of the embodiment of the present invention, the current action completion value may also be obtained by using another manner according to the current coordinate difference and the initial coordinate difference, for example, calculating a quotient of the current coordinate difference and the initial coordinate difference as the current action completion value.
And S520, generating a current action completion degree according to the current action completion value and a preset action completion value.
Optionally, as an embodiment, the quotient of the current action completion value and the preset action completion value is calculated as the current action completion degree, that is, V_t = D_t / D_0, where V_t is the current action completion degree, D_t is the current action completion value, and D_0 is the preset action completion value.
It should be noted that, in some other embodiments of the embodiment of the present invention, the current action completion level may also be obtained by using another method according to the current action completion value and the preset action completion value, for example, calculating a difference between the current action completion value and the preset action completion value as the current action completion level.
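Putting S510 and S520 together, a minimal sketch of the default variants (absolute difference followed by quotient) looks as follows:

```python
def current_action_completion(delta_t: float, delta_0: float,
                              d_0: float) -> float:
    """V_t = D_t / D_0 with D_t = |delta_t - delta_0| (S510 + S520)."""
    d_t = abs(delta_t - delta_0)   # current action completion value
    return d_t / d_0               # current action completion degree
```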
Generally, the distance between the electronic device 10 and the user's face may change at any time, for different users or for the same user at different times. In the electronic device 10 the preset action completion value is a fixed value, and when the distance between the electronic device 10 and the user's face changes, the size of the target face differs between frame pictures, especially between pictures used in different training cycles; as a result, the current action completion degree may be affected by the distance between the electronic device 10 and the user's face.
Therefore, as an embodiment, before performing S500, the facial muscle training method further includes:
s400, updating a preset action completion value according to the face positioning point group acquired from the current frame picture.
In facial muscle training, the electronic device 10 further selects a face anchor point group containing at least two feature points, for example two, three, four, five, or even more feature points. The preset action completion value is then updated according to the position information, in the current frame picture, of all the feature points contained in the face anchor point group, and the updated action completion value is used to calculate the current action completion degree, further reducing the influence of the distance between the electronic device 10 and the user's face on the current action completion degree.
Optionally, as an implementation manner, please refer to fig. 6, fig. 6 is a schematic flowchart of sub-steps of S400 in fig. 2, in an embodiment of the present invention, S400 includes the following sub-steps:
s410, calculating the area of the current polygon formed by all the feature points contained in the face positioning point group in the current frame picture.
When the preset action completion value is updated, a polygon is formed by all the feature points contained in the face anchor point group. Because each feature point has a unique coordinate in the coordinate system established in the current frame picture, the area of the current polygon formed by these feature points is calculated from the coordinate value of each feature point.
For example, in the schematic diagram shown in fig. 3, feature points 3, 5, 24, and 15 may be selected to form a face anchor point group, or more feature points may be included.
Moreover, the polygon formed by all the feature points contained in the face anchor point group may be constructed as shown in fig. 7. When the face anchor point group contains only two feature points, such as feature point 0 and feature point 8 in fig. 7, assume that the coordinate of feature point 0 in the current frame picture is D_0(x_0, y_0) and that of feature point 8 is D_8(x_8, y_8). A line X_0 parallel to the x-axis and a line Y_0 parallel to the y-axis can then be drawn through feature point 0 and, similarly, a line X_8 parallel to the x-axis and a line Y_8 parallel to the y-axis through feature point 8; the rectangle enclosed by X_0, Y_0, X_8, and Y_8 serves as the polygon formed in the current frame picture by all the feature points contained in the face anchor point group.
Of course, in the schematic diagram shown in fig. 7, the polygon may also be formed in other ways; for example, feature point 0 and feature point 8 may be connected to obtain a line l_0-8, and the triangle enclosed by l_0-8, line Y_0, and line X_8 serves as the polygon formed in the current frame picture by all the feature points contained in the face anchor point group.
When the face anchor point group contains more than two feature points, for example three, the polygon may be constructed as shown in fig. 8. Assume the face anchor point group contains feature point 0, feature point 8, and feature point 16, whose coordinates in the current frame picture are D_0(x_0, y_0), D_8(x_8, y_8), and D_16(x_16, y_16). Likewise, a line X_0 parallel to the x-axis and a line Y_0 parallel to the y-axis are drawn through feature point 0, a line X_8 parallel to the x-axis through feature point 8, and a line Y_16 parallel to the y-axis through feature point 16; the rectangle enclosed by X_0, Y_0, X_8, and Y_16 can serve as the polygon formed in the current frame picture by all the feature points contained in the face anchor point group.
Alternatively, as shown in the schematic diagram of fig. 8, the triangle formed by sequentially connecting feature points 0, 8, and 16 may be used as the polygon formed in the current frame picture by all the feature points contained in the face anchor point group.
It should be understood that the above ways of forming the polygon are merely examples, and other ways may be used; for example, the coordinates of the several feature points may be averaged to obtain two average anchor coordinates, and the rectangle these define used as the polygon of the face anchor point group. Any construction is acceptable as long as all the feature points contained in the face anchor point group form one definite polygon.
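For the rectangle constructions above, the area reduces to that of an axis-aligned box over the anchor points; a minimal sketch under that reading (taking the bounding rectangle of all anchor points, which coincides with the two-point construction of fig. 7) is:

```python
def anchor_rectangle_area(points: list[tuple[int, int]]) -> int:
    """Area of the axis-aligned rectangle enclosing the face anchor
    points, one plausible reading of the constructions of figs. 7-8."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (max(xs) - min(xs)) * (max(ys) - min(ys))
```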
S420, updating the preset action completion value according to the current polygon area and the initial polygon area.
As mentioned above, the current polygon area is the area of the polygon formed in the current frame picture by all the feature points contained in the face anchor point group. As with the initial coordinate difference, before performing facial muscle training the electronic device 10 also needs to determine an initial polygon area: the area of the polygon formed by all the feature points of the face anchor point group in a preset frame picture, which may be the same picture used to calculate the initial coordinate difference. The initial polygon is constructed in the same manner as the current polygon; for example, if the current polygon is the triangle obtained by sequentially connecting three feature points in the current frame picture as shown in fig. 8, the initial polygon is the triangle obtained by sequentially connecting the same three feature points in the preset frame picture.
Therefore, after the current polygon area is obtained through calculation, the preset action completion value is updated according to the current polygon area and the initial polygon area.
Optionally, as an implementation manner, please refer to fig. 9, a schematic flowchart of the sub-steps of S420 in fig. 6; in the embodiment of the present invention, S420 includes the following sub-steps:
s421, calculating the quotient of the current polygon area and the initial polygon area.
And S422, updating the preset action completion value according to the quotient obtained by calculation.
As an implementation manner, in the embodiment of the present invention, when the preset action completion value is updated, the square root of the calculated quotient may first be taken, and the preset action completion value is then updated according to the result, that is, the update formula is D_0' = √(S_n / S_0) × D_0, where S_n is the current polygon area, S_0 is the initial polygon area, D_0 is the preset action completion value, and D_0' is the updated action completion value.
It is to be understood that, in some other embodiments of the present invention, the preset action completion value may also be updated in other manners; for example, the product of the quotient S_n / S_0 and the preset action completion value may be used directly as the updated action completion value, or the quotient may be combined with a preset scaling factor to update the preset action completion value.
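A one-line sketch of the square-root update rule reconstructed above (the alternatives just mentioned would simply drop the square root or add a scaling factor):

```python
import math

def updated_completion_value(s_n: float, s_0: float, d_0: float) -> float:
    """D_0' = sqrt(S_n / S_0) * D_0 (S421 + S422)."""
    return math.sqrt(s_n / s_0) * d_0
```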
Based on the above design, the facial muscle training method provided in the embodiments of the present invention updates the preset action completion value according to the position information of the face anchor point group in the current frame picture and uses the updated action completion value to calculate the current action completion degree; this reduces the influence of the distance between the electronic device 10 and the user's face on the current action completion degree and improves the accuracy of facial muscle training.
Continuing to refer to fig. 2, S600, determining whether the current action completion is greater than a preset action completion threshold; if so, determining that the current training action is completed; if not, taking the subsequent frame picture of the current frame picture as a new current frame picture, and continuing to execute S300.
The electronic device 10 compares the current action completion degree with a preset action completion degree threshold. When the current action completion degree is greater than the threshold, the user's current facial muscle training action is completed and can be ended, so that the next facial muscle training action in the cycle is executed or the training task ends. Otherwise, when the current action completion degree is less than or equal to the threshold, the user's current facial muscle training action is not yet completed and training must continue; in that case a subsequent frame picture of the current frame picture, for example the next frame or the frame after next, is taken as the new current frame picture, and S300 continues to be executed.
It should be noted that, in the embodiment of the present invention, when the facial muscle training method includes S200 and the current action completion degree is determined to be less than or equal to the preset action completion degree threshold, the subsequent frame picture of the current frame picture is taken as the new current frame picture and execution continues from S200.
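Composing the sketches above, one possible shape of the whole per-frame loop (S200 through S600) is as follows; the 0.9 threshold, the argument layout, and the reuse of the earlier helpers (detector, predictor, frames_to_process, anchor_rectangle_area, updated_completion_value) are assumptions of this illustration, not the patent's prescribed implementation.

```python
def run_training_action(capture, groups, anchor_ids,
                        delta_0, s_0, d_0, threshold=0.9):
    """Return True once the current training action is completed.

    groups     -- list of (point_a, point_b) feature point index pairs
    anchor_ids -- landmark indices of the face anchor point group
    delta_0    -- initial coordinate difference
    s_0        -- initial polygon area
    d_0        -- preset action completion value
    """
    for frame in frames_to_process(capture):             # S200
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector(gray)
        if not faces:
            continue                                     # no face: next frame
        shape = predictor(gray, faces[0])
        # S300: current coordinate difference over all feature point groups
        deltas = [shape.part(a).y - shape.part(b).y for a, b in groups]
        delta_t = sum(deltas) / len(deltas)
        # S400: update the preset action completion value from the anchors
        pts = [(shape.part(i).x, shape.part(i).y) for i in anchor_ids]
        d_now = updated_completion_value(anchor_rectangle_area(pts), s_0, d_0)
        # S500/S600: completion degree against the threshold
        if abs(delta_t - delta_0) / d_now > threshold:   # V_t > threshold
            return True
    return False
```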
Based on the above design, in the facial muscle training method provided in the embodiments of the present invention, after the current coordinate difference corresponding to the at least one feature point group in the current frame picture is calculated from the at least one feature point group of the target face, the current action completion degree is generated from the current coordinate difference, the initial coordinate difference, and the preset action completion value, and whether the user has completed the current training action is then judged according to the current action completion degree; the user thus receives feedback on whether the current training action is completed, ensuring the quality of facial muscle training.
Based on the facial muscle training method provided by the above embodiments, a possible implementation of the complete method flow is given below; please refer to fig. 10, a complete flow chart of the facial muscle training method provided by the embodiment of the present invention, which includes all the steps provided by the above embodiments.
Referring to fig. 11, fig. 11 is a schematic structural diagram of a facial muscle training device 200 according to an embodiment of the present invention, in which the facial muscle training device 200 includes a feature point group extracting module 210, a coordinate difference calculating module 230, an action completion calculating module 250, and a determining module 260.
The feature point group extracting module 210 is configured to obtain at least one feature point group of the target face, where each feature point group includes two feature points.
The coordinate difference calculation module 230 is configured to calculate a current coordinate difference corresponding to the at least one feature point group, where the current coordinate difference is a coordinate difference of the at least one feature point group in the current frame picture.
Optionally, as an implementation manner, please refer to fig. 12, where fig. 12 is a schematic structural diagram of a coordinate difference calculation module 230 of a facial muscle training apparatus 200 according to an embodiment of the present invention, in the embodiment of the present invention, the coordinate difference calculation module 230 includes a sub-coordinate difference calculation unit 231 and a current coordinate difference calculation unit 232.
The sub-coordinate difference calculating unit 231 is configured to calculate a coordinate difference corresponding to each of the at least two feature point groups, respectively.
The current coordinate difference calculation unit 232 is configured to generate the current coordinate difference according to the respective corresponding coordinate differences of all the feature point groups.
Referring to fig. 11, the action completion calculation module 250 is configured to generate the current action completion degree according to the current coordinate difference, the initial coordinate difference, and the preset action completion value, where the initial coordinate difference represents the coordinate difference of the at least one feature point group in the initial state.
Optionally, as an implementation manner, please refer to fig. 13, which shows a schematic structural diagram of the action completion calculation module 250 of the facial muscle training device 200 according to an embodiment of the present invention; in the embodiment of the present invention, the action completion calculation module 250 includes an action completion value calculation unit 251 and an action completion degree calculation unit 252.
The action completion value calculating unit 251 is configured to generate a current action completion value according to the current coordinate difference value and the initial coordinate difference value.
The action completion calculating unit 252 is configured to generate the current action completion according to the current action completion value and the preset action completion value.
Referring to fig. 11, the determining module 260 is configured to determine whether the current action completion degree is greater than a preset action completion degree threshold, wherein when the current action completion degree is greater than the preset action completion degree threshold, it is determined that the current training action is completed; when the current motion completion degree is less than or equal to the preset motion completion degree threshold, taking a subsequent frame picture of the current frame picture as a new current frame picture, and the coordinate difference value calculating module 230 re-performs calculation of the current coordinate difference value corresponding to the at least one feature point group.
Optionally, as an implementation manner, please continue to refer to fig. 11, in an embodiment of the present invention, the facial muscle training apparatus 200 further includes a picture resolution adjusting module 220, and the picture resolution adjusting module 220 is configured to reduce the resolution of the current frame picture, so that the current frame picture with the reduced resolution is used to calculate a current coordinate difference value corresponding to the at least one feature point group.
Optionally, as an implementation manner, please continue to refer to fig. 11, in an embodiment of the present invention, the facial muscle training apparatus 200 further includes a preset action completion value updating module 240, where the preset action completion value updating module 240 is configured to update the preset action completion value according to a face positioning point group obtained from the current frame picture, so that the updated action completion value is used to calculate and generate the current action completion degree, where the face positioning point group includes at least two feature points.
Optionally, as an implementation manner, please refer to fig. 14, fig. 14 shows a schematic structural diagram of a preset action completion value updating module 240 of the facial muscle training device 200 according to an embodiment of the present invention, in which in the embodiment of the present invention, the preset action completion value updating module 240 includes a polygon area calculating unit 241 and an action completion value updating unit 242.
The polygon area calculating unit 241 is configured to calculate a current polygon area formed by all feature points included in the face anchor point group in the current frame picture.
The action completion value updating unit 242 is configured to update the preset action completion value according to the current polygon area and an initial polygon area, where the initial polygon area is a polygon area formed by all feature points included in the face positioning point group in the preset frame picture.
Optionally, as an implementation manner, please refer to fig. 15, which shows a schematic structural diagram of the action completion value updating unit 242 of the facial muscle training apparatus 200 according to an embodiment of the present invention; in the embodiment of the present invention, the action completion value updating unit 242 includes a quotient value calculation subunit 2421 and a completion value updating subunit 2422.
The quotient calculating subunit 2421 is configured to calculate the quotient of the current polygon area and the initial polygon area.
The completion value updating subunit 2422 is configured to update the preset action completion value according to the calculated quotient value.
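A hedged sketch of subunits 2421 and 2422 follows. The quotient is the area ratio between the current and the initial face polygon; because the preset action completion value is derived from coordinate differences (a length scale) while the quotient is an area ratio, this sketch multiplies by the square root of the quotient. That square-root form is an assumption: the text only states that the update is made according to the quotient.

```python
import math

def update_preset_completion_value(preset_value, current_area, initial_area):
    """Subunit 2421 computes the area quotient; subunit 2422 updates the
    preset action completion value from it (square-root form assumed)."""
    quotient = current_area / initial_area     # subunit 2421
    return preset_value * math.sqrt(quotient)  # subunit 2422 (assumed form)
```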
Alternatively, the functions of the facial muscle training apparatus 200 according to this embodiment may be implemented by the electronic device 10 described above. For example, the relevant data, instructions, and functional modules of the above embodiments are stored in the memory 110 and executed by the processor 120, thereby implementing the facial muscle training method of the above embodiments.
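Taken together, a hypothetical per-frame driver loop, reusing the helper functions from the sketches above, might look like the following; the extract_fn callback and the summed aggregation of per-group coordinate differences are assumptions, since the text does not name them.

```python
def training_loop(frames, extract_fn, preset_value, threshold,
                  initial_diff, initial_area):
    """Hypothetical loop over camera frames until the training action
    is completed or the frames run out.

    extract_fn(frame) -> (feature_point_groups, face_positioning_points)
    stands in for the feature point group extraction step.
    """
    for frame in frames:
        frame = downscale_frame(frame)                 # module 220
        groups, positioning_points = extract_fn(frame)
        # module 240: normalize the preset value for face-to-camera distance
        preset = update_preset_completion_value(
            preset_value, polygon_area(positioning_points), initial_area)
        # module 230: aggregate per-group differences (sum is assumed)
        diff = sum(coordinate_difference(g) for g in groups)
        degree = current_completion_degree(diff, initial_diff, preset)
        if is_action_completed(degree, threshold):     # module 260
            return True
    return False
```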
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatuses, methods, and computer program products according to embodiments of the present invention. In this regard, each block in a flowchart or block diagram may represent a module, segment, or portion of code that comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending on the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
In addition, each functional module in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
If the functions are implemented in the form of software functional modules and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In summary, according to the facial muscle training method, apparatus, and electronic device provided by the embodiments of the present invention, at least one feature point group of the target face is used to calculate the current coordinate difference value corresponding to that group in the current frame picture; the current action completion degree is then generated from the current coordinate difference value, the initial coordinate difference value, and the preset action completion value, and is used to determine whether the user has completed the current training action. Compared with the prior art, this provides feedback on whether the current training action is completed while the user performs facial muscle training, thereby ensuring training quality. Reducing the resolution of the current frame picture, so that the reduced-resolution picture is used to calculate the current coordinate difference value corresponding to the at least one feature point group, lowers the amount of data to be processed per frame and increases the picture processing speed during facial muscle training. Updating the preset action completion value according to the position of the face positioning point group in the current frame picture, and using the updated value to calculate and generate the current action completion degree, reduces the influence of the distance between the electronic device 10 and the user's face on the current action completion degree, thereby improving the accuracy of facial muscle training.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.

Claims (10)

1. A facial muscle training method, the method comprising:
acquiring at least one feature point group of a target face, wherein each feature point group comprises two feature points;
calculating a current coordinate difference value corresponding to the at least one feature point group, wherein the current coordinate difference value is a coordinate difference value of the at least one feature point group in a current frame picture; the at least one feature point group includes at least two feature point groups, and the step of calculating the current coordinate difference value corresponding to the at least one feature point group includes:
calculating the coordinate difference value corresponding to each of the at least two feature point groups respectively;
generating the current coordinate difference value according to the coordinate difference values corresponding to all the feature point groups;
generating a current action completion degree according to the current coordinate difference value, an initial coordinate difference value and a preset action completion value, wherein the initial coordinate difference value represents a coordinate difference value of the at least one feature point group in an initial state;
when the current action completion degree is larger than a preset action completion degree threshold value, determining that the current training action is completed;
before the step of generating the current action completion degree according to the current coordinate difference value, the initial coordinate difference value and a preset action completion value, the method further includes:
updating the preset action completion value according to a face positioning point group acquired from the current frame picture, so that the updated action completion value is used for calculating and generating the current action completion degree, wherein the face positioning point group comprises at least two feature points;
the step of updating the preset action completion value according to the face positioning point group obtained from the current frame picture comprises the following steps:
calculating the area of a current polygon formed by all feature points contained in the face positioning point group in the current frame picture;
and updating the preset action completion value according to the current polygon area and an initial polygon area, wherein the initial polygon area is a polygon area formed by all feature points contained in the face positioning point group in the preset frame picture.
2. The method of claim 1, wherein the at least one feature point group is determined according to user context selection information, wherein the user context selection information characterizes a training context corresponding to the at least one feature point group.
3. The method of claim 1, wherein prior to the step of calculating the current coordinate difference value corresponding to the at least one feature point group, the method further comprises:
reducing the resolution of the current frame picture, so that the current frame picture with the reduced resolution is used to calculate the current coordinate difference value corresponding to the at least one feature point group.
4. The method of claim 3, wherein the step of reducing the resolution of the current frame picture comprises:
reducing the pixel size of the current frame picture by half in both the width direction and the height direction, so as to reduce the resolution of the current frame picture.
5. The method of claim 1, wherein the step of generating a current action completion degree according to the current coordinate difference value, the initial coordinate difference value and a preset action completion value comprises:
generating a current action completion value according to the current coordinate difference value and the initial coordinate difference value;
and generating the current action completion degree according to the current action completion value and the preset action completion value.
6. The method of claim 1, wherein the step of updating the preset action completion value according to the current polygon area and the initial polygon area comprises:
calculating a quotient value of the current polygon area and the initial polygon area;
and updating the preset action completion value according to the quotient value obtained by calculation.
7. The method of claim 1, wherein the method further comprises:
when the current action completion degree is less than or equal to the preset action completion degree threshold, taking a subsequent frame picture of the current frame picture as a new current frame picture, and re-executing the step of calculating the current coordinate difference value corresponding to the at least one feature point group.
8. The method of claim 1, wherein the initial coordinate difference value is a coordinate difference value of the at least one feature point group in a preset frame picture.
9. A facial muscle training apparatus, the apparatus comprising:
the system comprises a characteristic point group extraction module, a characteristic point group extraction module and a characteristic point group extraction module, wherein the characteristic point group extraction module is used for acquiring at least one characteristic point group of a target face, and each characteristic point group comprises two characteristic points;
a coordinate difference value calculating module, configured to calculate a current coordinate difference value corresponding to the at least one feature point group, where the current coordinate difference value is a coordinate difference value of the at least one feature point group in a current frame picture;
an action completion degree calculating module, configured to generate a current action completion degree according to the current coordinate difference value, an initial coordinate difference value and a preset action completion value, wherein the initial coordinate difference value represents the coordinate difference value of the at least one feature point group in an initial state; the at least one feature point group includes at least two feature point groups, and the step of calculating the current coordinate difference value corresponding to the at least one feature point group includes:
calculating the coordinate difference value corresponding to each of the at least two feature point groups respectively;
generating the current coordinate difference value according to the coordinate difference values corresponding to all the feature point groups;
the judging module is used for judging whether the current action completion degree is greater than a preset action completion degree threshold value or not, wherein when the current action completion degree is greater than the preset action completion degree threshold value, the current training action is determined to be completed;
wherein before the current action completion degree is generated according to the current coordinate difference value, the initial coordinate difference value and the preset action completion value,
the preset action completion value is updated according to a face positioning point group acquired from the current frame picture, so that the updated action completion value is used to calculate and generate the current action completion degree, wherein the face positioning point group comprises at least two feature points;
the step of updating the preset action completion value according to the face positioning point group obtained from the current frame picture comprises the following steps:
calculating the area of a current polygon formed by all feature points contained in the face positioning point group in the current frame picture;
and updating the preset action completion value according to the current polygon area and an initial polygon area, wherein the initial polygon area is a polygon area formed by all feature points contained in the face positioning point group in the preset frame picture.
10. An electronic device, comprising:
a memory for storing one or more programs;
a processor;
the one or more programs, when executed by the processor, implement the method of any of claims 1-8.
CN201811506296.9A 2018-12-10 2018-12-10 Facial muscle training method and device and electronic equipment Active CN109659006B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201811506296.9A CN109659006B (en) 2018-12-10 2018-12-10 Facial muscle training method and device and electronic equipment
PCT/CN2019/124202 WO2020119665A1 (en) 2018-12-10 2019-12-10 Facial muscle training method and apparatus, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811506296.9A CN109659006B (en) 2018-12-10 2018-12-10 Facial muscle training method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN109659006A CN109659006A (en) 2019-04-19
CN109659006B true CN109659006B (en) 2021-03-23

Family

ID=66113947

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811506296.9A Active CN109659006B (en) 2018-12-10 2018-12-10 Facial muscle training method and device and electronic equipment

Country Status (2)

Country Link
CN (1) CN109659006B (en)
WO (1) WO2020119665A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109659006B (en) * 2018-12-10 2021-03-23 深圳先进技术研究院 Facial muscle training method and device and electronic equipment
CN113327247B (en) * 2021-07-14 2024-06-18 中国科学院深圳先进技术研究院 Facial nerve function assessment method, device, computer equipment and storage medium
CN113837018B (en) * 2021-08-31 2024-06-14 北京新氧科技有限公司 Cosmetic progress detection method, device, equipment and storage medium
CN113837019B (en) * 2021-08-31 2024-05-10 北京新氧科技有限公司 Cosmetic progress detection method, device, equipment and storage medium
CN113837016B (en) * 2021-08-31 2024-07-02 北京新氧科技有限公司 Cosmetic progress detection method, device, equipment and storage medium

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013040443A2 (en) * 2011-09-15 2013-03-21 Sigma Instruments Holdings, Llc System and method for treating skin and underlying tissues for improved health, function and/or appearance
KR102094723B1 (en) * 2012-07-17 2020-04-14 삼성전자주식회사 Feature descriptor for robust facial expression recognition
CN104331685A (en) * 2014-10-20 2015-02-04 上海电机学院 Non-contact active calling method
CN107483834B (en) * 2015-02-04 2020-01-14 Oppo广东移动通信有限公司 Image processing method, continuous shooting method and device and related medium product
CN105678702B (en) * 2015-12-25 2018-10-19 北京理工大学 Face image sequence generation method and device based on feature tracking
CN107169397B (en) * 2016-03-07 2022-03-01 佳能株式会社 Feature point detection method and device, image processing system and monitoring system
CN106980815A (en) * 2017-02-07 2017-07-25 王俊 Objective facial paralysis evaluation method under supervision based on H-B grade scores
CN107633206B (en) * 2017-08-17 2018-09-11 平安科技(深圳)有限公司 Eyeball motion capture method, device and storage medium
CN108211241A (en) * 2017-12-27 2018-06-29 复旦大学附属华山医院 Facial muscle rehabilitation training system based on mirror visual feedback
CN108460345A (en) * 2018-02-08 2018-08-28 电子科技大学 Facial fatigue detection method based on facial key point localization
CN109659006B (en) * 2018-12-10 2021-03-23 深圳先进技术研究院 Facial muscle training method and device and electronic equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A Kinect-based oral rehabilitation system; Tse-Yu Pan et al.; 2015 International Conference on Orange Technologies (ICOT); 20151222; 71-74 *
Evaluation of the effect of facial muscle function training on the rehabilitation of paralyzed facial muscles in patients with facial paralysis; Nie Zhihui et al.; Journal of Zunyi Medical College; 20110820; Vol. 34, No. 04; 369-371 *

Also Published As

Publication number Publication date
CN109659006A (en) 2019-04-19
WO2020119665A1 (en) 2020-06-18

Similar Documents

Publication Publication Date Title
CN109659006B (en) Facial muscle training method and device and electronic equipment
US10832039B2 (en) Facial expression detection method, device and system, facial expression driving method, device and system, and storage medium
CN108510437B (en) Virtual image generation method, device, equipment and readable storage medium
US11900557B2 (en) Three-dimensional face model generation method and apparatus, device, and medium
KR102491140B1 (en) Method and apparatus for generating virtual avatar
CN105096353B (en) Image processing method and device
CN109064387A (en) Image special effect generation method, device and electronic equipment
CN111008935B (en) Face image enhancement method, device, system and storage medium
TWI780919B (en) Method and apparatus for processing face image, electronic device and storage medium
KR20230098244A (en) Adaptive skeletal joint facilitation
CN110148191A (en) The virtual expression generation method of video, device and computer readable storage medium
US20220300728A1 (en) True size eyewear experience in real time
WO2020224136A1 (en) Interface interaction method and device
CN111507889A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
US20230154084A1 (en) Messaging system with augmented reality makeup
US20230120037A1 (en) True size eyewear in real time
CN109087240B (en) Image processing method, image processing apparatus, and storage medium
WO2022016996A1 (en) Image processing method, device, electronic apparatus, and computer readable storage medium
CN107886568B (en) Method and system for reconstructing facial expression by using 3D Avatar
WO2021218650A1 (en) Adaptive rigid prior model training method and training apparatus, and face tracking method and tracking apparatus
CN115393487B (en) Virtual character model processing method and device, electronic equipment and storage medium
CN115223240B (en) Motion real-time counting method and system based on dynamic time warping algorithm
WO2023035725A1 (en) Virtual prop display method and apparatus
CN112802162B (en) Face adjusting method and device for virtual character, electronic equipment and storage medium
CN112348069B (en) Data enhancement method, device, computer readable storage medium and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant