CN114917544A - Visual auxiliary training method and equipment for orbicularis oris function

Visual auxiliary training method and equipment for orbicularis oris function

Info

Publication number
CN114917544A
Authority
CN
China
Prior art keywords
training
signal
signals
computer
electromyographic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210519592.2A
Other languages
Chinese (zh)
Other versions
CN114917544B (en)
Inventor
朱敏
吴艳棋
赵翠莲
汪兰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ninth Peoples Hospital Shanghai Jiaotong University School of Medicine
University of Shanghai for Science and Technology
Original Assignee
Ninth Peoples Hospital Shanghai Jiaotong University School of Medicine
University of Shanghai for Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ninth Peoples Hospital Shanghai Jiaotong University School of Medicine and University of Shanghai for Science and Technology
Priority to CN202210519592.2A
Publication of CN114917544A
Application granted
Publication of CN114917544B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63B APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B23/00 Exercising apparatus specially adapted for particular parts of the body
    • A63B23/025 Exercising apparatus specially adapted for particular parts of the body for the head or the neck
    • A63B23/03 Exercising apparatus specially adapted for particular parts of the body for the head or the neck for face muscles
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316 Modalities, i.e. specific diagnostic methods
    • A61B5/389 Electromyography [EMG]

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Physics & Mathematics (AREA)
  • Otolaryngology (AREA)
  • Biophysics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Engineering & Computer Science (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention provides a visual auxiliary training method and equipment for orbicularis oris function. The method comprises the following steps: acquiring oral state information of a trainee during training; displaying, or triggering the display of, a corresponding target graphic on a display device according to the training item; and defining the form of the target graphic according to the oral state information.

Description

Visual auxiliary training method and equipment for orbicularis oris function
Technical Field
The invention relates to the fields of medical and rehabilitation equipment and biomedical engineering, and in particular to a visual auxiliary training method and equipment for orbicularis oris function.
Background
The upper airway is the passage through which inhaled air flows from the nostrils to the entrance of the trachea, and comprises the nasal cavity, nasopharynx, oropharynx and laryngopharynx. Obstruction can occur in any of these segments, with rhinitis and hypertrophy of the tonsils and/or adenoids being the leading causes. Adenoid hypertrophy occurs in about 34% of children and adolescents aged 5-14 years, and the incidence of allergic rhinitis is even higher. When the nasal cavity and nasopharynx are completely or partially blocked, the airflow enters the lower airway wholly or partly through the oral cavity, oropharynx and laryngopharynx; that is, the pediatric patient compensates by mouth breathing.
A long-term open-mouth habit causes the pediatric patient's lip muscles to become lax; the upper and lower lips may even evert and fail to close, the mandible retrudes, the airflow stimulates the oral cavity so that the hard palate becomes high-arched and narrow, and the mandible grows backwards and downwards. In addition, the impact of the airflow forces the tongue to droop, unbalancing the muscle forces on the buccal and palatal sides of the maxillary posterior teeth and narrowing the upper dental arch; weakness of the labial muscles also lets the upper anterior teeth protrude, producing a deep anterior overjet, the appearance commonly called the "adenoid face".
The cause of upper airway obstruction can be eliminated by surgical removal of the tonsils and/or adenoids, but long-term mouth breathing weakens the lip-closing muscles, and the lips of most pediatric patients still cannot close naturally after surgery. Breaking the open-mouth habit through postoperative training of orbicularis oris function is therefore one of the key points of treatment.
The training actions must reach a certain standard, yet evaluation at present relies mainly on subjective judgement of whether an action is performed correctly, so an objective, unified standard is lacking. Moreover, habit formation is a long-term process, and the monotonous, repetitive training actions rarely hold a child's interest or attention, so adherence to the training is difficult and the training effect is poor.
Disclosure of Invention
In view of the above technical problems, the present invention provides a visual auxiliary training method for orbicularis oris function, the method comprising:
acquiring oral state information of a trainee during training;
displaying, or triggering the display of, a corresponding target graphic on a display device according to the training item;
and defining the form of the target graphic according to the oral state information.
Further, the oral state information is defined by electromyographic signals collected from the surface of the trainee's orbicularis oris and/or pressure signals between the upper and lower lips.
Further, when the training item is the sipping line training, the reduction in length of the target graphic is defined according to the electromyographic signal.
Further, when the training item is the sipping lip training, the length of the target graphic is defined according to the pressure signal.
Further, when the training item is the "bo" (kiss) sound training, the size of the target graphic is defined according to the pressure signal, and the movement distance of the target graphic is defined according to the electromyographic signal.
Furthermore, before training, the electromyographic signals and the pressure signals of the actions related to the training items are collected to obtain the threshold value of each action.
Further, the form of the corresponding target graphic is defined according to the electromyographic energy value or the sample entropy value of the electromyographic signal.
Further, the form of the corresponding target graphic is defined according to the mean amplitude of the pressure signal.
The invention also provides a device for assisting the training of the function of the orbicularis oris muscle, comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform the operations of the above-described method.
The present invention also provides a computer readable medium storing instructions that, when executed, cause a system to perform the operations of the above-described method.
The existing means of evaluating lip-muscle training actions are subjective, the training process is monotonous and tedious, and progress is difficult to monitor. To address these problems, the visual auxiliary training method and device for orbicularis oris function disclosed by the invention use an orbicularis oris information acquisition device based on multi-channel surface electromyographic signals and pressure signals. With game graphics as the carrier, the electromyographic signals from the surface of the patient's orbicularis oris and the pressure signals between the upper and lower lips, acquired by the electromyographic electrodes and pressure sensors, are preprocessed and features are extracted for the different training actions. The extracted features are mapped into the corresponding training games, so that different training actions correspond to different games; the signal features thereby control the movement of characters or objects in the game environment, providing visual real-time training feedback and realizing an effective, quantifiable training process that helps the patient recover.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
Fig. 1 shows a flow diagram of a visual auxiliary training method for orbicularis oris function according to an embodiment of the invention;
Fig. 2 shows a schematic flow diagram of the sipping line training in an embodiment of the invention;
Fig. 3 illustrates a graphical user interface for the sipping line training in one embodiment of the invention;
Fig. 4 illustrates a schematic flow diagram of the sipping lip training in an embodiment of the present invention;
Fig. 5 illustrates a graphical user interface for the sipping lip training in one embodiment of the present invention;
Fig. 6 illustrates a flow diagram of the "bo" sound training in an embodiment of the present invention;
Fig. 7 illustrates a graphical user interface for the "bo" sound training in one embodiment of the invention;
Fig. 8 illustrates functional modules of an exemplary system that may be used in various embodiments of the invention.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
The present application is described in further detail below with reference to the attached figures.
In a typical configuration of the invention, the terminal, the device serving the network, and the trusted party each include one or more processors (e.g., Central Processing Units (CPUs)), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory. Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PCM), programmable random access memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of RAM, read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, and may be used to store information that can be accessed by a computing device.
The device referred to in the present invention includes, but is not limited to, a user device, a network device, or a device formed by integrating a user device and a network device through a network. The user device includes, but is not limited to, any mobile electronic product capable of human-computer interaction with a user (for example through a touch panel), such as a smart phone or tablet computer, and the mobile electronic product may employ any operating system, such as the Android or iOS operating system. The network device includes an electronic device capable of automatically performing numerical calculation and information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a programmable logic device (PLD), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and the like. The network device includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud of multiple servers; here, the cloud is composed of a large number of computers or network servers based on cloud computing, a kind of distributed computing in which one virtual supercomputer consists of a collection of loosely coupled computers. The network includes, but is not limited to, the internet, a wide area network, a metropolitan area network, a local area network, a VPN network, a wireless ad hoc network, and the like. Preferably, the device may also be a program running on the user device, the network device, or a device formed by integrating the user device and the network device, the touch terminal, or the network device and the touch terminal through a network.
Of course, those skilled in the art will appreciate that the foregoing is by way of example only, and that other existing or future devices, which may be suitable for use with the present invention, are also within the scope of the present invention and are hereby incorporated by reference.
In the description of the embodiments of the present invention, "a plurality" means two or more unless specifically defined otherwise.
As shown in Fig. 1, a visual auxiliary training method for orbicularis oris function according to an embodiment of the present invention includes:
S100, initializing the actions of the training items;
S200, selecting a training item;
S300, collecting oral state information;
S400, controlling the visual target graphic according to the oral state information;
and S500, ending the training.
In this embodiment, electromyographic electrodes are arranged on the surface of the trainee's orbicularis oris to collect electromyographic signals. Preferably, a plurality of electrodes are placed at different positions on the muscle surface, for example eight electrodes, four on the orbicularis oris surface above the upper lip and four on the surface below the lower lip; the specific number of electrodes is not limited here. A pressure sensor is arranged between the trainee's upper and lower lips to collect pressure signals. The oral state information of the trainee during training is thus obtained from the electromyographic signals and the pressure signals.
The raw signals collected from the electromyographic electrodes and the pressure sensor require signal conditioning. For example, the effective band of the raw electromyographic signal is 20-500 Hz, so a corresponding amplifying filter is arranged to extract it, and a notch filter is arranged to remove 50 Hz/60 Hz power-frequency interference; a signal amplification circuit is usually provided for the output of the pressure sensor. After the conditioned signals undergo analog-to-digital conversion, the data can be processed further in a data processing system; in this embodiment, the electromyographic signal and the pressure signal refer to these digitized signals.
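As a minimal illustration of the conditioning just described, the band-pass and notch filtering can also be applied digitally after acquisition. The following Python sketch uses SciPy; the 2 kHz sampling rate, filter order and notch quality factor are assumptions chosen for illustration, not values specified by this embodiment.

    import numpy as np
    from scipy.signal import butter, iirnotch, filtfilt

    FS = 2000.0  # assumed sampling rate in Hz

    def condition_emg(raw_emg, fs=FS, mains=50.0):
        """Band-pass the raw surface EMG to its effective 20-500 Hz band,
        then notch out power-frequency interference (use mains=60.0 where applicable)."""
        b_bp, a_bp = butter(4, [20.0, 500.0], btype="bandpass", fs=fs)
        x = filtfilt(b_bp, a_bp, np.asarray(raw_emg, dtype=float))
        b_n, a_n = iirnotch(mains, Q=30.0, fs=fs)
        return filtfilt(b_n, a_n, x)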
A lip sensor device combining electromyographic and pressure signals, with application number 202110359579.0 and entitled "lip sensor device combining myoelectricity and pressure signals", is convenient for the trainee to wear, integrates the electromyographic electrodes and the pressure sensor at the detection position on the mouth, and can output conditioned digital signals; it is therefore suitable as the device for acquiring the trainee's oral state information. Of course, when the method of the present invention is applied, the electromyographic electrodes and the pressure sensor may also be arranged in other ways, which are not limited here.
The orbicularis oris function training items in this embodiment include:
1. Sipping line training: take a piece of sterilized cotton thread (a length of 50 cm is recommended), put one end into the mouth, and draw the thread into the mouth using only the force of the lip muscles;
2. Sipping lip training: the trainee places a thin, light object with a smooth surface (such as a jade pendant) between the lips and holds it firmly by pursing the lips (it must not be gripped by the front teeth); when the lips relax, the object should drop naturally;
3. "Bo" (kiss) sound training: the trainee wraps the upper and lower lips over the upper and lower front teeth respectively, purses the lips firmly (neither lip should be visible when viewed from the front), holds for 3-5 seconds, then releases the lips forcefully outwards to produce a "bo" sound.
After the electromyographic electrodes and the pressure sensor have been fitted to the trainee and before the first training session, an initialization procedure is performed for the actions involved in the training items, to obtain, for each action, an action threshold for the electromyographic signal and a pressure threshold for the pressure signal. Preferably, the collected electromyographic and pressure signals may first be filtered and denoised, for example using wavelet denoising. The action threshold of the electromyographic signal can be obtained by a Gaussian-distribution method or a sample-entropy method; the pressure threshold of the pressure signal can be obtained by a mean-amplitude method.
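The initialization step can be sketched as follows, assuming the Gaussian-distribution method places the electromyographic action threshold at the mean plus k standard deviations of the rectified resting signal, and the mean-amplitude method takes a fraction of the average pressure amplitude recorded while the action is repeated; the constants k = 3 and fraction = 0.8 are illustrative assumptions, and the sample-entropy alternative is sketched later with the noodle game.

    import numpy as np

    def emg_action_threshold(rest_emg, k=3.0):
        """Gaussian-distribution method (assumed form): threshold = mean + k*std
        of the rectified EMG recorded while the orbicularis oris is at rest."""
        rectified = np.abs(np.asarray(rest_emg, dtype=float))
        return float(rectified.mean() + k * rectified.std())

    def pressure_action_threshold(action_trials, fraction=0.8):
        """Mean-amplitude method (assumed form): a fraction of the mean pressure
        amplitude reached across repeated calibration trials of the action."""
        trial_means = [float(np.mean(np.abs(t))) for t in action_trials]
        return fraction * float(np.mean(trial_means))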
After the initialization is completed, a training item is selected on the terminal device executing the method of this embodiment. The terminal device may be connected to a display device, or may use its own display, so that a graphical interface matched to the orbicularis oris function training can be shown to the trainee. The terminal device may be a computer, tablet, mobile phone, set-top box, or the like.
When the selected training item is the sipping line training, the graphical interface shown on the display device includes a target graphic matched to that training, and the reduction in length of the target graphic is defined according to the collected electromyographic signals.
This embodiment provides a "noodle-eating" game matched to the sipping line training. The control principle is based on the electromyographic signal: features quantified from the surface electromyographic signals acquired by the sensors control the change in length of the noodles eaten by the character in the game environment.
As shown in Figs. 2 and 3, the graphical interface shows a child sucking in noodles. While the trainee performs the sipping line training, features such as the electromyographic energy value or the sample entropy value are extracted from the collected electromyographic signals. When an extracted feature value reaches the action threshold for this action, it is reflected as the child sucking in the noodles, whose length decreases accordingly; the extracted feature value thus controls the reduction in noodle length in the graphical interface. For the trainee, as the cotton thread is drawn into the mouth, the noodles in the graphical interface shorten correspondingly. When training ends, the training data are saved, and training statistics such as training duration, total length of noodles eaten, estimated average muscle force and training score are displayed in the graphical interface; the interface may also provide an entry for viewing historical training results and statistical analyses.
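A sketch of the feature extraction and game mapping described above: the electromyographic energy value (and, as an alternative feature, the sample entropy) is computed over a short analysis window and, when the action threshold is reached, the on-screen noodle is shortened by a fixed step. The window handling, the step size and the sample-entropy parameters m and r are assumptions for illustration.

    import numpy as np

    def emg_energy(window):
        """EMG energy value: mean squared amplitude over one analysis window."""
        w = np.asarray(window, dtype=float)
        return float(np.mean(w ** 2))

    def sample_entropy(window, m=2, r_factor=0.2):
        """Sample entropy SampEn(m, r) with r = r_factor * std and Chebyshev distance."""
        x = np.asarray(window, dtype=float)
        r = r_factor * x.std()

        def match_count(mm):
            templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
            hits = 0
            for i in range(len(templates) - 1):
                dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
                hits += int(np.sum(dist <= r))
            return hits

        b, a = match_count(m), match_count(m + 1)
        return float(-np.log(a / b)) if a > 0 and b > 0 else float("inf")

    def update_noodle(noodle_length, emg_window, action_threshold, step=5.0):
        """Shorten the noodle only while the extracted feature reaches the threshold."""
        if emg_energy(emg_window) >= action_threshold:
            noodle_length = max(0.0, noodle_length - step)
        return noodle_length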
During the sipping line training, the real-time visual feedback helps the trainee standardize the training action: if the action is not performed correctly, the noodles do not shorten as they should, while consistently correct actions help form the training habit and enhance the training effect. The visual assistance also improves the trainee's attention and motivation, and is especially engaging for children.
Preferably, the collected electromyographic signals may be subjected to filtering and denoising in advance, for example, by using wavelet denoising.
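A minimal wavelet-denoising sketch using PyWavelets follows; the embodiment names wavelet denoising but not a particular wavelet, decomposition depth or thresholding rule, so the "db4" wavelet, four levels and the universal soft threshold below are assumptions.

    import numpy as np
    import pywt

    def wavelet_denoise(signal, wavelet="db4", level=4):
        """Soft-threshold the detail coefficients and reconstruct the signal."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745  # noise scale from finest details
        thr = sigma * np.sqrt(2.0 * np.log(len(signal)))  # universal threshold
        coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[: len(signal)]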
In some embodiments, the characteristic value of the electromyographic signal is extracted from an optimal channel, wherein the optimal channel is a channel with the highest signal-to-noise ratio in multiple channels.
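The embodiment does not specify how the signal-to-noise ratio of each channel is estimated; one plausible reading, sketched below, compares each channel's RMS amplitude during the action with its RMS amplitude at rest and picks the channel with the largest ratio.

    import numpy as np

    def rms(x):
        return float(np.sqrt(np.mean(np.square(np.asarray(x, dtype=float)))))

    def best_channel(action_segments, rest_segments):
        """Index of the channel with the highest action-to-rest RMS ratio.

        Both arguments are lists indexed by channel, holding samples recorded
        during the action and during rest respectively."""
        snr = [rms(a) / max(rms(r), 1e-12) for a, r in zip(action_segments, rest_segments)]
        return int(np.argmax(snr))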
In some embodiments, the training intensity can also be set when the sipping line training is selected, for example a training time or a total length of noodles to be eaten. When the training target is reached, a training-end prompt or the training result is displayed on the graphical interface.
When the selected training item is the sipping lip training, the graphical interface shown on the display device includes a target graphic matched to that training, and the length of the target graphic is defined according to the acquired pressure signal.
This embodiment provides a "spring-pressing" game matched to the sipping lip training. The control principle is based on the pressure signal: the mean amplitude of the pressure signal acquired by the sensor controls how far the spring in the game environment is compressed, the goal being to keep the spring within a specified compressed-height range.
As shown in Figs. 4 and 5, the graphical interface shows a spring rocking horse. While the trainee performs the sipping lip training, features such as the mean amplitude are extracted from the acquired pressure signal. When the extracted feature value reaches the pressure threshold for this action, the spring in the graphical interface is compressed and its length decreases, so the extracted feature value is reflected in the spring's length. For the trainee, the force applied while pursing the lips is reflected in real time by the length (or compression) of the spring in the graphical interface. When training ends, the training data are saved, and training statistics such as training duration, time meeting the standard, estimated average muscle force and training score are displayed in the graphical interface; the interface may also provide an entry for viewing historical training results and statistical analyses.
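A sketch of the pressure-to-spring mapping described above: the mean amplitude of the most recent pressure window is converted into a compression amount and checked against the target range. The linear scaling and the target band limits are illustrative assumptions.

    import numpy as np

    def spring_compression(pressure_window, pressure_threshold, max_compression=100.0):
        """Map the mean pressure amplitude to a spring compression (arbitrary units)."""
        mean_amp = float(np.mean(np.abs(pressure_window)))
        if mean_amp < pressure_threshold:
            return 0.0  # lip force below threshold: spring stays uncompressed
        scale = max_compression / (2.0 * pressure_threshold)  # assumed linear scaling
        return min(max_compression, (mean_amp - pressure_threshold) * scale)

    def within_target_band(compression, low=40.0, high=70.0):
        """True while the spring is held inside the prescribed compression range."""
        return low <= compression <= high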
During the sipping lip training, the real-time visual feedback helps the trainee adjust and maintain the lip-pursing force: if the force is insufficient, the spring is not compressed or is only slightly compressed, and if the force weakens, the spring in the graphical interface lengthens again, reminding the trainee to purse the lips more firmly. As before, the visual assistance improves the trainee's attention and motivation, and is especially engaging for children.
Preferably, the collected pressure signal may be subjected to filtering and denoising in advance, for example, by using wavelet denoising.
In some embodiments, the training intensity can also be set when the sipping lip training is selected, for example targets for training time, time meeting the standard and difficulty level. The difficulty level may be tied to the spring compression: the action is judged to meet the standard when the compression preset for that level is reached, and the time meeting the standard is computed accordingly. When the training target is reached, a training-end prompt or the training result is displayed on the graphical interface.
When the selected training item is the "bo" sound training, a target graphic matched to that training is displayed, or triggered to be displayed, on the graphical interface of the display device; the size of the target graphic is defined according to the collected pressure signal, and its movement distance is defined according to the collected electromyographic signal.
This embodiment provides a "bubble-blowing" game matched to the "bo" sound training. The control principle uses both the electromyographic signal and the pressure signal: the mean amplitude of the pressure signal acquired by the sensor controls the size, i.e. the diameter, of the bubble blown by the character in the game environment, and the features of the electromyographic signals control the distance the bubble travels.
As shown in Figs. 6 and 7, the graphical interface shows a child blowing bubbles. During the "bo" sound training, features are extracted from the collected electromyographic and pressure signals, such as the electromyographic energy value or sample entropy value of the electromyographic signals and the mean amplitude of the pressure signal.
When the trainee purses the lips, the child in the graphical interface blows a bubble once the feature value extracted from the pressure signal reaches the pressure threshold for this action, and the magnitude of the feature value controls the size of the bubble (for example by changing its diameter). When the trainee releases the lips outwards to produce the "bo" sound, the distance the bubble travels reflects whether the feature value extracted from the electromyographic signal reaches the action threshold for this action. For the trainee, the "bo" sound action is thus reflected in the size of the bubble and the distance it is blown. When training ends, the training data are saved, and training statistics such as the number of repetitions, the number of repetitions meeting the standard, estimated average muscle force and training score are displayed in the graphical interface; the interface may also provide an entry for viewing historical training results and statistical analyses.
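A sketch of the two-signal mapping for the bubble game: the mean pressure amplitude sets the bubble diameter and the electromyographic energy of the outward "bo" release sets how far the bubble travels. The linear scalings and clipping limits are assumptions for illustration.

    import numpy as np

    def bubble_update(pressure_window, emg_window, pressure_threshold, emg_threshold,
                      max_diameter=80.0, max_distance=300.0):
        """Return (diameter, distance) for the on-screen bubble; (0, 0) if the lips
        were never pursed firmly enough to form a bubble."""
        mean_amp = float(np.mean(np.abs(pressure_window)))
        if mean_amp < pressure_threshold:
            return 0.0, 0.0  # lips not pursed firmly enough: no bubble
        diameter = min(max_diameter, max_diameter * mean_amp / (2.0 * pressure_threshold))
        energy = float(np.mean(np.square(np.asarray(emg_window, dtype=float))))
        if energy < emg_threshold:
            return diameter, 0.0  # bubble formed but the outward release was too weak
        distance = min(max_distance, max_distance * energy / (2.0 * emg_threshold))
        return diameter, distance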
During the "bo" sound training, the real-time visual feedback helps the trainee adjust the lip-closing force and the release action, while also improving attention and motivation, which is especially engaging for children.
Preferably, the collected electromyographic signals and pressure signals may be subjected to filtering and denoising in advance, for example, by using wavelet denoising.
In some embodiments, the characteristic values of the electromyographic signals are extracted from an optimal channel, wherein the optimal channel is a channel with the highest signal-to-noise ratio in the multiple channels.
In some embodiments, the training intensity can also be set when the "bo" sound training is selected, for example targets for the number of repetitions, the number of repetitions meeting the standard and the difficulty level. The difficulty level may be tied to the bubble size and/or travel distance: the action is judged to meet the standard when the size and/or distance preset for that level is reached, and the number of repetitions meeting the standard is counted accordingly. When the training target is reached, a training-end prompt or the training result is displayed on the graphical interface.
The present embodiments also provide a computer readable storage medium having stored thereon computer code which, when executed, performs a method as in any one of the preceding.
The present embodiment also provides a computer program product, which when executed by a computer device performs the method of any of the preceding claims.
The present embodiment further provides a computer device, including:
one or more processors;
a memory for storing one or more computer programs;
the one or more computer programs, when executed by the one or more processors, cause the one or more processors to implement the method as recited in any preceding claim.
FIG. 8 illustrates an exemplary system that can be used to implement the various embodiments described in this disclosure.
As shown in fig. 8, in some embodiments, the system 1000 may be configured as any of the user terminal devices in the various embodiments described herein. In some embodiments, system 1000 may include one or more computer-readable media (e.g., system memory or NVM/storage 1020) having instructions and one or more processors (e.g., processor(s) 1005) coupled with the one or more computer-readable media and configured to execute the instructions to implement modules to perform actions described in this disclosure.
For one embodiment, system control module 1010 may include any suitable interface controllers to provide any suitable interface to at least one of the processor(s) 1005 and/or to any suitable device or component in communication with system control module 1010.
The system control module 1010 may include a memory controller module 1030 to provide an interface to the system memory 1015. Memory controller module 1030 may be a hardware module, a software module, and/or a firmware module.
System memory 1015 may be used to load and store data and/or instructions, for example, for system 1000. For one embodiment, system memory 1015 may include any suitable volatile memory, such as suitable DRAM. In some embodiments, system memory 1015 may include double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, system control module 1010 may include one or more input/output (I/O) controllers to provide an interface to NVM/storage 1020 and communication interface(s) 1025.
For example, NVM/storage 1020 may be used to store data and/or instructions. NVM/storage 1020 may include any suitable non-volatile memory (e.g., flash memory) and/or may include any suitable non-volatile storage device(s) (e.g., one or more hard disk drives (HDDs), one or more compact disc (CD) drives, and/or one or more digital versatile disc (DVD) drives).
NVM/storage 1020 may include storage resources that are physically part of a device on which system 1000 is installed or may be accessed by the device and not necessarily part of the device. For example, NVM/storage 1020 may be accessed over a network via communication interface(s) 1025.
Communication interface(s) 1025 may provide an interface for system 1000 to communicate over one or more networks and/or with any other suitable device. System 1000 may communicate wirelessly with one or more components of a wireless network according to any of one or more wireless network standards and/or protocols.
For one embodiment, at least one of the processor(s) 1005 may be packaged together with logic for one or more controller(s) of the system control module 1010, such as the memory controller module 1030. For one embodiment, at least one of the processor(s) 1005 may be packaged together with logic for one or more controller(s) of the system control module 1010 to form a System In Package (SiP). For one embodiment, at least one of the processor(s) 1005 may be integrated on the same die with logic for one or more controller(s) of the system control module 1010. For one embodiment, at least one of the processor(s) 1005 may be integrated on the same die with logic of one or more controllers of the system control module 1010 to form a system on a chip (SoC).
In various embodiments, system 1000 may be, but is not limited to being: a server, a workstation, a desktop computing device, or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.). In various embodiments, system 1000 may have more or fewer components and/or different architectures. For example, in some embodiments, system 1000 includes one or more cameras, a keyboard, a Liquid Crystal Display (LCD) screen (including a touch screen display), a non-volatile memory port, multiple antennas, a graphics chip, an Application Specific Integrated Circuit (ASIC), and speakers.
It should be noted that the present invention may be implemented in software and/or in a combination of software and hardware, for example, as an Application Specific Integrated Circuit (ASIC), a general purpose computer or any other similar hardware device. In one embodiment, the software program of the present invention may be executed by a processor to implement the steps or functions described above. Also, the software programs (including associated data structures) of the present invention can be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Further, some of the steps or functions of the present invention may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
In addition, part of the present invention may be implemented as a computer program product, for example computer program instructions which, when executed by a computer, can invoke or provide the method and/or technical solution according to the present invention through the operation of the computer. Those skilled in the art will appreciate that the forms in which computer program instructions reside on a computer-readable medium include, but are not limited to, source files, executable files, installation package files, and the like, and that the ways in which a computer executes the instructions include, but are not limited to: the computer directly executes the instruction; or the computer compiles the instruction and then executes the corresponding compiled program; or the computer reads and executes the instruction; or the computer reads and installs the instruction and then executes the corresponding installed program. The computer-readable medium herein can be any available computer-readable storage medium or communication medium that can be accessed by a computer.
Communication media includes media by which communication signals, including, for example, computer readable instructions, data structures, program modules, or other data, are transmitted from one system to another. Communication media may include conductive transmission media such as cables and wires (e.g., fiber optics, coaxial, etc.) and wireless (non-conductive transmission) media capable of propagating energy waves such as acoustic, electromagnetic, RF, microwave, and infrared. Computer readable instructions, data structures, program modules or other data may be embodied in a modulated data signal, such as a carrier wave or similar mechanism that is embodied in a wireless medium, such as part of spread-spectrum techniques, for example. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The modulation may be analog, digital or hybrid modulation techniques.
By way of example, and not limitation, computer-readable storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. For example, computer-readable storage media include, but are not limited to, volatile memory such as random access memory (RAM, DRAM, SRAM); and non-volatile memory such as flash memory, various read-only memories (ROM, PROM, EPROM, EEPROM), magnetic and ferromagnetic/ferroelectric memories (MRAM, FeRAM); and magnetic and optical storage devices (hard disk, tape, CD, DVD); or other now known media or later developed that can store computer-readable information/data for use by a computer system.
An embodiment according to the invention comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform a method and/or solution according to embodiments of the invention as described above.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not to denote any particular order.

Claims (10)

1. A visual auxiliary training method for orbicularis oris function, characterized by comprising the following steps:
acquiring oral state information of a trainee during training;
displaying, or triggering the display of, a corresponding target graphic on a display device according to the training item;
and defining the form of the target graphic according to the oral state information.
2. The method according to claim 1, characterized in that the oral state information is defined by electromyographic signals collected from the surface of the trainee's orbicularis oris and/or pressure signals between the upper and lower lips.
3. The method according to claim 2, characterized in that, when the training item is the sipping line training, the reduction in length of the target graphic is defined according to the electromyographic signal.
4. The method according to claim 2, characterized in that, when the training item is the sipping lip training, the length of the target graphic is defined according to the pressure signal.
5. The method according to claim 2, characterized in that, when the training item is the "bo" sound training, the size of the target graphic is defined according to the pressure signal, and the movement distance of the target graphic is defined according to the electromyographic signal.
6. The method according to claim 2, characterized in that, before training, the electromyographic signals and the pressure signals of the actions involved in the training items are collected to obtain a threshold value for each action.
7. The method according to claim 2, characterized in that the form of the corresponding target graphic is defined according to the electromyographic energy value or the sample entropy value of the electromyographic signal.
8. The method according to claim 2, characterized in that the form of the corresponding target graphic is defined according to the mean amplitude of the pressure signal.
9. An apparatus for assisting training of orbicularis oris muscle function, the apparatus comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform operations according to the method of any of claims 1 to 8.
10. A computer-readable medium storing instructions that, when executed, cause a system to perform operations according to any one of claims 1 to 8.
CN202210519592.2A 2022-05-13 2022-05-13 Visual method and device for assisting orbicularis oris function training Active CN114917544B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210519592.2A CN114917544B (en) 2022-05-13 2022-05-13 Visual method and device for assisting orbicularis oris function training

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210519592.2A CN114917544B (en) 2022-05-13 2022-05-13 Visual method and device for assisting orbicularis oris function training

Publications (2)

Publication Number Publication Date
CN114917544A (en) 2022-08-19
CN114917544B (en) 2023-09-22

Family

ID=82808325

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210519592.2A Active CN114917544B (en) 2022-05-13 2022-05-13 Visual method and device for assisting orbicularis oris function training

Country Status (1)

Country Link
CN (1) CN114917544B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017184274A1 (en) * 2016-04-18 2017-10-26 Alpha Computing, Inc. System and method for determining and modeling user expression within a head mounted display
CN108415560A (en) * 2018-02-11 2018-08-17 广东欧珀移动通信有限公司 Electronic device, method of controlling operation thereof and Related product
CN109646889A (en) * 2019-02-18 2019-04-19 河南翔宇医疗设备股份有限公司 Tongue muscle training system and tongue muscle training equipment
CN109885173A (en) * 2018-12-29 2019-06-14 深兰科技(上海)有限公司 A kind of noiseless exchange method and electronic equipment
CN110865705A (en) * 2019-10-24 2020-03-06 中国人民解放军军事科学院国防科技创新研究院 Multi-mode converged communication method and device, head-mounted equipment and storage medium
US20210216821A1 (en) * 2020-01-09 2021-07-15 Fujitsu Limited Training data generating method, estimating device, and recording medium
CN113274038A (en) * 2021-04-02 2021-08-20 上海大学 Lip sensor device combining myoelectricity and pressure signals
CN113362924A (en) * 2021-06-05 2021-09-07 郑州铁路职业技术学院 Medical big data-based facial paralysis rehabilitation task auxiliary generation method and system

Also Published As

Publication number Publication date
CN114917544B (en) 2023-09-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant