CN114399709A - Child emotion recognition model training method and child emotion recognition method

Info

Publication number
CN114399709A
Authority
CN
China
Prior art keywords
video data
child
emotion recognition
marking
training
Prior art date
Legal status
Pending
Application number
CN202111667789.2A
Other languages
Chinese (zh)
Inventor
王磊
Current Assignee
Beijing Peking University Medical Brain Health Technology Co., Ltd.
Original Assignee
Beijing Peking University Medical Brain Health Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Beijing Peking University Medical Brain Health Technology Co., Ltd.
Priority to CN202111667789.2A
Publication of CN114399709A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G06N 20/10 Machine learning using kernel methods, e.g. support vector machines [SVM]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application provides a child emotion recognition model training method and a child emotion recognition method, wherein the model training method comprises the following steps: collecting video data containing facial expressions of a tested child, wherein the video data is collected based on emotion induction materials or teaching videos; filtering the video data and marking the filtered video data; extracting facial features of the tested child from the filtered video data; and training a support vector machine according to the facial features and the marking results to obtain a child emotion recognition model. With the child emotion recognition model obtained by this training, an instructor can be assisted in recognizing the emotions of autistic children, which helps guide the instructor to intervene and treat autistic children as early as possible.

Description

Child emotion recognition model training method and child emotion recognition method
Technical Field
The embodiment of the application relates to the technical field of image processing, in particular to a child emotion recognition model training method and a child emotion recognition method.
Background
Autism is a disorder characterized by impaired social communication and repetitive sensory-motor behaviors. The incidence of autism is rising worldwide and the number of autistic children continues to grow, and data show that the earlier the diagnosis, the better the effect of intervention and treatment. In China, the treatment of autism is based mainly on intervention, yet sufficiently experienced autism instructors are lacking, so a scheme that can assist instructors in recognizing the emotions of autistic children is needed.
Disclosure of Invention
In view of the above problems in the background art, embodiments of the present application provide a child emotion recognition model training method, a child emotion recognition method, an apparatus, an electronic device, and a computer-readable storage medium.
In a first aspect of the present application, there is provided a child emotion recognition model training method, including:
collecting video data containing facial expressions of children to be tested, wherein the video data are collected based on emotion induction materials or teaching videos;
filtering the video data, and marking the filtered video data;
extracting the facial features of the tested child in the filtered video data;
and training a support vector machine according to the facial features and the marking results to obtain a child emotion recognition model.
In a possible implementation manner, filtering the video data and marking the filtered video data includes:
marking the video data by a plurality of researchers;
if the marking results of at least two researchers on the same video data are the same, the video data is retained; otherwise, the video data is removed;
wherein the marking results include happiness, fear, sadness, calmness, anger, and disgust.
In one possible implementation manner, extracting the facial features of the tested child in the filtered video data includes:
selecting the center of each video frame in the filtered video data, respectively calculating the LBP value of a first preset plane, the LBP value of a second preset plane and the LBP value of a third preset plane, and representing the LBP values by using a histogram;
and cascading the histogram of the first preset plane, the histogram of the second preset plane and the histogram of the third preset plane to obtain the facial features.
In a second aspect of the present application, there is provided a child emotion recognition method including:
acquiring a video image to be processed, wherein the video image to be processed contains the facial expression of a tested child;
according to the video image to be processed, determining the emotion of the tested child using the child emotion recognition model obtained through training by the training method of the first aspect.
In a third aspect of the present application, there is provided a child emotion recognition model training apparatus, including:
the acquisition module is used for acquiring video data containing the facial expression of the tested child, wherein the video data is acquired based on emotion induction materials or teaching videos;
the marking module is used for filtering the video data and marking the filtered video data;
the extraction module is used for extracting the facial features of the tested child in the filtered video data;
and the training module is used for training a support vector machine according to the facial features and the marking results to obtain a child emotion recognition model.
In a possible implementation manner, the marking module is specifically configured to:
marking the video data by a plurality of researchers;
if the marking results of at least two researchers on the same video data are the same, the video data is retained; otherwise, the video data is removed;
wherein the marking results include happiness, fear, sadness, calmness, anger, and disgust.
In a possible implementation manner, the extraction module is specifically configured to:
selecting the center of each video frame in the filtered video data, respectively calculating the LBP value of a first preset plane, the LBP value of a second preset plane and the LBP value of a third preset plane, and representing the LBP values by using a histogram;
and cascading the histogram of the first preset plane, the histogram of the second preset plane and the histogram of the third preset plane to obtain the facial features.
In a fourth aspect of the present application, there is provided a child emotion recognition apparatus including:
the acquisition module is used for acquiring a video image to be processed, wherein the video image to be processed contains the facial expression of the tested child;
a determining module, used for determining, according to the video image to be processed, the emotion of the tested child using the child emotion recognition model obtained through training by the training method of the first aspect.
In a fifth aspect of the present application, there is provided an electronic device comprising a memory and a processor, the memory having stored thereon a computer program that, when executed by the processor, implements the method of any one of the first aspect or the method of the second aspect.
In a sixth aspect of the present application, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when executed by a processor, implements the method according to any one of the first aspect or the method according to the second aspect.
In the child emotion recognition model training method and the child emotion recognition method provided by the embodiments of the application, video data containing facial expressions of the tested child is collected; the video data is filtered and the filtered video data is marked; facial features of the tested child are extracted from the filtered video data; and a support vector machine is trained according to the facial features and the marking results to obtain a child emotion recognition model. The trained child emotion recognition model can assist an instructor in recognizing the emotions of autistic children, which helps guide the instructor to intervene and treat autistic children as early as possible.
It should be understood that what is described in this summary section is not intended to limit key or critical features of the embodiments of the application, nor is it intended to limit the scope of the application. Other features of the present application will become apparent from the following description.
Drawings
The above and other features, advantages and aspects of various embodiments of the present application will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. In the drawings, like or similar reference characters designate like or similar elements, and wherein:
fig. 1 shows a flowchart of a child emotion recognition model training method provided by an embodiment of the present application.
Fig. 2 shows a flowchart of a child emotion recognition method provided by an embodiment of the present application.
Fig. 3 shows a block diagram of a child emotion recognition model training apparatus provided in an embodiment of the present application.
Fig. 4 shows a block diagram of a child emotion recognition apparatus provided by an embodiment of the present application.
Fig. 5 shows a schematic structural diagram of an electronic device suitable for implementing embodiments of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Fig. 1 shows a flowchart of a child emotion recognition model training method provided by an embodiment of the present application. Referring to fig. 1, the method comprises the steps of:
step 101, collecting video data containing facial expressions of the children to be tested, wherein the video data are collected based on emotion induction materials or collected based on teaching videos.
The tested child may be a child identified after screening. The screening conditions may be as follows: the child has been diagnosed with autism spectrum disorder by an authoritative hospital; vision or corrected vision is normal; age is between 3 and 7 years; and consent has been given for the test data to be used for scientific research.
When the video data of the facial expression of the tested child is collected, the following two modes can be adopted for collection:
(1) Based on emotion induction materials
The emotion induction materials can be common learning materials for autistic children, which can effectively elicit emotional changes in them. A researcher reads the learning materials aloud, and facial expression video data of the tested child listening to the reading is collected by the camera device.
(2) Based on teaching videos
The researcher plays a teaching video to the tested child, and facial expression video data of the tested child watching the teaching video is collected by the camera device.
Step 102, filtering the video data, and marking the filtered video data.
Before the video data is filtered, it needs to be screened to remove video data in which the face is occluded, the face image is incomplete, the face is turned away, and the like.
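By way of illustration, this pre-screening can be approximated with an off-the-shelf face detector. The following is a minimal sketch, assuming OpenCV's bundled Haar cascade as a stand-in for whatever detector is actually used; the function name and thresholds are illustrative, not part of the application:

```python
import cv2  # pip install opencv-python

# OpenCV's bundled frontal-face Haar cascade, used here as a cheap screen.
_face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def frame_usable(frame_bgr, min_size=(80, 80)) -> bool:
    """Keep a frame only if exactly one sufficiently large frontal face is
    detected - a rough proxy for 'not occluded, complete, not turned away'."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = _face_detector.detectMultiScale(
        gray, scaleFactor=1.1, minNeighbors=5, minSize=min_size)
    return len(faces) == 1
```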
In one possible embodiment, the filtering of the video data may be performed in the following manner:
The video data is marked by a plurality of researchers; if the marking results of at least two researchers on the same video data are the same, the video data is retained; otherwise, the video data is removed. The marking results may include happiness, fear, sadness, calmness, anger, and disgust.
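The agreement rule can be made concrete as follows. This is a minimal sketch assuming the labels assigned by all researchers have already been collected per clip; all names are illustrative:

```python
from collections import Counter

# The six marking results used in this application.
EMOTIONS = {"happiness", "fear", "sadness", "calmness", "anger", "disgust"}

def filter_by_agreement(annotations):
    """Keep a clip only when at least two researchers gave it the same label.

    annotations maps a clip id to the labels assigned by the researchers,
    e.g. {"clip_001": ["happiness", "happiness", "calmness"]}.
    Returns the retained clips together with their agreed label.
    """
    retained = {}
    for clip_id, labels in annotations.items():
        label, count = Counter(labels).most_common(1)[0]
        if count >= 2 and label in EMOTIONS:
            retained[clip_id] = label  # marking result kept for training
        # clips without agreement between two researchers are removed
    return retained
```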
Step 103, extracting the facial features of the tested child in the filtered video data.
In the embodiment of the application, LBP (local binary patterns) is selected to extract the facial features of the tested child: centering on each video frame in the filtered video data, the LBP value of a first preset plane, the LBP value of a second preset plane, and the LBP value of a third preset plane are respectively calculated and each represented by a histogram; the histogram of the first preset plane, the histogram of the second preset plane, and the histogram of the third preset plane are then cascaded to obtain the facial features.
The first preset plane is an XY plane, the second preset plane is an XT plane, and the third preset plane is a YT plane. The facial features of the child under test can then be expressed as follows:
$$H_{i,j} = \sum_{x,y,t} I\{ f_j(x, y, t) = i \}, \qquad i = 0, 1, \ldots, n_j - 1;\; j = 0, 1, 2$$

$$\bar{H}_{i,j} = \frac{H_{i,j}}{\sum_{k=0}^{n_j-1} H_{k,j}}$$

where j is the plane index (j = 0, 1, 2 denote the XY, XT, and YT planes, respectively), $n_j$ represents the number of binary patterns on plane j, $f_j(x, y, t)$ is the LBP code of the pixel centered at (x, y, t) on plane j, and $I\{\cdot\}$ equals 1 when its argument holds and 0 otherwise.
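A minimal sketch of this extraction is given below, under the reading that the three planes are the orthogonal slices through the clip's center; scikit-image's uniform LBP operator stands in for the unspecified LBP variant, so the histogram bin count plays the role of $n_j$:

```python
import numpy as np
from skimage.feature import local_binary_pattern  # pip install scikit-image

P, R = 8, 1          # 8 neighbours at radius 1
N_BINS = P + 2       # number of 'uniform' LBP patterns per plane (n_j)

def lbp_top_features(volume: np.ndarray) -> np.ndarray:
    """LBP-TOP-style features for a grayscale clip of shape (T, H, W):
    compute a uniform LBP histogram on the XY, XT and YT center slices
    and cascade (concatenate) the three normalised histograms."""
    t_c, y_c, x_c = (s // 2 for s in volume.shape)
    planes = (
        volume[t_c, :, :],   # XY plane (j = 0): the center frame
        volume[:, y_c, :],   # XT plane (j = 1): center row over time
        volume[:, :, x_c],   # YT plane (j = 2): center column over time
    )
    hists = []
    for plane in planes:
        codes = local_binary_pattern(plane, P, R, method="uniform")
        hist, _ = np.histogram(codes, bins=N_BINS, range=(0, N_BINS))
        hists.append(hist / max(hist.sum(), 1))   # normalised histogram
    return np.concatenate(hists)                  # cascaded facial features
```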
Step 104, training a support vector machine according to the facial features and the marking results to obtain a child emotion recognition model.
In training the support vector machine, a training set containing N samples is given:

$$X = \{(x_1, y_1), \ldots, (x_N, y_N)\}$$

where each $x_n \in \mathbb{R}^K$ is a K-dimensional feature vector and each class label $y_n \in \{1, 2, \ldots, M\}$, $n = 1, 2, \ldots, N$.
In the embodiment of the application, the facial features and the marking results (namely the labels) are used as training sets to train the support vector machine, so that the emotion recognition model of the child can be obtained.
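Concretely, the training step might look like the sketch below using scikit-learn; the RBF kernel, the value of C, and the stand-in data are assumptions for illustration, not choices stated in the application:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_emotion_model(features: np.ndarray, labels: np.ndarray):
    """Train the child emotion recognition model from the cascaded LBP
    histograms (features, shape (N, K)) and the marking results (labels)."""
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    model.fit(features, labels)  # multi-class handled one-vs-one by SVC
    return model

# Illustrative call with stand-in data; real inputs come from the steps above.
rng = np.random.default_rng(0)
X = rng.random((60, 30))   # N = 60 clips, K = 30 cascaded histogram bins
y = rng.choice(["happiness", "fear", "sadness", "calmness", "anger", "disgust"],
               size=60)
emotion_model = train_emotion_model(X, y)
```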
Fig. 2 shows a flowchart of a child emotion recognition method provided by an embodiment of the present application. Referring to fig. 2, the method comprises the steps of:
step 201, a video image to be processed is obtained, and the video image to be processed contains the facial expression of the tested child.
It should be noted that, when the video image to be processed is acquired, the image of the tested child is captured directly by the video capture device; no emotion induction material or teaching video is required.
Step 202, determining the emotion of the tested child according to the video image to be processed, using the child emotion recognition model obtained through training by the above training method.
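Reusing the lbp_top_features() and emotion model sketches from the training section, the recognition step reduces to a single prediction per clip; again this is illustrative rather than the application's stated implementation:

```python
import numpy as np

def recognize_emotion(model, clip: np.ndarray) -> str:
    """Predict the tested child's emotion for one grayscale clip of shape
    (T, H, W), using the lbp_top_features() sketch defined earlier."""
    features = lbp_top_features(clip).reshape(1, -1)  # single-sample batch
    return str(model.predict(features)[0])

# e.g. emotion = recognize_emotion(emotion_model, captured_clip)
```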
In the embodiment of the application, the child emotion recognition model is obtained through training, and the model assists the instructor in recognizing the emotions of autistic children, which helps guide the instructor to intervene and treat autistic children as early as possible.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that the acts and modules referred to are not necessarily required in this application.
The above is a description of method embodiments, and the embodiments of the present application are further described below by way of apparatus embodiments.
Fig. 3 shows a block diagram of a child emotion recognition model training apparatus provided in an embodiment of the present application. Referring to fig. 3, the apparatus includes:
the acquisition module 301 is used for acquiring video data containing facial expressions of the children to be tested, wherein the video data is acquired based on emotion induction materials or acquired based on teaching videos.
A marking module 302, configured to filter the video data and mark the filtered video data.
And the extracting module 303 is configured to extract facial features of the child to be tested in the filtered video data.
And the training module 304 is used for training a support vector machine according to the facial features and the marking results to obtain a child emotion recognition model.
In some embodiments, the tagging module 302 is specifically configured to:
marking the video data by a plurality of researchers;
if the marking results of at least two researchers on the same video data are the same, the video data is retained; otherwise, the video data is removed;
wherein the marking results include happiness, fear, sadness, calmness, anger, and disgust.
In some embodiments, the extracting module 303 is specifically configured to:
selecting the center of each video frame in the filtered video data, respectively calculating the LBP value of a first preset plane, the LBP value of a second preset plane and the LBP value of a third preset plane, and representing the LBP values by using a histogram;
and cascading the histogram of the first preset plane, the histogram of the second preset plane and the histogram of the third preset plane to obtain the facial features.
Fig. 4 shows a block diagram of a child emotion recognition apparatus provided by an embodiment of the present application. Referring to fig. 4, the apparatus includes:
the acquiring module 401 is configured to acquire a video image to be processed, where the video image to be processed includes a facial expression of a child to be tested.
A determining module 402, used for determining the emotion of the tested child according to the video image to be processed, using the child emotion recognition model obtained through training by the above training method.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the described module may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
Fig. 5 shows a schematic structural diagram of an electronic device suitable for implementing embodiments of the present application.
As shown in fig. 5, the electronic device 500 includes: a processor 501 and a memory 503. Wherein the processor 501 is coupled to the memory 503, such as via the bus 502. Optionally, the electronic device 500 may also include a transceiver 504. It should be noted that the transceiver 504 is not limited to one in practical applications, and the structure of the electronic device 500 is not limited to the embodiment of the present application.
The Processor 501 may be a CPU (Central Processing Unit), a general-purpose Processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other Programmable logic device, a transistor logic device, a hardware component, or any combination thereof. Which may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with the disclosure. The processor 501 may also be a combination of implementing computing functionality, e.g., comprising one or more microprocessors, a combination of DSPs and microprocessors, and the like.
Bus 502 may include a path that transfers information between the above components. The bus 502 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus 502 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 5, but this does not mean there is only one bus or only one type of bus.
The Memory 503 may be a ROM (Read Only Memory) or other type of static storage device that can store static information and instructions, a RAM (Random Access Memory) or other type of dynamic storage device that can store information and instructions, an EEPROM (Electrically Erasable Programmable Read Only Memory), a CD-ROM (Compact Disc Read Only Memory) or other optical Disc storage, optical Disc storage (including Compact Disc, laser Disc, optical Disc, digital versatile Disc, blu-ray Disc, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited to these.
The memory 503 is used for storing application program codes for executing the scheme of the application, and the processor 501 controls the execution. The processor 501 is configured to execute application program code stored in the memory 503 to implement the content shown in the foregoing method embodiments.
Among them, electronic devices include but are not limited to: mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and fixed terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
The present application provides a computer-readable storage medium, on which a computer program is stored; when running on a computer, the program enables the computer to execute the corresponding content in the foregoing method embodiments. Compared with the prior art, in the embodiment of the application, the child emotion recognition model is obtained through training, and the model assists the instructor in recognizing the emotions of autistic children, which helps guide the instructor to intervene and treat autistic children as early as possible.
It should be understood that, although the steps in the flowcharts of the figures are shown in an order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of execution of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments, and which are not necessarily executed sequentially but may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
The foregoing describes only some embodiments of the present application. It should be noted that, for those skilled in the art, several improvements and refinements can be made without departing from the principle of the present application, and these improvements and refinements should also be regarded as falling within the protection scope of the present application.

Claims (10)

1. A child emotion recognition model training method is characterized by comprising the following steps:
collecting video data containing facial expressions of children to be tested, wherein the video data are collected based on emotion induction materials or teaching videos;
filtering the video data, and marking the filtered video data;
extracting the facial features of the tested child in the filtered video data;
and training a support vector machine according to the facial features and the marking results to obtain a child emotion recognition model.
2. The training method of claim 1, wherein the filtering the video data and marking the filtered video data comprises:
marking the video data by a plurality of researchers;
if the marking results of at least two researchers on the same video data are the same, the video data is retained; otherwise, the video data is removed;
wherein the marking results include happiness, fear, sadness, calmness, anger, and disgust.
3. The training method of claim 1, wherein extracting facial features of the child under test in the filtered video data comprises:
selecting the center of each video frame in the filtered video data, respectively calculating the LBP value of a first preset plane, the LBP value of a second preset plane and the LBP value of a third preset plane, and representing the LBP values by using a histogram;
and cascading the histogram of the first preset plane, the histogram of the second preset plane and the histogram of the third preset plane to obtain the facial features.
4. A method for recognizing emotion of a child, comprising:
acquiring a video image to be processed, wherein the video image to be processed contains the facial expression of a tested child;
determining, according to the video image to be processed, the emotion of the tested child using a child emotion recognition model obtained through training by the training method of claim 1.
5. A child emotion recognition model training device, comprising:
the acquisition module is used for acquiring video data containing the facial expression of a tested child, wherein the video data is acquired based on emotion induction materials or teaching videos;
the marking module is used for filtering the video data and marking the filtered video data;
the extraction module is used for extracting the facial features of the tested child in the filtered video data;
and the training module is used for training a support vector machine according to the facial features and the marking results to obtain a child emotion recognition model.
6. The training device of claim 5, wherein the labeling module is specifically configured to:
marking the video data by a plurality of researchers;
if the marking results of at least two researchers on the same video data are the same, the video data is retained; otherwise, the video data is removed;
wherein the marking results include happiness, fear, sadness, calmness, anger, and disgust.
7. The training device of claim 5, wherein the extraction module is specifically configured to:
selecting the center of each video frame in the filtered video data, respectively calculating the LBP value of a first preset plane, the LBP value of a second preset plane and the LBP value of a third preset plane, and representing the LBP values by using a histogram;
and cascading the histogram of the first preset plane, the histogram of the second preset plane and the histogram of the third preset plane to obtain the facial features.
8. A child emotion recognition apparatus, comprising:
the acquisition module is used for acquiring a video image to be processed, wherein the video image to be processed contains the facial expression of the tested child;
a determining module, configured to determine, according to the to-be-processed video image, an emotion of the child to be tested, using the child emotion recognition model obtained through the training method according to claim 1.
9. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program, wherein the processor, when executing the computer program, implements the method of any of claims 1-3 or the method of claim 4.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method of any one of claims 1 to 3 or the method of claim 4.
CN202111667789.2A 2021-12-30 2021-12-30 Child emotion recognition model training method and child emotion recognition method Pending CN114399709A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111667789.2A CN114399709A (en) 2021-12-30 2021-12-30 Child emotion recognition model training method and child emotion recognition method


Publications (1)

Publication Number Publication Date
CN114399709A true CN114399709A (en) 2022-04-26

Family

ID=81229586

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111667789.2A Pending CN114399709A (en) 2021-12-30 2021-12-30 Child emotion recognition model training method and child emotion recognition method

Country Status (1)

Country Link
CN (1) CN114399709A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107785061A (en) * 2017-10-10 2018-03-09 东南大学 Autism-spectrum disorder with children mood ability interfering system
CN108520012A (en) * 2018-03-21 2018-09-11 北京航空航天大学 Mobile Internet user comment method for digging based on machine learning
CN110619301A (en) * 2019-09-13 2019-12-27 道和安邦(天津)安防科技有限公司 Emotion automatic identification method based on bimodal signals
CN112220455A (en) * 2020-10-14 2021-01-15 深圳大学 Emotion recognition method and device based on video electroencephalogram signals and computer equipment
CN113657168A (en) * 2021-07-19 2021-11-16 西安理工大学 Convolutional neural network-based student learning emotion recognition method

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115064275A (en) * 2022-08-19 2022-09-16 山东心法科技有限公司 Method, equipment and medium for quantifying and training children computing capacity

Similar Documents

Publication Publication Date Title
CN111754541B (en) Target tracking method, device, equipment and readable storage medium
CN111563502B (en) Image text recognition method and device, electronic equipment and computer storage medium
CN107239666A (en) A kind of method and system that medical imaging data are carried out with desensitization process
Rahman et al. A framework for fast automatic image cropping based on deep saliency map detection and gaussian filter
CN111046969A (en) Data screening method and device, storage medium and electronic equipment
CN108932533A (en) Identification model construction method and device, character identifying method and device
CN114399709A (en) Child emotion recognition model training method and child emotion recognition method
Lang et al. Dual low-rank pursuit: Learning salient features for saliency detection
CN109784207B (en) Face recognition method, device and medium
CN111797704B (en) Action recognition method based on related object perception
Li et al. Bone age assessment based on deep neural networks with annotation-free cascaded critical bone region extraction
CN111563399A (en) Method and device for acquiring structured information of electronic medical record
CN113762237B (en) Text image processing method, device, equipment and storage medium
CN110223718B (en) Data processing method, device and storage medium
CN113821689A (en) Pedestrian retrieval method and device based on video sequence and electronic equipment
CN113469053A (en) Eye movement track identification method and system
CN115510931A (en) Method for generating abnormality detection model, abnormality detection method and electronic device
CN112614562A (en) Model training method, device, equipment and storage medium based on electronic medical record
CN112232282A (en) Gesture recognition method and device, storage medium and electronic equipment
CN113505665B (en) Student emotion interpretation method and device in school based on video
Rani et al. Microexpression Analysis: A Review
CN113111689A (en) Sample mining method, device, equipment and storage medium
CN116257622B (en) Label rendering method and device, storage medium and electronic equipment
CN111680722B (en) Content identification method, device, equipment and readable storage medium
CN116311316A (en) Medical record classification method, system, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220426