CN111814718A - Attention detection method integrating multiple discrimination technologies

Attention detection method integrating multiple discrimination technologies

Info

Publication number
CN111814718A
CN111814718A
Authority
CN
China
Prior art keywords
face
attention
frames
facial
expression
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010691056.1A
Other languages
Chinese (zh)
Inventor
高飞
李帅
葛一粟
卢书芳
翁立波
张元鸣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN202010691056.1A
Publication of CN111814718A
Withdrawn

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1126 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb using a particular sensing technique
    • A61B5/1128 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb using a particular sensing technique using image analysis
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/168 Evaluating attention deficit, hyperactivity
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Pathology (AREA)
  • Mathematical Physics (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Surgery (AREA)
  • Medical Informatics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Physiology (AREA)
  • Psychiatry (AREA)
  • Developmental Disabilities (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Fuzzy Systems (AREA)
  • Signal Processing (AREA)
  • Child & Adolescent Psychology (AREA)
  • Educational Technology (AREA)
  • Hospice & Palliative Care (AREA)

Abstract

The invention discloses an attention detection method that fuses multiple discrimination techniques. The method comprises three stages: face pose estimation, facial expression analysis, and upper-body behavior analysis. The collected images are processed by each of the three stages, and the analysis results of the three stages are then combined to obtain the final attention detection result. With this technique, no additional equipment is required, so the detection cost is greatly reduced and the economic benefit improved; attention is analyzed quantitatively and, according to the quantitative result, classified into four levels ranging from not concentrated to fully concentrated, which greatly improves the accuracy of attention detection.

Description

Attention detection method integrating multiple discrimination technologies
Technical Field
The invention relates to the technical field of computer vision and education informatization, in particular to an attention detection method integrating multiple discrimination technologies.
Background
Whether students' attention is focused in class is an important indicator of learning efficiency and teaching quality. Traditional analysis of student attention relies mainly on classroom observation or rating-scale comparison, which clearly lags behind in terms of informatization, standardization, and intelligence. With the rapid development of information technology, especially computer vision, intelligent information systems have been widely applied to many fields of daily life; it has become feasible to deploy an information system in the classroom to observe and record students' behavior, providing a technical approach to automatically observing student attention.
With the wide adoption of smart classrooms, student attention detection has become an important component and has been widely studied by researchers and institutions. The invention patent (inventors: Sun Ning, Liu Jixin; application No. 201910728874.1; title: Student attention analysis system based on wearable devices and multi-modal intelligent analysis) discloses a system that uses a multi-target detection algorithm based on a deep neural network to detect the teacher and the projector screen in the video and determine the positions of these two targets, and finally fuses the posture data with the video detection data to analyze the current student's attention. The invention patent (inventors: Huang Liya, Wang Shen, et al.; application No. 201810530784.7; title: Learning attention evaluation system based on electroencephalogram) discloses a system in which each student is equipped with a portable electroencephalogram (EEG) acquisition device; the wearable device collects the students' EEG data, and the students' learning attention state is evaluated in real time by analyzing the EEG data of the class. The invention patent (inventors: Liu Wei, Xu Jing, et al.; application No. 201910451277.9; title: A student attention measurement system based on movable seats) discloses a system comprising three parts, namely student face-orientation detection, movable-seat positioning, and attention-orientation measurement; the attention-orientation measurement combines the position coordinates of the movable seat, the orientation of the seat, and the student's face orientation to calculate the direction of the student's visual attention. The invention patent (inventors: Yankee and Chili; application No. 201810297899.6; title: Classroom attention detection method, device, equipment and computer-readable medium) provides a method, device, equipment, and computer-readable medium that can accurately obtain the current student's attention through analysis of the student's face orientation, pupils, and eye fatigue. The invention patent (inventors: Dong Shi, Zhang Shuo, et al.; application No. 201910435766.5; title: A method for real-time analysis of students' in-class learning state in a natural teaching environment) provides a method that judges a student's current learning state by detecting whether a human face appears in the student's static position area, combined with expression analysis and head pose.
The invention patent (inventors: Chen Liangying, Liu Leyuan, et al.; application No. 201410836650.X; title: Student classroom attention detection method and system) discloses a method that converts the two-dimensional position of the face in the image into a two-dimensional position on a sitting-height reference plane in the classroom, adds a prior value of student sitting height to obtain the three-dimensional position of the face in the classroom, and then calculates the student's point of attention on the teaching display board by combining the face orientation pose.
Some of the above methods require every student to wear head-mounted equipment, which is costly and greatly affects the students' classroom experience and the teaching effect; the remaining methods suffer from relatively large errors.
Disclosure of Invention
In view of the above problems of the conventional attention detection method, it is an object of the present invention to provide an attention detection method that combines a plurality of discrimination techniques.
The attention detection method fusing multiple discrimination technologies is characterized by comprising the following steps:
step 1: in the face pose estimation stage, classify face poses with a convolutional neural network into specific pose categories, specifically:
step 1.1: train a deep convolutional neural network on face images of five different pose categories to obtain a trained face pose classification model M1, with face pose class labels C = {c1, c2, c3, c4, c5}, where c1, c2, c3, c4 and c5 represent five different face poses, corresponding respectively to a frontal face, a 45-degree side face, a head-up face, a head-down face and a 75-degree side face;
step 1.2: let the total number of image frames collected in the time period t be N, where the number of frontal-face frames is N1, the number of 45-degree side-face frames is N2, the number of head-up frames is N3, the number of head-down frames is N4, and the number of 75-degree side-face frames is N5; calculate the attention detection result r1 of the face pose estimation stage according to formula (1):
r1 = (N1 + N2 + N3) / N    (1)
where r1 ∈ [0, 1.0], 0 means inattention and 1.0 means full attention;
step 2: in the facial expression analysis stage, classify facial expressions with a convolutional neural network into specific expression categories, specifically:
step 2.1: train a deep convolutional neural network on face images of five different expressions to obtain a trained facial expression classification model M2, with facial expression class labels L = {l1, l2, l3, l4, l5}, where l1, l2, l3, l4 and l5 represent five different facial expressions, corresponding respectively to a normal expression, happiness, sadness, confusion and surprise;
step 2.2: let the total number of image frames acquired within time t be N, where the number of normal-expression frames is F1, the number of happy-expression frames is F2, the number of sad-expression frames is F3, the number of confused-expression frames is F4, and the number of surprised-expression frames is F5; calculate the attention detection result r2 of the facial expression analysis stage according to formula (2):
r2 = (F1 + F4) / N    (2)
where r2 ∈ [0, 1.0], 0 means inattention and 1.0 means full attention;
step 3: in the upper-body behavior analysis stage, classify upper-body behaviors with a convolutional neural network into specific behavior categories, specifically:
step 3.1: train a deep convolutional neural network on images of four different upper-body behaviors to obtain a trained upper-body behavior classification model M3, with upper-body behavior class labels K = {k1, k2, k3, k4}, where k1, k2, k3 and k4 represent four different upper-body behaviors, corresponding respectively to normal behavior, resting the face on a hand, drinking water and playing with a mobile phone;
step 3.2: let the total number of image frames collected within time t be N, where the number of normal-behavior frames is P1, the number of face-on-hand frames is P2, the number of drinking frames is P3, and the number of phone-playing frames is P4; calculate the attention detection result r3 of the upper-body behavior analysis stage according to formula (3):
r3 = P1 / N    (3)
where r3 ∈ [0, 1.0], 0 means inattention and 1.0 means full attention;
step 4: obtain the comprehensive attention evaluation result RLT according to formulas (4) and (5):
R = r1 × r2 × r3    (4)
[Formula (5) is given as an image in the original publication; it maps the score R to one of four attention levels to yield RLT.]
where R ∈ [0, 1.0], 0 means inattention and 1.0 means full attention.
The invention has the beneficial effects that:
1) The invention requires no head-mounted equipment and acquires data only through a camera, which greatly reduces the cost;
2) The invention performs attention analysis with convolutional neural networks. The input images are classified by three models, namely a face pose classification model, a facial expression classification model, and an upper-body behavior classification model, and are assigned the corresponding class labels; attention is then evaluated comprehensively by fusing multiple discrimination techniques, which can further improve the accuracy of attention discrimination.
Drawings
FIG. 1 is a schematic framework diagram of the attention detection method fusing multiple discrimination techniques according to the present invention.
Detailed Description
The invention is further described below with reference to the drawings and examples of the specification. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in fig. 1, the attention detection method combining multiple discrimination techniques of the present invention specifically includes the following steps:
step 1: in the face pose estimation stage, classify face poses with a convolutional neural network into specific pose categories, specifically:
step 1.1: train a deep convolutional neural network on face images of five different pose categories to obtain a trained face pose classification model M1, with face pose class labels C = {c1, c2, c3, c4, c5}, where c1, c2, c3, c4 and c5 represent five different face poses, corresponding respectively to a frontal face, a 45-degree side face, a head-up face, a head-down face and a 75-degree side face;
step 1.2: let the total number of image frames collected in the time period t be N, where the number of frontal-face frames is N1, the number of 45-degree side-face frames is N2, the number of head-up frames is N3, the number of head-down frames is N4, and the number of 75-degree side-face frames is N5; calculate the attention detection result r1 of the face pose estimation stage according to formula (1):
r1 = (N1 + N2 + N3) / N    (1)
where r1 ∈ [0, 1.0], 0 means inattention and 1.0 means full attention;
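By way of illustration only (this sketch is not part of the original disclosure), the face pose stage could be realized as follows: the five-way pose classifier M1 is assumed here to be a fine-tuned ResNet-18, the 224×224 preprocessing, label strings, and function names are all illustrative assumptions, and r1 is computed as the fraction of frames whose predicted pose is frontal, 45-degree side, or head-up, per formula (1):

# Hedged sketch of the face pose stage (step 1); the ResNet-18 backbone,
# preprocessing, and label names are assumptions, not specified by the patent.
import torch
import torch.nn as nn
from torchvision import models, transforms

POSE_LABELS = ["frontal", "side_45", "head_up", "head_down", "side_75"]  # c1..c5 (assumed order)
ATTENTIVE_POSES = {"frontal", "side_45", "head_up"}                      # counted as N1 + N2 + N3 in formula (1)

def build_pose_model(num_classes: int = 5) -> nn.Module:
    # Any CNN classifier would do; a ResNet-18 with a 5-way head is used here for illustration
    # (requires a recent torchvision that accepts the weights= argument).
    model = models.resnet18(weights=None)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def pose_stage_score(face_crops, model: nn.Module) -> float:
    # r1 = (N1 + N2 + N3) / N over the face crops collected in the period t.
    model.eval()
    attentive = 0
    with torch.no_grad():
        for img in face_crops:                      # list of PIL images of detected faces
            logits = model(preprocess(img).unsqueeze(0))
            label = POSE_LABELS[int(logits.argmax(dim=1))]
            if label in ATTENTIVE_POSES:
                attentive += 1
    return attentive / len(face_crops) if face_crops else 0.0

The facial expression model M2 of step 2 and the upper-body behavior model M3 of step 3 can be built in exactly the same way, differing only in their label sets.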
step 2: in the facial expression analysis stage, classify facial expressions with a convolutional neural network into specific expression categories, specifically:
step 2.1: train a deep convolutional neural network on face images of five different expressions to obtain a trained facial expression classification model M2, with facial expression class labels L = {l1, l2, l3, l4, l5}, where l1, l2, l3, l4 and l5 represent five different facial expressions, corresponding respectively to a normal expression, happiness, sadness, confusion and surprise;
step 2.2: let the total number of image frames acquired within time t be N, where the number of normal-expression frames is F1, the number of happy-expression frames is F2, the number of sad-expression frames is F3, the number of confused-expression frames is F4, and the number of surprised-expression frames is F5; calculate the attention detection result r2 of the facial expression analysis stage according to formula (2):
r2 = (F1 + F4) / N    (2)
where r2 ∈ [0, 1.0], 0 means inattention and 1.0 means full attention;
step 3: in the upper-body behavior analysis stage, classify upper-body behaviors with a convolutional neural network into specific behavior categories, specifically:
step 3.1: train a deep convolutional neural network on images of four different upper-body behaviors to obtain a trained upper-body behavior classification model M3, with upper-body behavior class labels K = {k1, k2, k3, k4}, where k1, k2, k3 and k4 represent four different upper-body behaviors, corresponding respectively to normal behavior, resting the face on a hand, drinking water and playing with a mobile phone;
step 3.2: let the total number of image frames collected within time t be N, where the number of normal-behavior frames is P1, the number of face-on-hand frames is P2, the number of drinking frames is P3, and the number of phone-playing frames is P4; calculate the attention detection result r3 of the upper-body behavior analysis stage according to formula (3):
r3 = P1 / N    (3)
where r3 ∈ [0, 1.0], 0 means inattention and 1.0 means full attention;
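Steps 2 and 3 share the same counting logic as step 1: classify every frame, then take the fraction of frames whose label counts as attentive. A minimal sketch of that shared scoring step follows (not part of the original disclosure); the label strings and the stage_score helper are illustrative assumptions, and only the attentive label sets come from formulas (2) and (3):

# Generic frame-count scoring shared by the expression stage (formula 2)
# and the upper-body stage (formula 3).
from collections import Counter
from typing import Iterable, Set

def stage_score(frame_labels: Iterable[str], attentive_labels: Set[str]) -> float:
    # Fraction of frames whose predicted label counts as attentive.
    labels = list(frame_labels)
    if not labels:
        return 0.0
    counts = Counter(labels)
    return sum(counts[lbl] for lbl in attentive_labels) / len(labels)

# Expression stage (formula 2): normal (F1) and confused (F4) frames count as attentive.
expr_labels = ["normal", "confused", "happy", "normal", "surprised"]   # example per-frame predictions
r2 = stage_score(expr_labels, {"normal", "confused"})                  # -> 0.6

# Upper-body stage (formula 3): only normal behavior (P1) counts as attentive.
behavior_labels = ["normal", "phone", "normal", "normal", "drinking"]  # example per-frame predictions
r3 = stage_score(behavior_labels, {"normal"})                          # -> 0.6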
step 4: obtain the comprehensive attention evaluation result RLT according to formulas (4) and (5):
R = r1 × r2 × r3    (4)
[Formula (5) is given as an image in the original publication; it maps the score R to one of four attention levels to yield RLT.]
where R ∈ [0, 1.0], 0 means inattention and 1.0 means full attention.
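Finally, the three stage scores are fused multiplicatively as in formula (4) and mapped to one of four attention levels by formula (5). Since formula (5) appears only as an image in the original publication, the cut points and level names in the sketch below are placeholders chosen for illustration, not the patented thresholds:

from typing import Tuple

def fuse_attention(r1: float, r2: float, r3: float) -> Tuple[float, str]:
    # R = r1 * r2 * r3, formula (4).
    R = r1 * r2 * r3
    # Formula (5): map R to one of four levels (RLT). Thresholds below are assumed.
    if R < 0.25:
        level = "not concentrated"
    elif R < 0.5:
        level = "somewhat concentrated"
    elif R < 0.75:
        level = "concentrated"
    else:
        level = "highly concentrated"
    return R, level

R, level = fuse_attention(0.9, 0.8, 0.95)   # R = 0.684 -> "concentrated"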

Claims (1)

1. An attention detection method fusing multiple discrimination techniques is characterized by comprising the following steps:
step 1: in the face pose estimation stage, classify face poses with a convolutional neural network into specific pose categories, specifically:
step 1.1: train a deep convolutional neural network on face images of five different pose categories to obtain a trained face pose classification model M1, with face pose class labels C = {c1, c2, c3, c4, c5}, where c1, c2, c3, c4 and c5 represent five different face poses, corresponding respectively to a frontal face, a 45-degree side face, a head-up face, a head-down face and a 75-degree side face;
step 1.2: let the total number of image frames collected in the time period t be N, where the number of frontal-face frames is N1, the number of 45-degree side-face frames is N2, the number of head-up frames is N3, the number of head-down frames is N4, and the number of 75-degree side-face frames is N5; calculate the attention detection result r1 of the face pose estimation stage according to formula (1):
r1 = (N1 + N2 + N3) / N    (1)
where r1 ∈ [0, 1.0], 0 means inattention and 1.0 means full attention;
step 2: in the facial expression analysis stage, classify facial expressions with a convolutional neural network into specific expression categories, specifically:
step 2.1: train a deep convolutional neural network on face images of five different expressions to obtain a trained facial expression classification model M2, with facial expression class labels L = {l1, l2, l3, l4, l5}, where l1, l2, l3, l4 and l5 represent five different facial expressions, corresponding respectively to a normal expression, happiness, sadness, confusion and surprise;
step 2.2: let the total number of image frames acquired within time t be N, where the number of normal-expression frames is F1, the number of happy-expression frames is F2, the number of sad-expression frames is F3, the number of confused-expression frames is F4, and the number of surprised-expression frames is F5; calculate the attention detection result r2 of the facial expression analysis stage according to formula (2):
r2 = (F1 + F4) / N    (2)
where r2 ∈ [0, 1.0], 0 means inattention and 1.0 means full attention;
step 3: in the upper-body behavior analysis stage, classify upper-body behaviors with a convolutional neural network into specific behavior categories, specifically:
step 3.1: train a deep convolutional neural network on images of four different upper-body behaviors to obtain a trained upper-body behavior classification model M3, with upper-body behavior class labels K = {k1, k2, k3, k4}, where k1, k2, k3 and k4 represent four different upper-body behaviors, corresponding respectively to normal behavior, resting the face on a hand, drinking water and playing with a mobile phone;
step 3.2: let the total number of image frames collected within time t be N, where the number of normal-behavior frames is P1, the number of face-on-hand frames is P2, the number of drinking frames is P3, and the number of phone-playing frames is P4; calculate the attention detection result r3 of the upper-body behavior analysis stage according to formula (3):
r3 = P1 / N    (3)
where r3 ∈ [0, 1.0], 0 means inattention and 1.0 means full attention;
step 4: obtain the comprehensive attention evaluation result RLT according to formulas (4) and (5):
R = r1 × r2 × r3    (4)
[Formula (5) is given as an image in the original publication; it maps the score R to one of four attention levels to yield RLT.]
where R ∈ [0, 1.0], 0 means inattention and 1.0 means full attention.
CN202010691056.1A 2020-07-17 2020-07-17 Attention detection method integrating multiple discrimination technologies Withdrawn CN111814718A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010691056.1A CN111814718A (en) 2020-07-17 2020-07-17 Attention detection method integrating multiple discrimination technologies

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010691056.1A CN111814718A (en) 2020-07-17 2020-07-17 Attention detection method integrating multiple discrimination technologies

Publications (1)

Publication Number Publication Date
CN111814718A (en) 2020-10-23

Family

ID=72865425

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010691056.1A Withdrawn CN111814718A (en) 2020-07-17 2020-07-17 Attention detection method integrating multiple discrimination technologies

Country Status (1)

Country Link
CN (1) CN111814718A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113255572A (en) * 2021-06-17 2021-08-13 华中科技大学 Classroom attention assessment method and system
CN114159077A (en) * 2022-02-09 2022-03-11 浙江强脑科技有限公司 Meditation scoring method, device, terminal and storage medium based on electroencephalogram signals
CN114366103A (en) * 2022-01-07 2022-04-19 北京师范大学 Attention assessment method and device and electronic equipment
WO2023000838A1 (en) * 2021-07-22 2023-01-26 北京有竹居网络技术有限公司 Information detection method and apparatus, medium, and electronic device


Similar Documents

Publication Publication Date Title
CN111814718A (en) Attention detection method integrating multiple discrimination technologies
CN111528859B (en) Child ADHD screening and evaluating system based on multi-modal deep learning technology
CN110349667B (en) Autism assessment system combining questionnaire and multi-modal model behavior data analysis
CN113762133A (en) Self-weight fitness auxiliary coaching system, method and terminal based on human body posture recognition
CN111563452B (en) Multi-human-body gesture detection and state discrimination method based on instance segmentation
Ludl et al. Enhancing data-driven algorithms for human pose estimation and action recognition through simulation
CN109885595A (en) Course recommended method, device, equipment and storage medium based on artificial intelligence
CN110363129A (en) Autism early screening system based on smile normal form and audio-video behavioural analysis
CN111523445B (en) Examination behavior detection method based on improved Openpost model and facial micro-expression
CN113486744B (en) Student learning state evaluation system and method based on eye movement and facial expression paradigm
Zaletelj Estimation of students' attention in the classroom from kinect features
CN116645721B (en) Sitting posture identification method and system based on deep learning
Panetta et al. Software architecture for automating cognitive science eye-tracking data analysis and object annotation
Tang et al. Automatic facial expression analysis of students in teaching environments
CN116109455A (en) Language teaching auxiliary system based on artificial intelligence
Xu et al. Wayfinding design in transportation architecture–are saliency models or designer visual attention a good predictor of passenger visual attention?
CN114022918A (en) Multi-posture-based learner excitement state label algorithm
Enadula et al. Recognition of student emotions in an online education system
Guo et al. PhyCoVIS: A visual analytic tool of physical coordination for cheer and dance training
CN117542121A (en) Computer vision-based intelligent training and checking system and method
CN112836945A (en) Teaching state quantitative evaluation system for teaching and teaching of professor
CN107862246A (en) A kind of eye gaze direction detection method based on various visual angles study
Xu et al. Analyzing students' attention by gaze tracking and object detection in classroom teaching
Pang et al. Recognition of Academic Emotions in Online Classes
CN115797829A (en) Online classroom learning state analysis method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20201023