CN115797829A - Online classroom learning state analysis method - Google Patents

Online classroom learning state analysis method

Info

Publication number
CN115797829A
Authority
CN
China
Prior art keywords
classroom, data, generation model, learning, data set
Prior art date
2022-11-24
Legal status
Pending
Application number
CN202211484014.6A
Other languages
Chinese (zh)
Inventor
应放天
周梦萍
姚琤
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date: 2022-11-24
Filing date: 2022-11-24
Publication date: 2023-03-14
Application filed by Zhejiang University ZJU
Priority to CN202211484014.6A
Publication of CN115797829A

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses an online classroom learning state analysis method comprising the following steps: (1) collecting online education classroom videos, extracting images frame by frame, and establishing an image data set; (2) extracting a face data set and an eye data set from the image data set and preprocessing them; (3) constructing an expression generation model; (4) constructing an eye movement generation model; (5) training the expression generation model and the eye movement generation model; (6) during the online class, recording the number and duration of teacher-student interactions while recording student expression data and eye movement data based on the expression generation model and the eye movement generation model; (7) performing multi-modal feature fusion analysis on the students' expression data, eye movement data and classroom interaction data, obtaining visual learning state data through an online learning state visualization system, and feeding it back to the teacher and students. The invention helps teachers grasp the overall classroom learning atmosphere in real time and adjust the teaching mode promptly.

Description

Online classroom learning state analysis method
Technical Field
The invention belongs to the field of online education, and particularly relates to an online classroom learning state analysis method.
Background
The rapid development of modern information technology has driven major changes in knowledge structures, propagation carriers and the like, which in turn has changed learning modes. Online education and smart education are becoming new forms of education.
Online education has advantages that traditional offline education cannot match. Emerging technology brings space-time integration to the field: it breaks the limits of time and learning space, so learning is no longer confined to the traditional mode and learners can study anytime, anywhere with mobile devices. Online education has driven wide and profound changes in the education industry, improving learners' class participation through more intuitive and vivid teacher-student interaction and optimizing teachers' teaching modes.
While smart education has many advantages, it still has shortcomings at this stage of development. Chinese patent publication No. CN201810124929 discloses an intelligent analysis method and system for students' classroom learning interest that adopts a binary emotion recognition model: smiling faces represent positive emotions, and non-smiling faces are classified as negative. In fact, human emotional expression goes far beyond two types, and the model neither measures emotional intensity accurately nor is representative for student emotion recognition.
Chinese patent publication No. CN110287790 discloses a learning state analysis method for static multi-person scenes, and Chinese patent publication No. CN110287792 discloses a real-time analysis method for students' learning states in a classroom in a natural teaching environment. Both determine a learner's attention mainly by analyzing head posture. However, when a student sits freely and stares at the classroom screen, the head posture remains unchanged and the current attention state cannot be determined; moreover, feedback from monitoring head-posture data is slow during a lesson, so it is difficult to monitor students' attention accurately from the head state alone.
Educators can greatly improve teaching effectiveness with efficient teaching tools, helping learners master knowledge easily. During a lesson, students' learning interest and course participation directly affect the learning effect. Online education platforms currently on the market do not integrate students' learning interest and classroom participation into the live classroom, so online teaching remains inefficient. The online education field therefore needs a method that acquires students' learning state information during a lesson and displays it to teachers in real time, so that teachers can adjust and optimize the teaching mode promptly and learners can master knowledge quickly.
Disclosure of Invention
The invention provides an online classroom learning state analysis method that accurately analyzes and monitors students' learning interest, concentration and classroom participation in real time. It helps students regulate these qualities during class, and helps teachers grasp the overall classroom learning atmosphere in real time, adjust the teaching mode promptly, and visualize the teaching process.
An online classroom learning state analysis method comprises the following steps:
(1) Collecting online education classroom videos from an open-source database, extracting images frame by frame, and establishing an image data set; each image carries one of six emotion labels corresponding to the expression: aversion, fear, depression, relaxation, surprise and happiness, and three types of labels corresponding to the eyes: gazing area, gazing point and gazing duration;
(2) Extracting a face data set and an eye data set from the image data set, and further preprocessing the face data set and the eye data set;
(3) Constructing a recurrent neural network expression generation model based on a long short-term memory network;
(4) Constructing a recurrent neural network eye movement generation model based on a long short-term memory network;
(5) Training an expression generation model and an eye movement generation model by utilizing the preprocessed face data set and the preprocessed eye data set;
(6) During the online class, recording the number and duration of teacher-student interactions, while recording student expression data and eye movement data based on the expression generation model and the eye movement generation model;
(7) Performing multi-modal feature fusion analysis on the students' expression data, eye movement data and classroom interaction data, obtaining visual learning state data through an online learning state visualization system, and feeding it back to teachers and students to adjust the overall classroom atmosphere.
In the step (2), the specific process of extracting the face data set and the eye data set from the image data set is as follows:
obtaining the face region images of the persons in the image data set using an Adaboost algorithm based on Haar-like features: the real-time online classroom video is split into frames; Haar-like features of different sizes and positions in the images are computed to obtain weak classifiers; an optimal classifier is selected through AdaBoost iteration to construct a strong classifier; faces are identified and extracted with a rectangular bounding box; the eyes are then detected precisely with an AdaBoost eye detector; and the face data set and the eye data set are collected.
The process of preprocessing the face data set comprises the following steps:
calling the resize method in the OpenCV library to normalize images of different sizes to 64 × 64 pixels; meanwhile, the images are grayscaled by converting the RGB space to the YUV space, and Gamma correction is applied to standardize the images' color space.
In step (3), the recurrent neural network expression generation model adopts a long short-term memory (LSTM) network; the number of network layers is set to 3, the number of network nodes to 256, the input step length to 6, and the learning rate to 0.005.
In step (4), the recurrent neural network eye movement generation model combines a self-attention mechanism with a long short-term memory network: it selectively learns the intermediate output results through learned parameters, associates the output sequence with them at output time, and performs the output mapping with a fully connected layer whose activation function is softmax.
In the step (5), the specific process of training the expression generation model is as follows:
A. building the recurrent neural network expression generation model with the TensorFlow 2.0 framework;
B. calculating the parameters to be trained, and selecting training set and test set samples from the preprocessed face data set;
C. the training samples are expanded with data augmentation to avoid overfitting during training;
D. during training, the number of image sequences input at one time is 16, each image sequence contains 10 frames, and the training batch size is set to 128;
E. during training, the step length is set to 6 and the initial learning rate to 0.005; from the twentieth epoch the learning rate is decayed by a factor of 0.9 every ten epochs, and testing and validation are performed every 1000 steps.
The specific process of training the eye movement generation model is as follows:
a. building the recurrent neural network eye movement generation model with the Keras framework;
b. calculating the parameters to be trained, selecting 80% of the data as training set samples and 20% as test set samples for validation;
c. the training samples are expanded with data augmentation to prevent overfitting during training;
d. the input window length and output window length adopted in the model are 260 and 254 respectively;
e. during training, for the three-layer LSTM network, a learning rate decay strategy is used to train the model; the network parameters are randomly initialized from a normal distribution, the batch size is set to 5000, the eye features are encoded by the network, and a categorical cross-entropy loss function and the Adam optimizer are used.
In the step (7), the online learning state visualization system comprises a classroom atmosphere evaluation module, a learning interest evaluation module, a classroom concentration evaluation module and a classroom participation evaluation module;
the process by which the classroom atmosphere evaluation module obtains visual learning state data is as follows: the scores of the three dimensions of learning interest, classroom concentration and classroom participation are denoted S1, S2 and S3; the weight of the expression data decision is set to 0.4, the weight of the eye movement data decision to 0.3, and that of the classroom interaction data to 0.3, so the total online classroom learning state score after multi-modal information fusion is: S = 0.4·S1 + 0.3·S2 + 0.3·S3; the change of the overall learning state of the current classroom is reflected in real time as a line graph, with time T as the independent variable on the horizontal axis and the classroom score S as the dependent variable on the vertical axis;
the learning interest evaluation module determines the learning interest score S1 as follows: the scores of the six emotions aversion, fear, depression, relaxation, surprise and happiness are set to -3, -2, -1, 0, 1 and 2 respectively; the number of students showing each emotion is recorded in real time, and the scores of all emotions are summed to obtain the current classroom learning interest score S1;
the classroom concentration evaluation module determines the classroom concentration score S2 as follows: the eye movement generation model is invoked to monitor the eyes in real time through the built-in camera of the mobile device, tracking changes in each student's gazing area, gazing point and gazing duration during the online class; an independent-samples t-test is used to analyze the data; the areas the students mainly attend to in the current classroom are marked with yellow highlighting, and the length of a bar chart represents the change in attention duration; meanwhile, each student's blink frequency and pupil diameter changes within a fixed time are fused to judge his or her classroom concentration, and the classroom concentration score S2 is calculated;
the classroom participation evaluation module determines the classroom participation score S3 as follows: during the online class, the number of teacher questions, the number of students' active answers, the number of students' passive roll-call answers and the students' answer durations are counted in real time; the current classroom participation is plotted as a bar chart with time on the horizontal axis and interaction frequency on the vertical axis, and the classroom participation score S3 is calculated.
Compared with the prior art, the invention has the following beneficial effects:
the online data are converted into visual information, the learning state can be analyzed according to historical data and the current expression data and eye movement data, and time dimension transverse comparison analysis is performed. The student learning interest, the concentration degree and the classroom participation degree can be accurately analyzed and monitored in real time, the student is helped to adjust the learning interest, the concentration degree and the classroom participation degree in the classroom process, a teacher is helped to know the whole learning atmosphere of a classroom in real time, the teaching mode is adjusted in time, and the teaching process is visualized.
Drawings
FIG. 1 is a flow chart of an online classroom learning state analysis method of the present invention;
FIG. 2 is a block diagram of the preferred embodiment of the present invention.
Detailed Description
The invention will be described in further detail below with reference to the drawings and examples, which are intended to facilitate the understanding of the invention and are not intended to limit it in any way.
As shown in fig. 1, an online classroom learning state analysis method includes the following steps:
step 1, collecting face images of an online education classroom and establishing a face image data set;
step 2, preprocessing an image data set;
step 3, constructing a recurrent neural network expression generation model based on the long short-term memory network;
step 4, constructing a recurrent neural network eye movement generation model based on the long short-term memory network;
step 5, respectively training an expression generation model and an eye movement generation model by utilizing the preprocessed data set;
and 6, recording the interaction times and the interaction duration of the students in the online process, and acquiring the expression states and the eye movement states of the students based on the expression generation model and the eye movement generation model.
And 7, performing multi-mode feature fusion analysis on the expression data, the eye movement data and the classroom interaction data of the students, visualizing the learning state data on a system platform, feeding back the learning state data to teachers and students in real time, and adjusting the atmosphere of the whole classroom.
Specifically, the preprocessing of the face image data set in step 2 is implemented as follows:
The primary task of preprocessing is to extract the bounding rectangle of the face. The invention obtains the face region images of the persons using an Adaboost algorithm based on Haar-like features. The real-time online classroom video is split into frames; Haar-like features of different sizes and positions in the images are computed to obtain weak classifiers; an optimal classifier is selected through AdaBoost iteration to construct a strong classifier; the face is identified and extracted with a rectangular bounding box, and the eyes are then detected precisely with an AdaBoost eye detector. A face data set and an eye data set are thus collected.
After the face is accurately extracted, the face region is preprocessed. Graying mainly converts the RGB space to the YUV space, and Gamma correction standardizes the input image's color space, so the grayscale image reduces the image data volume while clearly displaying all facial features. Meanwhile, since the size of a face picture depends on the distance between the face and the camera, the resize method in the OpenCV library is called to convert original images of different sizes to the same size, normalizing them to 64 × 64 pixels.
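A minimal sketch of this preprocessing; the gamma value is an assumption, since the patent does not specify one.

```python
import cv2
import numpy as np

def preprocess_face(img_bgr, gamma=0.5):
    """Resize to 64x64, gray via the YUV luma channel, gamma-correct.
    The gamma value of 0.5 is an assumption."""
    img = cv2.resize(img_bgr, (64, 64))
    # Converting BGR to YUV and keeping only Y (luma) grays the image.
    y = cv2.cvtColor(img, cv2.COLOR_BGR2YUV)[:, :, 0]
    # Gamma correction: normalize to [0, 1], raise to gamma, rescale.
    corrected = np.power(y.astype(np.float32) / 255.0, gamma)
    return (corrected * 255).astype(np.uint8)
```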
In the invention, the facial expression data set is divided into six emotion classes: aversion, fear, depression, relaxation, surprise and happiness. To further simplify the display of the overall classroom atmosphere, the first three classes are grouped as negative emotions, relaxation as a neutral emotion, and the last two as positive emotions.
In the invention, an ordinary webcam, such as a notebook webcam or a phone's front camera, collects the eye movement data: the gazing area, gazing point, area of interest and gazing duration during learning, as well as the blink frequency and pupil diameter changes while watching. A low blink frequency generally means attention is highly focused, while a high blink frequency means attention is distracted. Collecting this multi-dimensional data in real time facilitates multi-granularity analysis by the model.
In step 3, HOG features and geometric features of the facial expressions are extracted as input, and a recurrent neural network expression generation model based on a long short-term memory network is constructed with the following structure:
The model input consists of the HOG and geometric features extracted in step 2, with linear normalization applied to the data. The recurrent neural network comprises a long short-term memory (LSTM) network; the number of layers is set to 3, the number of network nodes to 256, the input step length to 6, and the learning rate to 0.0001. The Adam algorithm is selected to optimize the model, and the Dropout keep probability is set to 0.7 to prevent overfitting.
In step 4, the eye data are extracted and a recurrent neural network eye movement generation model based on a long short-term memory network is constructed with the following structure:
Following the self-attention mechanism proposed by the Google team, the model combines self-attention with a long short-term memory network: it selectively learns the intermediate output results through learned parameters, associates the output sequence with them at output time, and performs the output mapping with a fully connected layer whose activation function is softmax.
The model input is the main feature points of the eye region extracted in step 2: the positions of the eye corners and the pupils. Assuming the coordinates of the four eye corners from left to right are (x1, y1), (x2, y2), (x3, y3) and (x4, y4), the average eye width is: w = ((x2 - x1) + (x4 - x3))/2.
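The sketch below shows one way to combine self-attention with an LSTM as described, together with the eye-width computation; the feature dimension, the number of gaze-region classes, and the use of Keras's built-in Attention layer in place of the patent's own attention implementation are all assumptions.

```python
import tensorflow as tf

def eye_width(corners):
    """corners: four eye-corner points (x1,y1)...(x4,y4), left to right."""
    (x1, _), (x2, _), (x3, _), (x4, _) = corners
    return ((x2 - x1) + (x4 - x3)) / 2.0

def build_eye_model(feat_dim=16, window=260, num_regions=9):
    """LSTM whose intermediate outputs are reweighted by self-attention,
    then mapped through a fully connected softmax output layer.
    feat_dim and num_regions are assumed placeholders."""
    inputs = tf.keras.layers.Input(shape=(window, feat_dim))
    h = tf.keras.layers.LSTM(256, return_sequences=True)(inputs)
    # Self-attention: every timestep attends to all LSTM outputs,
    # selectively weighting the intermediate results.
    attended = tf.keras.layers.Attention()([h, h])
    pooled = tf.keras.layers.GlobalAveragePooling1D()(attended)
    outputs = tf.keras.layers.Dense(num_regions, activation="softmax")(pooled)
    return tf.keras.Model(inputs, outputs)
```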
In step 5, the specific process of training the expression generation model is as follows:
A. building the recurrent neural network expression generation model with the TensorFlow 2.0 framework;
B. calculating the parameters to be trained, and selecting training set and test set samples from the preprocessed face data set;
C. the training samples are expanded with data augmentation to avoid overfitting during training;
D. during training, the number of image sequences input at one time is 16, each image sequence contains 10 frames, and the training batch size is set to 128;
E. during training, the step length is set to 6 and the initial learning rate to 0.005; from the twentieth epoch the learning rate is decayed by a factor of 0.9 every ten epochs, and testing and validation are performed every 1000 steps, as sketched below.
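A minimal sketch of the schedule in item E, assuming a Keras-style training loop; the data arrays and the total epoch count are placeholders.

```python
import tensorflow as tf

def lr_schedule(epoch, lr):
    """Decay the learning rate by a factor of 0.9 every ten epochs,
    starting from the twentieth epoch."""
    if epoch >= 20 and epoch % 10 == 0:
        return lr * 0.9
    return lr

# model = build_expression_model()          # sketch above
# model.fit(train_x, train_y,               # placeholder data arrays
#           batch_size=128, epochs=100,
#           validation_data=(test_x, test_y),
#           callbacks=[tf.keras.callbacks.LearningRateScheduler(lr_schedule)])
```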
The specific process of training the eye movement generation model is as follows:
a. building the recurrent neural network eye movement generation model with the Keras framework;
b. calculating the parameters to be trained, selecting 80% of the data as training set samples and 20% as test set samples for validation;
c. the training samples are expanded with data augmentation to prevent overfitting during training;
d. the input window length and output window length adopted in the model are 260 and 254 respectively;
e. during training, for the three-layer LSTM network, a learning rate decay strategy is used to train the model; the network parameters are randomly initialized from a normal distribution, the batch size is set to 5000, the eye features are encoded by the network, and a categorical cross-entropy loss function and the Adam optimizer are used, as sketched below.
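A hedged sketch of items a-e; the normal-distribution weight initialization would be supplied when the layers are built (e.g. Keras's kernel_initializer="random_normal"), and the split utility and epoch count here are assumptions.

```python
from sklearn.model_selection import train_test_split

def train_eye_model(model, features, labels, epochs=50):
    """80/20 train/test split, categorical cross-entropy, Adam optimizer,
    batch size 5000 as stated; the epoch count is an assumption."""
    x_tr, x_te, y_tr, y_te = train_test_split(features, labels,
                                              test_size=0.2)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_tr, y_tr, batch_size=5000, epochs=epochs,
              validation_data=(x_te, y_te))
    return model
```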
As shown in fig. 2, the online learning state visualization system in step 7 is implemented as follows:
in the invention, the online learning state visualization system mainly comprises four modules: the system comprises a classroom atmosphere evaluation module, a learning interest evaluation module, a classroom concentration evaluation module and a classroom participation evaluation module.
The classroom atmosphere evaluation module visualizes the following information. The online learning state consists mainly of three dimensions of information: learning interest, classroom concentration and classroom participation, whose scores are S1, S2 and S3. The weight of the expression data decision is set to 0.4, the weight of the eye movement data decision to 0.3, and that of the classroom interaction data to 0.3, so the total online classroom learning state score after multi-modal information fusion is S = 0.4·S1 + 0.3·S2 + 0.3·S3. A line graph reflects the overall learning state of the current classroom in real time, with time T as the independent variable on the horizontal axis and the classroom score S as the dependent variable on the vertical axis.
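A minimal sketch of the fusion formula, logged against the time axis to draw the line graph:

```python
def fused_learning_state(s1, s2, s3):
    """Weighted multi-modal fusion: expression 0.4, eye movement 0.3,
    classroom interaction 0.3."""
    return 0.4 * s1 + 0.3 * s2 + 0.3 * s3

# Example: fused_learning_state(12, 8, 5) -> 4.8 + 2.4 + 1.5 = 8.7,
# recorded at each time T for the real-time line graph.
```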
The learning interest evaluation module visualizes the following information. In the invention, the six emotion classes of the facial expression data set are aversion, fear, depression, relaxation, surprise and happiness. According to their influence on students' in-class state, their scores are set to -3, -2, -1, 0, 1 and 2 respectively; the number of students showing each emotion is recorded, and the proportion of each emotion is calculated to prejudge the classroom's learning interest. Suppose the class has m students: m1 currently show positive emotions with a positive emotion total score s1, m2 show neutral emotions with a neutral total score s2, and m3 show negative emotions with a negative total score s3. The current online classroom learning interest evaluation value is S1 = s1 + s2 + s3. The learning interest module is displayed in real time as a line graph, with class time on the horizontal axis and the current classroom learning interest score S1 on the vertical axis, allowing horizontal comparison of how learning interest changes as the course progresses. The positive emotion proportion is p1 = m1/m, the neutral proportion is p2 = m2/m, and the negative proportion is p3 = m3/m; each emotion's proportion is displayed in real time as a pie chart so the teacher knows the current classroom's learning interest in real time.
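A sketch of this scoring, using the emotion weights and positive/neutral/negative groupings given above:

```python
EMOTION_WEIGHTS = {"aversion": -3, "fear": -2, "depression": -1,
                   "relaxation": 0, "surprise": 1, "happiness": 2}
NEGATIVE = {"aversion", "fear", "depression"}
POSITIVE = {"surprise", "happiness"}

def learning_interest(emotion_counts):
    """emotion_counts: dict emotion -> number of students showing it.
    Returns S1 plus the positive/neutral/negative proportions."""
    m = sum(emotion_counts.values())
    if m == 0:
        return 0, 0.0, 0.0, 0.0
    s1 = sum(EMOTION_WEIGHTS[e] * n for e, n in emotion_counts.items())
    p1 = sum(n for e, n in emotion_counts.items() if e in POSITIVE) / m
    p3 = sum(n for e, n in emotion_counts.items() if e in NEGATIVE) / m
    p2 = 1.0 - p1 - p3  # neutral (relaxation) proportion
    return s1, p1, p2, p3
```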
The classroom concentration evaluation module visualizes the following information. In the invention, the eye movement model is invoked to monitor the eyes in real time through the built-in camera of the mobile device, tracking changes in each student's gazing area, gazing point and gazing duration during the online class; an independent-samples t-test is used to analyze the data; the areas the students mainly attend to in the current classroom are marked with yellow highlighting, and the length of a bar chart represents the change in attention duration. Meanwhile, each student's blink frequency and pupil diameter changes within a fixed time are fused to judge his or her classroom concentration, and the classroom concentration score S2 is calculated.
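A hedged sketch of the statistical test; the patent does not say which two samples the independent-samples t-test compares, so the current-interval-versus-baseline fixation durations below are an assumption.

```python
from scipy import stats

def gaze_shift_significant(current_fixations, baseline_fixations,
                           alpha=0.05):
    """True if the fixation durations of the current interval differ
    significantly from a baseline sample (independent-samples t-test)."""
    t_stat, p_value = stats.ttest_ind(current_fixations, baseline_fixations)
    return p_value < alpha
```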
The classroom participation evaluation module visualizes the following information. During the online class, the number of teacher questions, the number of students' active answers, the number of students' passive roll-call answers and the students' answer durations are counted in real time; the current classroom participation is plotted as a bar chart with time on the horizontal axis and interaction frequency on the vertical axis, and the classroom participation score S3 is calculated.
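The patent counts these four quantities but gives no explicit formula for S3, so the equal-weight normalization in the sketch below is purely an illustrative assumption.

```python
def participation_score(questions, active_answers, roll_call_answers,
                        answer_seconds, class_seconds):
    """Illustrative S3: average of the answer rate per teacher question
    and the share of class time spent answering (assumed weighting)."""
    interactions = active_answers + roll_call_answers
    response_rate = interactions / questions if questions else 0.0
    answer_ratio = answer_seconds / class_seconds if class_seconds else 0.0
    return 0.5 * response_rate + 0.5 * answer_ratio
```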
The invention provides an artificial intelligence-based online classroom learning state analysis system, which belongs to the field of intelligent education application and comprises an online classroom learning interest visualization module, an online classroom learning concentration visualization module, an online classroom teacher-student interaction visualization module and an overall classroom learning state visualization module.
The embodiments described above are intended to illustrate the technical solutions and advantages of the present invention, and it should be understood that the above-mentioned embodiments are only specific embodiments of the present invention, and are not intended to limit the present invention, and any modifications, additions and equivalents made within the scope of the principles of the present invention should be included in the scope of the present invention.

Claims (8)

1. An online classroom learning state analysis method is characterized by comprising the following steps:
(1) Collecting online education classroom videos from an open-source database, extracting images frame by frame, and establishing an image data set; each image carries one of six emotion labels corresponding to the expression: aversion, fear, depression, relaxation, surprise and happiness, and three types of labels corresponding to the eyes: gazing area, gazing point and gazing duration;
(2) Extracting a face data set and an eye data set from the image data set, and further preprocessing the face data set and the eye data set;
(3) Constructing a recurrent neural network expression generation model based on a long short-term memory network;
(4) Constructing a recurrent neural network eye movement generation model based on a long short-term memory network;
(5) Training an expression generation model and an eye movement generation model by utilizing the preprocessed face data set and the preprocessed eye data set;
(6) During the online class, recording the number and duration of teacher-student interactions, while recording student expression data and eye movement data based on the expression generation model and the eye movement generation model;
(7) Performing multi-modal feature fusion analysis on the students' expression data, eye movement data and classroom interaction data, obtaining visual learning state data through an online learning state visualization system, and feeding it back to teachers and students to adjust the overall classroom atmosphere.
2. The online classroom learning state analysis method according to claim 1, wherein in step (2), the specific process of extracting the face data set and the eye data set from the image data set comprises:
obtaining the face region images of the persons in the image data set using an Adaboost algorithm based on Haar-like features: the real-time online classroom video is split into frames; Haar-like features of different sizes and positions in the images are computed to obtain weak classifiers; an optimal classifier is selected through AdaBoost iteration to construct a strong classifier; faces are identified and extracted with a rectangular bounding box; the eyes are then detected precisely with an AdaBoost eye detector; and the face data set and the eye data set are collected.
3. The on-line classroom learning state analysis method according to claim 1, wherein in the step (2), the face data set is preprocessed by:
calling the resize method in the OpenCV library to normalize images of different sizes to 64 × 64 pixels; meanwhile, the images are grayscaled by converting the RGB space to the YUV space, and Gamma correction is applied to standardize the images' color space.
4. The online classroom learning state analysis method of claim 1, wherein in step (3), the recurrent neural network expression generation model adopts a long short-term memory (LSTM) network; the number of network layers is set to 3, the number of network nodes to 256, the input step size to 6, and the learning rate to 0.005.
5. The online classroom learning state analysis method of claim 1, wherein in step (4), the recurrent neural network eye movement generation model combines a self-attention mechanism with a long short-term memory network: it selectively learns the intermediate output results through learned parameters, associates the output sequence with them at output time, and performs the output mapping with a fully connected layer whose activation function is softmax.
6. The on-line classroom learning state analysis method of claim 1, wherein in step (5), the specific process of training the expression generation model is as follows:
A. building the recurrent neural network expression generation model with the TensorFlow 2.0 framework;
B. calculating the parameters to be trained, and selecting training set and test set samples from the preprocessed face data set;
C. the training samples are expanded with data augmentation to avoid overfitting during training;
D. during training, the number of image sequences input at one time is 16, each image sequence contains 10 frames, and the training batch size is set to 128;
E. during training, the step length is set to 6 and the initial learning rate to 0.005; from the twentieth epoch the learning rate is decayed by a factor of 0.9 every ten epochs, and testing and validation are performed every 1000 steps.
7. The on-line classroom learning state analysis method of claim 1, wherein in step (5), the eye movement generation model is trained as follows:
a. building the recurrent neural network eye movement generation model with the Keras framework;
b. calculating the parameters to be trained, selecting 80% of the data as training set samples and 20% as test set samples for validation;
c. the training samples are expanded with data augmentation to prevent overfitting during training;
d. the input window length and output window length adopted in the model are 260 and 254 respectively;
e. during training, for the three-layer LSTM network, a learning rate decay strategy is used to train the model; the network parameters are randomly initialized from a normal distribution, the batch size is set to 5000, the eye features are encoded by the network, and a categorical cross-entropy loss function and the Adam optimizer are used.
8. The on-line classroom learning state analysis method of claim 1, wherein in step (7), the on-line learning state visualization system comprises a classroom atmosphere evaluation module, a learning interest evaluation module, a classroom concentration evaluation module and a classroom participation evaluation module;
the process by which the classroom atmosphere evaluation module obtains visual learning state data is as follows: the scores of the three dimensions of learning interest, classroom concentration and classroom participation are denoted S1, S2 and S3; the weight of the expression data decision is set to 0.4, the weight of the eye movement data decision to 0.3, and that of the classroom interaction data to 0.3, so the total online classroom learning state score after multi-modal information fusion is:
S = 0.4·S1 + 0.3·S2 + 0.3·S3; the change of the overall learning state of the current classroom is reflected in real time as a line graph, with time T as the independent variable on the horizontal axis and the classroom score S as the dependent variable on the vertical axis;
the learning interest evaluation module determines the learning interest score S1 as follows: the scores of the six emotions aversion, fear, depression, relaxation, surprise and happiness are set to -3, -2, -1, 0, 1 and 2 respectively; the number of students showing each emotion is recorded in real time, and the scores of all emotions are summed to obtain the current classroom learning interest score S1;
the classroom concentration evaluation module determines the classroom concentration score S2 as follows: the eye movement generation model is invoked to monitor the eyes in real time through the built-in camera of the mobile device, tracking changes in each student's gazing area, gazing point and gazing duration during the online class; an independent-samples t-test is used to analyze the data; the areas the students mainly attend to in the current classroom are marked with yellow highlighting, and the length of a bar chart represents the change in attention duration; meanwhile, each student's blink frequency and pupil diameter changes within a fixed time are fused to judge his or her classroom concentration, and the classroom concentration score S2 is calculated;
the classroom participation evaluation module determines the classroom participation score S3 as follows: during the online class, the number of teacher questions, the number of students' active answers, the number of students' passive roll-call answers and the students' answer durations are counted in real time; the current classroom participation is plotted as a bar chart with time on the horizontal axis and interaction frequency on the vertical axis, and the classroom participation score S3 is calculated.
CN202211484014.6A 2022-11-24 2022-11-24 Online classroom learning state analysis method Pending CN115797829A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211484014.6A CN115797829A (en) 2022-11-24 2022-11-24 Online classroom learning state analysis method


Publications (1)

Publication Number Publication Date
CN115797829A true CN115797829A (en) 2023-03-14

Family

ID=85441108

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211484014.6A Pending CN115797829A (en) 2022-11-24 2022-11-24 Online classroom learning state analysis method

Country Status (1)

Country Link
CN (1) CN115797829A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117408564A (en) * 2023-11-01 2024-01-16 海道(深圳)教育科技有限责任公司 Online academic counseling system
CN117408564B (en) * 2023-11-01 2024-06-04 海道(深圳)教育科技有限责任公司 Online academic counseling system

Similar Documents

Publication Publication Date Title
CN110334610B (en) Multi-dimensional classroom quantification system and method based on computer vision
CN110175501B (en) Face recognition-based multi-person scene concentration degree recognition method
CN113657168A (en) Convolutional neural network-based student learning emotion recognition method
CN115205764B (en) Online learning concentration monitoring method, system and medium based on machine vision
CN112883867A (en) Student online learning evaluation method and system based on image emotion analysis
CN110363114A (en) A kind of person works' condition detection method, device and terminal device
CN111507227A (en) Multi-student individual segmentation and state autonomous identification method based on deep learning
CN115797829A (en) Online classroom learning state analysis method
CN114821742A (en) Method and device for identifying facial expressions of children or teenagers in real time
Enadula et al. Recognition of student emotions in an online education system
CN111523445A (en) Examination behavior detection method based on improved Openpos model and facial micro-expression
CN111666829A (en) Multi-scene multi-subject identity behavior emotion recognition analysis method and intelligent supervision system
CN114187640A (en) Learning situation observation method, system, equipment and medium based on online classroom
Tang et al. Automatic facial expression analysis of students in teaching environments
CN117237766A (en) Classroom cognition input identification method and system based on multi-mode data
US20230290118A1 (en) Automatic classification method and system of teaching videos based on different presentation forms
Cao et al. Facial Expression Study Based on 3D Facial Emotion Recognition
CN116029581A (en) Concentration evaluation method for online education based on multi-source data fusion
Kousalya et al. Prediction of Best Optimizer for Facial Expression Detection using Convolutional Neural Network
CN113688789A (en) Online learning investment recognition method and system based on deep learning
CN113469001A (en) Student classroom behavior detection method based on deep learning
CN113792626A (en) Teaching process evaluation method based on teacher non-verbal behaviors
Su Design of intelligent classroom teaching scheme using artificial intelligence
CN114898449B (en) Foreign language teaching auxiliary method and device based on big data
Khattar et al. Using computer vision for student-centred remote lab in electronics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination