CN118097761A - Classroom teaching difficulty analysis method and system for attention analysis - Google Patents

Classroom teaching difficulty analysis method and system for attention analysis Download PDF

Info

Publication number
CN118097761A
CN118097761A CN202410521513.0A CN202410521513A CN118097761A CN 118097761 A CN118097761 A CN 118097761A CN 202410521513 A CN202410521513 A CN 202410521513A CN 118097761 A CN118097761 A CN 118097761A
Authority
CN
China
Prior art keywords
facial expression
image data
difficulty
student
expression image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410521513.0A
Other languages
Chinese (zh)
Other versions
CN118097761B (en
Inventor
万佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangxi Tourism Business Vocational College
Original Assignee
Jiangxi Tourism Business Vocational College
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangxi Tourism Business Vocational College filed Critical Jiangxi Tourism Business Vocational College
Priority to CN202410521513.0A priority Critical patent/CN118097761B/en
Publication of CN118097761A publication Critical patent/CN118097761A/en
Application granted granted Critical
Publication of CN118097761B publication Critical patent/CN118097761B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of classroom teaching difficulty analysis, and discloses a classroom teaching difficulty analysis method and system for attention analysis, wherein the method comprises the following steps: classifying the facial expression image data sequence by utilizing a facial expression classification model to obtain an expression classification result sequence of students in the classroom teaching process and positioning difficulties; clustering the difficulty time stamps of all students, extracting clusters exceeding the number of preset difficulty time stamps, and taking the classroom teaching content corresponding to the extracted clusters as the classroom teaching difficulty of all students in the clusters. The invention carries out the coding processing of blocking and combining the position information on the facial expression image data, carries out the perception processing of combining the attention mechanism on the coding processing result to obtain the perception information representing the current attention state of the students, realizes the classification processing of the expressions of the students, carries out the difficulty positioning and the cluster analysis, and realizes the analysis and the extraction processing of the teaching difficulty of the students in different classes.

Description

Classroom teaching difficulty analysis method and system for attention analysis
Technical Field
The invention relates to the field of classroom teaching difficulty analysis, in particular to a classroom teaching difficulty analysis method and system for attention analysis.
Background
With the rapid development of information technology and the popularization of intelligent equipment, the learning behavior and the cognitive state of students in a class can be recorded and analyzed more accurately. By monitoring the attention of the student, the learner can learn the learning process of the student in depth, thereby better guiding the teaching practice. Meanwhile, the application of the big data analysis technology provides strong support for attention analysis, so that an educational can evaluate the learning state of students more scientifically and perform personalized teaching and classroom teaching difficulty analysis according to the learning state. Current attention analysis research is mainly focused on monitoring and analyzing the attention of students using sensors, eye-movement instruments or intelligent devices, etc. However, these methods have certain limitations such as high equipment cost, infringement of student privacy, and the like. Aiming at the problem, the scheme provides a classroom teaching difficulty analysis method for attention analysis, which deeply digs the attention characteristics of students in the classroom through artificial intelligence and big data analysis, explores a more effective classroom teaching difficulty analysis method and provides more powerful support for education and teaching improvement and personalized education.
Disclosure of Invention
In view of this, the present invention provides a classroom teaching difficulty analysis method for attention analysis, which aims to: 1) Acquiring facial expression image data and marking the student expressions, training a facial expression classification model based on the marked data, constructing an objective function based on a difference value between a real marking result and a facial expression classification result output by the model in the training process, iterating model parameter vectors by combining a first moment and a second moment of an objective function gradient to obtain a model for facial expression classification, wherein the facial expression classification model performs block processing on the facial expression image data and encoding processing by combining position information, and performs sensing processing by combining an attention mechanism on the encoding processing result to obtain sensing information representing the current attention state of the student, and realizing classification processing of the student's attention expression and confusing expression in different classroom teaching moments to obtain an expression classification result sequence of the student in the classroom teaching process; 2) Based on the classroom teaching time when the puzzled expression appears in the students, difficulty locating is carried out on the expression classification result sequence of each student, the sequence marked with the difficulty time is used as the difficulty time stamp of the students in the classroom teaching process, the distance calculation of the difficulty time stamps of different students is carried out by combining the same difficulty time proportion and the nonlinear distance of different students, the difficulty time stamps of the students are clustered to obtain a plurality of cluster clusters, the cluster clusters exceeding the preset number of the difficulty time stamps are extracted, and the classroom teaching content corresponding to the extracted cluster clusters is used as the classroom teaching difficulty of all students in the cluster clusters, so that the classroom teaching difficulty analysis and the extraction processing of the students of different categories are realized.
In order to achieve the above purpose, the invention provides a classroom teaching difficulty analysis method for attention analysis, which comprises the following steps:
s1: collecting facial expression image data of students in a real classroom environment by using cameras, and labeling the collected facial expression image data to form a facial expression classification training data set;
S2: constructing a facial expression classification model, and training the facial expression classification model based on a facial expression classification training data set, wherein the facial expression classification model takes facial expression image data of a student as input and a facial expression classification result of the student as output;
S3: collecting a facial expression image data sequence of a student in the classroom teaching process, and classifying the facial expression image data sequence by utilizing a facial expression classification model to obtain an expression classification result sequence of the student in the classroom teaching process;
S4: performing difficulty positioning on the expression classification result sequence of each student to obtain a difficulty time stamp of the student in the classroom teaching process;
S5: clustering the difficulty time stamps of all students to obtain a plurality of cluster clusters, extracting the cluster clusters exceeding the number of preset difficulty time stamps, and taking the classroom teaching content corresponding to the extracted cluster clusters as the classroom teaching difficulty of all students in the cluster clusters.
As a further improvement of the present invention:
Optionally, in the step S1, capturing facial expression image data of the student in a real classroom environment by using a camera, and labeling the captured facial expression image data, including:
The method comprises the steps of collecting facial expression images of students in a real classroom environment by using cameras, and carrying out multi-color channel pixel matrix separation processing on the collected facial expression images to form facial expression image data, wherein the collected facial expression image data set is as follows:
Wherein:
representing facial expression image data constituted by the N-th collected facial expression image, N representing the total number of collected facial expression image data;
representing the n-th facial expression image captured at/> Pixel matrixes corresponding to the color channels respectively, wherein the pixel matrixes are in a matrix form of X rows and Y columns,/>Representing that the x-th row and y-th column pixels in the n-th facial expression image are respectively in/>Color values of the color channels, X represents the number of rows of pixels of the collected facial expression image, and Y represents the number of columns of pixels of the collected facial expression image;
and labeling the collected facial expression image data, wherein the labeling result of the facial expression image data is the expression category of the facial expression image corresponding to the facial expression image data, the expression category comprises concentration and confusion, and the labeled facial expression image data forms a facial expression classification training data set.
Optionally, the forming the labeled facial expression image data into a facial expression classification training data set includes:
the facial expression classification training data set is as follows:
Wherein:
representing a facial expression classification training dataset;
Representing facial expression image data/> Wherein the labeling result is divided into 1,0,/>Representing facial expression image data/>The expression category of the corresponding facial expression image is concentration,/>Representing facial expression image data/>The expression category of the corresponding facial expression image is confusing.
Optionally, the step S2 of constructing a facial expression classification model includes:
Constructing a facial expression classification model, wherein the facial expression classification model takes facial expression image data of a student as input and takes a facial expression classification result of the student as output, and the facial expression classification model comprises an input layer, an image blocking layer, a coding layer, a perception layer and an output layer;
the input layer is used for receiving facial expression image data of the student;
the image blocking layer is used for blocking the facial expression image data to obtain a facial expression image data block sequence;
The coding layer is used for carrying out coding processing of combining position information on the facial expression image data block sequence to obtain a facial expression coding vector;
The perception layer is used for perceiving the facial expression coding vector in a mode of combining an attention mechanism and perception calculation to obtain a facial expression perception vector of the student;
the output layer is used for carrying out vector mapping on the facial expression perception vector to obtain a facial expression classification result of the student corresponding to the facial expression image data;
the facial expression classification model is trained based on the facial expression classification training dataset.
Optionally, the training the facial expression classification model includes:
the training process for training the facial expression classification model based on the facial expression classification training data set data comprises the following steps:
s21: initializing model parameter vectors for generating facial expression classification model The model parameter vector comprises a coding matrix of a coding layer and a connection matrix of an output layer;
s22: setting model parameter vectors The current iteration number of the model parameter vector is u, the maximum iteration number is Max, the initial value of u is 0, and the result of the nth iteration of the model parameter vector is/>
S23: constructing a training objective function of a facial expression classification model, wherein the input of the training objective function is a model parameter vector and a facial expression classification training data set data, and the constructed training objective function is as follows:
Wherein:
Representing the constructed training objective function,/> Representing input values of the training objective function, namely model parameter vectors;
Representing the facial expression image data/> As a basis of/>In the constructed facial expression classification model, a facial expression classification result is output by the model;
s24: vector model parameters As an input value of the training objective function, a training objective function/>, is calculatedGradient/>, of; In an embodiment of the invention, gradient/>In a manner of/>Performing bias guide processing on the training objective function for the variables, and carrying out model parameter vector/>Substitution variable/>
S25: calculating to obtain model parameter vectorIs the first moment of (2):
Wherein:
Representing model parameter vector/> Is the first moment of (2);
Attenuation coefficient representing first moment, will/> Set to 0.9;
S26: calculating to obtain model parameter vector Is the second moment of (2):
Wherein:
Representing model parameter vector/> Is a second moment of (2);
Attenuation coefficient representing second moment, will/> Set to 0.92;
S27: weight decay is carried out on the first moment and the second moment:
Wherein:
Representing the first moment/> Weight decay results of (2);
Representing the second moment/> Weight decay results of (2);
S28: for model parameter vector Performing iteration, wherein an iteration formula is as follows:
Wherein:
represents an iteration control parameter, will/> Set to 0.01;
S29: let u=u+1, return to step S24 until the maximum number of iterations is reached, and use And constructing a facial expression classification model.
Optionally, the step S3 collects a facial expression image data sequence of the student in the classroom teaching process, and classifies the facial expression image data sequence by using a facial expression classification model, including:
collecting facial expression image data sequences of students in the classroom teaching process, obtaining the facial expression image data sequences of each student and forming a set:
Wherein:
Representing a sequence of facial expression image data of an mth student in the course of classroom teaching, t representing time sequence information, The teaching time of H classes is represented, and M represents the total number of students in the class teaching process;
Facial expression image data representing m-th student in H classroom teaching time during classroom teaching Facial expression image data representing mth student at h class teaching time in class teaching process
Receiving and classifying any facial expression image data in a facial expression image data sequence by utilizing the facial expression classification model trained in the step S2 to obtain an expression classification result sequence of each student, wherein the facial expression image dataThe classification flow of (2) is as follows:
s31: the input layer receives facial expression image data
S32: image segmentation layer facial expression image dataPerforming blocking processing to obtain a facial expression image data block sequence:
Wherein:
Representing the S-th facial expression image data block obtained by segmentation, S representing the total number of facial expression image data blocks obtained by segmentation,/> Corresponds to/>, in turnA pixel matrix of color channels;
s33: the coding layer performs coding processing combining with position information on the facial expression image data block sequence to obtain a facial expression coding vector:
Wherein:
Representing a facial expression encoding vector;
Representing a block/>, of facial expression image data Is a result of the encoding process;
Representing a block/>, of facial expression image data In facial expression image data/>Wherein the row position is the horizontal image data block rank of the facial expression image data block in the facial expression image data, and the column position is the vertical image data block rank of the facial expression image data block in the facial expression image data;
a coding matrix for RGB color channels;
representing a convolution process;
S34: sense layer combines attention mechanism and sense calculation mode to facial expression coding vector Sensing to obtain facial expression sensing vector/>, at the h class teaching moment, of the mth student in the class teaching process
Wherein:
an exponential function that is based on a natural constant;
Representing the encoding process results/> Corresponding perception information;
S35: the output layer performs vector mapping on the facial expression perception vector to obtain a facial expression classification result of the mth student at the h classroom teaching time in the classroom teaching process
Wherein:
W represents the connection matrix in the output layer, Representing L1 norm,/>Representing a preset characteristic threshold value;
the facial expression classification result of the mth student at the h class teaching time in the class teaching process is shown, Expressed as concentrating expression,/>Then the expression is indicated as confusing.
Optionally, in the step S4, performing difficulty positioning on the expression classification result sequence of each student includes:
obtaining expression classification result sequences of M students and forming a set:
Wherein:
Representing the expression classification result sequence of the mth student;
representing the facial expression classification result of the mth student at the h classroom teaching time in the classroom teaching process;
Difficulty positioning is carried out on the expression classification result sequence of each student to obtain a difficulty time stamp of each student, wherein the expression classification result sequence The difficult time stamp calculation flow of (1) is as follows:
s41: traversing to obtain expression classification result sequence The teaching moment of the classroom where the confusing expression is located;
S42: calculating the time interval between any classroom teaching time obtained by traversing and the classroom teaching time at which the other confused expression is located, and marking all classroom teaching time between two classroom teaching time as difficult point time if the time interval is smaller than a preset threshold value;
s43: repeating the step S42 to obtain the difficulty time stamp of the mth student:
Wherein:
a difficulty time stamp representing the mth student;
the h class teaching time is the m-th student difficult time in the class teaching process, The h class teaching time is not the difficult time of the m student in the class teaching process.
Optionally, in the step S5, the difficulty time stamps of all students are clustered to obtain a plurality of clusters, the clusters exceeding the number of preset difficulty time stamps are extracted, and the corresponding classroom teaching content is used as the classroom teaching difficulty of all students in the clusters, including:
Clustering the difficulty time stamps of all students to obtain a plurality of clustering clusters, wherein the clustering flow of the difficulty time stamps is as follows:
S51: calculating to obtain a difficulty time stamp distance between any two students, wherein the difficulty time stamp distance between the mth student and the q student is as follows
Wherein:
represent scale factors, will/> Set to 1;
represents an L2 norm;
Representation/> Class teaching time number of same difficult point timeThe h class teaching time in the class teaching process is the difficult time of the m-th student and the q-th student, and is expressed by/>
S52: randomly selecting the difficulty time stamps of K students as initial clustering centers, wherein each clustering center corresponds to one cluster;
s53: calculating the distance from the difficulty time stamp corresponding to the non-clustering center to the clustering center, and merging the difficulty time stamp corresponding to the non-clustering center into the cluster where the closest clustering center is located; wherein the formula of calculation of the distance is the formula in step S51;
S54: calculating the average value of all the difficulty time stamps in the cluster, selecting the difficulty time stamp closest to the average value in the cluster, taking the difficulty time stamp as the updated cluster center, and returning to the step S53; obtaining K clustering clusters until all K clustering centers are unchanged and the time stamps of the difficulties in the K clustering clusters are unchanged;
Calculating the number of difficulty time stamps in each cluster, extracting the cluster exceeding the preset number of difficulty time stamps, calculating the average value of all the difficulty time stamps in the extracted cluster, obtaining the difficulty time stamp after the averaging treatment, taking the class teaching time corresponding to more than 0.3 in the difficulty time stamp after the averaging treatment as the difficulty time of the cluster, and taking the class teaching content corresponding to the difficulty time as the class teaching difficulty of all students in the cluster.
In order to solve the above problems, the present invention provides a classroom teaching difficulty analysis system for attention analysis, the system comprising:
The facial expression classification module is used for acquiring facial expression image data of students in a real classroom environment by using cameras, labeling the acquired facial expression image data to form a facial expression classification training data set, constructing a facial expression classification model, training the facial expression classification model based on the facial expression classification training data set, acquiring a facial expression image data sequence of the students in the classroom teaching process, and classifying the facial expression image data sequence by using the facial expression classification model to obtain an expression classification result sequence of the students in the classroom teaching process;
The classroom teaching difficulty positioning module is used for positioning the difficulty of the expression classification result sequence of each student to obtain a difficulty time stamp of the student in the classroom teaching process;
The classroom teaching difficulty content positioning device is used for clustering the difficulty time stamps of all students to obtain a plurality of cluster clusters, extracting the cluster clusters exceeding the number of preset difficulty time stamps, and taking the classroom teaching content corresponding to the extracted cluster clusters as the classroom teaching difficulty of all students in the cluster clusters.
In order to solve the above-mentioned problems, the present invention also provides an electronic apparatus including:
A memory storing at least one instruction;
the communication interface is used for realizing the communication of the electronic equipment; and
And the processor executes the instructions stored in the memory to realize the classroom teaching difficulty analysis method for analyzing the attention.
In order to solve the above-mentioned problems, the present invention also provides a computer-readable storage medium having stored therein at least one instruction that is executed by a processor in an electronic device to implement the above-mentioned classroom teaching difficulty analysis method of attention analysis.
Compared with the prior art, the invention provides a classroom teaching difficulty analysis method for attention analysis, and the technology has the following advantages:
Firstly, the scheme provides a facial expression classification mode, any facial expression image data in a facial expression image data sequence is received and classified by utilizing a facial expression classification model obtained through training, and an expression classification result sequence of each student is obtained, wherein the facial expression image data The classification flow of (2) is as follows: the input layer receives facial expression image data/>; Image blocking layer/>, facial expression image dataPerforming blocking processing to obtain a facial expression image data block sequence:
Wherein: Representing the S-th facial expression image data block obtained by segmentation, S representing the total number of facial expression image data blocks obtained by segmentation,/> Corresponds to/>, in turnA pixel matrix of color channels; the coding layer performs coding processing combining with position information on the facial expression image data block sequence to obtain a facial expression coding vector:
Wherein: representing a facial expression encoding vector; /(I) Representing a block/>, of facial expression image dataIs a result of the encoding process; /(I)Representing a block/>, of facial expression image dataIn facial expression image data/>Wherein the row position is the horizontal image data block rank of the facial expression image data block in the facial expression image data, and the column position is the vertical image data block rank of the facial expression image data block in the facial expression image data; /(I)A coding matrix for RGB color channels; /(I)Representing a convolution process; the perception layer combines a attention mechanism and a perception calculation mode to encode the facial expression vector/>Sensing to obtain facial expression sensing vector/>, at the h class teaching moment, of the mth student in the class teaching process
Wherein: /(I)An exponential function that is based on a natural constant; /(I)Representing the encoding process results/>Corresponding perception information; the output layer performs vector mapping on the facial expression perception vector to obtain a facial expression classification result/>, at the h class teaching time, of the m-th student in the class teaching process
Wherein: w represents the connection matrix in the output layer,Representing L1 norm,/>Representing a preset characteristic threshold value; /(I)Representing facial expression classification result of mth student at h class teaching time in class teaching process, and carrying out +/-Expressed as concentrating expression,/>Then the expression is indicated as confusing. According to the scheme, facial expression image data are collected and student expression labeling is carried out, a facial expression classification model is trained based on labeled data, in the training process, an objective function is built based on the difference between the real labeling result and the facial expression classification result output by the model, iteration is carried out on model parameter vectors by combining the first moment and the second moment of the objective function gradient, a model for facial expression classification is obtained, the facial expression classification model carries out block processing on the facial expression image data and coding processing by combining position information, perception processing by combining an attention mechanism is carried out on the coding processing result, perception information representing the current attention state of a student is obtained, classification processing of the student concentrating on expressions and confusing expressions in different classroom teaching moments is achieved, and an expression classification result sequence of the student in the classroom teaching process is obtained.
Meanwhile, the scheme provides a classroom teaching difficulty positioning and analyzing mode, which clusters the difficulty time stamps of all students to obtain a plurality of cluster clusters, wherein the clustering flow of the difficulty time stamps is as follows: calculating to obtain a difficulty time stamp distance between any two students, wherein the difficulty time stamp distance between the mth student and the q student is as follows
Wherein: represent scale factors, will/> Set to 1; /(I)Represents an L2 norm; /(I)Representation/>Class teaching time number of same difficult point timeThe h class teaching time in the class teaching process is the difficult time of the m-th student and the q-th student, and is expressed by/>; Randomly selecting the difficulty time stamps of K students as initial clustering centers, wherein each clustering center corresponds to one cluster; calculating the distance from the difficulty time stamp corresponding to the non-clustering center to the clustering center, and merging the difficulty time stamp corresponding to the non-clustering center into the cluster where the closest clustering center is located; calculating the average value of all the difficulty time stamps in the cluster, selecting the difficulty time stamp closest to the average value in the cluster, taking the difficulty time stamp as a cluster center obtained by updating, and repeating clustering; obtaining K clustering clusters until all K clustering centers are unchanged and the time stamps of the difficulties in the K clustering clusters are unchanged; calculating the number of difficulty time stamps in each cluster, extracting the cluster exceeding the preset number of difficulty time stamps, calculating the average value of all the difficulty time stamps in the extracted cluster, obtaining the difficulty time stamp after the averaging treatment, taking the class teaching time corresponding to more than 0.3 in the difficulty time stamp after the averaging treatment as the difficulty time of the cluster, and taking the class teaching content corresponding to the difficulty time as the class teaching difficulty of all students in the cluster. According to the scheme, based on the classroom teaching time when the students are confused, the difficulty locating is carried out on the expression classification result sequence of each student, the sequence marked with the difficulty time is used as the difficulty time stamp of the students in the classroom teaching process, the distance calculation of the difficulty time stamps of different students is carried out by combining the same difficulty time proportion and the nonlinear distance of different students, the difficulty time stamps of all students are clustered to obtain a plurality of cluster clusters, the cluster clusters exceeding the preset number of the difficulty time stamps are extracted, and the classroom teaching content corresponding to the extracted cluster clusters is used as the classroom teaching difficulty of all students in the cluster clusters, so that the classroom teaching difficulty analysis and the extraction processing of the students of different categories are realized.
Drawings
Fig. 1 is a schematic flow chart of a classroom teaching difficulty analysis method for attention analysis according to an embodiment of the present invention; fig. 2 is a functional block diagram of a classroom teaching difficulty analysis system for attention analysis according to an embodiment of the present invention, wherein the classroom teaching difficulty analysis system includes 100 classroom teaching difficulty analysis systems, 101 facial expression classification modules, 102 classroom teaching difficulty positioning modules, and 103 classroom teaching difficulty content positioning devices; fig. 3 is a schematic structural diagram of an electronic device for implementing a classroom teaching difficulty analysis method for attention analysis according to an embodiment of the present invention, 1 the electronic device, 10a processor, 11a memory, 12 a program, and 13 a communication interface; the achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The embodiment of the application provides a classroom teaching difficulty analysis method for attention analysis. The execution subject of the classroom teaching difficulty analysis method of attention analysis includes, but is not limited to, at least one of a server, a terminal, and the like, which can be configured to execute the method provided by the embodiment of the application. In other words, the classroom teaching difficulty analysis method of attention analysis may be performed by software or hardware installed in a terminal device or a server device, and the software may be a blockchain platform. The service end includes but is not limited to: a single server, a server cluster, a cloud server or a cloud server cluster, and the like.
Example 1:
S1: and acquiring facial expression image data of students in a real classroom environment by using cameras, and labeling the acquired facial expression image data to form a facial expression classification training data set.
In the step S1, facial expression image data of a student is collected by using a camera in a real classroom environment, and the collected facial expression image data is labeled, including:
The method comprises the steps of collecting facial expression images of students in a real classroom environment by using cameras, and carrying out multi-color channel pixel matrix separation processing on the collected facial expression images to form facial expression image data, wherein the collected facial expression image data set is as follows:
Wherein:
Representing facial expression image data constituted by the N-th collected facial expression image, N representing the total number of collected facial expression image data;
representing the n-th facial expression image captured at/> Pixel matrixes corresponding to the color channels respectively, wherein the pixel matrixes are in a matrix form of X rows and Y columns,/>Representing that the x-th row and y-th column pixels in the n-th facial expression image are respectively in/>Color values of the color channels, X represents the number of rows of pixels of the collected facial expression image, and Y represents the number of columns of pixels of the collected facial expression image;
and labeling the collected facial expression image data, wherein the labeling result of the facial expression image data is the expression category of the facial expression image corresponding to the facial expression image data, the expression category comprises concentration and confusion, and the labeled facial expression image data forms a facial expression classification training data set.
The step of forming the labeled facial expression image data into a facial expression classification training data set comprises the following steps:
the facial expression classification training data set is as follows:
Wherein:
representing a facial expression classification training dataset;
Representing facial expression image data/> Wherein the labeling result is divided into 1,0,/>Representing facial expression image data/>The expression category of the corresponding facial expression image is concentration,/>Representing facial expression image data/>The expression category of the corresponding facial expression image is confusing.
S2: and constructing a facial expression classification model, and training the facial expression classification model based on a facial expression classification training data set, wherein the facial expression classification model takes facial expression image data of a student as input and takes a facial expression classification result of the student as output.
The step S2 of constructing a facial expression classification model comprises the following steps:
Constructing a facial expression classification model, wherein the facial expression classification model takes facial expression image data of a student as input and takes a facial expression classification result of the student as output, and the facial expression classification model comprises an input layer, an image blocking layer, a coding layer, a perception layer and an output layer;
the input layer is used for receiving facial expression image data of the student;
the image blocking layer is used for blocking the facial expression image data to obtain a facial expression image data block sequence;
The coding layer is used for carrying out coding processing of combining position information on the facial expression image data block sequence to obtain a facial expression coding vector;
The perception layer is used for perceiving the facial expression coding vector in a mode of combining an attention mechanism and perception calculation to obtain a facial expression perception vector of the student;
the output layer is used for carrying out vector mapping on the facial expression perception vector to obtain a facial expression classification result of the student corresponding to the facial expression image data;
the facial expression classification model is trained based on the facial expression classification training dataset.
The training of the facial expression classification model comprises the following steps:
the training process for training the facial expression classification model based on the facial expression classification training data set data comprises the following steps:
s21: initializing model parameter vectors for generating facial expression classification model The model parameter vector comprises a coding matrix of a coding layer and a connection matrix of an output layer;
s22: setting model parameter vectors The current iteration number of the model parameter vector is u, the maximum iteration number is Max, the initial value of u is 0, and the result of the nth iteration of the model parameter vector is/>
S23: constructing a training objective function of a facial expression classification model, wherein the input of the training objective function is a model parameter vector and a facial expression classification training data set data, and the constructed training objective function is as follows:
Wherein:
Representing the constructed training objective function,/> Representing input values of the training objective function, namely model parameter vectors;
Representing the facial expression image data/> As a basis of/>In the constructed facial expression classification model, a facial expression classification result is output by the model;
s24: vector model parameters As an input value of the training objective function, a training objective function/>, is calculatedGradient/>, of; In an embodiment of the invention, gradient/>In a manner of/>Performing bias guide processing on the training objective function for the variables, and carrying out model parameter vector/>Substitution variable/>
S25: calculating to obtain model parameter vectorIs the first moment of (2):
Wherein:
Representing model parameter vector/> Is the first moment of (2);
Attenuation coefficient representing first moment, will/> Set to 0.9;
S26: calculating to obtain model parameter vector Is the second moment of (2):
Wherein:
Representing model parameter vector/> Is a second moment of (2); /(I)
Attenuation coefficient representing second moment, will/>Set to 0.92;
S27: weight decay is carried out on the first moment and the second moment:
Wherein:
Representing the first moment/> Weight decay results of (2);
Representing the second moment/> Weight decay results of (2);
S28: for model parameter vector Performing iteration, wherein an iteration formula is as follows:
Wherein:
represents an iteration control parameter, will/> Set to 0.01;
S29: let u=u+1, return to step S24 until the maximum number of iterations is reached, and use And constructing a facial expression classification model.
S3: and collecting a facial expression image data sequence of the student in the classroom teaching process, and classifying the facial expression image data sequence by utilizing a facial expression classification model to obtain an expression classification result sequence of the student in the classroom teaching process.
And S3, collecting a facial expression image data sequence of the student in the classroom teaching process, and classifying the facial expression image data sequence by using a facial expression classification model, wherein the method comprises the following steps:
collecting facial expression image data sequences of students in the classroom teaching process, obtaining the facial expression image data sequences of each student and forming a set:
Wherein:
Representing a sequence of facial expression image data of an mth student in the course of classroom teaching, t representing time sequence information, The teaching time of H classes is represented, and M represents the total number of students in the class teaching process;
Facial expression image data representing m-th student in H classroom teaching time during classroom teaching Facial expression image data representing mth student at h class teaching time in class teaching process
Receiving and classifying any facial expression image data in a facial expression image data sequence by utilizing the facial expression classification model trained in the step S2 to obtain an expression classification result sequence of each student, wherein the facial expression image dataThe classification flow of (2) is as follows:
s31: the input layer receives facial expression image data
S32: image segmentation layer facial expression image dataPerforming blocking processing to obtain a facial expression image data block sequence:
Wherein:
Representing the S-th facial expression image data block obtained by segmentation, S representing the total number of facial expression image data blocks obtained by segmentation,/> Corresponds to/>, in turnA pixel matrix of color channels;
s33: the coding layer performs coding processing combining with position information on the facial expression image data block sequence to obtain a facial expression coding vector:
Wherein:
Representing a facial expression encoding vector;
Representing a block/>, of facial expression image data Is a result of the encoding process;
Representing a block/>, of facial expression image data In facial expression image data/>Wherein the row position is the horizontal image data block rank of the facial expression image data block in the facial expression image data, and the column position is the vertical image data block rank of the facial expression image data block in the facial expression image data;
a coding matrix for RGB color channels;
representing a convolution process;
S34: sense layer combines attention mechanism and sense calculation mode to facial expression coding vector Sensing to obtain facial expression sensing vector/>, at the h class teaching moment, of the mth student in the class teaching process
Wherein:
an exponential function that is based on a natural constant;
Representing the encoding process results/> Corresponding perception information;
S35: the output layer performs vector mapping on the facial expression perception vector to obtain a facial expression classification result of the mth student at the h classroom teaching time in the classroom teaching process
Wherein:
W represents the connection matrix in the output layer, Representing L1 norm,/>Representing a preset characteristic threshold value;
Representing facial expression classification result of mth student at h class teaching time in class teaching process, and carrying out +/- Expressed as concentrating expression,/>Then the expression is indicated as confusing.
S4: and carrying out difficulty positioning on the expression classification result sequence of each student to obtain a difficulty time stamp of the student in the classroom teaching process.
And in the step S4, difficulty positioning is carried out on the expression classification result sequence of each student, and the method comprises the following steps:
obtaining expression classification result sequences of M students and forming a set:
Wherein:
Representing the expression classification result sequence of the mth student;
representing the facial expression classification result of the mth student at the h classroom teaching time in the classroom teaching process;
Difficulty positioning is carried out on the expression classification result sequence of each student to obtain a difficulty time stamp of each student, wherein the expression classification result sequence The difficult time stamp calculation flow of (1) is as follows:
s41: traversing to obtain expression classification result sequence The teaching moment of the classroom where the confusing expression is located;
S42: calculating the time interval between any classroom teaching time obtained by traversing and the classroom teaching time at which the other confused expression is located, and marking all classroom teaching time between two classroom teaching time as difficult point time if the time interval is smaller than a preset threshold value;
s43: repeating the step S42 to obtain the difficulty time stamp of the mth student:
Wherein:
a difficulty time stamp representing the mth student;
the h class teaching time is the m-th student difficult time in the class teaching process, The h class teaching time is not the difficult time of the m student in the class teaching process.
S5: clustering the difficulty time stamps of all students to obtain a plurality of cluster clusters, extracting the cluster clusters exceeding the number of preset difficulty time stamps, and taking the classroom teaching content corresponding to the extracted cluster clusters as the classroom teaching difficulty of all students in the cluster clusters.
And S5, clustering the difficulty time stamps of all students to obtain a plurality of cluster clusters, extracting the cluster clusters exceeding the number of preset difficulty time stamps, and taking the corresponding classroom teaching content as the classroom teaching difficulty of all students in the cluster clusters, wherein the method comprises the following steps:
Clustering the difficulty time stamps of all students to obtain a plurality of clustering clusters, wherein the clustering flow of the difficulty time stamps is as follows:
S51: calculating to obtain a difficulty time stamp distance between any two students, wherein the difficulty time stamp distance between the mth student and the q student is as follows
/>
Wherein:
represent scale factors, will/> Set to 1;
represents an L2 norm;
Representation/> Class teaching time number of same difficult point timeThe h class teaching time in the class teaching process is the difficult time of the m-th student and the q-th student, and is expressed by/>
S52: randomly selecting the difficulty time stamps of K students as initial clustering centers, wherein each clustering center corresponds to one cluster;
s53: calculating the distance from the difficulty time stamp corresponding to the non-clustering center to the clustering center, and merging the difficulty time stamp corresponding to the non-clustering center into the cluster where the closest clustering center is located; wherein the formula of calculation of the distance is the formula in step S51;
S54: calculating the average value of all the difficulty time stamps in the cluster, selecting the difficulty time stamp closest to the average value in the cluster, taking the difficulty time stamp as the updated cluster center, and returning to the step S53; obtaining K clustering clusters until all K clustering centers are unchanged and the time stamps of the difficulties in the K clustering clusters are unchanged;
Calculating the number of difficulty time stamps in each cluster, extracting the cluster exceeding the preset number of difficulty time stamps, calculating the average value of all the difficulty time stamps in the extracted cluster, obtaining the difficulty time stamp after the averaging treatment, taking the class teaching time corresponding to more than 0.3 in the difficulty time stamp after the averaging treatment as the difficulty time of the cluster, and taking the class teaching content corresponding to the difficulty time as the class teaching difficulty of all students in the cluster.
Example 2:
Fig. 2 is a functional block diagram of a classroom teaching difficulty analysis system according to an embodiment of the present invention, which can implement the classroom teaching difficulty analysis method in embodiment 1.
The classroom teaching difficulty analysis system 100 of the present invention may be installed in an electronic device. According to the implemented functions, the classroom teaching difficulty analysis system may include a facial expression classification module 101, a classroom teaching difficulty positioning module 102, and a classroom teaching difficulty content positioning device 103. The module of the invention, which may also be referred to as a unit, refers to a series of computer program segments, which are stored in the memory of the electronic device, capable of being executed by the processor of the electronic device and of performing a fixed function.
The facial expression classification module 101 is configured to collect facial expression image data of a student in a real classroom environment by using a camera, annotate the collected facial expression image data to form a facial expression classification training data set, construct a facial expression classification model, train the facial expression classification model based on the facial expression classification training data set, collect a facial expression image data sequence of the student in the classroom teaching process, and classify the facial expression image data sequence by using the facial expression classification model to obtain an expression classification result sequence of the student in the classroom teaching process;
The classroom teaching difficulty positioning module 102 is configured to perform difficulty positioning on the expression classification result sequence of each student, and obtain a difficulty time stamp of the student in the classroom teaching process;
The classroom teaching difficulty content positioning device 103 is configured to cluster the difficulty timestamps of all students to obtain a plurality of clusters, extract clusters exceeding the number of preset difficulty timestamps, and use the classroom teaching content corresponding to the extracted clusters as the classroom teaching difficulty of all students in the clusters.
In detail, the modules in the classroom teaching difficulty analysis system 100 in the embodiment of the present invention use the same technical means as the classroom teaching difficulty analysis method described in fig. 1, and can produce the same technical effects, which are not described herein.
Example 3:
fig. 3 is a schematic structural diagram of an electronic device for implementing a classroom teaching difficulty analysis method for attention analysis according to an embodiment of the present invention.
The electronic device 1 may comprise a processor 10, a memory 11, a communication interface 13 and a bus, and may further comprise a computer program, such as program 12, stored in the memory 11 and executable on the processor 10.
The memory 11 includes at least one type of readable storage medium, including flash memory, a mobile hard disk, a multimedia card, a card memory (e.g., SD or DX memory, etc.), a magnetic memory, a magnetic disk, an optical disk, etc. The memory 11 may in some embodiments be an internal storage unit of the electronic device 1, such as a removable hard disk of the electronic device 1. The memory 11 may in other embodiments also be an external storage device of the electronic device 1, such as a plug-in mobile hard disk, a smart memory card (SMARTMEDIA CARD, SMC), a Secure Digital (SD) card, a flash memory card (FLASHCARD) or the like, which are provided on the electronic device 1. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device 1. The memory 11 may be used not only for storing application software installed in the electronic device 1 and various types of data, such as codes of the program 12, but also for temporarily storing data that has been output or is to be output.
The processor 10 may be comprised of integrated circuits in some embodiments, for example, a single packaged integrated circuit, or may be comprised of multiple integrated circuits packaged with the same or different functions, including one or more central processing units (Central Processing unit, CPU), microprocessors, digital processing chips, graphics processors, combinations of various control chips, and the like. The processor 10 is a Control Unit (Control Unit) of the electronic device, connects respective parts of the entire electronic device using various interfaces and lines, executes or executes programs or modules (a program 12 for realizing classroom teaching difficulty analysis for attention analysis, etc.) stored in the memory 11, and invokes data stored in the memory 11 to perform various functions of the electronic device 1 and process the data.
The communication interface 13 may comprise a wired interface and/or a wireless interface (e.g. WI-FI interface, bluetooth interface, etc.), typically used to establish a communication connection between the electronic device 1 and other electronic devices and to enable connection communication between internal components of the electronic device.
The bus may be a peripheral component interconnect standard (PERIPHERAL COMPONENT INTERCONNECT, PCI) bus, or an extended industry standard architecture (extended industry standard architecture, EISA) bus, among others. The bus may be classified as an address bus, a data bus, a control bus, etc. The bus is arranged to enable a connection communication between the memory 11 and at least one processor 10 etc.
Fig. 3 shows only an electronic device with components, it being understood by a person skilled in the art that the structure shown in fig. 3 does not constitute a limitation of the electronic device 1, and may comprise fewer or more components than shown, or may combine certain components, or may be arranged in different components.
For example, although not shown, the electronic device 1 may further include a power source (such as a battery) for supplying power to each component, and preferably, the power source may be logically connected to the at least one processor 10 through a power management device, so that functions of charge management, discharge management, power consumption management, and the like are implemented through the power management device. The power supply may also include one or more of any of a direct current or alternating current power supply, recharging device, power failure detection circuit, power converter or inverter, power status indicator, etc. The electronic device 1 may further include various sensors, bluetooth modules, wi-Fi modules, etc., which will not be described herein.
The electronic device 1 may optionally further comprise a user interface, which may be a Display, an input unit, such as a Keyboard (Keyboard), or a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch, or the like. The display may also be referred to as a display screen or display unit, as appropriate, for displaying information processed in the electronic device 1 and for displaying a visual user interface.
It should be understood that the embodiments described are for illustrative purposes only and are not limited to this configuration in the scope of the patent application.
It should be noted that, the foregoing reference numerals of the embodiments of the present invention are merely for describing the embodiments, and do not represent the advantages and disadvantages of the embodiments. And the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, apparatus, article, or method that comprises the element.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) as described above, comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The foregoing description is only of the preferred embodiments of the present invention, and is not intended to limit the scope of the invention, but rather is intended to cover any equivalents of the structures or equivalent processes disclosed herein or in the alternative, which may be employed directly or indirectly in other related arts.

Claims (9)

1. A classroom teaching difficulty analysis method of attention analysis, the method comprising:
s1: collecting facial expression image data of students in a real classroom environment by using cameras, and labeling the collected facial expression image data to form a facial expression classification training data set;
s2: the facial expression classification model is constructed, and based on a facial expression classification training data set, the facial expression classification model takes facial expression image data of a student as input and facial expression classification results of the student as output, and comprises an input layer, an image blocking layer, a coding layer, a perception layer and an output layer, and based on the facial expression classification training data set, the facial expression classification model is trained, and the specific flow comprises the following steps: initializing model parameters of a facial expression classification model, constructing a training objective function of the facial expression classification model, calculating to obtain first-order moment and second-order moment of the model parameters, carrying out weight attenuation on the first-order moment and the second-order moment, and carrying out iterative update on the model parameters based on the attenuated moment estimated values;
S3: collecting a facial expression image data sequence of a student in a classroom teaching process, and classifying the facial expression image data sequence by utilizing a facial expression classification model to obtain an expression classification result sequence of the student in the classroom teaching process, wherein the facial expression image data classification flow comprises: the facial expression image data are segmented to obtain a facial expression image data block sequence, the facial expression image data block sequence is encoded by combining position information to obtain a facial expression encoding vector, the facial expression encoding vector is perceived by combining an attention mechanism to obtain a facial expression perception vector, and the facial expression perception vector is subjected to vector mapping to obtain a facial expression classification result sequence;
S4: performing difficulty positioning on the expression classification result sequence of each student to obtain a difficulty time stamp of the student in the classroom teaching process;
S5: clustering the difficulty time stamps of all students to obtain a plurality of cluster clusters, extracting the cluster clusters exceeding the number of preset difficulty time stamps, and taking the classroom teaching content corresponding to the extracted cluster clusters as the classroom teaching difficulty of all students in the cluster clusters, wherein the clustering process of the difficulty time stamps comprises the following steps: and calculating the difficulty time stamp distance between any two students, carrying out clustering processing, calculating the number of the difficulty time stamps in each cluster, extracting the cluster exceeding the preset number of the difficulty time stamps, calculating the difficulty time stamp after averaging processing, and taking the classroom teaching content corresponding to the difficulty time as the classroom teaching difficulty of all students in the cluster.
2. The method for analyzing the difficulty in teaching in a classroom for attention analysis according to claim 1, wherein in step S1, facial expression image data of the student is collected by using a camera in a real classroom environment, and the collected facial expression image data is labeled, comprising:
The method comprises the steps of collecting facial expression images of students in a real classroom environment by using cameras, and carrying out multi-color channel pixel matrix separation processing on the collected facial expression images to form facial expression image data, wherein the collected facial expression image data set is as follows:
Wherein:
Representing facial expression image data constituted by the N-th collected facial expression image, N representing the total number of collected facial expression image data;
representing the n-th facial expression image captured at/> Pixel matrixes corresponding to the color channels respectively, wherein the pixel matrixes are in a matrix form of X rows and Y columns,/>Representing that the x-th row and y-th column pixels in the n-th facial expression image are respectively in/>Color values of the color channels, X represents the number of rows of pixels of the collected facial expression image, and Y represents the number of columns of pixels of the collected facial expression image;
and labeling the collected facial expression image data, wherein the labeling result of the facial expression image data is the expression category of the facial expression image corresponding to the facial expression image data, the expression category comprises concentration and confusion, and the labeled facial expression image data forms a facial expression classification training data set.
3. The method for analyzing the difficulty in teaching in a class of attention as recited in claim 2, wherein the composing the labeled facial expression image data into a facial expression classification training data set includes:
the facial expression classification training data set is as follows:
Wherein:
representing a facial expression classification training dataset;
Representing facial expression image data/> Wherein the labeling result is divided into 1,0,/>Representing facial expression image data/>The expression category of the corresponding facial expression image is concentration,/>Representing facial expression image data/>The expression category of the corresponding facial expression image is confusing.
4. The method for analyzing the difficulty in teaching in class of attention as recited in claim 1, wherein the step S2 of constructing a facial expression classification model includes:
Constructing a facial expression classification model, wherein the facial expression classification model takes facial expression image data of a student as input and takes a facial expression classification result of the student as output, and the facial expression classification model comprises an input layer, an image blocking layer, a coding layer, a perception layer and an output layer;
the input layer is used for receiving facial expression image data of the student;
the image blocking layer is used for blocking the facial expression image data to obtain a facial expression image data block sequence;
The coding layer is used for carrying out coding processing of combining position information on the facial expression image data block sequence to obtain a facial expression coding vector;
The perception layer is used for perceiving the facial expression coding vector in a mode of combining an attention mechanism and perception calculation to obtain a facial expression perception vector of the student;
the output layer is used for carrying out vector mapping on the facial expression perception vector to obtain a facial expression classification result of the student corresponding to the facial expression image data;
the facial expression classification model is trained based on the facial expression classification training dataset.
5. The method of claim 4, wherein training the facial expression classification model comprises:
the training process for training the facial expression classification model based on the facial expression classification training data set data comprises the following steps:
s21: initializing model parameter vectors for generating facial expression classification model The model parameter vector comprises a coding matrix of a coding layer and a connection matrix of an output layer;
s22: setting model parameter vectors The current iteration number of the model parameter vector is u, the maximum iteration number is Max, the initial value of u is 0, and the result of the nth iteration of the model parameter vector is/>
S23: constructing a training objective function of a facial expression classification model, wherein the input of the training objective function is a model parameter vector and a facial expression classification training data set data, and the constructed training objective function is as follows:
Wherein:
Representing the constructed training objective function,/> Representing input values of the training objective function, namely model parameter vectors;
Representing the facial expression image data/> As a basis of/>In the constructed facial expression classification model, a facial expression classification result is output by the model;
s24: vector model parameters As an input value of the training objective function, a training objective function/>, is calculatedGradient/>, of
S25: calculating to obtain model parameter vectorIs the first moment of (2):
Wherein:
Representing model parameter vector/> Is the first moment of (2);
Attenuation coefficient representing first moment, will/> Set to 0.9;
S26: calculating to obtain model parameter vector Is the second moment of (2):
Wherein:
Representing model parameter vector/> Is a second moment of (2);
Attenuation coefficient representing second moment, will/> Set to 0.92;
S27: weight decay is carried out on the first moment and the second moment:
Wherein:
Representing the first moment/> Weight decay results of (2);
Representing the second moment/> Weight decay results of (2);
S28: for model parameter vector Performing iteration, wherein an iteration formula is as follows:
Wherein:
represents an iteration control parameter, will/> Set to 0.01;
S29: let u=u+1, return to step S24 until the maximum number of iterations is reached, and use And constructing a facial expression classification model.
6. The method for analyzing the difficulty in teaching in a class according to claim 1, wherein the step S3 of collecting the facial expression image data sequence of the student during the teaching in the class and classifying the facial expression image data sequence using the facial expression classification model comprises:
collecting facial expression image data sequences of students in the classroom teaching process, obtaining the facial expression image data sequences of each student and forming a set:
Wherein:
Representing a sequence of facial expression image data of an mth student in the course of classroom teaching, t representing time sequence information, The teaching time of H classes is represented, and M represents the total number of students in the class teaching process;
Facial expression image data representing m-th student in H classroom teaching time during classroom teaching Facial expression image data representing mth student at h class teaching time in class teaching process
Receiving and classifying any facial expression image data in a facial expression image data sequence by utilizing the facial expression classification model trained in the step S2 to obtain an expression classification result sequence of each student, wherein the facial expression image dataThe classification flow of (2) is as follows:
s31: the input layer receives facial expression image data
S32: image segmentation layer facial expression image dataPerforming blocking processing to obtain a facial expression image data block sequence:
Wherein:
Representing the S-th facial expression image data block obtained by segmentation, S representing the total number of facial expression image data blocks obtained by segmentation,/> Corresponds to/>, in turnA pixel matrix of color channels;
s33: the coding layer performs coding processing combining with position information on the facial expression image data block sequence to obtain a facial expression coding vector:
Wherein:
Representing a facial expression encoding vector;
Representing a block/>, of facial expression image data Is a result of the encoding process;
Representing a block/>, of facial expression image data In facial expression image data/>Wherein the row position is the horizontal image data block rank of the facial expression image data block in the facial expression image data, and the column position is the vertical image data block rank of the facial expression image data block in the facial expression image data;
a coding matrix for RGB color channels;
representing a convolution process;
S34: sense layer combines attention mechanism and sense calculation mode to facial expression coding vector Sensing to obtain facial expression sensing vector/>, at the h class teaching moment, of the mth student in the class teaching process
Wherein:
an exponential function that is based on a natural constant;
Representing the encoding process results/> Corresponding perception information;
S35: the output layer performs vector mapping on the facial expression perception vector to obtain a facial expression classification result of the mth student at the h classroom teaching time in the classroom teaching process
Wherein:
W represents the connection matrix in the output layer, Representing L1 norm,/>Representing a preset characteristic threshold value;
the facial expression classification result of the mth student at the h class teaching time in the class teaching process is shown, Expressed as concentrating expression,/>Then the expression is indicated as confusing.
7. The method for analyzing the difficulty in teaching in a class of attention as recited in claim 6, wherein said step S4 of locating the difficulty in the sequence of the expression classification result for each student comprises:
obtaining expression classification result sequences of M students and forming a set:
Wherein:
Representing the expression classification result sequence of the mth student;
representing the facial expression classification result of the mth student at the h classroom teaching time in the classroom teaching process;
Difficulty positioning is carried out on the expression classification result sequence of each student to obtain a difficulty time stamp of each student, wherein the expression classification result sequence The difficult time stamp calculation flow of (1) is as follows:
s41: traversing to obtain expression classification result sequence The teaching moment of the classroom where the confusing expression is located;
S42: calculating the time interval between any classroom teaching time obtained by traversing and the classroom teaching time at which the other confused expression is located, and marking all classroom teaching time between two classroom teaching time as difficult point time if the time interval is smaller than a preset threshold value;
s43: repeating the step S42 to obtain the difficulty time stamp of the mth student:
Wherein:
a difficulty time stamp representing the mth student;
representing the difficulty moment of the mth class teaching moment as the mth student in the class teaching process, and the method comprises the steps of/> The h class teaching time is not the difficult time of the m student in the class teaching process.
8. The method for analyzing the difficulty in teaching class of attention analysis according to claim 7, wherein in the step S5, the difficulty time stamps of all students are clustered to obtain a plurality of clusters, the clusters exceeding the number of preset difficulty time stamps are extracted, and the corresponding teaching class content is used as the teaching class difficulty of all students in the clusters, and the method comprises the following steps:
Clustering the difficulty time stamps of all students to obtain a plurality of clustering clusters, wherein the clustering flow of the difficulty time stamps is as follows:
S51: calculating to obtain a difficulty time stamp distance between any two students, wherein the difficulty time stamp distance between the mth student and the q student is as follows
Wherein:
represents an L2 norm;
Representation/> Class teaching time number of same difficult point timeThe h class teaching time in the class teaching process is the difficult time of the m-th student and the q-th student, and is expressed by/>
Represent scale factors, will/>Set to 1;
s52: randomly selecting the difficulty time stamps of K students as initial clustering centers, wherein each clustering center corresponds to one cluster;
s53: calculating the distance from the difficulty time stamp corresponding to the non-clustering center to the clustering center, and merging the difficulty time stamp corresponding to the non-clustering center into the cluster where the closest clustering center is located; wherein the formula of calculation of the distance is the formula in step S51;
S54: calculating the average value of all the difficulty time stamps in the cluster, selecting the difficulty time stamp closest to the average value in the cluster, taking the difficulty time stamp as the updated cluster center, and returning to the step S53; obtaining K clustering clusters until all K clustering centers are unchanged and the time stamps of the difficulties in the K clustering clusters are unchanged;
Calculating the number of difficulty time stamps in each cluster, extracting the cluster exceeding the preset number of difficulty time stamps, calculating the average value of all the difficulty time stamps in the extracted cluster, obtaining the difficulty time stamp after the averaging treatment, taking the class teaching time corresponding to more than 0.3 in the difficulty time stamp after the averaging treatment as the difficulty time of the cluster, and taking the class teaching content corresponding to the difficulty time as the class teaching difficulty of all students in the cluster.
9. A classroom teaching difficulty analysis system for attention analysis, the system comprising:
The facial expression classification module is used for acquiring facial expression image data of students in a real classroom environment by using cameras, labeling the acquired facial expression image data to form a facial expression classification training data set, constructing a facial expression classification model, training the facial expression classification model based on the facial expression classification training data set, acquiring a facial expression image data sequence of the students in the classroom teaching process, and classifying the facial expression image data sequence by using the facial expression classification model to obtain an expression classification result sequence of the students in the classroom teaching process;
The classroom teaching difficulty positioning module is used for positioning the difficulty of the expression classification result sequence of each student to obtain a difficulty time stamp of the student in the classroom teaching process;
The classroom teaching difficulty content positioning device is used for clustering the difficulty time stamps of all students to obtain a plurality of cluster clusters, extracting the cluster clusters exceeding the number of preset difficulty time stamps, and taking the classroom teaching content corresponding to the extracted cluster clusters as the classroom teaching difficulty of all students in the cluster clusters to realize the classroom teaching difficulty analysis method for attention analysis according to any one of claims 1-8.
CN202410521513.0A 2024-04-28 2024-04-28 Classroom teaching difficulty analysis method and system for attention analysis Active CN118097761B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410521513.0A CN118097761B (en) 2024-04-28 2024-04-28 Classroom teaching difficulty analysis method and system for attention analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410521513.0A CN118097761B (en) 2024-04-28 2024-04-28 Classroom teaching difficulty analysis method and system for attention analysis

Publications (2)

Publication Number Publication Date
CN118097761A true CN118097761A (en) 2024-05-28
CN118097761B CN118097761B (en) 2024-07-05

Family

ID=91149414

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410521513.0A Active CN118097761B (en) 2024-04-28 2024-04-28 Classroom teaching difficulty analysis method and system for attention analysis

Country Status (1)

Country Link
CN (1) CN118097761B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010133661A1 (en) * 2009-05-20 2010-11-25 Tessera Technologies Ireland Limited Identifying facial expressions in acquired digital images
US20150017626A1 (en) * 2013-07-15 2015-01-15 International Business Machines Corporation Automated educational system
US20150193699A1 (en) * 2014-01-08 2015-07-09 Civitas Learning, Inc. Data-adaptive insight and action platform for higher education
CN108304793A (en) * 2018-01-26 2018-07-20 北京易真学思教育科技有限公司 On-line study analysis system and method
US20200098284A1 (en) * 2018-07-13 2020-03-26 Central China Normal University Classroom teaching cognitive load measurement system
CN112464020A (en) * 2020-11-24 2021-03-09 随锐科技集团股份有限公司 Network classroom information processing method and system and computer readable storage medium
CN113128421A (en) * 2021-04-23 2021-07-16 北京高途云集教育科技有限公司 Learning state detection method and system, learning terminal, server and electronic equipment
CN113141534A (en) * 2021-04-28 2021-07-20 重庆工程职业技术学院 Network remote education device
CN113486700A (en) * 2021-05-08 2021-10-08 北京科技大学 Facial expression analysis method based on attention mechanism in teaching scene
CN114202565A (en) * 2021-02-08 2022-03-18 浙大宁波理工学院 Intelligent learning intervention system based on learning process emotion real-time analysis
CN115661885A (en) * 2022-09-16 2023-01-31 北京科技大学 Student psychological state analysis method and device based on expression recognition
CN116386404A (en) * 2023-04-11 2023-07-04 齐鲁师范学院 Course interaction method based on artificial intelligence
CN117591870A (en) * 2023-10-13 2024-02-23 深圳职业技术大学 Deep reinforcement learning-based emotion perception intelligent teaching method and system

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010133661A1 (en) * 2009-05-20 2010-11-25 Tessera Technologies Ireland Limited Identifying facial expressions in acquired digital images
US20150017626A1 (en) * 2013-07-15 2015-01-15 International Business Machines Corporation Automated educational system
US20150193699A1 (en) * 2014-01-08 2015-07-09 Civitas Learning, Inc. Data-adaptive insight and action platform for higher education
CN108304793A (en) * 2018-01-26 2018-07-20 北京易真学思教育科技有限公司 On-line study analysis system and method
US20200098284A1 (en) * 2018-07-13 2020-03-26 Central China Normal University Classroom teaching cognitive load measurement system
CN112464020A (en) * 2020-11-24 2021-03-09 随锐科技集团股份有限公司 Network classroom information processing method and system and computer readable storage medium
CN114202565A (en) * 2021-02-08 2022-03-18 浙大宁波理工学院 Intelligent learning intervention system based on learning process emotion real-time analysis
CN113128421A (en) * 2021-04-23 2021-07-16 北京高途云集教育科技有限公司 Learning state detection method and system, learning terminal, server and electronic equipment
CN113141534A (en) * 2021-04-28 2021-07-20 重庆工程职业技术学院 Network remote education device
CN113486700A (en) * 2021-05-08 2021-10-08 北京科技大学 Facial expression analysis method based on attention mechanism in teaching scene
CN115661885A (en) * 2022-09-16 2023-01-31 北京科技大学 Student psychological state analysis method and device based on expression recognition
CN116386404A (en) * 2023-04-11 2023-07-04 齐鲁师范学院 Course interaction method based on artificial intelligence
CN117591870A (en) * 2023-10-13 2024-02-23 深圳职业技术大学 Deep reinforcement learning-based emotion perception intelligent teaching method and system

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
KHALID BENABBES 等: ""A New Hybrid Approach to Detect and Track Learner’s Engagement in e-Learning"", 《IEEE ACCESS》, 10 July 2023 (2023-07-10) *
KHALID BENABBES等: ""A New Hybrid Approach to Detect and Track Learner’s Engagement in e-Learning"", 《IEEE ACCESS 》, 10 July 2023 (2023-07-10) *
ZHAOYU SHOU等: "《Difficulty-Based Knowledge Point Clustering Algorithm Using Students’ Multi-Interactive Behaviors in Online Learning》", 《SEMANTIC SCHOLAR》, 4 November 2022 (2022-11-04) *
万佳: "互联网+教育在初中语文教学中的融入", 《广东省教师继续教育学会教师发展论坛学术研讨会论文集(一)》, 3 March 2023 (2023-03-03) *
梁继娟;李天煜;谢子颖;蓝兴航;: ""互联网+"背景下微信平台在高等医学院校课堂教学中的应用", 电脑知识与技术, no. 23, 15 August 2020 (2020-08-15) *
江波;李万健;李芷璇;叶韵;: "基于面部表情的学习困惑自动识别法", 开放教育研究, no. 04, 5 August 2018 (2018-08-05) *
王炜;曾红兵;: "初探信息化教学模式在《单片机原理与应用》课程中的应用", 科技视界, no. 24, 25 August 2016 (2016-08-25) *
闵秋莎;李文昊;陈雅婷;: "基于视频观看轨迹的难度感知诊断方法", 现代教育技术, no. 05, 15 May 2020 (2020-05-15) *

Also Published As

Publication number Publication date
CN118097761B (en) 2024-07-05

Similar Documents

Publication Publication Date Title
CN113283446B (en) Method and device for identifying object in image, electronic equipment and storage medium
CN110084281A (en) Image generating method, the compression method of neural network and relevant apparatus, equipment
CN109978893A (en) Training method, device, equipment and the storage medium of image, semantic segmentation network
CN111369581A (en) Image processing method, device, equipment and storage medium
CN113034495B (en) Spine image segmentation method, medium and electronic device
CN110363084A (en) A kind of class state detection method, device, storage medium and electronics
CN111582342A (en) Image identification method, device, equipment and readable storage medium
CN110457523B (en) Cover picture selection method, model training method, device and medium
CN115690615B (en) Video stream-oriented deep learning target recognition method and system
CN115331154B (en) Method, device and equipment for scoring experimental steps and readable storage medium
CN110659398A (en) Visual question-answering method based on mathematical chart data set
CN113255557A (en) Video crowd emotion analysis method and system based on deep learning
CN115457451B (en) Constant temperature and humidity test box monitoring method and device based on Internet of things
CN112308746A (en) Teaching state evaluation method and device and electronic equipment
CN112016560A (en) Overlay text recognition method and device, electronic equipment and storage medium
CN113111716A (en) Remote sensing image semi-automatic labeling method and device based on deep learning
CN107392191B (en) Method for judging identity, device and electronic equipment
CN111950457A (en) Oil field safety production image identification method and system
CN112712114B (en) Method, system, equipment and medium for analyzing instrument based on TextCNN-BiLSTM
CN112381118B (en) College dance examination evaluation method and device
CN118097761B (en) Classroom teaching difficulty analysis method and system for attention analysis
CN116580326A (en) Aviation environment safety risk prevention and control detection and early warning system
CN114612979B (en) Living body detection method and device, electronic equipment and storage medium
CN110765953A (en) Multimedia teaching teacher sign-in monitoring method and system
CN114638973A (en) Target image detection method and image detection model training method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant