CN114971972A - Deep knowledge tracking method integrating forgetting factor and exercise difficulty and intelligent terminal - Google Patents

Deep knowledge tracking method integrating forgetting factor and exercise difficulty and intelligent terminal

Info

Publication number
CN114971972A
CN114971972A
Authority
CN
China
Prior art keywords
knowledge
vector
forgetting
matrix
embedding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210681204.0A
Other languages
Chinese (zh)
Inventor
朱昶胜
朴世超
马芳兰
冯文芳
雷鹏
李天钰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
INSTITUTE OF SENSOR TECHNOLOGY GANSU ACADEMY OF SCIENCE
Lanzhou University of Technology
Original Assignee
INSTITUTE OF SENSOR TECHNOLOGY GANSU ACADEMY OF SCIENCE
Lanzhou University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by INSTITUTE OF SENSOR TECHNOLOGY GANSU ACADEMY OF SCIENCE and Lanzhou University of Technology
Priority to CN202210681204.0A
Publication of CN114971972A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/20 Education
    • G06Q50/205 Education administration or guidance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Strategic Management (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • Educational Administration (AREA)
  • Marketing (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Business, Economics & Management (AREA)
  • Evolutionary Computation (AREA)
  • Pure & Applied Mathematics (AREA)
  • Molecular Biology (AREA)
  • Mathematical Optimization (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • Educational Technology (AREA)
  • Development Economics (AREA)
  • Primary Health Care (AREA)
  • Game Theory and Decision Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention provides a deep knowledge tracking method and an intelligent terminal that integrate forgetting factors and exercise difficulty. A long short-term memory (LSTM) network fuses the student's historical knowledge states and updates the student's knowledge mastery state. Item response theory is then applied to analyze the exercise difficulty, and the exercise difficulty is combined with the student's knowledge mastery state to predict the student's future learning performance. Finally, the student's mastery level of each knowledge point is extracted from the knowledge mastery state. The invention offers strong interpretability and high prediction accuracy, and can improve the accuracy of student knowledge-state tracking results.

Description

Deep knowledge tracking method integrating forgetting factor and exercise difficulty and intelligent terminal
Technical Field
The invention relates to the field of educational knowledge tracking, and in particular to a deep knowledge tracking method and an intelligent terminal that integrate forgetting factors and exercise difficulty.
Background
With the development and popularization of Massive Open Online Courses (MOOC), a large number of online education platforms have emerged. Online education is built on internet platforms, delivers teaching to a wide population of internet learners, and applies internet technologies such as big data to realize intelligent, personalized education services. Online education platforms break through the time and space limitations of traditional education, provide richer learning content, and serve a wider group of students.
In the process of realizing intelligent education, Knowledge Tracking (KT) is a key enabling technology. In the prior art, deep knowledge tracking focuses mainly on the learner's answering record and ignores related factors that influence answering. For example, for exercises of different difficulty that cover the same knowledge point, the learner is likely to produce different results. Meanwhile, traditional deep knowledge tracking models assume that the learner's mastery of a knowledge point never changes, which is not the case in practice: over time, learned knowledge is gradually forgotten, while learning the same knowledge point again consolidates memory.
Disclosure of Invention
To solve the above technical problems, the present application provides a deep knowledge tracking method and an intelligent terminal that integrate the forgetting factor and exercise difficulty, tracking students' knowledge mastery state through a deep knowledge tracking model that fuses the forgetting factor and exercise difficulty.
To solve the above technical problems, the invention adopts the following technical solution:
A deep knowledge tracking method integrating forgetting factors and exercise difficulty comprises the following steps: acquiring an exercise vector, and obtaining an exercise embedding vector from the exercise vector and a first embedding matrix; obtaining the relevance weight vector of the exercise and the knowledge points from the exercise embedding vector and the knowledge point embedding matrix; acquiring forgetting factors, wherein the forgetting factors comprise the time interval of repeatedly learning the same knowledge point, the time interval since the last learning, the number of times the same knowledge point has been repeatedly learned, and the knowledge mastery state matrix; obtaining a forgetting vector and an update vector from the forgetting factors, and processing the knowledge mastery state matrix with them to obtain a first knowledge mastery state matrix; obtaining an answer-result embedding vector from the answer result of the exercise and a second embedding matrix; feeding the answer-result embedding vector and the relevance weight vector of the exercise-related knowledge points into a long short-term memory network to update the first knowledge mastery state matrix and obtain a second knowledge mastery state matrix; obtaining the weighted mastery vector of the knowledge points related to the next exercise from the second knowledge mastery state matrix and the relevance weight vector; and acquiring the exercise embedding vector and exercise difficulty of the next exercise, and obtaining the probability of correctly answering the next exercise from the exercise difficulty, the exercise embedding vector, the weighted mastery vector and a preset function.
In a preferred embodiment of the present invention, the method further includes:
obtaining the knowledge mastery state embedding vector of a target knowledge point from the second knowledge mastery state matrix; and obtaining the user's knowledge mastery level from the knowledge mastery state embedding vector.
In a preferred embodiment of the present invention, the step of obtaining the relevance weight vector of the exercise and the knowledge points from the exercise embedding vector and the knowledge point embedding matrix includes:
taking the inner product of the exercise embedding vector with each knowledge point embedding vector in the knowledge point embedding matrix, and passing the result through a Softmax function to obtain the relevance weight vector of the exercise and the knowledge points:
w_t(i) = Softmax(v_t^T N_t(i)),
where w_t is the relevance weight vector of the knowledge points related to the exercise, v_t is the exercise embedding vector, N_t is the knowledge point embedding matrix, and i indexes a knowledge point.
In a preferred embodiment of the present invention, the step of obtaining the forgetting vector and the update vector from the forgetting factors includes:
obtaining a first forgetting matrix from the time interval of repeatedly learning the same knowledge point, the time interval since the last learning, and the number of times the same knowledge point has been repeatedly learned:
C_t(i) = [RK(i), RL(i), KT(i)],
where RK is the time interval of repeatedly learning the same knowledge point, RL is the time interval since the last learning, KT is the number of times the same knowledge point has been repeatedly learned, and C_t is the first forgetting matrix;
obtaining a second forgetting matrix from the first forgetting matrix and the knowledge mastery state matrix M_{t-1}:
F_t(i) = [C_t(i), M_{t-1}(i)],
where F_t is the second forgetting matrix and M_{t-1} is the knowledge mastery state matrix;
obtaining the forgetting vector and the update vector from the second forgetting matrix:
fe_t(i) = Sigmoid(FE^T F_t(i) + b_fe),
fu_t(i) = Tanh(FU^T F_t(i) + b_fu),
where FE and FU are the weight vectors of the fully connected layer functions, b_fe and b_fu are the bias vectors, fe_t is the forgetting vector, and fu_t is the update vector.
In a preferred embodiment of the present invention, the first knowledge mastery state matrix M'_{t-1} is obtained by applying the forgetting vector fe_t and the update vector fu_t element-wise to the knowledge mastery state matrix M_{t-1} (the update formula is given as an image in the original publication).
In a preferred embodiment of the present invention, the second knowledge mastery state matrix M_t is obtained by feeding the answer-result embedding vector r_t and the relevance weight vector into the long short-term memory network, which updates the first knowledge mastery state matrix M'_{t-1} (the update formula is given as an image in the original publication).
In a preferred embodiment of the present invention, the exercise difficulty d_{t+1} of the next exercise is obtained by passing the embedding vector v_{t+1} of the next exercise through a fully connected layer with weight vector W_D and bias vector b_D (the formula is given as an image in the original publication).
In a preferred embodiment of the present invention, the prediction of the probability of correctly answering the next exercise proceeds as follows:
m_{t+1} = Σ_i w_{t+1}(i) · M'_t(i),
f_{t+1} = Tanh(W_1^T [m_{t+1}, v_{t+1}, d_{t+1}] + b_1),
p_{t+1} = Sigmoid(W_2^T f_{t+1} + b_2),
where m_{t+1} is the weighted mastery vector, w_{t+1} is the relevance weight vector of the knowledge points of the next exercise, M'_t is the first knowledge mastery state matrix for the next exercise, W_1 and W_2 are the weight vectors of the corresponding fully connected layer functions, b_1 and b_2 are the bias vectors of the corresponding fully connected layer functions, and p_{t+1} is the probability of correctly answering the next exercise.
In a preferred embodiment of the present invention, the modeling process of the knowledge mastery level comprises:
h_t(i) = δ_i^T M_t,
f_t(i) = Tanh(W_1^T [h_t(i), 0] + b_1),
level_t(i) = Sigmoid(W_2^T f_t(i) + b_2),
where δ_i is the unit weight vector whose i-th component is 1, W_1 and W_2 are the weight vectors of the corresponding fully connected layer functions, b_1 and b_2 are the bias vectors of the corresponding fully connected layer functions, h_t(i) is the knowledge mastery state embedding vector of knowledge point i, and level_t is the student's knowledge mastery level vector.
An intelligent terminal, comprising a memory and a processor, wherein the memory stores a deep knowledge tracking program that fuses the forgetting factor and exercise difficulty, and when the program is executed by the processor, the above deep knowledge tracking method fusing the forgetting factor and exercise difficulty is realized.
The beneficial effects of the above technical solution are as follows: the weights of the knowledge points related to an exercise and the forgetting factors are first obtained, forgetting is applied to the previously learned knowledge mastery level, and learning is then modeled through an LSTM network; the difficulty of the exercise is then obtained by analyzing past exercises, and the learner's next answer is predicted by combining these factors, thereby achieving the goal of tracking the students' knowledge mastery level.
Drawings
FIG. 1 is a schematic flow chart of the deep knowledge tracking method integrating forgetting factors and exercise difficulty according to the present invention;
FIG. 2 is a comparison of the training results of different models on the same data set according to the present invention;
FIG. 3 is the knowledge mastery level output of the model provided by the present invention;
FIG. 4 is the knowledge mastery level output of the comparative DKVMN model.
Detailed Description
To further illustrate the technical measures and effects taken by the present invention to achieve the intended objects, embodiments of the present invention will be described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements with the same or similar functions throughout. The embodiments described below are only a part of the embodiments of the present invention, and not all of them. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present invention without any creative efforts shall fall within the protection scope of the embodiments of the present invention. While the present invention has been described in connection with the preferred embodiments, it is to be understood that the invention is not limited to the disclosed embodiments, but is intended to cover various modifications, equivalent arrangements, and specific embodiments thereof.
Referring to FIG. 1, FIG. 1 is a schematic flow chart of the deep knowledge tracking method integrating forgetting factors and exercise difficulty according to the present invention.
The method is based on the Dynamic Key-Value Memory Network (DKVMN) model and combines Item Response Theory (IRT), the forgetting factor and the exercise difficulty factor to design a knowledge tracking model that tracks the change of students' knowledge mastery state and predicts their answering results. The model is divided into the following parts: weight calculation, forgetting processing, learning simulation, result prediction and knowledge level output. The specific steps are as follows:
As shown in FIG. 1, the deep knowledge tracking method integrating forgetting factors and exercise difficulty provided by this embodiment comprises the following steps:
S11: acquiring an exercise vector, and obtaining an exercise embedding vector from the exercise vector and a first embedding matrix;
S12: obtaining the relevance weight vector of the exercise and the knowledge points from the exercise embedding vector and the knowledge point embedding matrix;
S13: acquiring forgetting factors;
S14: obtaining a forgetting vector and an update vector from the forgetting factors, and processing the knowledge mastery state matrix with them to obtain a first knowledge mastery state matrix;
S15: obtaining an answer-result embedding vector from the answer result of the exercise and a second embedding matrix;
S16: feeding the answer-result embedding vector and the relevance weight vector of the exercise-related knowledge points into a long short-term memory network to update the first knowledge mastery state matrix and obtain a second knowledge mastery state matrix;
S17: obtaining the weighted mastery vector of the knowledge points related to the next exercise from the second knowledge mastery state matrix and the relevance weight vector;
S18: acquiring the exercise embedding vector and exercise difficulty of the next exercise, and obtaining the probability of correctly answering the next exercise from the exercise difficulty, the exercise embedding vector, the weighted mastery vector and a preset function.
Optionally, the forgetting factors are four factors proposed for forgetting behavior based on prior knowledge of the Ebbinghaus forgetting curve: the time interval of repeatedly learning the same knowledge point (RK), the time interval since the last learning (RL), the number of times the same knowledge point has been repeatedly learned (KT), and the mastery degree of the knowledge point (KM). The knowledge mastery state matrix is the matrix of the student's mastery state over the knowledge points; the first knowledge mastery state matrix is the student's knowledge mastery state matrix before the current learning step, obtained by processing the knowledge mastery state matrix with the forgetting factors; and the second knowledge mastery state matrix is the student's knowledge mastery state matrix after the learning step ends.
In this way, a deep knowledge tracking model integrating the forgetting factor and exercise difficulty is obtained to track the student's knowledge mastery state, and the prediction accuracy is improved.
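The first three forgetting factors can be read directly off the interaction log, while KM comes from the model's knowledge mastery state matrix. As a rough illustration, a helper along the following lines (the log format and field names are assumptions, not part of the patent) could derive RK, RL and KT from a timestamped interaction sequence:

```python
from collections import defaultdict

def forgetting_features(interactions):
    """interactions: list of (timestamp, knowledge_point_id) in chronological order.
    Returns, for each step, the factors RK, RL and KT; KM is taken from the model's
    knowledge mastery state matrix and is therefore not computed here."""
    last_seen = {}             # knowledge point -> time it was last practised
    counts = defaultdict(int)  # knowledge point -> number of repetitions so far
    last_time = None           # time of the previous interaction (any knowledge point)
    features = []
    for t, k in interactions:
        rk = t - last_seen[k] if k in last_seen else 0.0      # interval of repeated learning of the same knowledge point
        rl = t - last_time if last_time is not None else 0.0  # interval since the last learning
        kt = counts[k]                                        # repetitions of this knowledge point so far
        features.append((rk, rl, kt))
        last_seen[k] = t
        counts[k] += 1
        last_time = t
    return features

# Example: three interactions on two knowledge points, timestamps in minutes.
print(forgetting_features([(0.0, 5), (30.0, 7), (90.0, 5)]))
# -> [(0.0, 0.0, 0), (0.0, 30.0, 0), (90.0, 60.0, 1)]
```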
Optionally, the method further comprises:
obtaining the knowledge mastery state embedding vector of a target knowledge point from the second knowledge mastery state matrix; and obtaining the user's knowledge mastery level from the knowledge mastery state embedding vector.
Optionally, the step of obtaining the relevance weight vector of the exercise and the knowledge points from the exercise embedding vector and the knowledge point embedding matrix comprises:
taking the inner product of the exercise embedding vector with each knowledge point embedding vector in the knowledge point embedding matrix, and passing the result through a Softmax function to obtain the relevance weight vector of the exercise and the knowledge points:
w_t(i) = Softmax(v_t^T N_t(i)),
where w_t is the relevance weight vector of the knowledge points related to the exercise, v_t is the exercise embedding vector, N_t is the knowledge point embedding matrix, and i indexes a knowledge point.
Optionally, the step of obtaining the forgetting vector and the update vector from the forgetting factors comprises:
obtaining a first forgetting matrix from the time interval of repeatedly learning the same knowledge point, the time interval since the last learning, and the number of times the same knowledge point has been repeatedly learned:
C_t(i) = [RK(i), RL(i), KT(i)],
where RK is the time interval of repeatedly learning the same knowledge point, RL is the time interval since the last learning, KT is the number of times the same knowledge point has been repeatedly learned, and C_t is the first forgetting matrix;
obtaining a second forgetting matrix from the first forgetting matrix and the knowledge mastery state matrix M_{t-1}:
F_t(i) = [C_t(i), M_{t-1}(i)],
where F_t is the second forgetting matrix and M_{t-1} is the knowledge mastery state matrix;
obtaining the forgetting vector and the update vector from the second forgetting matrix:
fe_t(i) = Sigmoid(FE^T F_t(i) + b_fe),
fu_t(i) = Tanh(FU^T F_t(i) + b_fu),
where FE and FU are the weight vectors of the fully connected layer functions, b_fe and b_fu are the bias vectors, fe_t is the forgetting vector, and fu_t is the update vector.
Optionally, the first knowledge mastery state matrix M'_{t-1} is obtained by applying the forgetting vector fe_t and the update vector fu_t element-wise to the knowledge mastery state matrix M_{t-1} (the update formula is given as an image in the original publication).
Optionally, the second knowledge mastery state matrix M_t is obtained by feeding the answer-result embedding vector r_t and the relevance weight vector into the long short-term memory network, which updates the first knowledge mastery state matrix M'_{t-1} (the update formula is given as an image in the original publication).
Optionally, the exercise difficulty d_{t+1} of the next exercise is obtained by passing the embedding vector v_{t+1} of the next exercise through a fully connected layer with weight vector W_D and bias vector b_D (the formula is given as an image in the original publication).
Optionally, the prediction process of the probability of correctly answering the next exercise is:
m_{t+1} = Σ_i w_{t+1}(i) · M'_t(i),
f_{t+1} = Tanh(W_1^T [m_{t+1}, v_{t+1}, d_{t+1}] + b_1),
p_{t+1} = Sigmoid(W_2^T f_{t+1} + b_2),
where m_{t+1} is the weighted mastery vector, w_{t+1} is the relevance weight vector of the knowledge points of the next exercise, M'_t is the first knowledge mastery state matrix for the next exercise, W_1 and W_2 are the weight vectors of the corresponding fully connected layer functions, b_1 and b_2 are the bias vectors of the corresponding fully connected layer functions, and p_{t+1} is the probability of correctly answering the next exercise.
Optionally, the modeling process of the knowledge mastery level is:
h_t(i) = δ_i^T M_t,
f_t(i) = Tanh(W_1^T [h_t(i), 0] + b_1),
level_t(i) = Sigmoid(W_2^T f_t(i) + b_2),
where δ_i is the unit weight vector whose i-th component is 1, W_1 and W_2 are the weight vectors of the corresponding fully connected layer functions, b_1 and b_2 are the bias vectors of the corresponding fully connected layer functions, h_t(i) is the knowledge mastery state embedding vector of knowledge point i, and level_t is the student's knowledge mastery level vector.
Illustratively, the deep knowledge tracking method fusing forgetting factors and exercise difficulty in this embodiment comprises the following steps:
Step 1: weight calculation;
First, the exercise vector e_t is multiplied by the exercise embedding matrix A (i.e. the first embedding matrix) to obtain a d_k-dimensional exercise embedding vector v_t. Then the inner product of the exercise embedding vector with each knowledge point embedding vector is taken, and the result is passed through a Softmax function to obtain the relevance weight vector w_t of the exercise and the knowledge points. The formulas are as follows:
v_t = e_t × A,
w_t(i) = Softmax(v_t^T N_t(i)),
where e_t is the exercise vector, the embedding matrix A has dimension (d_k × |E|), N_t is the knowledge point embedding matrix, and each d_k-dimensional vector in N_t is a knowledge point embedding vector.
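A minimal sketch of this weight-calculation step, assuming the shapes discussed above (A of size d_k × |E|, one d_k-dimensional row of N per knowledge point) and randomly initialised tensors purely for illustration, could look as follows:

```python
import torch

num_exercises, num_kp, d_k = 200, 123, 16
A = torch.randn(d_k, num_exercises)       # first (exercise) embedding matrix, d_k x |E|
N = torch.randn(num_kp, d_k)              # knowledge point embedding matrix, one row per knowledge point

e_t = torch.zeros(num_exercises)
e_t[42] = 1.0                             # one-hot exercise vector for exercise 42

v_t = A @ e_t                             # exercise embedding vector v_t, shape (d_k,)
w_t = torch.softmax(N @ v_t, dim=0)       # relevance weight w_t(i) of every knowledge point, sums to 1
```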
Step 2: forgetting processing;
Based on prior knowledge of the Ebbinghaus forgetting curve, four forgetting factors are proposed for forgetting behavior: the time interval of repeatedly learning the same knowledge point (RK), the time interval since the last learning (RL), the number of times the same knowledge point has been repeatedly learned (KT), and the mastery degree of the knowledge point (KM).
Based on these four factors, a forgetting matrix of each student for each knowledge point is first obtained. The three factors RK, RL and KT are combined into the first forgetting matrix C_t. The student's knowledge mastery state matrix M_{t-1} expresses the fourth factor KM that influences forgetting behavior. C_t is then combined with KM to obtain the second forgetting matrix F_t, which expresses all four factors affecting forgetting behavior. The formulas are as follows:
C_t(i) = [RK(i), RL(i), KT(i)],
F_t(i) = [C_t(i), M_{t-1}(i)],
where the first forgetting matrix C_t has dimension (d_c × |K|) and C_t(i) is the vector of the first three forgetting factors for the i-th knowledge point.
Forgetting processing is then applied to the student's knowledge mastery state matrix M_{t-1}. First, a Sigmoid function converts the student's forgetting factors F_t(i) for knowledge point i into a forgetting vector fe_t(i); then a Tanh function converts F_t(i) into an update vector fu_t(i); finally, M_{t-1} is processed according to the forgetting vector fe_t(i) and the update vector fu_t(i). The modeling process is as follows:
fe_t(i) = Sigmoid(FE^T F_t(i) + b_fe),
fu_t(i) = Tanh(FU^T F_t(i) + b_fu),
where FE and FU are weight vectors of dimension (d_v + d_c) × d_v, and the bias vectors b_fe and b_fu are d_v-dimensional. The result of this processing is the first knowledge mastery state matrix M'_{t-1}, i.e. the student's knowledge mastery state matrix before the current learning step (the element-wise update formula is given as an image in the original publication). The Sigmoid function and the Tanh function are activation functions in the fully connected layers.
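The following is a minimal sketch of the forgetting processing under the notation above. The way fe_t and fu_t are combined with M_{t-1} is given only as an image in the published document, so the element-wise rule in the last line is an assumption for illustration:

```python
import torch
import torch.nn as nn

d_c, d_v, num_kp = 3, 16, 123
fe_layer = nn.Linear(d_v + d_c, d_v)      # FE and b_fe
fu_layer = nn.Linear(d_v + d_c, d_v)      # FU and b_fu

M_prev = torch.zeros(num_kp, d_v)         # knowledge mastery state matrix M_{t-1}
C_t = torch.randn(num_kp, d_c)            # first forgetting matrix: [RK, RL, KT] per knowledge point

F_t = torch.cat([C_t, M_prev], dim=1)     # second forgetting matrix, one row per knowledge point
fe_t = torch.sigmoid(fe_layer(F_t))       # forgetting vector fe_t(i)
fu_t = torch.tanh(fu_layer(F_t))          # update vector fu_t(i)
M_prime = fe_t * M_prev + fu_t            # assumed combination -> first knowledge mastery state matrix
```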
Step 3: learning simulation;
The student's answer at time t is expressed as the tuple (e_t, a_t). This tuple is multiplied by the answer-result embedding matrix B (i.e. the second embedding matrix) to obtain a d_v-dimensional answer-result embedding vector r_t. Then r_t and the relevance weight vector w_t of the exercise and the knowledge points are used as input, and the student's knowledge mastery state is updated through a Long Short-Term Memory (LSTM) network to complete the learning modeling. The modeling process is as follows:
r_t = (e_t, a_t) × B,
where M'_{t-1} is the first knowledge mastery state matrix, i.e. the student's knowledge mastery state matrix before the learning step, and M_t is the second knowledge mastery state matrix, i.e. the student's knowledge mastery state matrix after the learning step; the LSTM update that maps M'_{t-1} to M_t is given as an image in the original publication. The answer-result embedding matrix B has dimension (d_v × 2|E|).
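A minimal sketch of the learning simulation follows. The patent states only that r_t and w_t are the LSTM inputs and that the state goes from M'_{t-1} to M_t; feeding a w_t-weighted copy of r_t to every knowledge-point slot of an LSTMCell is an assumption made here for illustration:

```python
import torch
import torch.nn as nn

d_v, num_kp, num_exercises = 16, 123, 200
B = nn.Embedding(2 * num_exercises, d_v)          # answer-result embedding matrix (2|E| rows of size d_v)
lstm = nn.LSTMCell(input_size=d_v, hidden_size=d_v)

def learning_update(e_idx, answer, w_t, M_prime, cell_state):
    """e_idx: exercise index; answer: 0 or 1; w_t: (|K|,) relevance weights;
    M_prime: (|K|, d_v) first knowledge mastery state; cell_state: (|K|, d_v)."""
    r_t = B(torch.tensor(e_idx + answer * num_exercises))  # answer-result embedding r_t, shape (d_v,)
    inp = w_t.unsqueeze(1) * r_t.unsqueeze(0)              # per-knowledge-point weighted input
    M_t, cell_state = lstm(inp, (M_prime, cell_state))     # second knowledge mastery state matrix M_t
    return M_t, cell_state
```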
Step 4: result prediction;
Based on the prior of Item Response Theory (IRT), the exercise difficulty is used as a reference factor in the prediction process, rather than relying on the answer results alone. The exercise difficulty d_{t+1} is obtained from the embedding vector of the next exercise through a fully connected layer whose weight vector and bias vector are W_D and b_D, respectively (the expression is given as an image in the original publication).
Result prediction is the performance prediction for the next exercise e_{t+1} to be answered. First, the knowledge point relevance weight vector w_{t+1} (i.e. the relevance weights of the knowledge points of the next exercise e_{t+1}) and the student's knowledge mastery state matrix M'_t (i.e. the student's knowledge mastery state matrix before the next answer, obtained by applying forgetting processing to the second knowledge mastery state matrix in the same way as the first knowledge mastery state matrix is obtained) are weighted and summed to obtain the weighted mastery vector m_{t+1} of the knowledge points related to the exercise. Then m_{t+1}, v_{t+1} (i.e. the exercise embedding vector of the next exercise e_{t+1}) and d_{t+1} are combined into a new vector [m_{t+1}, v_{t+1}, d_{t+1}], which is input to a Tanh function, and the result is finally input to a Sigmoid function to obtain the final prediction. The modeling process is as follows:
m_{t+1} = Σ_i w_{t+1}(i) · M'_t(i),
f_{t+1} = Tanh(W_1^T [m_{t+1}, v_{t+1}, d_{t+1}] + b_1),
p_{t+1} = Sigmoid(W_2^T f_{t+1} + b_2),
where W_1 and W_2 are the weight vectors of the corresponding fully connected layer functions, b_1 and b_2 are the bias vectors of the corresponding fully connected layer functions, and p_{t+1} is the probability that the student correctly answers exercise e_{t+1}.
Step 5: knowledge level output;
First, the student's mastery embedding vector for knowledge point i is extracted, and the student's knowledge mastery level is then obtained from this embedding vector. The modeling process is as follows:
h_t(i) = δ_i^T M_t,
f_t(i) = Tanh(W_1^T [h_t(i), 0] + b_1),
level_t(i) = Sigmoid(W_2^T f_t(i) + b_2),
where the unit vector δ_i = (0, 0, 1, ..., 0), whose i-th component is 1, is used as the weight vector; W_1 and W_2 are the weight vectors of the corresponding fully connected layer functions, and b_1 and b_2 are the bias vectors of the corresponding fully connected layer functions; h_t(i) is the embedding vector of the student's mastery degree of knowledge point i; the 0 vector has no practical meaning and is only used to pad the vector dimension; and level_t is the student's |K|-dimensional knowledge mastery level vector.
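A minimal sketch of the knowledge-level output is given below; the width of the zero padding (chosen so the prediction layers' input size is matched) is an assumption for illustration:

```python
import torch
import torch.nn as nn

d_v, d_k, num_kp = 16, 16, 123
hidden_layer = nn.Linear(d_v + d_k + 1, d_v)   # reuses the W_1/b_1 layer shape from the prediction step
output_layer = nn.Linear(d_v, 1)               # W_2, b_2

def knowledge_level(M, i):
    """M: (|K|, d_v) second knowledge mastery state matrix, i: knowledge point index."""
    delta = torch.zeros(num_kp)
    delta[i] = 1.0                                   # unit vector selecting knowledge point i
    h_i = delta @ M                                  # mastery embedding of knowledge point i, (d_v,)
    padded = torch.cat([h_i, torch.zeros(d_k + 1)])  # zero padding to fill the vector dimension
    f = torch.tanh(hidden_layer(padded))
    return torch.sigmoid(output_layer(f)).item()     # mastery level of knowledge point i

# level_t is then the |K|-dimensional vector of these per-knowledge-point levels.
```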
This embodiment performs experiments using four real data sets: ASSISTment2009, ASSISTment2012, EdNet and Slepemapy.cz. The experimental hardware environment is shown in Table 1:
table 1: hardware environment of experiment
Figure BDA00036984902100000912
The ASSISTment2009 data set is online learning data provided by the ASSISTments platform beginning in 2009. The ASSISTment2012 data set is online learning data provided by the ASSISTments platform in 2012 and 2013; unlike ASSISTment2009, each exercise in ASSISTment2012 corresponds to only one knowledge point. The EdNet data set was collected by the cross-platform AI tutoring system Santa between 2017 and 2019. The Slepemapy.cz data set comes from an online system for geography exercises that collected data between 2014 and 2015.
Because some of the data sets are very large, they were filtered and screened; the statistics of the filtered data sets are shown in Table 2:
Table 2: Statistics of the data sets (the table content is provided as an image in the original publication).
In this embodiment the model is trained with the following parameter settings: batch size 30, 320 memory matrix columns, hidden vector size 20, and 200 exercises processed at a time. An exponentially decayed learning rate is used, with initial learning rate init_learning = 0.01, a decay period of 10000 steps, and a learning rate of 0.001 after decay.
For the data set ASSISTment2009, the knowledge point embedding matrix N has 123 columns and d = d_k = d_v = 16;
for the data set ASSISTment2012, the knowledge point embedding matrix N has 265 columns and d = d_k = d_v = 32;
for the data set EdNet, the knowledge point embedding matrix N has 188 columns and d = d_k = d_v = 16;
for the data set Slepemapy.cz, the knowledge point embedding matrix N has 1067 columns and d = d_k = d_v = 128.
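The exponentially decayed learning rate described above can be sketched as follows; interpreting the reported numbers as a decay from 0.01 towards 0.001 over 10000 steps is an assumption about how they combine:

```python
def exponential_decay(step, init_lr=0.01, final_lr=0.001, decay_steps=10000):
    """Multiplicative decay from init_lr to final_lr over decay_steps steps."""
    rate = (final_lr / init_lr) ** (min(step, decay_steps) / decay_steps)
    return init_lr * rate

print(exponential_decay(0))       # 0.01
print(exponential_decay(5000))    # ~0.0032
print(exponential_decay(10000))   # 0.001
```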
The model provided by the invention is compared with other baseline models, including the BKT, DKT and DKVMN models. The experimental results are reported with two evaluation metrics, the model prediction accuracy (ACC) and the area under the ROC curve (AUC); the comparison results are shown in Table 3:
Table 3: Comparison results of different knowledge tracking models (the table content is provided as an image in the original publication).
The best results are shown in bold. Referring to FIG. 2, the AUC curves in FIG. 2 show more intuitively that the model proposed by the present invention (FDKT-ED) performs better than the other three models, and its AUC stabilizes after about 100 iterations.
The comparative experiments show that the BKT model has the worst prediction performance on two of the data sets, indicating that modeling the knowledge mastery level with binary variables, as in traditional knowledge tracking, is limited. The DKT model uses a Recurrent Neural Network (RNN) to help construct the overall knowledge level, which optimizes the knowledge tracking modeling process to some extent, but it cannot model the student's mastery level of each individual knowledge point. DKVMN solves these problems with a memory-augmented neural network, but it is still imperfect in approximating students' learning behavior: DKVMN assumes by default that the student's mastery level of each knowledge point never changes and ignores forgetting factors; meanwhile, in the prediction step it does not consider the influence of the exercise itself on whether the student answers correctly. Therefore, the prediction performance of the model provided by the invention is better than that of these three models.
The knowledge level output of the model provided by the invention is compared with that of the DKVMN model. Referring to FIG. 3 and FIG. 4, the two figures show the knowledge level outputs of the two models, respectively. In the experiment the same data are input into both models; each input is represented by a tuple (k_t, a_t), where k_t is the knowledge point being learned and a_t is the answer result. In the output, the abscissa is the exercise sequence, the ordinate is the corresponding knowledge point, and the color scale indicates the degree of knowledge mastery.
The figures show that a student's knowledge mastery level increases after correctly answering an exercise and decreases after answering incorrectly, which demonstrates that both the model provided by the invention and the DKVMN model update the student's knowledge mastery state after each learning behavior, thereby modeling the student's learning behavior.
However, in the model proposed by the present invention, when a knowledge point is not practised for a period of time, the corresponding knowledge mastery state decreases, whereas in the DKVMN model the corresponding knowledge mastery state does not change over that period, which means that DKVMN does not consider the student's forgetting behavior.
An ablation experiment is carried out with the model provided by the invention to analyze the influence of the exercise difficulty and the forgetting factors on its prediction ability. The experiment is performed on the ASSISTment2012 data set. The four forgetting factors correspond, respectively, to the time interval since the same knowledge point was last learned, the time interval since the last learning, the number of times the knowledge point has been repeatedly learned, and the original degree of knowledge mastery; the experimental results are expressed as AUC values and are shown in Table 4:
Table 4: Summary of ablation experiment results (the table content is provided as an image in the original publication).
The comparison shows that removing the exercise difficulty factor causes the most obvious drop in AUC, while each of the remaining factors affects the model's prediction ability to some extent and optimizes the model construction process. The exercise difficulty factor therefore plays the largest role in improving the model's prediction ability, and the four forgetting factors play roughly equal roles.
The deep knowledge tracking method fusing forgetting factors and exercise difficulty provided by the invention first obtains the weights of the knowledge points related to an exercise, then derives the forgetting factors based on the Ebbinghaus forgetting curve, applies forgetting to the learned knowledge mastery level, and models learning through an LSTM network. The difficulty of the exercise is then obtained by analyzing past exercises, and the learner's next answer is predicted by combining these factors, achieving the goal of tracking students' knowledge mastery, with higher prediction accuracy and improved accuracy of students' knowledge-state tracking results.
The invention also provides an intelligent terminal comprising a memory and a processor, wherein the memory stores a deep knowledge tracking program fusing the forgetting factor and exercise difficulty, and when the program is executed by the processor, the deep knowledge tracking method fusing the forgetting factor and exercise difficulty of any of the above embodiments is realized.
The invention also provides a computer-readable storage medium storing a deep knowledge tracking program fusing the forgetting factor and exercise difficulty; when the program is executed by a processor, the steps of the deep knowledge tracking method fusing the forgetting factor and exercise difficulty of any of the above embodiments are realized.
The embodiments of the intelligent terminal and the computer-readable storage medium provided by the invention may include all technical features of any embodiment of the deep knowledge tracking method integrating the forgetting factor and exercise difficulty; their expanded description is essentially the same as that of the method embodiments and is not repeated here.
It should be understood that the foregoing scenarios are only examples, and do not limit the application scenarios of the technical solutions provided in the embodiments of the present invention, and the technical solutions of the present invention may also be applied to other scenarios. For example, as can be known by those skilled in the art, with the evolution of system architecture and the emergence of new service scenarios, the technical solution provided by the embodiment of the present invention is also applicable to similar technical problems.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the present invention, the same or similar term concepts, technical solutions and/or application scenario descriptions are generally only described in detail at the first occurrence, and when the description is repeated later, the detailed description is not repeated in general for brevity, and when understanding the technical solutions and the like of the present invention, reference may be made to the related detailed description before the description for the same or similar term concepts, technical solutions and/or application scenario descriptions and the like which are not described in detail later.
The technical features of the technical solution of the present invention can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the embodiments are described; however, as long as there is no contradiction in a combination of technical features, that combination should be considered within the scope of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A deep knowledge tracking method fusing forgetting factors and exercise difficulty, characterized by comprising the following steps:
acquiring an exercise vector, and obtaining an exercise embedding vector from the exercise vector and a first embedding matrix;
obtaining the relevance weight vector of the exercise and the knowledge points from the exercise embedding vector and the knowledge point embedding matrix;
acquiring forgetting factors, wherein the forgetting factors comprise: the time interval of repeatedly learning the same knowledge point, the time interval since the last learning, the number of times the same knowledge point has been repeatedly learned, and the knowledge mastery state matrix;
obtaining a forgetting vector and an update vector from the forgetting factors, and processing the knowledge mastery state matrix with them to obtain a first knowledge mastery state matrix;
obtaining an answer-result embedding vector from the answer result of the exercise and a second embedding matrix;
feeding the answer-result embedding vector and the relevance weight vector of the exercise-related knowledge points into a long short-term memory network to update the first knowledge mastery state matrix and obtain a second knowledge mastery state matrix;
obtaining the weighted mastery vector of the knowledge points related to the next exercise from the second knowledge mastery state matrix and the relevance weight vector;
and acquiring the exercise embedding vector and exercise difficulty of the next exercise, and obtaining the probability of correctly answering the next exercise from the exercise difficulty, the exercise embedding vector, the weighted mastery vector and a preset function.
2. The deep knowledge tracking method fusing forgetting factors and exercise difficulty according to claim 1, characterized in that the method further comprises:
obtaining the knowledge mastery state embedding vector of a target knowledge point from the second knowledge mastery state matrix;
and obtaining the user's knowledge mastery level from the knowledge mastery state embedding vector.
3. The deep knowledge tracking method fusing forgetting factors and exercise difficulty according to claim 1, characterized in that the step of obtaining the relevance weight vector of the exercise and the knowledge points from the exercise embedding vector and the knowledge point embedding matrix comprises:
taking the inner product of the exercise embedding vector with each knowledge point embedding vector in the knowledge point embedding matrix, and passing the result through a Softmax function to obtain the relevance weight vector of the exercise and the knowledge points:
w_t(i) = Softmax(v_t^T N_t(i)),
where w_t is the relevance weight vector of the knowledge points related to the exercise, v_t is the exercise embedding vector, N_t is the knowledge point embedding matrix, and i indexes a knowledge point.
4. The deep knowledge tracking method fusing forgetting factors and exercise difficulty according to claim 3, characterized in that the step of obtaining the forgetting vector and the update vector from the forgetting factors comprises:
obtaining a first forgetting matrix from the time interval of repeatedly learning the same knowledge point, the time interval since the last learning, and the number of times the same knowledge point has been repeatedly learned:
C_t(i) = [RK(i), RL(i), KT(i)],
where RK is the time interval of repeatedly learning the same knowledge point, RL is the time interval since the last learning, KT is the number of times the same knowledge point has been repeatedly learned, and C_t is the first forgetting matrix;
obtaining a second forgetting matrix from the first forgetting matrix and the knowledge mastery state matrix M_{t-1}:
F_t(i) = [C_t(i), M_{t-1}(i)],
where F_t is the second forgetting matrix and M_{t-1} is the knowledge mastery state matrix;
obtaining the forgetting vector and the update vector from the second forgetting matrix:
fe_t(i) = Sigmoid(FE^T F_t(i) + b_fe),
fu_t(i) = Tanh(FU^T F_t(i) + b_fu),
where FE and FU are the weight vectors of the fully connected layer functions, b_fe and b_fu are the bias vectors, fe_t is the forgetting vector, and fu_t is the update vector.
5. The deep knowledge tracking method fusing forgetting factors and exercise difficulty according to any one of claims 1 to 4, characterized in that the first knowledge mastery state matrix M'_{t-1} is obtained by applying the forgetting vector fe_t and the update vector fu_t element-wise to the knowledge mastery state matrix M_{t-1} (the update formula is given as an image in the original publication).
6. The deep knowledge tracking method fusing forgetting factors and exercise difficulty according to claim 5, characterized in that the second knowledge mastery state matrix M_t is obtained by feeding the answer-result embedding vector r_t and the relevance weight vector into the long short-term memory network, which updates the first knowledge mastery state matrix M'_{t-1} (the update formula is given as an image in the original publication).
7. The deep knowledge tracking method fusing forgetting factors and exercise difficulty according to claim 6, characterized in that the exercise difficulty d_{t+1} of the next exercise is obtained by passing the embedding vector v_{t+1} of the next exercise through a fully connected layer with weight vector W_D and bias vector b_D (the formula is given as an image in the original publication).
8. The deep knowledge tracking method fusing forgetting factors and exercise difficulty according to claim 7, characterized in that the prediction process of the probability of correctly answering the next exercise is:
m_{t+1} = Σ_i w_{t+1}(i) · M'_t(i),
f_{t+1} = Tanh(W_1^T [m_{t+1}, v_{t+1}, d_{t+1}] + b_1),
p_{t+1} = Sigmoid(W_2^T f_{t+1} + b_2),
where m_{t+1} is the weighted mastery vector, w_{t+1} is the relevance weight vector of the knowledge points of the next exercise, M'_t is the first knowledge mastery state matrix for the next exercise, W_1 and W_2 are the weight vectors of the corresponding fully connected layer functions, b_1 and b_2 are the bias vectors of the corresponding fully connected layer functions, and p_{t+1} is the probability of correctly answering the next exercise.
9. The deep knowledge tracking method fusing forgetting factors and exercise difficulty according to claim 2, characterized in that the modeling process of the knowledge mastery level comprises:
h_t(i) = δ_i^T M_t,
f_t(i) = Tanh(W_1^T [h_t(i), 0] + b_1),
level_t(i) = Sigmoid(W_2^T f_t(i) + b_2),
where δ_i is the unit weight vector whose i-th component is 1, W_1 and W_2 are the weight vectors of the corresponding fully connected layer functions, b_1 and b_2 are the bias vectors of the corresponding fully connected layer functions, h_t(i) is the knowledge mastery state embedding vector of knowledge point i, and level_t is the student's knowledge mastery level vector.
10. An intelligent terminal, characterized in that the intelligent terminal comprises: a memory and a processor, wherein the memory stores a deep knowledge tracking program fusing a forgetting factor and exercise difficulty, and when the deep knowledge tracking program fusing a forgetting factor and exercise difficulty is executed by the processor, the steps of the deep knowledge tracking method fusing a forgetting factor and exercise difficulty according to any one of claims 1 to 9 are realized.
CN202210681204.0A 2022-06-16 2022-06-16 Deep knowledge tracking method integrating forgetting factor and exercise difficulty and intelligent terminal Pending CN114971972A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210681204.0A CN114971972A (en) 2022-06-16 2022-06-16 Deep knowledge tracking method integrating forgetting factor and exercise difficulty and intelligent terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210681204.0A CN114971972A (en) 2022-06-16 2022-06-16 Deep knowledge tracking method integrating forgetting factor and exercise difficulty and intelligent terminal

Publications (1)

Publication Number Publication Date
CN114971972A true CN114971972A (en) 2022-08-30

Family

ID=82964172

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210681204.0A Pending CN114971972A (en) 2022-06-16 2022-06-16 Deep knowledge tracking method integrating forgetting factor and exercise difficulty and intelligent terminal

Country Status (1)

Country Link
CN (1) CN114971972A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116976434A (en) * 2023-07-05 2023-10-31 长江大学 Knowledge point diffusion representation-based knowledge tracking method and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114385801A (en) * 2021-12-27 2022-04-22 河北工业大学 Knowledge tracking method and system based on hierarchical refinement LSTM network
CN114971066A (en) * 2022-06-16 2022-08-30 兰州理工大学 Knowledge tracking method and system integrating forgetting factor and learning ability

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114385801A (en) * 2021-12-27 2022-04-22 河北工业大学 Knowledge tracking method and system based on hierarchical refinement LSTM network
CN114971066A (en) * 2022-06-16 2022-08-30 兰州理工大学 Knowledge tracking method and system integrating forgetting factor and learning ability

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
朴世超: "Research on a multi-feature knowledge tracking method based on temporal convolutional networks", China Master's Theses Full-text Database, Information Science and Technology, no. 03, 15 March 2024 (2024-03-15), pages 140-141 *
李晓光 et al.: "LFKT: A deep knowledge tracing model fusing learning and forgetting", Journal of Software, vol. 32, no. 03, 11 March 2021 (2021-03-11), pages 818-830 *
艾方哲: "Design of an intelligent tutoring algorithm based on knowledge tracking", China Master's Theses Full-text Database, Information Science and Technology, no. 01, 15 January 2020 (2020-01-15), pages 140-140 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116976434A (en) * 2023-07-05 2023-10-31 长江大学 Knowledge point diffusion representation-based knowledge tracking method and storage medium
CN116976434B (en) * 2023-07-05 2024-02-20 长江大学 Knowledge point diffusion representation-based knowledge tracking method and storage medium

Similar Documents

Publication Publication Date Title
CN110807469B (en) Knowledge tracking method and system integrating long-time memory and short-time memory with Bayesian network
CN110941723A (en) Method, system and storage medium for constructing knowledge graph
CN110991645A (en) Self-adaptive learning method, system and storage medium based on knowledge model
CN113610235B (en) Adaptive learning support device and method based on depth knowledge tracking
CN113724110A (en) Interpretable depth knowledge tracking method and system and application thereof
CN114385801A (en) Knowledge tracking method and system based on hierarchical refinement LSTM network
CN109840595B (en) Knowledge tracking method based on group learning behavior characteristics
CN112085168A (en) Knowledge tracking method and system based on dynamic key value gating circulation network
CN107544960A (en) A kind of inference method activated based on Variable-Bindings and relation
CN115510286A (en) Multi-relation cognitive diagnosis method based on graph convolution network
CN114861754A (en) Knowledge tracking method and system based on external attention mechanism
CN115545160A (en) Knowledge tracking method and system based on multi-learning behavior cooperation
CN114971066A (en) Knowledge tracking method and system integrating forgetting factor and learning ability
CN115544158A (en) Multi-knowledge-point dynamic knowledge tracking method applied to intelligent education system
CN114971972A (en) Deep knowledge tracking method integrating forgetting factor and exercise difficulty and intelligent terminal
CN113378581B (en) Knowledge tracking method and system based on multivariate concept attention model
CN111985560B (en) Knowledge tracking model optimization method, system and computer storage medium
CN117473041A (en) Programming knowledge tracking method based on cognitive strategy
CN116402134A (en) Knowledge tracking method and system based on behavior perception
CN114925218A (en) Learner knowledge cognitive structure dynamic mining method based on adaptive graph
CN116361744A (en) Learner cognition tracking method and system for learning procedural evaluation
CN114997461B (en) Time-sensitive answer correctness prediction method combining learning and forgetting
CN115795015A (en) Comprehensive knowledge tracking method for enhancing test question difficulty
CN115439281A (en) Learner knowledge cognitive structure evaluation method and system based on space-time diagram convolution
Zhang et al. Neural Attentive Knowledge Tracing Model for Student Performance Prediction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination