CN110992229B - Scientific teaching effect evaluation method based on knowledge migration - Google Patents

Scientific teaching effect evaluation method based on knowledge migration

Info

Publication number
CN110992229B
CN110992229B
Authority
CN
China
Prior art keywords
teaching effect
teaching
training
matrix
label
Prior art date
Legal status
Active
Application number
CN201911259418.3A
Other languages
Chinese (zh)
Other versions
CN110992229A (en)
Inventor
武新伟
梁琰
高昕
葛菲
Current Assignee
Anhui Xinzhi Digital Technology Co ltd
Original Assignee
Anhui Xinzhi Digital Media Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Anhui Xinzhi Digital Media Information Technology Co ltd
Priority to CN201911259418.3A
Publication of CN110992229A
Application granted
Publication of CN110992229B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/20Education
    • G06Q50/205Education administration or guidance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0639Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06393Score-carding, benchmarking or key performance indicator [KPI] analysis


Abstract

The invention discloses a scientific teaching effect evaluation method based on knowledge migration, comprising two main stages: collecting a large amount of data to build a general teaching effect evaluation model, and collecting a small amount of data to build a special teaching effect evaluation model. The first stage comprises four steps: collecting data and initializing, generating pseudo labels, training the teaching effect prediction classifiers, and predicting the teaching effect. The second stage likewise comprises four steps: collecting data and initializing, fine-tuning parameters and computing a domain difference index, training the teaching effect prediction classifiers, and predicting the teaching effect. The advantages of the invention are that it addresses the class imbalance, long training time and uncertain smoothness assumption encountered when building a teaching effect prediction model for teaching evaluation, and that a special model adapted to a new environment can be obtained quickly from only a small amount of data collected in that environment, giving the method a degree of self-adaptation.

Description

Scientific teaching effect evaluation method based on knowledge migration
Technical Field
The invention relates to the technical field of teaching, in particular to a scientific teaching effect evaluation method based on knowledge migration.
Background
Teaching evaluation is the activity of judging the value of the teaching process and its results against the teaching objectives, in the service of teaching decisions; it is the process of judging the actual or potential value of teaching activities. As living standards rise, people pay ever more attention to education, yet the evaluation of teachers' teaching effect in schools mostly considers students' grades and often neglects other factors, and is therefore not comprehensive.
Patent CN109559260A, "A teaching effect evaluation system", discloses a system comprising a test module and an evaluation module: the test module examines students' mid-term and end-of-term scores, while students, leaders or colleagues, and students' parents use the evaluation module to comprehensively score the teacher's teaching method, teaching environment, teaching management and teaching attitude. Evaluating the teacher's teaching effect thus considers not only students' scores but also the teacher's everyday teaching practice, and the multi-sided scoring by students, leaders or colleagues and parents avoids a teacher receiving a negative evaluation merely because students' examination results were not ideal. Patent CN110457641A, "A practical teaching effect evaluation system for the teaching classroom", discloses a system that, by providing a test and exercise entry module, a student evaluation unit, an information entry unit, an evaluation data comparison module and a data integration management unit, compares the test results of different students with the results of in-class evaluation, tracking actual classroom effectiveness in real time so that the teacher understands students' cognition, experience and mood during learning, and the evaluation result better matches reality.
Patent CN110084508A, "A method and device for evaluating classroom teaching effects", collects electroencephalogram (EEG) data from every classroom participant over the same time period at the same sampling frequency, computes the correlation between the EEG data of each pair of participants as a pairwise consistency index, averages these to obtain an overall consistency index across all participants, and evaluates the classroom teaching effect through this index.
From the above, although some studies have recognized the one-sidedness of existing teaching effect evaluation, the evaluation is still not scientific enough. On the one hand, the teaching effect is influenced by many factors whose interactions are complex and cannot be captured by simple thresholding of a few indexes; on the other hand, invasive sensing such as EEG electrodes is costly and may itself disturb the teaching process. A more advanced and scientific teaching effect evaluation method and system is therefore needed.
Disclosure of Invention
The invention provides a scientific teaching effect evaluation method based on knowledge migration, which specifically comprises the following steps:
step 1: general evaluation model building
Collecting a large amount of data, and establishing a general model for teaching effect evaluation, which comprises the following steps:
step 101: collect teaching content, mode and effect data; establish a labeled sample set {(x_p, y_p)} and an unlabeled sample set {x_q}, p ∈ {1, 2, ..., l}, q ∈ {l+1, l+2, ..., n}, where l denotes the number of labeled samples and n the total number of samples. Each sample x_p, x_q ∈ R^15 is a 15-dimensional vector whose features are: teaching subject, depth, blackboard writing (yes/no), multimedia (yes/no), assessment mode, teacher gender, teacher age, teacher education, average homework time, class size, degree of interaction, attendance rate, student feedback, average score, and language of instruction; y_p ∈ {χ_1, χ_2, χ_3, χ_4, χ_5} is the label denoting the teaching effect, where χ_1, ..., χ_5 stand for very poor, poor, normal, good and very good respectively, and R denotes the field of real numbers.
Manually set the maximum number of training rounds T > 0 (a positive integer), the training counter t = 0, the robustness factor μ ∈ (0,1), the loss coefficient C > 0, the propagation coefficient α ∈ (0,1), and the number of hidden nodes N (a positive integer greater than 15). Randomly generate N input weights w_1, w_2, ..., w_N, each a 15-dimensional column vector (matching the sample dimension), and N input biases b_1, b_2, ..., b_N, each a real number. Let L_p = 0.
Step 102: for the label-free sample set in step 101
Figure GDA0002794699500000025
And (3) labeling a pseudo label, specifically as follows:
step 10201: randomly taking a value for the Gaussian bandwidth coefficient sigma >0, and establishing an affinity matrix W as follows:
Figure GDA0002794699500000026
wherein (W)ijAn element in row i and column j of W, i, j being 1, 2.
Step 10202: calculating a final label matrix F:
F=(1-α)(I-αS)-1Y
wherein, I is an identity matrix, D is a degree matrix of W, D is a diagonal matrix, and the ith diagonal element is
Figure GDA0002794699500000027
Figure GDA0002794699500000028
Transmission matrix
Figure GDA0002794699500000029
Y is an initial label matrix whose elements are
Figure GDA00027946995000000210
Wherein r is 1,2,3,4, 5;
Figure GDA00027946995000000211
is a sample xqWherein the pseudo tag of
Figure GDA00027946995000000212
Figure GDA00027946995000000213
FqrIs the qth row and the r column element of the matrix F;
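The pseudo-labelling of step 102 matches the standard label-propagation scheme F = (1-α)(I-αS)^{-1}Y. A minimal sketch under that reading, with illustrative names and a small feature dimension standing in for the 15-dimensional vectors:

```python
import numpy as np

def propagate_labels(X, y, l, alpha=0.5, sigma=1.0):
    """Pseudo-label the unlabeled samples X[l:] from the labeled X[:l] (labels y in 1..5)."""
    n = X.shape[0]
    # Affinity matrix: W_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)), W_ii = 0
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Transmission matrix S = D^{-1/2} W D^{-1/2}, D the degree matrix of W
    d = W.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = Dinv @ W @ Dinv
    # Initial label matrix Y (n x 5): one-hot rows for labeled samples, zeros otherwise
    Y = np.zeros((n, 5))
    Y[np.arange(l), y[:l] - 1] = 1.0
    # Final label matrix F = (1 - alpha) (I - alpha S)^{-1} Y
    F = (1 - alpha) * np.linalg.solve(np.eye(n) - alpha * S, Y)
    return F[l:].argmax(axis=1) + 1  # pseudo label chi_{r*} with r* = argmax_r F_qr
```

With two labeled samples in two well-separated clusters, the unlabeled points inherit the label of their own cluster.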
step 103: train the teaching effect prediction classifiers, as follows:
step 10301: for r = 1, 2, 3, 4, 5 in turn, train the base classifier f_r corresponding to class χ_r, as follows. Take from {(x_p, y_p)} and {(x_q, ŷ_q)} the samples whose label or pseudo label is χ_r, n_r samples in all, obtaining the sample set {x_i^(r), i = 1, ..., n_r}. The class-χ_r classifier outputs 1 for an input sample z if ξ_r(z) ≤ ρ_r, and 0 otherwise, where ξ_r(z) = |h(z)^T β_r - 1| and the output weights are
β_r = (I/C + H^T H)^{-1} H^T e,
with I the identity matrix, C > 0 the loss coefficient, e the n_r-dimensional all-ones column vector,
h(x) = [G(w_1, b_1, x), ..., G(w_N, b_N, x)]^T
the nonlinear mapping function, G(w, b, x) an activation function, and H = [h(x_1^(r)), ..., h(x_{n_r}^(r))]^T the hidden layer output matrix. The offset threshold is ρ_r = ξ̃_{⌊μ n_r⌋}, where ⌊·⌋ is the floor function, ξ̃ = max2min(ξ), the function max2min(·) sorts its input sequence from largest to smallest and outputs the sorted sequence, ξ = [ξ_r(x_1^(r)), ..., ξ_r(x_{n_r}^(r))]^T, ξ̃_i is the i-th element of ξ̃, and μ ∈ (0,1) is the robustness factor.
step 10302: integrate the classifiers f_1, ..., f_5 obtained in step 10301 as follows: input a sample z into f_1, ..., f_5; if exactly one classifier outputs 1, the final classification result is the class corresponding to that classifier; otherwise, the class corresponding to the minimum element of {ξ_r(z), r = 1, 2, 3, 4, 5} is the final classification result.
step 10303: using the integration method of step 10302, predict on {x_i, i = 1, 2, ..., n} to obtain the predictions {δ_i, i = 1, 2, ..., n}, and let δ = [δ_1, ..., δ_n]^T. Compute L_c = δ^T L δ, where L = D - W is the graph Laplacian matrix. If L_c < L_p or t = 0, retain the current set of base classifiers, i.e. let {f_r*} = {f_r}. Let L_p ← L_c and increase t by 1; if t ≤ T, jump back to step 102, otherwise stop training and go to the next step.
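The plain-text fragments of step 10301 (a term I/C + H^T H, an all-ones target vector e, and a quantile-style offset threshold) suggest a one-class extreme-learning-machine base classifier per class. The sketch below is written under that assumption, with illustrative names, and also shows the integration rule of step 10302:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def train_base_classifier(Xr, w, b, C=10.0, mu=0.1):
    """Train a one-class-style base classifier on the samples Xr of one class.

    Xr: (n_r, d) samples of class chi_r; w: (N, d) random input weights;
    b: (N,) random input biases. Returns (beta, rho).
    Assumed form: beta = (I/C + H^T H)^{-1} H^T e with target e = ones.
    """
    H = sigmoid(Xr @ w.T + b)                      # hidden layer output matrix
    N = H.shape[1]
    e = np.ones(len(Xr))
    beta = np.linalg.solve(np.eye(N) / C + H.T @ H, H.T @ e)
    xi = np.abs(H @ beta - 1.0)                    # distance of each training sample to the target
    xi_desc = np.sort(xi)[::-1]                    # max2min: sort from largest to smallest
    k = max(int(np.floor(mu * len(Xr))), 1)
    rho = xi_desc[k - 1]                           # offset threshold: k-th largest distance
    return beta, rho

def classify(z, w, b, models):
    """Integration rule of step 10302: a single acceptance wins; otherwise min distance."""
    h = sigmoid(w @ z + b)
    dists = np.array([abs(h @ beta - 1.0) for beta, _ in models])
    accepts = [i for i, (_, rho) in enumerate(models) if dists[i] <= rho]
    if len(accepts) == 1:
        return accepts[0] + 1                      # classes numbered 1..len(models)
    return int(dists.argmin()) + 1                 # otherwise: class with minimum xi_r(z)
```

The robustness factor mu rejects the mu-fraction of training samples farthest from the target, which is what makes the threshold insensitive to outliers.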
step 104: predict the teaching effect with the base classifiers {f_r*} obtained from training and the integration rule described in step 10302.
Step 2, establishing a special evaluation model
A small amount of data is collected, and a special model for teaching effect evaluation is established as follows:
step 201: collect teaching content, mode and effect data in the new environment, and establish a labeled sample set {(x_s, y_s)}, s ∈ {n+1, ..., n+l_2}, where l_2 is the total number of labeled samples collected for building the special teaching effect evaluation model. Each x_s ∈ R^15 is a 15-dimensional vector whose features are the same as those in step 101; y_s ∈ {χ_1, χ_2, χ_3, χ_4, χ_5} is the label, with χ_1, ..., χ_5 standing for very poor, poor, normal, good and very good respectively, and R denotes the field of real numbers.
Manually set the maximum number of training rounds T_2 > 0 (a positive integer) and the training counter t = 0; manually set the migration trade-off coefficient γ > 0; let O_p = 0.
Step 202: based on training in step 1
Figure GDA00027946995000000423
Trimming w1,w2,...,wNAnd b1,b2,...,bNObtaining an adjusted set of base classifiers
Figure GDA0002794699500000041
By using
Figure GDA0002794699500000042
Performing integrated predictions
Figure GDA0002794699500000043
To obtain
Figure GDA0002794699500000044
In that
Figure GDA0002794699500000045
Has an error rate of a e [0,1 ]](ii) a The integrated prediction rules are as follows:
inputting a sample z into
Figure GDA0002794699500000046
If only one classifier outputs 1, the final classification result is the class corresponding to the classifier; for other cases, then
Figure GDA0002794699500000047
Figure GDA0002794699500000048
The category corresponding to the minimum element in the classification is the final classification result,
Figure GDA0002794699500000049
is xir(z) adjusting w1,w2,...,wNAnd b1,b2,...,bNThe latter result;
step 203: calculate the domain difference index Ω between the general-model data of step 1 and the new-environment data of step 201. [The defining formula is given as an image in the original publication.]
step 204: calculate the current migration index O_c = Ω + γa, where γ > 0 is the migration trade-off coefficient. If O_c < O_p or t = 0, let {f_r**} = {f_r~}. Let O_p ← O_c and increase t by 1; if t ≤ T_2, jump back to step 202, otherwise stop training and go to the next step.
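The selection loop of steps 202-204 can be sketched as follows. Since the formula for Ω appears only as an image in the original, the domain-difference computation is passed in as a placeholder function, and all names are illustrative:

```python
def transfer_select(fine_tune_fn, evaluate_error, domain_difference, T2=50, gamma=1.0):
    """Keep the fine-tuned classifier set with the smallest migration index O = Omega + gamma * a."""
    best, O_p = None, None
    for t in range(T2):
        candidate = fine_tune_fn()            # step 202: perturb w, b and rebuild classifiers
        a = evaluate_error(candidate)         # step 202: error rate on the small labeled set
        Omega = domain_difference(candidate)  # step 203: domain difference index (placeholder)
        O_c = Omega + gamma * a               # step 204: migration index
        if O_p is None or O_c < O_p:          # "if O_c < O_p or t = 0"
            best, O_p = candidate, O_c
    return best
```

With deterministic stand-ins for the three callbacks, the loop returns the candidate minimizing the migration index.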
step 205: predict the teaching effect with the base classifiers {f_r**} obtained from training and the integration rule described in step 202.
Wherein the activation function involved is one of:
G(w, b, x) = 1 / (1 + exp(-(w^T x + b))) (sigmoid), or
G(w, b, x) = exp(-b ||x - w||^2) (Gaussian), or
G(w, b, x) = cos(w^T x + b).
(The first two formulas appear only as images in the original; the sigmoid and Gaussian forms shown here are the standard forms implied by the context.)
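The three activation-function choices can be written out as follows. The sigmoid and Gaussian forms are assumptions, since the first two formulas appear only as images in the original; the cosine form is given in the text:

```python
import numpy as np

def G_sigmoid(w, b, x):
    """Assumed sigmoid form: 1 / (1 + exp(-(w^T x + b)))."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def G_gaussian(w, b, x):
    """Assumed Gaussian (RBF) form: exp(-b * ||x - w||^2), with b > 0."""
    return np.exp(-b * np.sum((x - w) ** 2))

def G_cosine(w, b, x):
    """Given in the text: cos(w^T x + b)."""
    return np.cos(w @ x + b)
```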
The fine-tuning method involved in step 202 is as follows: let the fine-tuned w_1, w_2, ..., w_N and b_1, b_2, ..., b_N be w_1~, w_2~, ..., w_N~ and b_1~, b_2~, ..., b_N~ respectively. Each w_τ~ is drawn at random from the Gaussian distribution with mean w_τ and covariance Σ_w, and each b_τ~ is drawn at random from the Gaussian distribution with mean b_τ and variance σ_b^2, τ ∈ {1, 2, ..., N}, where Σ_w = σ_w I, I is the identity matrix of the same dimension as w_τ, and σ_w, σ_b > 0.
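The fine-tuning step amounts to sampling perturbed copies of the random input weights and biases from Gaussians centred on their current values; a sketch with illustrative parameter names:

```python
import numpy as np

def fine_tune(w, b, sigma_w=0.01, sigma_b=0.01, rng=None):
    """Draw w~_tau ~ N(w_tau, sigma_w * I) and b~_tau ~ N(b_tau, sigma_b^2), tau = 1..N.

    w: (N, d) input weights, b: (N,) input biases; returns perturbed copies.
    Covariance sigma_w * I means per-coordinate variance sigma_w, i.e. std sqrt(sigma_w).
    """
    rng = rng or np.random.default_rng()
    w_tilde = rng.normal(loc=w, scale=np.sqrt(sigma_w))
    b_tilde = rng.normal(loc=b, scale=sigma_b)
    return w_tilde, b_tilde
```

Repeating this draw each round gives the candidate classifier sets among which steps 203-204 select.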
Compared with the prior art, the invention has the following advantages: it addresses the class imbalance, long training time and uncertain smoothness assumption encountered when building a teaching effect prediction model for teaching evaluation, and a special model adapted to a new environment can be obtained quickly from only a small amount of data collected in that environment, giving the method a degree of self-adaptation.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in detail with reference to the accompanying drawings and specific embodiments.
A scientific teaching effect evaluation method based on knowledge migration, shown in FIG. 1, proceeds according to steps 1 and 2 as set forth in the Disclosure above; the activation function is preferably one of the three forms given there, and the fine-tuning of step 202 preferably follows the Gaussian sampling method given there.
When implementing the patent, example values for each feature are given below:
1. "Teaching subject": e.g. language, mathematics, foreign language; may be a more refined subject such as discrete mathematics or combinatorial mathematics.
2. "Depth": popularization, advanced, or high-level.
3. "Blackboard writing": yes or no.
4. "Multimedia": yes or no.
5. "Teaching assessment mode": examination + attendance, report + attendance, or attendance only.
6. "Teacher gender": male, female, or other.
7. "Teacher age": a positive integer.
8. "Teacher education": doctorate, master's, bachelor's, or high school and below.
9. "Average homework time": a real number greater than or equal to 0.
10. "Class size": a positive integer.
11. "Degree of interaction": active, general, or no interaction.
12. "Attendance rate": a real number between 0 and 1; the absence rate of each class session is (number absent in that session) / (session class size), and the feature is the average of the absence rates over all sessions.
13. "Student feedback": excellent, average, or poor; the mode (most frequent rating) is taken.
14. "Average score": an integer between 0 and 100.
15. "Language of instruction": the language used in class, e.g. Chinese or English.
The teaching effect labels in the sample sets can be assigned by expert evaluation.
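To make the feature list concrete, here is one possible encoding of a single lesson record into the 15-dimensional vector of step 101; all category codes below are hypothetical illustrations, not prescribed by the patent:

```python
# Hypothetical numeric codes for the categorical features; the choices are illustrative.
SUBJECT = {"language": 0, "math": 1, "foreign_language": 2}
DEPTH = {"popular": 0, "advanced": 1, "high_level": 2}
ASSESS = {"exam+attendance": 0, "report+attendance": 1, "attendance_only": 2}
GENDER = {"male": 0, "female": 1, "other": 2}
DEGREE = {"high_school_or_below": 0, "bachelor": 1, "master": 2, "doctor": 3}
INTERACT = {"none": 0, "general": 1, "active": 2}
FEEDBACK = {"poor": 0, "average": 1, "excellent": 2}
LANGUAGE = {"chinese": 0, "english": 1}

def encode(r):
    """Map one lesson record (dict) to the 15-dimensional feature vector x_p."""
    return [
        SUBJECT[r["subject"]],            # 1. teaching subject
        DEPTH[r["depth"]],                # 2. depth
        int(r["blackboard"]),             # 3. blackboard writing: yes/no
        int(r["multimedia"]),             # 4. multimedia: yes/no
        ASSESS[r["assessment"]],          # 5. assessment mode
        GENDER[r["teacher_gender"]],      # 6. teacher gender
        r["teacher_age"],                 # 7. positive integer
        DEGREE[r["teacher_degree"]],      # 8. teacher education
        r["avg_homework_hours"],          # 9. real >= 0
        r["class_size"],                  # 10. positive integer
        INTERACT[r["interaction"]],       # 11. degree of interaction
        r["attendance_rate"],             # 12. real in [0, 1]
        FEEDBACK[r["student_feedback"]],  # 13. mode of ratings
        r["avg_score"],                   # 14. integer in [0, 100]
        LANGUAGE[r["class_language"]],    # 15. language of instruction
    ]
```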
During implementation, a large amount of data must first be collected to train the general teaching effect evaluation model; once trained, the model can be used directly.
When the method is applied to a new environment (for example a new school), the general model's predictive performance is often poor; a special teaching effect evaluation model can then be obtained by migration on the basis of the general model, requiring only a small amount of newly collected data, and can be used directly once training is complete.
The above examples are provided only for the purpose of describing the present invention, and are not intended to limit the scope of the present invention. The scope of the invention is defined by the appended claims. Various equivalent substitutions and modifications can be made without departing from the spirit and principles of the invention, and are intended to be within the scope of the invention.

Claims (3)

1. A scientific teaching effect evaluation method based on knowledge migration is characterized by specifically comprising the following steps:
step 1: general evaluation model building
Collecting a large amount of data, and establishing a general model for teaching effect evaluation, which comprises the following steps:
step 101: collecting teaching content, mode and effect data, and establishing a labeled sample set {(x_p, y_p)} and an unlabeled sample set {x_q}, p ∈ {1, 2, ..., l}, q ∈ {l+1, l+2, ..., n}, where l denotes the number of labeled samples and n the total number of samples; each sample x_p, x_q ∈ R^15 is a 15-dimensional vector whose features comprise teaching subject, depth, blackboard writing or not, multimedia or not, assessment mode, teacher gender, teacher age, teacher education, average homework time, class size, degree of interaction, attendance rate, student feedback, average score and language of instruction; y_p ∈ {χ_1, χ_2, χ_3, χ_4, χ_5} denotes the label indicating the teaching effect, χ_1, ..., χ_5 standing for very poor, poor, normal, good and very good respectively, and R denotes the field of real numbers;
manually setting the maximum number of training rounds T > 0 (a positive integer), the training counter t = 0, the robustness factor μ ∈ (0,1), the loss coefficient C > 0, the propagation coefficient α ∈ (0,1), and the number of hidden nodes N (a positive integer greater than 15); randomly generating N input weights w_1, w_2, ..., w_N, each a 15-dimensional column vector, and N input biases b_1, b_2, ..., b_N, each a real number; letting L_p = 0;
step 102: labeling the unlabeled sample set {x_q} of step 101 with pseudo labels, specifically as follows:

step 10201: randomly taking a value of the Gaussian bandwidth coefficient σ > 0, and establishing the affinity matrix W as follows:

(W)_ij = exp(−‖x_i − x_j‖² / (2σ²)) for i ≠ j, and (W)_ij = 0 for i = j,

where (W)_ij is the element in row i and column j of W, i, j = 1, 2, ..., n;

step 10202: calculating the final label matrix F:

F = (1 − α)(I − αS)^(−1) Y

where I is the identity matrix; D is the degree matrix of W, i.e. a diagonal matrix whose i-th diagonal element is D_ii = Σ_j (W)_ij; the propagation matrix is S = D^(−1/2) W D^(−1/2); Y is the initial label matrix whose elements are Y_pr = 1 if y_p = χ_r and Y_pr = 0 otherwise, r = 1, 2, 3, 4, 5, with the rows corresponding to unlabeled samples set to zero; the pseudo label of sample x_q is ŷ_q = χ_(r*), where r* = argmax_r F_qr and F_qr is the element in row q and column r of the matrix F;
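Step 102 is the standard graph label-propagation scheme: a Gaussian affinity matrix, symmetric normalisation, and the closed-form F = (1 − α)(I − αS)^(−1)Y. A minimal NumPy sketch under that reading; the function name, the five-class one-hot layout, and the labeled-samples-first convention are illustrative assumptions, not the patent's notation:

```python
import numpy as np

def label_propagation(X, y, n_labeled, alpha=0.9, sigma=1.0):
    """Pseudo-label unlabeled samples via F = (1 - alpha)(I - alpha*S)^-1 Y.

    X: (n, d) array with the labeled samples first; y: length-n_labeled
    array of class indices in {0, ..., 4}. Returns class indices for all n."""
    n = X.shape[0]
    # Gaussian affinity with a zero diagonal: W_ij = exp(-||xi - xj||^2 / 2sigma^2)
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    W = np.exp(-sq / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Symmetric normalisation S = D^-1/2 W D^-1/2
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    S = d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    # Initial label matrix: one-hot rows for labeled samples, zero rows otherwise
    Y = np.zeros((n, 5))
    Y[np.arange(n_labeled), y] = 1.0
    F = (1 - alpha) * np.linalg.solve(np.eye(n) - alpha * S, Y)
    return F.argmax(axis=1)
```

With well-separated clusters, the row-wise argmax over F assigns each unlabeled point the class of its nearby labeled points.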
step 103: training the teaching effect prediction classifier, specifically as follows:

step 10301: taking r = 1, 2, 3, 4, 5 in turn, and training for each class χ_r the corresponding base classifier ξ_r as follows: taking out of {(x_p, y_p)} and {x_q} the samples whose label or pseudo label is χ_r, n_r in total, obtaining the sample set {x_z^r | z = 1, 2, ..., n_r}, where z indexes the samples; the class-χ_r classifier outputs 1 for an input z if ξ_r(z) ≤ θ_r and 0 otherwise, where the distance is ξ_r(z) = |h(z)^T β_r − 1| and the output weights are

β_r = (I/C + H^T H)^(−1) H^T e

where I is the identity matrix, C > 0 is the loss coefficient, e is an n_r-dimensional all-ones column vector, h(x) = [G(w_1, b_1, x), ..., G(w_N, b_N, x)]^T is the nonlinear mapping function, G(w, b, x) is an activation function, and H = [h(x_1^r), ..., h(x_(n_r)^r)]^T is the hidden-layer output matrix; the offset threshold is θ_r = Ξ_⌊μ·n_r⌋, where ⌊·⌋ is the floor function, Ξ = max2min(ξ_r(x_1^r), ..., ξ_r(x_(n_r)^r)), the function max2min(·) arranges the input sequence from large to small and outputs the arranged sequence, Ξ_i denotes the i-th element of Ξ, and μ ∈ (0, 1) is the robustness factor;

step 10302: integrating the {ξ_r | r = 1, 2, 3, 4, 5} obtained in step 10301 as follows: inputting a sample z into ξ_1, ..., ξ_5; if exactly one classifier outputs 1, the final classification result is the class corresponding to that classifier; in all other cases, the class corresponding to the minimum element of {ξ_r(z), r = 1, 2, 3, 4, 5} is the final classification result;

step 10303: using the classifier integration method obtained in step 10302, predicting for {x_i | i = 1, 2, ..., n} the corresponding prediction results {δ_i | i = 1, 2, ..., n}; letting δ = [δ_1, ..., δ_n]^T and calculating L_c = δ^T L δ, where L = D − W is the graph Laplacian matrix; if L_c < L_p or t = 0, retaining the current set of base classifiers, i.e. {ξ_r^*} ← {ξ_r}; letting L_p ← L_c and increasing t by 1; if t ≤ T, jumping back to step 10201, otherwise stopping training and entering the next step;

step 104: predicting the teaching effect with the {ξ_r^*} obtained by training and the integration rule described in step 10302.
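Steps 10301 and 10302 describe, per class, a thresholded one-class classifier with an ELM-style closed-form solution, combined by a lone-acceptance rule with a minimum-distance tie-break. A sketch under that reading; the |h(x)^T β − 1| distance, the sigmoid hidden layer, and all function names are assumptions of this sketch, not the patent's exact formulation:

```python
import numpy as np

def train_base_classifier(Xr, Win, b, C=10.0, mu=0.1):
    """Train one class-specific base classifier (one-class-ELM-style sketch).

    Xr: (n_r, d) samples of one class; Win: (N, d) random input weights;
    b: (N,) random biases. Returns (beta, theta)."""
    # Sigmoid hidden layer: H[z, j] = G(w_j, b_j, x_z)
    H = 1.0 / (1.0 + np.exp(-(Xr @ Win.T + b)))
    N = H.shape[1]
    # Closed-form output weights: beta = (I/C + H^T H)^-1 H^T e
    beta = np.linalg.solve(np.eye(N) / C + H.T @ H, H.T @ np.ones(len(Xr)))
    # Training distances |h(x)^T beta - 1|, arranged from large to small (max2min)
    dists = np.sort(np.abs(H @ beta - 1.0))[::-1]
    k = max(int(mu * len(Xr)), 1) - 1   # floor(mu * n_r)-th largest distance
    return beta, dists[k]

def integrated_predict(x, Win, b, classifiers):
    """Integration rule: a lone accepting classifier wins; otherwise the
    class with the smallest distance wins."""
    h = 1.0 / (1.0 + np.exp(-(Win @ x + b)))
    d = np.array([abs(h @ beta - 1.0) for beta, _ in classifiers])
    accepts = [r for r, (_, theta) in enumerate(classifiers) if d[r] <= theta]
    return accepts[0] if len(accepts) == 1 else int(d.argmin())
```

Training one accept/reject model per class and resolving conflicts by distance is what lets the ensemble absorb the pseudo-labeled samples of step 102 without a joint multi-class fit.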
Step 2: special evaluation model building

Collecting a small amount of data, and establishing a special model for teaching effect evaluation, as follows:

step 201: collecting teaching content, mode and effect data, and establishing a labeled sample set {(x_s, y_s)}, s ∈ {n+1, ..., n+l_2}, where l_2 denotes the total number of labeled samples collected for establishing the special model; x_s ∈ ℝ^15 is a 15-dimensional vector whose features are consistent with the features involved in step 101; y_s ∈ {χ_1, χ_2, χ_3, χ_4, χ_5} denotes a label, where χ_1, χ_2, χ_3, χ_4, χ_5 denote very poor, poor, normal, good and very good respectively, and ℝ denotes the real number domain;

manually setting the maximum number of training iterations T_2 > 0 (a positive integer) and letting t = 0; manually setting the migration trade-off coefficient γ > 0; letting O_p = 0;

step 202: fine-tuning w_1, w_2, ..., w_N and b_1, b_2, ..., b_N on the basis of the {ξ_r^*} trained in step 1, obtaining the adjusted set of base classifiers {ξ̃_r}; performing integrated prediction on {x_s} with {ξ̃_r}, and obtaining the error rate a ∈ [0, 1] of {ξ̃_r} on {(x_s, y_s)}; the integrated prediction rule is as follows: inputting a sample z into ξ̃_1, ..., ξ̃_5; if exactly one classifier outputs 1, the final classification result is the class corresponding to that classifier; in all other cases, the class corresponding to the minimum element of {ξ̃_r(z), r = 1, 2, 3, 4, 5} is the final classification result, where ξ̃_r(z) is ξ_r(z) after w_1, ..., w_N and b_1, ..., b_N have been adjusted;
step 203: calculating the domain difference index Ω between the data collected in step 101 and the data collected in step 201 as follows:

Ω = ‖ (1/n) Σ_{i=1}^{n} h(x_i) − (1/l_2) Σ_{s=n+1}^{n+l_2} h(x_s) ‖²
step 204: calculating the current migration index O_c = Ω + γ·a, where γ > 0 is the migration trade-off coefficient; if O_c < O_p or t = 0, letting {ξ̃_r^*} ← {ξ̃_r}; letting O_p ← O_c and increasing t by 1; if t ≤ T_2, jumping to step 202, otherwise stopping training and entering the next step;

step 205: predicting the teaching effect with the {ξ̃_r^*} obtained by training and the integration rule described in step 202.
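Steps 202 through 204 amount to a random-search transfer loop: repeatedly perturb the general model's parameters, score each candidate by the migration index O_c = Ω + γ·a, and keep the candidate with the smallest index. A sketch with the scoring delegated to a caller-supplied `evaluate` callback (a hypothetical helper of this sketch; in the claim, Ω and the error rate a are computed on the special-model data itself):

```python
import numpy as np

def fine_tune_select(base_ws, base_bs, evaluate, T2=50, gamma=1.0,
                     sigma_w=0.1, sigma_b=0.1, seed=0):
    """Perturb (w, b) around the general model's values T2 times and keep
    the candidate minimising O_c = Omega + gamma * a.

    evaluate(ws, bs) -> (Omega, a): domain difference and error rate."""
    rng = np.random.default_rng(seed)
    best, O_p = (base_ws, base_bs), None
    for _ in range(T2):
        # Gaussian jitter around the general model's parameters
        ws = base_ws + rng.normal(0.0, sigma_w, size=base_ws.shape)
        bs = base_bs + rng.normal(0.0, sigma_b, size=base_bs.shape)
        Omega, a = evaluate(ws, bs)
        O_c = Omega + gamma * a
        if O_p is None or O_c < O_p:   # keep the candidate with the smaller index
            best, O_p = (ws, bs), O_c
    return best, O_p
```

The trade-off coefficient γ balances staying close to the source domain (small Ω) against fitting the small target-domain sample (small a).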
2. The method for evaluating the scientific teaching effect based on knowledge migration as claimed in claim 1, wherein the activation function involved is one of the following:

G(w, b, x) = 1 / (1 + exp(−(w^T x + b)))

or

G(w, b, x) = exp(−b‖x − w‖²)

or

G(w, b, x) = cos(w^T x + b).
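The three activation choices of claim 2 can be written directly; note that the sigmoid and Gaussian forms here are reconstructions from standard ELM practice, since only the cosine formula survived extraction intact:

```python
import numpy as np

def sigmoid_act(w, b, x):
    """G(w, b, x) = 1 / (1 + exp(-(w^T x + b)))"""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

def gaussian_act(w, b, x):
    """G(w, b, x) = exp(-b * ||x - w||^2)  (radial-basis form; the exact
    placement of b is an assumption of this sketch)"""
    return np.exp(-b * np.sum((x - w) ** 2))

def cosine_act(w, b, x):
    """G(w, b, x) = cos(w^T x + b)"""
    return np.cos(np.dot(w, x) + b)
```

Any of the three can serve as G in the hidden-layer mapping h(x) = [G(w_1, b_1, x), ..., G(w_N, b_N, x)]^T of claim 1.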
3. The method for evaluating the scientific teaching effect based on knowledge migration as claimed in claim 1, wherein the fine-tuning method involved in step 202 is as follows:

letting w_1, w_2, ..., w_N and b_1, b_2, ..., b_N after fine-tuning be w̃_1, w̃_2, ..., w̃_N and b̃_1, b̃_2, ..., b̃_N respectively; w̃_τ is randomly drawn from the distribution N(w_τ, Σ_w) and b̃_τ is randomly drawn from the distribution N(b_τ, σ_b²), τ ∈ {1, 2, ..., N}, where N(w_τ, Σ_w) denotes the Gaussian distribution with mean w_τ and covariance Σ_w, N(b_τ, σ_b²) denotes the Gaussian distribution with mean b_τ and variance σ_b², Σ_w = σ_w·I_N, I_N denotes the N-dimensional identity matrix, and σ_w, σ_b > 0.
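The fine-tuning draw of claim 3 is a single Gaussian perturbation per parameter. A sketch; note that the claim states an N-dimensional identity covariance, whereas this sketch uses an identity matching the weight dimension d, which is the interpretation that makes the draw well-defined:

```python
import numpy as np

def perturb_parameters(ws, bs, sigma_w=0.05, sigma_b=0.05, seed=None):
    """Claim-3-style fine-tuning draw: each w_tau is resampled from
    N(w_tau, sigma_w * I) and each b_tau from N(b_tau, sigma_b^2).

    ws: (N, d) input weights; bs: (N,) biases."""
    rng = np.random.default_rng(seed)
    # Isotropic covariance sigma_w * I means elementwise std sqrt(sigma_w)
    new_ws = rng.normal(loc=ws, scale=np.sqrt(sigma_w))
    new_bs = rng.normal(loc=bs, scale=sigma_b)
    return new_ws, new_bs
```

Drawing fresh parameters centred on the general model's values keeps each candidate close to the source-domain solution while exploring the target domain.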
CN201911259418.3A 2019-12-10 2019-12-10 Scientific teaching effect evaluation method based on knowledge migration Active CN110992229B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911259418.3A CN110992229B (en) 2019-12-10 2019-12-10 Scientific teaching effect evaluation method based on knowledge migration

Publications (2)

Publication Number Publication Date
CN110992229A CN110992229A (en) 2020-04-10
CN110992229B true CN110992229B (en) 2021-02-26

Family

ID=70091967

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911259418.3A Active CN110992229B (en) 2019-12-10 2019-12-10 Scientific teaching effect evaluation method based on knowledge migration

Country Status (1)

Country Link
CN (1) CN110992229B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102686106B1 (en) 2022-03-30 2024-07-19 한국과학기술기획평가원 Method for analyzing impact of science and technology training

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10521734B2 (en) * 2018-04-20 2019-12-31 Sas Institute Inc. Machine learning predictive labeling system
US10599769B2 (en) * 2018-05-01 2020-03-24 Capital One Services, Llc Text categorization using natural language processing
CN109781411B (en) * 2019-01-28 2020-05-19 西安交通大学 Bearing fault diagnosis method combining improved sparse filter and KELM

Similar Documents

Publication Publication Date Title
Purwanto Education research quantitative analysis for little respondents: comparing of Lisrel, Tetrad, GSCA, Amos, SmartPLS, WarpPLS, and SPSS
Naser et al. Predicting student performance using artificial neural network: In the faculty of engineering and information technology
CN112508334B (en) Personalized paper grouping method and system integrating cognition characteristics and test question text information
Ozberk et al. Investigating the factors affecting Turkish students' PISA 2012 mathematics achievement using hierarchical linear modeling
Šarić-Grgić et al. Student clustering based on learning behavior data in the intelligent tutoring system
Beytekin et al. Quality of Faculty Life and Lifelong Learning Tendencies of University Students.
Alves Making diagnostic inferences about student performance on the Alberta education diagnostic mathematics project: An application of the Attribute Hierarchy Method
Tu et al. A polytomous model of cognitive diagnostic assessment for graded data
Zhang et al. Formative evaluation of college students’ online English learning based on learning behavior analysis
CN110992229B (en) Scientific teaching effect evaluation method based on knowledge migration
Turhan et al. Estimation of student success with artificial neural networks
Huang et al. Developing argumentation processing agents for computer-supported collaborative learning
CN113934846A (en) Online forum topic modeling method combining behavior-emotion-time sequence
Shapovalova et al. Adaptive testing model as the method of quality knowledge control individualizing
Guldemond et al. Group effects on individual learning achievement
Akdeniz et al. Investigating individual innovativeness levels and lifelong learning tendencies of students in TMSC
Zhou Research on teaching resource recommendation algorithm based on deep learning and cognitive diagnosis
Zeman et al. Complex cells decrease errors for the Müller-Lyer illusion in a model of the visual ventral stream
Suniasih The Effectiveness of Discovery Learning Model and Problem-Based Learning Using Animated Media to Improve Science Learning Outcomes
Shi Building a Diversified College English Teaching Evaluation Model Using Fuzzy K-means Clustering in E-Learning
Komaravalli et al. Detecting Academic Affective States of Learners in Online Learning Environments Using Deep Transfer Learning
Sun A Comprehensive Evaluation Scheme of Students’ Classroom Learning Status Based on Analytic Hierarchy Process
Pagudpud et al. Mining the national career assessment examination result using clustering algorithm
Chen The Role of Information Convergence Technology in Reshaping the Multiple Directions of Ideological and Political Education in Colleges and Universities
McCallum et al. Using data for monitoring and target setting: A practical guide for teachers

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: Zone 50268, Zhongke Dadaozhen Building, No. 767 Yulan Avenue, High tech Zone, Hefei City, Anhui Province, 230088

Patentee after: Anhui Xinzhi Digital Technology Co.,Ltd.

Address before: 230088 building 210-c2, A3 / F, Hefei Innovation Industrial Park, 800 Wangjiang West Road, high tech Zone, Hefei City, Anhui Province

Patentee before: Anhui Xinzhi digital media information technology Co.,Ltd.
