CN111191833B - Intelligent experiment process recommendation method and system based on neural network - Google Patents

Intelligent experiment process recommendation method and system based on neural network Download PDF

Info

Publication number
CN111191833B
CN111191833B CN201911355621.0A
Authority
CN
China
Prior art keywords
experiment
student
neural network
time
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911355621.0A
Other languages
Chinese (zh)
Other versions
CN111191833A (en)
Inventor
海克洪
黄龙吟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Meihe Yisi Digital Technology Co ltd
Original Assignee
Wuhan Meihe Yisi Digital Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Meihe Yisi Digital Technology Co ltd filed Critical Wuhan Meihe Yisi Digital Technology Co ltd
Priority to CN201911355621.0A priority Critical patent/CN111191833B/en
Publication of CN111191833A publication Critical patent/CN111191833A/en
Application granted granted Critical
Publication of CN111191833B publication Critical patent/CN111191833B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/067Enterprise or organisation modelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/20Education
    • G06Q50/205Education administration or guidance
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Strategic Management (AREA)
  • Human Resources & Organizations (AREA)
  • Theoretical Computer Science (AREA)
  • Economics (AREA)
  • Physics & Mathematics (AREA)
  • Tourism & Hospitality (AREA)
  • General Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Educational Administration (AREA)
  • General Health & Medical Sciences (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Game Theory and Decision Science (AREA)
  • Educational Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Development Economics (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Primary Health Care (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention provides an intelligent experiment process recommendation method and system based on a neural network. Student scores and the completion time of each experiment in a student's historical experiment courses are input into a neural network model, which is trained through the SGD and BP algorithms to obtain a student experiment time prediction model. The prediction model is used to predict the experiment time of the student's current experiment course so that suitable experiment contents can be recommended to different students. This overcomes the insufficient balance of the traditional experiment distribution mode and realizes individual customization of student experiments: different students obtain experiment contents that match their current situation according to their respective learning conditions, the saturation of some students' experiment courses is improved, and other students can maintain their learning enthusiasm according to their own situation.

Description

Intelligent experiment process recommendation method and system based on neural network
Technical Field
The invention relates to the technical field of network teaching, in particular to an intelligent experiment process recommendation method and system based on a neural network.
Background
In a traditional student network experiment process, the content of each experiment course is generally determined uniformly by the teacher, and all students must complete the same number of experiments in one experiment course. As a result, students with excellent learning ability finish the specified number of experiments ahead of time within the time limit of the course and are left with idle time, while students with weaker learning ability cannot complete all of the experiments. Over time, a large amount of idle time is wasted by some students, while other students lose interest in learning and cannot improve their results because they can never complete the assigned experimental content. In other words, the traditional student network experiment distribution mode cannot assign a suitable number of experiments to each student according to the student's performance, and cannot achieve the goal of teaching students according to their aptitude.
Disclosure of Invention
In view of the above, in one aspect, the invention provides an intelligent experiment process recommendation method based on a neural network, so as to solve the problem that the traditional student network experiment distribution mode cannot assign a suitable number of experiments to each student according to student performance.
The technical scheme of the invention is realized as follows: an intelligent experiment process recommendation method based on a neural network comprises the following steps:
acquiring comprehensive evaluation data of students and completion time of each experiment in historical experiment courses of the students;
constructing data samples related to the comprehensive evaluation data and the completion time, and extracting training samples and testing samples from the data samples;
constructing a neural network model;
inputting the training sample into the neural network model, and training the network model through the SGD (stochastic gradient descent) and BP (back propagation) algorithms to obtain a student experiment time prediction model;
inputting the test sample into the student experiment time prediction model for testing;
acquiring the last experiment record of the last experiment course of the student, and sequentially predicting the experiment time after the last experiment according to the student experiment time prediction model;
when the predicted experiment time of each experiment after the last experiment is accumulated in sequence, if the accumulated predicted time at the Nth experiment after the last experiment exceeds the time limit of the current experiment course and the accumulated predicted time at the (N-1)th experiment after the last experiment is less than the time limit, recommending the N-1 experiments after the last experiment to the student.
Optionally, the comprehensive evaluation data includes an ordinary homework score, ordinary homework difficulty, ordinary attendance score, ordinary test difficulty, learning attitude evaluation level, and student level evaluation level.
Optionally, the comprehensive evaluation data further includes a student category, a course category, a term category, and a class category, where the student category is undergraduate or specialist, the course category is a compulsory course or an elective course, the term category is the first term or the second term, and the class category is a special class or a general class.
Optionally, the ordinary homework score and the ordinary test score are standardized, and the ordinary homework difficulty, the ordinary attendance score and the ordinary test difficulty are normalized.
Optionally, the building the neural network model includes:
initializing a neural network, defining the number of hidden layer units, and designating the input size;
defining a hidden layer that uses the ReLU activation function, and defining a Dropout layer that discards 10% of the neurons;
defining an output layer of the neural network, designating the number of neurons of the output layer as 1 and designating sigmoid as its activation function.
Optionally, inputting the training sample into the neural network model and training the network model through the SGD and BP algorithms to obtain a student experiment time prediction model includes:
multiplying the eigenvalue matrix and the weight matrix of the training sample, and inputting the result into the neural network model for forward propagation to obtain a predicted value;
performing back propagation on the neural network model through the SGD (stochastic gradient descent) algorithm and the BP (back propagation) algorithm to update the weight matrix;
performing a plurality of rounds of training on the neural network model to update the weight matrix a plurality of times;
and constructing the student experiment time-use prediction model according to the weight matrix after multiple updates.
Optionally, the SGD optimizer is RMSProp, the loss function used in back propagation is the cross-entropy loss function, and the evaluation function is MAE (mean absolute error).
Compared with the prior art, the intelligent experimental process recommendation method based on the neural network has the following beneficial effects:
(1) The intelligent experiment process recommendation method based on the neural network can assign a suitable number of experiments to each student according to the student's performance, which overcomes the insufficient balance of the traditional experiment distribution mode and realizes individual customization of student experiments. Different students obtain experiment contents that match their current situation according to their respective learning conditions, so the saturation of some students' experiment courses is improved, and other students can maintain their learning enthusiasm according to their own situation;
(2) The intelligent experiment process recommendation method based on the neural network evaluates students comprehensively from multiple dimensions, which avoids the contingency and subjectivity of single-dimension evaluation, improves the reliability of student performance data, can be effectively applied to any teaching scene, and achieves the goal of teaching students according to their aptitude.
In another aspect, the invention also provides an intelligent experiment process recommendation system based on a neural network, so as to solve the problem that the traditional student network experiment distribution mode cannot assign a suitable number of experiments to each student according to student performance.
The technical scheme of the invention is realized as follows: an intelligent neural network-based experimental process recommendation system, comprising:
the data acquisition module is used for acquiring comprehensive evaluation data of students and the completion time of each experiment in the historical experiment courses of the students;
the sample construction module is used for constructing data samples related to the comprehensive evaluation data and the completion time, and extracting training samples and testing samples from the data samples;
the network construction module is used for constructing a neural network model;
the model building module is used for inputting the training samples into the neural network model, training the neural network model through SGD and BP algorithms, and obtaining a student experiment time-use prediction model;
the model testing module is used for inputting the test sample into the student experiment time prediction model for testing;
the model prediction module is used for acquiring the last experiment record of the last experiment course of the student and sequentially predicting the experiment time after the last experiment according to the student experiment time prediction model;
and the experiment recommending module is used for sequentially accumulating the predicted experiment time of each experiment after the last experiment, and recommending the N-1 experiments after the last experiment to the student if the accumulated predicted time at the Nth experiment after the last experiment exceeds the time limit of the current experiment course and the accumulated predicted time at the (N-1)th experiment is less than the time limit.
Compared with the prior art, the intelligent experiment process recommendation system based on the neural network has the same advantages as the intelligent experiment process recommendation method based on the neural network, which are not described again.
In another aspect, the invention also provides a computer-readable storage medium, so as to solve the problem that the traditional student network experiment distribution mode cannot assign a suitable number of experiments to each student according to student performance.
The technical scheme of the invention is realized as follows: a computer-readable storage medium, storing a computer program which, when read and executed by a processor, implements the method of any of the above.
The advantages of the computer-readable storage medium over the prior art are the same as those of the intelligent experiment process recommendation method based on the neural network, and are not repeated here.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a flow chart of an intelligent neural network-based experimental process recommendation method of the present invention;
FIG. 2 is a flow chart of step S3 of the present invention;
FIG. 3 is a flowchart of step S4 of the present invention;
FIG. 4 is a diagram illustrating test results of a student time-of-use prediction model of the present invention;
FIG. 5 is a block diagram of an intelligent neural network-based experimental process recommendation system according to the present invention.
Description of the reference numerals:
10-a data acquisition module; 20-a sample construction module; 30-a network construction module; 40-a model building module; 50-a model test module; 60-a model prediction module; 70-Experimental recommendation Module.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
As shown in fig. 1, the intelligent neural network-based experimental process recommendation method of the present invention includes:
s1, acquiring comprehensive evaluation data of students and completion time of each experiment in historical experiment courses of the students;
s2, constructing data samples related to the comprehensive evaluation data and the completion time, and extracting training samples and testing samples from the data samples;
s3, constructing a neural network model;
s4, inputting the training sample into the neural network model, and training the network model through SGD and BP algorithms to obtain a student experiment time-use prediction model;
s5, inputting the test sample into the student experiment time prediction model for testing;
s6, obtaining the last experiment record of the last experiment course of the student, and sequentially predicting the experiment time after the last experiment according to the student experiment time prediction model;
and S7, sequentially accumulating the predicted experiment time of each experiment after the last experiment, and recommending the N-1 experiments after the last experiment to the student if the accumulated predicted time at the Nth experiment after the last experiment exceeds the time limit of the current experiment course and the accumulated predicted time at the (N-1)th experiment is less than the time limit.
In this embodiment, experiment course A is taken as an example. It can be understood that experiment course A includes multiple lessons, and each lesson includes multiple student experiments. In step S1, the comprehensive evaluation data of a student is evaluation data that includes the student's learning performance, and the student's historical experiment courses refer to the experiment courses A completed before the current time node. In step S3, the neural network model is a fully connected neural network. Optionally, the comprehensive evaluation data includes the ordinary homework score, ordinary homework difficulty, ordinary attendance score, ordinary test score, ordinary test difficulty, learning attitude evaluation level, and student level evaluation level, and further includes a student category, a course category, a term category, and a class category, where the student category is undergraduate or specialist, the course category is a compulsory course or an elective course, the term category is the first term or the second term, and the class category is a special class or a general class. Generally, student evaluation relies on examination results, ordinary homework results and the like; the evaluation dimensions are few and the influencing factors are many (for example abnormal performance in or absence from an examination, homework copying, and so on), so such evaluation is contingent and of low reliability. Student evaluation sometimes also includes teacher scores, which are subjective. This embodiment evaluates students comprehensively from multiple dimensions, which avoids the contingency and subjectivity of single-dimension evaluation and improves the reliability of student performance data. Moreover, student evaluation generally sets the same standard for all students and all courses, so that the comprehensive evaluation data is only applicable to one specific scene, such as a certain course, a certain class of specific students or a certain term, and cannot be effectively applied to other scenes. The comprehensive evaluation data of this embodiment includes the student category, course category, term category, class category and the like, so it can be effectively applied to any teaching scene and achieves the goal of teaching students according to their aptitude.
In step S2, an empty dictionary dict_course-A is established, whose keys are the school numbers with the value range {x1, x2, ..., xn} and whose values are empty lists []. After initialization, the format of dict_course-A is, for example, dict_course-A = {x1: [], x2: [], x3: [], ..., xn: []}. The keys of dict_course-A are traversed to obtain the student category, course category, term category and class category of the corresponding school number, and these are inserted into the value list: the student category code is 0 for an undergraduate and 1 for a specialist, the course category code is 0 for an elective course and 1 for a compulsory course, the term category code is 0 for the first term and 1 for the second term, and the class category code is 0 for a general class and 1 for a special class; an example of the result is dict_course-A = {x1: [1., 0., 1., 0.], x2: [0., 1., 0., 1.], x3: [0., 0., 1., 1.], ..., xn: [1., 1., 0., 0.]}. The students' ordinary homework scores are acquired and standardized: first the mean of the ordinary homework scores is obtained, then their standard deviation is calculated, and finally the code corresponding to each ordinary homework score is calculated; standardized data can be processed more easily by the neural network. The ordinary homework difficulty is calculated with 1 for simple, 2 for general and 3 for difficult; the difficulty total is computed, the difficulty is scaled into the range 0 to 1, and the result is inserted into dict_course-A. The students' ordinary attendance records are acquired and the attendance evaluation is quantized, with 0 for absence, 0.5 for lateness or early departure, and 1 for full attendance; first the maximum of the attendance record sums is obtained, and then the normalized attendance values, which are decimals in [0, 1], are computed; the keys of dict_course-A are traversed and the ordinary attendance score corresponding to each school number is inserted into the value list. By analogy, the test score, test difficulty, learning attitude evaluation level and student level evaluation level are all inserted into dict_course-A, finally giving, for example, dict_course-A = {x1: [1., 0., 1., 0., 0.93, 0.48, 0.7, 0.57, 0.48, 0.7, 2.], x2: [0., 1., 0., 1., -0.52, 0.48, 0.5, 0.46, 0.48, 0.5, 4.], x3: [0., 0., 1., 1., 0.65, 0.48, 0.62, 0.98, 0.48, 0.9, 1.], ..., xn: [1., 1., 0., 0., -0.30, 0.48, 0.78, 0.6, 0.48, 0.8, 3.]}.
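As a minimal illustrative sketch of building dict_course-A (here renamed dict_course_A to form a valid Python identifier; the raw record layout, field names and the CODES table are assumptions made for illustration, not part of the patented implementation), the encoding and normalization described above could look like this:

import statistics

# Hypothetical raw records keyed by school number; values are illustrative only.
students = {
    "x1": {"student_cat": "undergraduate", "course_cat": "compulsory",
           "term_cat": "second", "class_cat": "general",
           "homework_scores": [92, 88, 95], "homework_difficulty": [2, 3, 2],
           "attendance": [1, 1, 0.5], "test_score": 85, "test_difficulty": 2,
           "attitude_level": 0.7, "student_level": 2},
    # ... one entry per student x2 .. xn
}

dict_course_A = {sid: [] for sid in students}   # empty value lists, keyed by school number

# 0/1 codes for the four categorical features, following the encodings above
CODES = {"undergraduate": 0., "specialist": 1., "elective": 0., "compulsory": 1.,
         "first": 0., "second": 1., "general": 0., "special": 1.}

hw_means = [statistics.mean(s["homework_scores"]) for s in students.values()]
mu, sigma = statistics.mean(hw_means), statistics.pstdev(hw_means) or 1.0
max_attendance = max(sum(s["attendance"]) for s in students.values())

for sid, s in students.items():
    feats = dict_course_A[sid]
    feats += [CODES[s["student_cat"]], CODES[s["course_cat"]],
              CODES[s["term_cat"]], CODES[s["class_cat"]]]
    feats.append((statistics.mean(s["homework_scores"]) - mu) / sigma)   # standardized homework score
    feats.append(sum(s["homework_difficulty"]) / (3 * len(s["homework_difficulty"])))  # difficulty scaled to [0, 1]
    feats.append(sum(s["attendance"]) / max_attendance)                  # normalized attendance score
    # the test score, test difficulty, learning attitude evaluation level and
    # student level evaluation level would be appended analogously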
An empty dictionary dict_practice-A is also created, whose keys are the school numbers with the value range {x1, x2, ..., xn} and whose values are empty lists []. After initialization, the format of dict_practice-A is, for example, dict_practice-A = {x1: [], x2: [], x3: [], ..., xn: []}. The past elapsed times pi ∈ {p1, p2, ..., pn} of student xi ∈ {x1, x2, ..., xn} for the n specific experiment numbers of experiment course A are inserted into the corresponding lists; the format after insertion is, for example, dict_practice-A = {x1: [15, 8, 14, ..., 12], x2: [23, 9, 12, ..., 17], x3: [12, 5, 11, ..., 10], ..., xn: [22, 11, 17, ..., 14]}.
The dict_course-A and dict_practice-A dictionaries are then combined: dict_practice-A is traversed by key, the value list of dict_course-A corresponding to the current key is obtained, the list of dict_practice-A is traversed, and each value in that list is combined with the value list of dict_course-A for the same key; the combined results are written to a text file. An example of the written text file format is:
1. 0. 1. 0. 0.93 0.48 0.7 0.57 0.48 0.7 2. 1 15
1. 0. 1. 0. 0.93 0.48 0.7 0.57 0.48 0.7 2. 2 8
1. 0. 1. 0. 0.93 0.48 0.7 0.57 0.48 0.7 2. n 12
0. 1. 0. 1. -0.52 0.48 0.5 0.46 0.48 0.5 4. 1 23
0. 1. 0. 1. -0.52 0.48 0.5 0.46 0.48 0.5 4. 2 9
0. 1. 0. 1. -0.52 0.48 0.5 0.46 0.48 0.5 4. n 17
This creates the data samples for the comprehensive evaluation data and experiment completion time. The fields respectively represent the student category, course category, term category, class category, ordinary homework score, ordinary homework difficulty, ordinary attendance score, ordinary test score, ordinary test difficulty, learning attitude evaluation level, student level evaluation level, experiment number and experiment completion time.
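A short sketch of this merging step (assuming the dict_course_A and dict_practice_A structures from the sketch above; the file name is illustrative):

# dict_course_A: school number -> list of 11 evaluation features
# dict_practice_A: school number -> list of completion times for experiments 1..n
with open("course_A_samples.txt", "w") as f:
    for sid, times in dict_practice_A.items():
        features = dict_course_A[sid]
        for exp_no, elapsed in enumerate(times, start=1):
            # 11 features + experiment number, then the completion time as the label
            row = features + [float(exp_no), elapsed]
            f.write(" ".join(str(v) for v in row) + "\n")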
The text file is then read. First the number of record lines m of the whole text file is obtained, and an index list of line numbers is established; the initial list has the form [1, 2, 3, ..., m]. The index list is shuffled, giving for example [3927, 5837, 5331, ..., 462]. Next a dictionary data_dict is initialized; the records in the text file are read and split by line, and each line's content is written into the dictionary with the line number as key. After writing is finished, the dictionary format is, for example, data_dict = {'1': '1. 0. 1. 0. 0.93 0.48 0.7 0.57 0.48 0.7 2. 1 15', '2': '1. 0. 1. 0. 0.93 0.48 0.7 0.57 0.48 0.7 2. 2 8', ..., 'm': '1. 1. 0. 0. -0.30 0.48 0.78 0.6 0.48 0.8 3. n 14'}. 90% of the m records are taken as the training set and 10% as the test set: the training-set data tensor train_data has size (0.9m, 12), the training-set label tensor train_labels has size (0.9m,), the test-set data tensor valid_data has size (0.1m, 12), and the test-set label tensor valid_labels has size (0.1m,). The shuffled index list is traversed, the different line-number index values are taken in turn, the corresponding value in data_dict is looked up and split by spaces, and the first 12 items are filled into the train_data tensor while the last item is filled into the train_labels tensor, until both tensors are filled. At this point 90% of the index list has been traversed; the remaining 10% is traversed by the same rule and the data are inserted into valid_data and valid_labels respectively, which completes the preparation of the training samples and test samples.
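The shuffling and 90%/10% split into tensors could be sketched as follows (NumPy is an assumption of this sketch; the description does not name a particular library):

import random
import numpy as np

with open("course_A_samples.txt") as f:                  # same illustrative file as above
    data_dict = {str(i + 1): line.strip() for i, line in enumerate(f)}

m = len(data_dict)
index_list = list(range(1, m + 1))
random.shuffle(index_list)                               # shuffled line-number index

split = int(0.9 * m)
train_data = np.zeros((split, 12));     train_labels = np.zeros(split)
valid_data = np.zeros((m - split, 12)); valid_labels = np.zeros(m - split)

for pos, line_no in enumerate(index_list):
    fields = data_dict[str(line_no)].split()
    feats, label = [float(v) for v in fields[:12]], float(fields[12])
    if pos < split:
        train_data[pos], train_labels[pos] = feats, label
    else:
        valid_data[pos - split], valid_labels[pos - split] = feats, label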
Specifically, as shown in fig. 2, step S3 includes:
step S31, initializing a neural network, defining the number of hidden layer units, and designating the input size;
step S32, defining a hidden layer that uses the ReLU activation function, and defining a Dropout layer that discards 10% of the neurons;
and step S33, defining an output layer of the neural network, designating the number of neurons of the output layer as 1 and designating sigmoid as the activation function.
After all the different experiment information of all students for experiment course A has been obtained, training can be performed with the neural network in deep learning; the deep-learning neural network model is built through steps S31 to S33. The Dropout layer is defined to reduce the risk of overfitting. The ReLU function applied has the mathematical expression ReLU(x) = max(0, x), and the sigmoid function applied has the mathematical expression sigmoid(x) = 1 / (1 + e^(-x)).
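A minimal sketch of steps S31 to S33 using a Keras-style API (Keras itself is an assumption of this sketch; the hidden-layer width of 100 units follows the weight-matrix example given later in the description):

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Dense(100, activation="relu", input_shape=(12,)),  # hidden layer with ReLU (S31/S32)
    layers.Dropout(0.1),                                      # Dropout layer discarding 10% of neurons (S32)
    layers.Dense(1, activation="sigmoid"),                    # single-neuron sigmoid output layer (S33)
])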
Specifically, as shown in fig. 3, step S4 includes:
step S41, multiplying the eigenvalue matrix and the weight matrix of the training sample, and inputting the result into the neural network model for forward propagation to obtain a predicted value;
step S42, performing back propagation on the neural network model through the SGD (stochastic gradient descent) and BP (back propagation) algorithms to update the weight matrix;
step S43, performing multiple rounds of training on the neural network model to update the weight matrix for multiple times;
and S44, constructing the student experiment time prediction model according to the weight matrix which is updated for multiple times.
After the network structure is defined, training can begin. The whole training is implemented with the SGD and BP algorithms, where the SGD optimizer used is RMSProp, the chosen loss function is the cross-entropy loss function, and the evaluation function is MAE. Taking the first hidden layer of the neural network model as an example: the feature value matrix obtained in the previous step is a two-dimensional matrix, in a single operation one sample x_1 = [x_1, x_2, ..., x_12] is used, and the weight matrix of the first hidden layer has size (12, 100). Taking the first sample x_1 as an example, the result of multiplying the two tensors, Y = x_1 · W, is a tensor of shape (100,), which is input to the output layer of the neural network model. The output layer of the neural network model has only 1 neural unit, and its corresponding weight matrix is the one-dimensional tensor W = [w_1 w_2 ... w_100]. The final output of the neural network model is a single value, namely the predicted value ŷ_1 = sigmoid(Y · W). After the predicted value ŷ_1 is obtained, its difference from the actual value y_1 can be calculated as loss_1. The purpose of training is to make loss_i as small as possible, thereby improving the prediction accuracy; loss_i can be written as a function of the weights w_j and w_i, i.e. loss_i = f(w_j, w_i), where j ranges from 1 to 100 and i ranges from 1 to 12. According to the back propagation algorithm, in order to make loss_i decrease, the partial derivatives of loss_i with respect to w_j and w_i are calculated and set equal to 0, thus obtaining a set of values of w_j and w_i; this set of values is updated into the neural network model as the weight matrix used for the next sample (or the next batch of samples). The next sample or the next batch of samples is then input into the neural network model, and so on until all training samples have been input, which completes one round of training; in this way the weight values in the neural network model are updated many times, and when the neural network model is trained for multiple rounds, the loss value decreases continuously, improving the prediction precision.
In step S5, the trained model is verified using 10-fold cross validation; the verification result is shown in fig. 4. It can be seen that the loss value drops to a relatively low value after two rounds of training and then fluctuates around 2. The network structure is therefore reinitialized and trained for 2 rounds, and the trained model is saved to obtain the student experiment time prediction model; the error between the model's prediction and the actual time is about plus or minus 2 minutes. Therefore, the student experiment time prediction model obtained in this embodiment through training based on the SGD and BP algorithms has high accuracy and reliability for predicting students' experiment time.
In step S6, after the student logs into the system, the system automatically acquires the current student's features, such as the student category, course category, term category, class category, ordinary homework score, ordinary homework difficulty, ordinary attendance score, ordinary test score, ordinary test difficulty, learning attitude evaluation level, student level evaluation level and experiment number; if some information of the current student is missing, the missing value is set to 0. The system determines the starting point of the student's current experiment step and feeds the data into the previously stored student experiment time prediction model, which outputs a numerical value ŷ, namely the predicted experiment time.
In step S7, if the time limit of the current experiment course is 30 minutes, the last experiment number of the student's last completed experiment course is 2, and the student experiment time prediction model predicts 10 minutes for experiment number 3, 15 minutes for experiment number 4, and 8 minutes for experiment number 5, then the accumulated predicted time at experiment number 4 (25 minutes) is less than the time limit, while the accumulated predicted time at experiment number 5 (33 minutes) exceeds the time limit of the current experiment course. Thus N = 3, and the method of this embodiment recommends the two experiments numbered 3 and 4 to the student.
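A sketch of the accumulation rule in step S7, using the numbers from this example (the predict_time argument stands in for a call to the stored prediction model and is hypothetical):

def recommend_experiments(last_exp_no, time_limit, predict_time, max_exp_no):
    """Return the experiment numbers to recommend after the last completed one."""
    recommended, cumulative = [], 0.0
    for exp_no in range(last_exp_no + 1, max_exp_no + 1):
        cumulative += predict_time(exp_no)   # predicted time for this experiment
        if cumulative > time_limit:          # the Nth experiment exceeds the limit
            break                            # keep only the first N-1 experiments
        recommended.append(exp_no)
    return recommended

# Example from the description: limit 30 min, last completed experiment 2,
# predicted times: exp 3 -> 10 min, exp 4 -> 15 min, exp 5 -> 8 min.
times = {3: 10, 4: 15, 5: 8}
print(recommend_experiments(2, 30, times.get, 5))   # -> [3, 4]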
In this way, through the above steps, this embodiment can assign a suitable number of experiments to each student according to the student's performance, which overcomes the insufficient balance of the traditional experiment distribution mode and realizes individual customization of student experiments. Different students can obtain experiment contents that match their current situation according to their respective learning conditions, so the saturation of some students' experiment courses is improved, and other students can maintain their learning enthusiasm according to their own situation.
As shown in fig. 5, this embodiment further provides an intelligent experiment process recommendation system based on a neural network, including:
the data acquisition module 10 is used for acquiring comprehensive evaluation data of students and completion time of each experiment in historical experiment courses of the students;
a sample construction module 20, configured to construct data samples regarding the comprehensive evaluation data and the completion time, and extract training samples and test samples from the data samples;
a network construction module 30, configured to construct a neural network model;
the model building module 40 is used for inputting the training samples into the neural network model, training the network model through SGD and BP algorithms, and obtaining a student experiment time prediction model;
the model testing module 50 is used for inputting the test sample into the student experiment time prediction model for testing;
the model prediction module 60 is configured to obtain a last experiment record of a last experiment course of a student, and sequentially predict the experiment time after the last experiment according to the student experiment time prediction model;
and the experiment recommending module 70 is configured to sequentially accumulate the predicted experiment time of each experiment after the last experiment, and recommend the N-1 experiments after the last experiment to the student if the accumulated predicted time at the Nth experiment after the last experiment exceeds the time limit of the current experiment course and the accumulated predicted time at the (N-1)th experiment is less than the time limit.
In this way, the intelligent experiment process recommendation system of this embodiment can assign a suitable number of experiments to each student according to the student's performance, which overcomes the insufficient balance of the traditional experiment distribution mode and realizes individual customization of student experiments. Different students can obtain experiment contents that match their current situation according to their respective learning conditions, so the saturation of some students' experiment courses is improved, and other students can maintain their learning enthusiasm according to their own situation.
This embodiment also provides a computer-readable storage medium that stores a computer program which, when read and executed by a processor, implements the intelligent experiment process recommendation method described in any of the above. When run, the computer-readable storage medium of this embodiment can assign a suitable number of experiments to each student according to the student's performance, which overcomes the insufficient balance of the traditional experiment distribution mode and realizes individual customization of student experiments; different students can obtain experiment contents that match their current situation according to their respective learning conditions, so the saturation of some students' experiment courses is improved, and other students can maintain their learning enthusiasm according to their own situation.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (8)

1. An intelligent experiment process recommendation method based on a neural network is characterized by comprising the following steps:
acquiring comprehensive evaluation data of students and completion time of each experiment in historical experiment courses of the students;
constructing data samples related to the comprehensive evaluation data and the completion time, and extracting training samples and testing samples from the data samples; the comprehensive evaluation data comprises a normal work score, normal work difficulty, normal attendance score, normal test difficulty, learning attitude evaluation grade and student level evaluation grade;
constructing a neural network model;
inputting the training sample into the neural network model, and training the network model through SGD (stochastic gradient descent) and BP (back propagation) algorithms to obtain a student experiment time prediction model;
inputting the test sample into the student experiment time prediction model for testing;
acquiring the last experiment record of the last experiment course of the student, and sequentially predicting the time of each experiment after the last experiment according to the student experiment time prediction model;
when the predicted experiment time of each experiment after the last experiment is accumulated in sequence, if the accumulated predicted time at the Nth experiment after the last experiment exceeds the time limit of the current experiment course and the accumulated predicted time at the (N-1)th experiment after the last experiment is less than the time limit, recommending the N-1 experiments after the last experiment to the student.
2. The neural network-based intelligent experiment process recommendation method of claim 1, wherein the comprehensive evaluation data further comprises a student category, a course category, a term category and a class category, the student category being undergraduate or specialist, the course category being a compulsory course or an elective course, the term category being the first term or the second term, and the class category being a special class or a general class.
3. The neural network-based intelligent experimental process recommendation method of claim 1, wherein the ordinary work achievement and the ordinary test achievement are subjected to standardized processing, and the ordinary work difficulty, the ordinary attendance achievement and the ordinary test difficulty are subjected to normalized processing.
4. The intelligent neural network-based experimental process recommendation method of claim 1, wherein the building of the neural network model comprises:
initializing a neural network, defining the number of hidden layer units, and designating the input size;
using the ReLU activation function, a Dropout layer is defined, specifying that 10% of neurons are discarded;
defining an output layer of the neural network, designating the number of neurons of the output layer as 1 and designating sigmoid as an activation function.
5. The intelligent neural network-based experimental process recommendation method as claimed in claim 1, wherein the training samples are input into the neural network model, and the network model is trained through SGD and BP algorithms to obtain a student experimental time prediction model, and the method comprises the following steps:
multiplying the eigenvalue matrix and the weight matrix of the training sample, and inputting the result into the neural network model for forward propagation to obtain a predicted value;
back propagation is carried out on the neural network model through an SGD algorithm and a BP algorithm so as to update the weight matrix;
performing a plurality of rounds of training on the neural network model to update the weight matrix a plurality of times;
and constructing the student experiment time-use prediction model according to the weight matrix after multiple updates.
6. The intelligent neural network-based experimental process recommendation method of claim 5, wherein the SGD algorithm is RMSProp, the back-propagated loss function is a cross-entropy loss function, and the evaluation function is MAE.
7. An intelligent neural network-based experimental process recommendation system, comprising:
the data acquisition module (10) is used for acquiring comprehensive evaluation data of students and the completion time of each experiment in the historical experiment courses of the students; the comprehensive evaluation data comprises a normal work score, normal work difficulty, normal attendance score, normal test difficulty, learning attitude evaluation grade and student level evaluation grade;
a sample construction module (20) for constructing data samples regarding the comprehensive evaluation data and the completion time, extracting training samples and test samples from the data samples;
a network construction module (30) for constructing a neural network model;
the model construction module (40) is used for inputting the training samples into the neural network model, training the network model through SGD and BP algorithms, and obtaining a student experiment time prediction model;
the model testing module (50) is used for inputting the testing sample into the student experiment time prediction model for testing;
the model prediction module (60) is used for acquiring the last experiment record of the last experiment course of the student and sequentially predicting the experiment time of each experiment after the last experiment according to the student experiment time prediction model;
and the experiment recommending module (70) is used for sequentially accumulating the predicted experiment time of each experiment after the last experiment, and recommending the N-1 experiments after the last experiment to the student if the accumulated predicted time at the Nth experiment after the last experiment exceeds the time limit of the current experiment course and the accumulated predicted time at the (N-1)th experiment after the last experiment is less than the time limit.
8. A computer-readable storage medium, characterized in that it stores a computer program which, when read and executed by a processor, implements the method according to any one of claims 1-6.
CN201911355621.0A 2019-12-25 2019-12-25 Intelligent experiment process recommendation method and system based on neural network Active CN111191833B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911355621.0A CN111191833B (en) 2019-12-25 2019-12-25 Intelligent experiment process recommendation method and system based on neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911355621.0A CN111191833B (en) 2019-12-25 2019-12-25 Intelligent experiment process recommendation method and system based on neural network

Publications (2)

Publication Number Publication Date
CN111191833A CN111191833A (en) 2020-05-22
CN111191833B true CN111191833B (en) 2023-04-18

Family

ID=70709350

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911355621.0A Active CN111191833B (en) 2019-12-25 2019-12-25 Intelligent experiment process recommendation method and system based on neural network

Country Status (1)

Country Link
CN (1) CN111191833B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112528158B (en) * 2020-12-24 2023-08-11 北京百度网讯科技有限公司 Course recommendation method, device, equipment and storage medium
CN114973881B (en) * 2022-03-31 2023-06-23 中国矿业大学 Intelligent scoring electronic technology experiment comprehensive teaching system
CN117455389B (en) * 2023-10-10 2024-05-28 北京华普亿方科技集团股份有限公司 Vocational training management platform based on artificial intelligence
CN117688248B (en) * 2024-02-01 2024-04-26 安徽教育网络出版有限公司 Online course recommendation method and system based on convolutional neural network

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08310746A (en) * 1995-05-18 1996-11-26 Fujitec Co Ltd Learning method for waiting time predicting neural net
CN101877077A (en) * 2009-11-25 2010-11-03 天津工业大学 Time series predicting model
CN105279288A (en) * 2015-12-04 2016-01-27 深圳大学 Online content recommending method based on deep neural network
CN107481170A (en) * 2017-08-18 2017-12-15 深圳市华第时代科技有限公司 A kind of course recommends method, apparatus, curricula-variable server and storage medium
CN109146174A (en) * 2018-08-21 2019-01-04 广东恒电信息科技股份有限公司 A kind of elective course accurate recommendation method based on result prediction
CN109241431A (en) * 2018-09-07 2019-01-18 腾讯科技(深圳)有限公司 A kind of resource recommendation method and device
CN110378818A (en) * 2019-07-22 2019-10-25 广西大学 Personalized exercise recommended method, system and medium based on difficulty

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012138959A2 (en) * 2011-04-07 2012-10-11 Denley Tristan Course recommendation system and method

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08310746A (en) * 1995-05-18 1996-11-26 Fujitec Co Ltd Learning method for waiting time predicting neural net
CN101877077A (en) * 2009-11-25 2010-11-03 天津工业大学 Time series predicting model
CN105279288A (en) * 2015-12-04 2016-01-27 深圳大学 Online content recommending method based on deep neural network
CN107481170A (en) * 2017-08-18 2017-12-15 深圳市华第时代科技有限公司 A kind of course recommends method, apparatus, curricula-variable server and storage medium
CN109146174A (en) * 2018-08-21 2019-01-04 广东恒电信息科技股份有限公司 A kind of elective course accurate recommendation method based on result prediction
CN109241431A (en) * 2018-09-07 2019-01-18 腾讯科技(深圳)有限公司 A kind of resource recommendation method and device
CN110378818A (en) * 2019-07-22 2019-10-25 广西大学 Personalized exercise recommended method, system and medium based on difficulty

Also Published As

Publication number Publication date
CN111191833A (en) 2020-05-22

Similar Documents

Publication Publication Date Title
CN111191833B (en) Intelligent experiment process recommendation method and system based on neural network
CN111460249B (en) Personalized learning resource recommendation method based on learner preference modeling
CN110264091B (en) Student Cognitive Diagnosis Method
Kadoić et al. Integrating the DEMATEL with the analytic network process for effective decision-making
Pratt et al. On the nature and discovery of structure
CN114519143B (en) Training method of course recommendation model, course recommendation method and device
CN111291940B (en) Student class dropping prediction method based on Attention deep learning model
CN111444432A (en) Domain-adaptive deep knowledge tracking and personalized exercise recommendation method
Hoiles et al. Bounded off-policy evaluation with missing data for course recommendation and curriculum design
CN114722182A (en) Knowledge graph-based online class recommendation method and system
CN114299349B (en) Crowdsourcing image learning method based on multi-expert system and knowledge distillation
CN114781710B (en) Knowledge tracking method for difficulty characteristics of comprehensive learning process and question knowledge points
CN114429212A (en) Intelligent learning knowledge ability tracking method, electronic device and storage medium
CN112084330A (en) Incremental relation extraction method based on course planning meta-learning
CN116596582A (en) Marketing information prediction method and device based on big data
CN114201684A (en) Knowledge graph-based adaptive learning resource recommendation method and system
CN115169449A (en) Attribute-level emotion analysis method, system and storage medium based on contrast learning and continuous learning
CN112860847A (en) Video question-answer interaction method and system
CN116561260A (en) Problem generation method, device and medium based on language model
CN112396092B (en) Crowdsourcing developer recommendation method and device
CN114154839A (en) Course recommendation method based on online education platform data
CN113283488A (en) Learning behavior-based cognitive diagnosis method and system
CN112818100A (en) Knowledge tracking method and system fusing question difficulty
CN116484868A (en) Cross-domain named entity recognition method and device based on diffusion model generation
CN113239699B (en) Depth knowledge tracking method and system integrating multiple features

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: Room 01, 2 / F, building A14, phase 1.1, Wuhan National Geospatial Information Industrialization Base (New Area), no.6, Beidou Road, Donghu New Technology Development Zone, Wuhan City, Hubei Province, 430000

Applicant after: Wuhan Meihe Yisi Digital Technology Co.,Ltd.

Address before: No.01-6, 1st floor, building 6, international enterprise center, special 1, Guanggu Avenue, Donghu New Technology Development Zone, Wuhan City, Hubei Province, 430000

Applicant before: HUBEI MEIHE YISI EDUCATION TECHNOLOGY Co.,Ltd.

GR01 Patent grant
GR01 Patent grant