CN117557426B - Homework data feedback method and learning evaluation system based on intelligent question bank


Info

Publication number: CN117557426B
Application number: CN202311675200.2A
Authority: CN (China)
Prior art keywords: answer, behavior, logic, data, target
Legal status: Active (granted)
Other versions: CN117557426A
Other languages: Chinese (zh)
Inventors: 黎国权, 朱晖
Current Assignee: Guangzhou Xiaoma Zhixue Technology Co ltd
Original Assignee: Guangzhou Xiaoma Zhixue Technology Co ltd
Application filed by Guangzhou Xiaoma Zhixue Technology Co ltd
Priority to CN202311675200.2A
Publication of application: CN117557426A
Publication of grant: CN117557426B

Classifications

    • G06Q50/205: Education administration or guidance (G Physics > G06 Computing; Calculating or Counting > G06Q Information and communication technology [ICT] specially adapted for administrative, commercial, financial, managerial or supervisory purposes > G06Q50/00 ICT specially adapted for implementation of business processes of specific business sectors > G06Q50/10 Services > G06Q50/20 Education)
    • G06Q10/063: Operations research, analysis or management (G Physics > G06 Computing; Calculating or Counting > G06Q Information and communication technology [ICT] specially adapted for administrative, commercial, financial, managerial or supervisory purposes > G06Q10/00 Administration; Management > G06Q10/06 Resources, workflows, human or project management; enterprise or organisation planning; enterprise or organisation modelling)


Abstract

The embodiments of the application provide a homework data feedback method and a learning evaluation system based on an intelligent question bank, which achieve accurate analysis of, and feedback on, a student's homework answering process by constructing a plurality of answering behavior logic chain networks. The method monitors and records the data generated by students interacting with the intelligent question bank, extracts answering behaviors and first knowledge application labels, generates second knowledge application labels by combining them with the answer results, fuses associated answer trajectories into target answer behavior data, analyzes the logic association attributes between answering behaviors, and selects the most representative logic association attributes to load into target logic chain data. Finally, the weak knowledge paths contained in the homework answering process data can be determined and targeted information feedback provided, revealing students' knowledge application patterns, comprehension abilities and potential learning difficulties, and effectively improving learning efficiency and quality.

Description

Homework data feedback method and learning evaluation system based on intelligent question bank
Technical Field
The application relates to the technical field of computer information, and in particular to a homework data feedback method and a learning evaluation system based on an intelligent question bank.
Background
In a conventional educational system, students usually complete homework on paper or through an electronic homework system, and teachers must invest a great deal of time in marking and analyzing that homework to understand how students are learning. This approach has significant limitations: the teacher's feedback may not be sufficiently immediate and often lacks pertinence; students may not learn about their own errors and deficiencies in time; and it is difficult for a teacher to accurately grasp each student's level of knowledge mastery and learning progress.
With the development of information technology, learning platforms based on intelligent information technology have emerged. Such platforms can automatically record students' answering behaviors, provide instant feedback, and reveal students' learning patterns and difficulties through data analysis. However, while these platforms have advantages in processing large amounts of data, they still face challenges in understanding and analyzing the logic chains of complex answering behaviors. For example, how to accurately relate answering behavior to knowledge point tags, how to identify and strengthen a student's answering logic chain, and how to effectively provide personalized learning interventions are all issues that remain to be addressed in the prior art.
Disclosure of Invention
In order to at least overcome the above defects in the prior art, the application aims to provide a homework data feedback method and a learning evaluation system based on an intelligent question bank, which can not only accurately extract and analyze students' answering behaviors but also construct and optimize answering behavior logic chain networks so as to generate target answer behavior data and target logic chain data. In this way, the student learning process can be understood more deeply, and more accurate and effective learning support can be provided to students and teachers.
In a first aspect, the present application provides a homework data feedback method based on an intelligent question bank, applied to a learning evaluation system, the method comprising:
Utilizing a plurality of answer behavior logic chain networks, carrying out answer behavior extraction and logic chain analysis on the homework answer process data of the target student user according to any answer behavior logic chain network, and generating corresponding answer behavior extraction data and logic chain analysis data; the operation answering process data are operation answering process data which are recorded in a monitoring mode and are related to the intelligent question bank data; the answer behavior extraction data are used for reflecting: the operation answering process data comprises first knowledge application labels which correspond to operation answering behaviors related to the intelligent question library data respectively; the logic chain analysis data is used for reflecting: logic association attributes between every two job answering behaviors;
Generating second knowledge application labels which are respectively corresponding to all answer tracks included by all operation answer behaviors according to the first knowledge application labels respectively corresponding to all operation answer behaviors and answer results respectively in the operation answer process data, and generating target answer behavior data by merging and outputting all associated answer tracks which are matched with the second knowledge application labels into corresponding target answer behaviors;
performing answer track correlation analysis on each generated operation answer behavior and each target answer behavior respectively, determining a target answer behavior combination corresponding to each logic association attribute, selecting one logic association attribute from each logic association attribute associated with the same target answer behavior combination, and loading the logic association attribute into target logic chain data to generate target logic chain data;
And determining a weak knowledge path contained in the homework answering process data according to the target answering behavior data and the target logic chain data, and feeding back information to the target student user based on the weak knowledge path.
In a possible implementation manner of the first aspect, the answer behavior extraction data and the answer behavior logic chain network have unique mapping logic association attributes, each answer behavior extraction data includes at least one job answer behavior, a first knowledge application tag of each job answer behavior and a corresponding first tag confidence level;
The generating a second knowledge application label corresponding to each answer track included in each operation answer behavior according to the first knowledge application label corresponding to each operation answer behavior and the answer result in the operation answer process data, includes:
For each answer track included in the operation answer process data, generating a reference knowledge application tag and a corresponding reference tag confidence coefficient of the answer track in each answer behavior extraction data according to an answer result of any answer track in the operation answer process data and a first knowledge application tag and a corresponding first tag confidence coefficient of each operation answer behavior in each answer behavior extraction data;
And, based on the reference knowledge application label and the corresponding reference label confidence of the answer track in each answer behavior extraction data and the answer behavior weight of the answer behavior logic chain network corresponding to each answer behavior extraction data, selecting one reference knowledge application label from the generated reference knowledge application labels to output as a second knowledge application label of the answer track.
In a possible implementation manner of the first aspect, each answer behavior extraction data includes an answer behavior track and a non-answer behavior track, where the answer behavior track is an answer track included in each job answer behavior, the non-answer behavior track is an answer track that is outside each job answer behavior and exists in the job answer process data, a reference knowledge application label of the non-answer behavior track is a setting label, and a reference label confidence is a setting parameter value;
The generating a reference knowledge application label and a corresponding reference label confidence level of the answer track in each answer behavior extraction data according to the answer result of any answer track in the answer process data and the first knowledge application label and the corresponding first label confidence level of each answer behavior in each answer behavior extraction data comprises:
and for each answer behavior track included in each answer behavior extraction data in the generated plurality of answer behavior extraction data, respectively using a first knowledge application label and a first label confidence coefficient of the operation answer behavior to which the answer behavior track belongs as a reference knowledge application label and a reference label confidence coefficient of the answer behavior track.
In a possible implementation manner of the first aspect, the extracting, based on the reference knowledge application tag and the corresponding reference tag confidence level of the answer trajectory in each answer behavior, the answer behavior weight value of the answer behavior logic chain network corresponding to each answer behavior extraction data, and selecting one reference knowledge application tag from the generated reference knowledge application tags to output as the second knowledge application tag of the answer trajectory, includes:
Extracting reference knowledge application labels and corresponding reference label confidence levels in data based on the answer tracks in each answer behavior, carrying out weight fusion on the reference label confidence levels corresponding to the at least one reference knowledge application label respectively according to answer behavior weight values of an answer behavior logic chain network to which the at least one reference knowledge application label respectively belongs, and updating the reference label confidence levels of the reference knowledge application labels;
And selecting one reference knowledge application label with updated reference label confidence meeting a set label selection rule from the generated reference knowledge application labels as a second knowledge application label of the answer track.
In a possible implementation manner of the first aspect, the step of determining the answer behavior weight of the answer behavior logic chain network includes:
for each answer behavior logic chain network, carrying out answer behavior extraction on each first answer sample data in a priori configured first answer sample data sequence by utilizing the answer behavior logic chain network, and generating answer behavior sample extraction data of each first answer sample data; the first answer sample data sequence comprises a plurality of first answer sample data and answer behavior labeling data respectively corresponding to the plurality of first answer sample data;
Respectively comparing answer behavior sample extraction data of each first answer sample data with answer behavior labeling data, and determining answer behavior extraction effective values of the answer behavior logic chain network based on comparison results; the effective value extracted by the answering behavior is used for reflecting the accuracy of the answering behavior extraction of the answering behavior logic chain network;
And respectively extracting effective values from answer behaviors corresponding to the answer behavior logic chain networks, and performing regularized conversion to generate answer behavior weights of each answer behavior logic chain network.
In a possible implementation manner of the first aspect, the performing answer track correlation analysis on each generated job answer behavior and each target answer behavior respectively, determining a target answer behavior combination corresponding to each logic association attribute, and selecting one logic association attribute from each logic association attribute associated with the same target answer behavior combination to load into target logic chain data, to generate target logic chain data includes:
For each generated logic association attribute, carrying out answer track correlation analysis on two job answer behaviors corresponding to the logic association attribute and each target answer behavior respectively to generate a corresponding target answer behavior combination and correlation result; the correlation result is used for reflecting correlation parameter values of the two operation answering behaviors and the target answering behaviors respectively;
For each generated target answer behavior combination, the following operations are respectively executed to obtain target logic chain data:
When a plurality of logic association attributes are associated by a target answer behavior combination, selecting one logic association attribute from the plurality of logic association attributes according to the correlation result of each logic association attribute, and loading the logic association attribute into the target logic chain data.
In a possible implementation manner of the first aspect, the performing, by using the two job answer behaviors corresponding to the logic association attribute, answer track correlation analysis on the two job answer behaviors and each target answer behavior, and generating a corresponding target answer behavior combination and correlation result includes:
For two job answering behaviors corresponding to each logic association attribute, determining target answering behaviors which are consistent with the set answering result association logic association attribute with the job answering behaviors and have the same target answering behaviors based on the answering results of the job answering behaviors in the job answering process data and the first knowledge application labels of the job answering behaviors;
Analyzing the answer track correlation between the operation answer behavior and the target answer behavior, and determining an answer track correlation result of the operation answer behavior, wherein the answer track correlation result is used for reflecting whether the answer track correlation exists between the operation answer behavior and the target answer behavior;
determining the target answer behavior combination based on the target answer behaviors respectively corresponding to the two operation answer behaviors;
And determining a correlation result of the logic correlation attribute based on the number of answer track correlations between the two job answer behaviors and the target answer behaviors respectively corresponding to the two job answer behaviors.
In a possible implementation manner of the first aspect, the logic chain analysis data and the answer behavior logic chain network have unique mapping logic association attributes, and each logic chain analysis data includes logic association attributes between every two job answer behaviors and corresponding logic association attribute confidence degrees;
when a target answer behavior is combined and correlated with a plurality of logic correlation attributes, selecting one logic correlation attribute from the plurality of logic correlation attributes to be loaded into the target logic chain data according to the correlation result of each logic correlation attribute, wherein the method comprises the following steps:
When a target answer behavior combination associates a plurality of logic association attributes, respectively carrying out weight fusion on logic association attribute confidence degrees respectively corresponding to logic association attributes belonging to the same logic association attribute through logic association weights of answer behavior logic chain networks corresponding to each logic association attribute and correlation results of each logic association attribute, and updating the logic association attribute confidence degrees of each logic association attribute;
and selecting one logic association attribute with updated logic association attribute confidence meeting a set attribute selection rule from the plurality of logic association attributes as a target logic association attribute of the target answer behavior combination, and loading the target logic association attribute into the target logic chain data.
In a possible implementation manner of the first aspect, the determining step of the logic association weight of the answer behavior logic chain network includes:
For each answer behavior logic chain network, performing logic chain analysis on each second answer sample data in a priori configured second answer sample data sequence by utilizing the answer behavior logic chain network, and generating logic association attribute sample extraction data of each second answer sample data; the second answer sample data sequence comprises a plurality of second answer sample data and logic association attribute labeling data respectively corresponding to the plurality of second answer sample data;
Respectively comparing the logic association attribute sample extraction data of each second answer sample data with the logic association attribute labeling data, and determining a logic chain analysis effective value of the answer behavior logic chain network based on a comparison result; the logic chain analysis effective value is used for reflecting the accuracy of the logic chain analysis performed by the answer behavior logic chain network;
And respectively carrying out regularized conversion on the logic chain analysis effective values respectively corresponding to the plurality of answering behavior logic chain networks to generate logic association weights of each answering behavior logic chain network.
In a second aspect, an embodiment of the present application further provides a learning evaluation system, where the learning evaluation system includes a processor and a machine-readable storage medium, the machine-readable storage medium stores a computer program, and the computer program is loaded and executed by the processor to implement the homework data feedback method based on the intelligent question bank of the first aspect.
According to the technical solution of any of the above aspects, the embodiments of the application aim to optimize the learning experience and improve educational efficiency through deep analysis of the student's homework answering process: the student's answering behaviors are automatically extracted and analyzed, logic chain analysis data are constructed, and target answer behavior data and target logic chain data are then generated. These data help reveal the student's knowledge usage patterns, comprehension abilities and potential learning difficulties. First, the student's homework answering process related to the intelligent question bank data is monitored and recorded, ensuring the quality of the collected data. Then, by comparing the answering behaviors with the contents of the intelligent question bank, first knowledge application labels are extracted to reflect the knowledge points the student used in solving each question. Next, second knowledge application labels matched to each answer trajectory are generated from the answer results, and the associated answer trajectories are fused and output to form target answer behavior data. Correlation analysis is then carried out between the homework answering behaviors and the target answer behaviors to determine the logic association attributes between every two homework answering behaviors. When a target answer behavior combination is associated with a plurality of logic association attributes, the most representative one is selected and loaded into the target logic chain data; this involves weight fusion and updating of the logic association attribute confidences, ensuring that the selected attribute best represents the target answer behavior combination. Finally, based on the target answer behavior data and the target logic chain data, the weak knowledge paths in the student's homework answering process can be determined, and customized information feedback can be provided to the student for these weak links. This personalized feedback mechanism allows students to recognize and overcome their learning obstacles faster, thereby maximizing learning efficiency. In this way, the embodiments of the application promote the effective allocation of educational resources by intelligently analyzing students' answering behaviors and knowledge application, help students build a solid knowledge structure, and ultimately improve overall educational quality.
Drawings
For a clearer description of the technical solutions of the embodiments of the present application, the accompanying drawings needed in the description of the embodiments are briefly introduced below. It should be understood that the following drawings illustrate only some embodiments of the present application and are therefore not to be considered limiting of the scope, and that a person of ordinary skill in the art can obtain other related drawings from these drawings without inventive effort.
FIG. 1 is a flow chart of a homework data feedback method based on an intelligent question bank according to an embodiment of the present application;
FIG. 2 is a schematic functional block diagram of a learning evaluation system for implementing the homework data feedback method based on the intelligent question bank according to an embodiment of the present application.
Detailed Description
The following description is presented to enable one of ordinary skill in the art to make and use the application and is provided in the context of a particular application and its requirements. It will be apparent to those having ordinary skill in the art that various changes can be made to the disclosed embodiments and that the general principles defined herein may be applied to other embodiments and applications without departing from the principles and scope of the application. Therefore, the present application is not limited to the described embodiments, but is to be accorded the widest scope consistent with the claims.
Referring to FIG. 1, the application provides a homework data feedback method based on an intelligent question bank, which comprises the following steps.
Step S110, using a plurality of answering behavior logic chain networks, performing answer behavior extraction and logic chain analysis on the homework answering process data of the target student user with each answering behavior logic chain network, and generating corresponding answer behavior extraction data and logic chain analysis data.
In this embodiment, the homework answering process data is the homework answering process data, related to the intelligent question bank data, that is recorded through monitoring. The answer behavior extraction data is used to reflect the first knowledge application labels respectively corresponding to the homework answering behaviors, related to the intelligent question bank data, that are contained in the homework answering process data. The logic chain analysis data is used to reflect the logic association attributes between every two homework answering behaviors.
For example, consider an offline education platform equipped with an intelligent analysis system for tracking and evaluating each student user's homework answering activity. Assume that a student, Bob, is working through mathematics homework drawn from the intelligent question bank data on this platform. The homework involves different types of algebra questions, each of which requires student Bob to apply a specific knowledge point.
When student Bob begins working on a question, his answering behavior can be tracked through the monitored and recorded homework answering process data (e.g., the number of attempts, the selected answers, the time spent solving the question, and so on). For example, when solving a quadratic equation, student Bob first tries completing the square but does not get the correct result, and then uses the root formula to solve the problem successfully.
On this basis, the plurality of answering behavior logic chain networks analyze student Bob's homework answering process data and generate answer behavior extraction data reflecting the first knowledge application labels corresponding to his behaviors (such as "completing the square" and "root-finding formula"). At the same time, logic chain analysis data is generated to reflect the logic association attributes between the answering behaviors (such as "from completing-the-square failure to root-formula success").
To explore this process with a more detailed example, and to simplify the discussion, attention will be directed to one particular knowledge point in the mathematics question bank: solving quadratic equations in one unknown. Assuming that the target student user is still student Bob, the following illustrates the case where student Bob solves the related mathematics problems on the intelligent education platform.
Answer behavior extraction and logic chain analysis
Scenario example: student Bob solves a quadratic equation in one unknown.
1. Monitoring and recording (homework answering process data):
- Student Bob logs in to the intelligent education platform.
- He starts to solve a series of questions on solving quadratic equations.
- The time of each answer submission, the options selected and the written solution steps of student Bob are recorded.
- For example, for the question "solve the equation x² - 5x + 6 = 0", student Bob first tries factorization without success; he then uses the root formula and gets the correct answer.
2. Extracting answer behaviors:
- Student Bob's solution steps and final answer are analyzed.
- The answering behavior logic chain network recognizes that student Bob first tried factorization (labeled "try factorization") and then turned to the root formula (labeled "apply root formula").
- These labels constitute the first knowledge application tags, reflecting student Bob's attempts at, and mastery of, the different methods of solving the problem.
3. Logic chain analysis:
- Next, the logical flow by which student Bob switches from one method to another is analyzed.
- For example, it is noted that when student Bob encounters an equation that cannot be factorized, he tends to turn to the root formula.
- This logical flow (from factorization failure to root-formula success) is recorded as a logic association attribute, forming the logic chain analysis data.
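To make the shape of this step concrete, the following is a minimal Python sketch, not part of the patent: all names such as AnswerBehavior and derive_logic_associations are hypothetical, and the rule "each transition between consecutive labeled behaviors becomes a candidate logic association attribute" is only one plausible reading of the example above.

```python
# Hypothetical sketch: deriving candidate logic association attributes from the
# ordered, labeled answering behaviors of one question. Not the patent's actual
# implementation; names and structure are assumptions for illustration.
from dataclasses import dataclass
from typing import List, Dict

@dataclass
class AnswerBehavior:
    label: str    # first knowledge application label, e.g. "try factorization"
    outcome: str  # e.g. "failure", "success", "learning"

def derive_logic_associations(behaviors: List[AnswerBehavior]) -> List[Dict[str, str]]:
    """Record each transition between consecutive behaviors as a candidate
    logic association attribute (e.g. factorization failure -> root formula success)."""
    return [
        {"from": f"{prev.label} ({prev.outcome})", "to": f"{curr.label} ({curr.outcome})"}
        for prev, curr in zip(behaviors, behaviors[1:])
    ]

bob_question_1 = [
    AnswerBehavior("try factorization", "failure"),
    AnswerBehavior("apply root formula", "success"),
]
print(derive_logic_associations(bob_question_1))
# [{'from': 'try factorization (failure)', 'to': 'apply root formula (success)'}]
```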
Specific scenario example:
Student Bob completes the following questions in one homework session on the intelligent education platform:
Question 1: solve the equation x² - 5x + 6 = 0
- Student Bob's answering process: he first tries factorization without success; he then applies the root formula and obtains the correct answers x = 2 and x = 3.
- First knowledge application labels: try factorization (failure), apply root formula (success).
Question 2: solve the equation x² - 4x + 4 = 0
- Student Bob's answering process: he directly applies factorization and successfully obtains x = 2.
- First knowledge application labels: apply factorization (success).
Question 3: solve the equation x² + x - 6 = 0
- Student Bob's answering process: his first factorization attempt fails; he then reviews the hint, retries factorization, and finally succeeds.
- First knowledge application labels: try factorization (failure), view hint (learning), try factorization again (success).
By analyzing the transitions between these tags, logical association attributes are created:
- When factorization fails, student Bob may turn to the root formula (logic association attribute A).
- When he views the hint after a factorization failure, student Bob may try factorization again and succeed (logic association attribute B).
The above process generates detailed answer behavior extraction data and logic chain analysis data. These data help reveal the strategies student Bob adopts when solving the questions and the knowledge points at which he may need additional assistance. In this way, the offline education platform can customize learning materials for student Bob to help him consolidate his conceptual understanding and improve his problem-solving skills.
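As a purely illustrative sketch of the resulting data shapes (field names and confidence values are assumptions, not the patent's data model), the answer behavior extraction data and logic chain analysis data for student Bob's three questions could be represented as plain Python structures like the following.

```python
# Illustrative data shapes only; labels follow the example above, and the
# confidence values are invented placeholders.
answer_behavior_extraction_data = {
    "Question 1 (x^2 - 5x + 6 = 0)": ["try factorization (failure)",
                                      "apply root formula (success)"],
    "Question 2 (x^2 - 4x + 4 = 0)": ["apply factorization (success)"],
    "Question 3 (x^2 + x - 6 = 0)":  ["try factorization (failure)",
                                      "view hint (learning)",
                                      "try factorization again (success)"],
}

logic_chain_analysis_data = [
    # logic association attribute A
    {"from": "try factorization (failure)", "to": "apply root formula (success)",
     "confidence": 0.8},
    # logic association attribute B
    {"from": "view hint (learning)", "to": "try factorization again (success)",
     "confidence": 0.7},
]
```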
Step S120, generating second knowledge application labels respectively corresponding to the answer trajectories included in each homework answering behavior according to the first knowledge application labels respectively corresponding to the homework answering behaviors and their answer results in the homework answering process data, and generating target answer behavior data by merging the associated answer trajectories that match in their second knowledge application labels into corresponding target answer behaviors.
For example, second knowledge application labels can be generated from the first knowledge application label corresponding to each of student Bob's answering behaviors together with the actual answer results. These second knowledge application labels describe the answer trajectories in further detail, e.g. "completing-the-square attempt failed" and "root formula applied successfully".
The associated answer trajectories are then matched and merged into corresponding target answer behaviors. For example, if many students turn to the root formula and solve the question successfully after completing the square is unsuccessful, these answer trajectories can be identified as an effective logic chain.
Illustratively, in this step the knowledge applied behind each of student Bob's answering behaviors can be analyzed, second knowledge application tags can be generated, and similar answer trajectories can be fused to form the target answer behavior data. The purpose is to build a more comprehensive and deeper model of learner behavior, thereby enabling more accurate educational intervention and feedback.
For example, during student Bob's mathematics homework, he tries completing the square (one first knowledge application tag) when solving a quadratic equation and then, after failing, uses the root formula and gets the correct answer (another first knowledge application tag).
More specific second knowledge application labels can be given to these two answering behaviors respectively. For example, the completing-the-square attempt may be marked as "not yet mastered" or "misapplied", while the successful use of the root formula may be marked as "root formula correctly applied".
The answer data of all students can then be examined to find answer trajectories similar to student Bob's. If many students successfully use the root formula after trying completing the square and failing, these similar answer trajectories (i.e. trajectories with the same or similar second knowledge application labels) are matched and fused.
Through fusion, a common, effective answering pattern can be identified; this is the target answer behavior. In this example, the target answer behavior may be described as "turn to the root formula and succeed when completing the square is unsuccessful". This target answer behavior data indicates that the root formula is a reliable alternative solution strategy for students whose first attempt runs into difficulty.
This process can help understand the common patterns of behavior of students in solving certain types of problems and reveal which strategies ultimately lead to success. By analyzing the target answering behavior, the platform can design targeted teaching content and exercises to help students strengthen knowledge points that they may not have fully mastered.
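The step just described can be sketched in Python as follows. This is only one plausible reading under simplifying assumptions (the second label is the first label refined by the answer result, and trajectories sharing the same label sequence across students are fused); the names second_label and fuse_target_behaviors are hypothetical, not taken from the patent.

```python
# Hedged sketch of Step S120: refine first knowledge application labels with
# answer results into second labels, then fuse trajectories that share the same
# second-label sequence into target answer behaviors. Assumptions throughout.
from collections import Counter
from typing import List, Tuple

def second_label(first_label: str, result: str) -> str:
    refinement = {"success": "correctly applied", "failure": "not mastered"}
    return f"{first_label}: {refinement.get(result, result)}"

def fuse_target_behaviors(trajectories: List[List[Tuple[str, str]]], min_students: int = 2):
    """Return the second-label sequences shared by at least `min_students`
    trajectories; each such sequence is treated as one target answer behavior."""
    patterns = Counter(
        tuple(second_label(lbl, res) for lbl, res in traj) for traj in trajectories
    )
    return [list(pattern) for pattern, count in patterns.items() if count >= min_students]

trajectories = [
    [("completing the square", "failure"), ("root formula", "success")],  # Bob
    [("completing the square", "failure"), ("root formula", "success")],  # another student
    [("factorization", "success")],
]
print(fuse_target_behaviors(trajectories))
# [['completing the square: not mastered', 'root formula: correctly applied']]
```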
Step S130, performing answer track correlation analysis on each generated operation answer behavior and each target answer behavior respectively, determining a target answer behavior combination corresponding to each logic association attribute, selecting one logic association attribute from logic association attributes associated with the same target answer behavior combination, and loading the logic association attribute into target logic chain data to generate target logic chain data.
For example, each of student Bob's homework answering behaviors can now be analyzed for answer trajectory correlation with the target answer behaviors. It can be determined whether student Bob's answering behaviors match a certain target answer behavior combination, and the logic association attribute that best reflects the answering pattern of the student group is selected from that combination and loaded into the target logic chain data.
Illustratively, based on student Bob's situation, how to perform correlation analysis between each generated homework answering behavior and the target answer behaviors, and how to determine and select the logic association attributes, is explained in detail below with specific examples:
Suppose that student Bob shows the following answering behaviors in a series of mathematics questions:
- After trying completing the square and failing, he usually turns to the root formula.
- When he uses the root formula directly, he usually obtains the correct result.
Answer behavior extraction data are established for these behaviors, including "completing-the-square failure" and "root formula success".
Next, the logic association attributes are determined:
- Logic association attribute A: student Bob turns to the root formula after a factorization failure.
- Logic association attribute B: student Bob directly adopts the root formula and successfully solves the problem.
Both logic association attributes point to the same target answer behavior: effectively solving the question using the root formula.
One of these two logic association attributes is now selected to represent student Bob's typical answering pattern.
If the analysis finds that most students turn to the root formula when they cannot solve the equation by factorization, logic association attribute A may be chosen as the main logic chain.
Target logic chain data is then created accordingly, describing student Bob's most typical answering path: from factorization failure to successful application of the root formula. This logic chain becomes the basis on which the platform identifies and suggests improvement strategies.
Specific examples:
Suppose student Bob has completed the following questions:
Question 1: x² - 5x + 6 = 0 (factorization failed, root formula succeeded)
Question 2: x² - 4x + 4 = 0 (factorization succeeded)
Question 3: x² + x - 6 = 0 (factorization failed, succeeded after a hint)
It can now be seen that Question 1 and Question 3 have in common that, after a factorization failure, student Bob tried another method and eventually succeeded in solving the problem. Note also that in Question 1 student Bob turned to the root formula without external assistance, while in Question 3 he needed a hint to solve the question successfully.
Based on this information, logic association attribute A (turning to the root formula after a factorization failure) can be chosen as the target logic chain, since it shows student Bob's adaptive strategy when he does not rely on external assistance. In this way, the education platform can design personalized teaching resources for student Bob, for example providing more exercises and lessons on the root formula, while encouraging him to find solutions on his own when he encounters a problem.
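A minimal sketch of the selection logic in Step S130 follows; the scoring scheme and the names build_target_logic_chain, combination and correlation are assumptions rather than the patent's concrete algorithm. The idea: when several candidate logic association attributes map to the same target answer behavior combination, only the one with the strongest correlation result is kept in the target logic chain data.

```python
# Hypothetical sketch: keep one logic association attribute per target answer
# behavior combination, chosen by the highest correlation result.
from typing import Dict, List

def build_target_logic_chain(candidates: List[Dict]) -> List[Dict]:
    best = {}  # target answer behavior combination -> best candidate seen so far
    for cand in candidates:
        key = cand["combination"]
        if key not in best or cand["correlation"] > best[key]["correlation"]:
            best[key] = cand
    return list(best.values())

candidates = [
    {"combination": "solve quadratic via root formula",
     "attribute": "A: factorization failure -> root formula success", "correlation": 0.9},
    {"combination": "solve quadratic via root formula",
     "attribute": "B: root formula applied directly", "correlation": 0.6},
]
print(build_target_logic_chain(candidates))
# attribute A is retained as the target logic chain entry for this combination
```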
Step S140, determining a weak knowledge path contained in the homework answering process data according to the target answer behavior data and the target logic chain data, and feeding back information to the target student user based on the weak knowledge path.
Finally, student Bob's weak knowledge path can be determined using the target answer behavior data and the target logic chain data. For example, it may be found that student Bob often makes mistakes when applying completing the square, which is where he needs reinforcement.
Based on these analysis results, personalized feedback can be provided to student Bob, for example recommending more completing-the-square practice problems or providing a teaching video on how to use the technique better. Such feedback is intended to help student Bob improve his learning process in a targeted manner and to make progress in his areas of weakness.
Illustratively, building on the foregoing examples, a concrete implementation can be described for determining weak knowledge paths and, based on these paths, feeding information back to the target student user (student Bob). This process can be divided into the following steps:
First, the target answer behavior data and target logic chain data generated while student Bob solves homework on the platform are used; they reveal which methods he has not mastered well enough and which difficulties he encounters when using a specific solution strategy. By comparing student Bob's successful and unsuccessful solution strategies, his weak knowledge points are identified. For example, if student Bob frequently fails when trying completing the square but frequently succeeds with the root formula, completing the square may be a weak point for him. These repeated failures can then be regarded as a weak knowledge path, which points to a specific knowledge point or skill that needs to be improved.
Based on the identified weak knowledge path, a personalized learning intervention scheme can be designed. For example, for student Bob's difficulty with completing the square, a series of video lessons specifically teaching the technique may be recommended; additional practice problems and mock tests may be provided to help him understand and practice the knowledge point; or one-to-one coaching from a teacher focused on his weaknesses may be arranged.
The customized resources and suggestions are then fed back to student Bob directly through the intelligent education platform, and his response to the suggestions and his subsequent exercise results can be tracked, ensuring that he improves in his weak areas.
For example, suppose that student Bob's performance on ten quadratic-equation problems involving completing the square is analyzed, and it is found that student Bob successfully solves the equation by completing the square only twice, while in the other eight cases he obtains the correct answer through the root formula. Based on these data, completing the square is judged to be a weak knowledge path for student Bob.
Thus, the following information is pushed to student Bob:
- Teaching video link: "Hi student Bob, we noticed that you have run into some challenges when applying completing the square. Here is a video tutorial that can help you understand and master it better."
- Personalized exercises: "After finishing the video, you can try these exercises picked for you to check your progress."
- Progress tracking: "We will track your progress to make sure you improve on this knowledge point."
In this way, student Bob is not only explicitly informed of his weak points but also given resources and tools to improve them. Over time his performance in this area should improve, and the platform will continuously monitor his progress to ensure that the support provided is effective.
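The step can be sketched as follows; the failure-rate threshold, the attempt-count data shape and the message wording are all illustrative assumptions rather than the patented criterion.

```python
# Hedged sketch of Step S140: flag knowledge points whose failure rate exceeds a
# threshold as weak knowledge paths, then assemble targeted feedback messages.
from typing import Dict, List, Tuple

def weak_knowledge_paths(attempts: Dict[str, Tuple[int, int]], threshold: float = 0.5) -> List[str]:
    """attempts maps a knowledge point to (failed attempts, total attempts)."""
    return [kp for kp, (failed, total) in attempts.items()
            if total > 0 and failed / total > threshold]

def feedback_messages(student: str, paths: List[str]) -> List[str]:
    return [f"Hi {student}, you seem to struggle with '{kp}'. "
            f"Here are a tutorial and targeted exercises for it." for kp in paths]

# matches the example above: 2 successes out of 10 completing-the-square attempts
bob_attempts = {"completing the square": (8, 10), "root formula": (0, 8)}
for message in feedback_messages("Bob", weak_knowledge_paths(bob_attempts)):
    print(message)
```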
Based on the above steps, the embodiments of the application aim to optimize the learning experience and improve educational efficiency through deep analysis of the student's homework answering process: the student's answering behaviors are automatically extracted and analyzed, logic chain analysis data are constructed, and target answer behavior data and target logic chain data are then generated. These data help reveal the student's knowledge usage patterns, comprehension abilities and potential learning difficulties. First, the student's homework answering process related to the intelligent question bank data is monitored and recorded, ensuring the quality of the collected data. Then, by comparing the answering behaviors with the contents of the intelligent question bank, first knowledge application labels are extracted to reflect the knowledge points the student used in solving each question. Next, second knowledge application labels matched to each answer trajectory are generated from the answer results, and the associated answer trajectories are fused and output to form target answer behavior data. Correlation analysis is then carried out between the homework answering behaviors and the target answer behaviors to determine the logic association attributes between every two homework answering behaviors. When a target answer behavior combination is associated with a plurality of logic association attributes, the most representative one is selected and loaded into the target logic chain data; this involves weight fusion and updating of the logic association attribute confidences, ensuring that the selected attribute best represents the target answer behavior combination. Finally, based on the target answer behavior data and the target logic chain data, the weak knowledge paths in the student's homework answering process can be determined, and customized information feedback can be provided to the student for these weak links. This personalized feedback mechanism allows students to recognize and overcome their learning obstacles faster, thereby maximizing learning efficiency. In this way, the embodiments of the application promote the effective allocation of educational resources by intelligently analyzing students' answering behaviors and knowledge application, help students build a solid knowledge structure, and ultimately improve overall educational quality.
In one possible implementation manner, the answer behavior extraction data and the answer behavior logic chain network have unique mapping logic association attributes, and each answer behavior extraction data comprises at least one operation answer behavior, a first knowledge application label of each operation answer behavior and a corresponding first label confidence.
Step S120 may include:
step S121, for each answer track included in the job answer process data, generating a reference knowledge application tag and a corresponding reference tag confidence level of the answer track in each answer behavior extraction data according to an answer result of any answer track in the job answer process data and a first knowledge application tag and a corresponding first tag confidence level of each job answer behavior in each answer behavior extraction data.
For example, continuing with student Bob's case: student Bob solves a quadratic equation in which the root formula succeeds after a completing-the-square attempt fails, generating data comprising two parts, a "completing-the-square attempt" and a "root formula application". Both parts carry a first knowledge application tag and a corresponding first tag confidence (indicating the confidence in the accuracy of the tag).
Student Bob solves several quadratic-equation questions over the whole homework session, using different strategies each time. For each answer trajectory, a reference knowledge application label and a corresponding reference label confidence can be generated from the answer result together with the first knowledge application label and first label confidence of each homework answering behavior. For example, if student Bob successfully uses the root formula after completing the square fails in most cases, the reference confidence of the "root formula application" label will be high.
Step S122, extracting a reference knowledge application label and a corresponding reference label confidence level in the data based on the answer track in each answer behavior, extracting an answer behavior weight of the answer behavior logic chain network corresponding to the data according to each answer behavior, and selecting one reference knowledge application label from the generated reference knowledge application labels to output as a second knowledge application label of the answer track.
For example, after completing a number of similar quadratic questions, student Bob exhibits a consistent behavior pattern: a failed completing-the-square attempt followed by a successful application of the root formula. A final second knowledge application label is then selected from the reference knowledge application labels and their confidences, taking into account the answer behavior weight of each answering behavior logic chain network. Here, because "root formula application" shows high confidence in student Bob's answer trajectory, it may be selected as the second knowledge application label.
This process is essentially the construction of a predictive model based on student answer behavior data that identifies and recommends efficient learning paths. By analyzing the results of the student's answers, their first knowledge application labels and their confidence levels, a reasonable second knowledge application label can be assigned to each answer trajectory. Such analysis helps the educational platform to better understand the learning needs of the student, thereby providing more personalized teaching resources and feedback, helping the student make progress in the challenging field.
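Concretely, the reference candidates gathered in step S121 for a single answer trajectory might be shaped like the list below; the network names, labels, confidences and weights are invented placeholders for illustration only. Step S122 (a weight-fusion sketch appears later in this description) would then fuse these confidences using the network weights and pick the winning label.

```python
# Illustrative shape of the per-network reference candidates for one answer
# trajectory (step S121). All values are invented placeholders.
reference_candidates = [
    {"network": "net_1", "label": "root formula: correctly applied",
     "confidence": 0.95, "network_weight": 0.6},
    {"network": "net_2", "label": "root formula: correctly applied",
     "confidence": 0.90, "network_weight": 0.3},
    {"network": "net_3", "label": "completing the square: not mastered",
     "confidence": 0.70, "network_weight": 0.1},
]
```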
In one possible implementation manner, each answer behavior extraction data includes an answer behavior track and a non-answer behavior track, wherein the answer behavior track is an answer track included in each operation answer behavior, the non-answer behavior track is an answer track which is outside each operation answer behavior and exists in the operation answer process data, a reference knowledge application label of the non-answer behavior track is a setting label, and a reference label confidence is a setting parameter value.
For example, in one specific homework scenario, student Bob tries to solve the equation x² - 6x + 9 = 0. He first tries to solve it by factorization but fails; he then successfully solves it using the root formula. Student Bob's process of solving this equation constitutes an answering behavior trajectory.
In addition to attempting to solve the problem, student Bob may perform other activities such as viewing course notes, suspending the homework to drink water, etc. These behaviors do not directly contribute to the solution results, but they exist in the homework answering process data and have some influence on the learning process of student Bob. Thus, it is possible to set a label for such behavior, such as "course review" or "rest", and give a preset confidence parameter value, for example set to 0.5, indicating a moderate level of confidence.
In step S121, for each answer behavior trace included in each answer behavior extraction data in the generated plurality of answer behavior extraction data, the first knowledge application tag and the first tag confidence level of the job answer behavior to which the answer behavior trace belongs are used as the reference knowledge application tag and the reference tag confidence level of the answer behavior trace, respectively.
Thus, for each attempt student Bob makes while solving the problem, each step he takes can be recorded and assigned a corresponding first knowledge application label and confidence. For example, the failed factorization attempt may be labeled "factorization - failure" (a first knowledge application label) with a confidence of 0.7, while the successful solution with the root formula may be labeled "root formula - success" (a first knowledge application label) with a confidence of 0.95.
For the non-answering behavior of Bob, such as looking at course notes, a "course review" (set label) and a corresponding preset confidence level, such as 0.5, are also assigned.
Through such classification and labeling, student Bob's learning behavior can be more fully understood, thereby providing more accurate personalized coaching and advice. For example, if it is noted that student Bob often views course notes after factoring, he may be advised to review the relevant knowledge points before starting the solution. At the same time, if student Bob is found to have a lower confidence in the factorization than the root formula, additional exercises may be recommended to enhance his skill in this area. By combining analysis of answering behavior tracks and non-answering behavior tracks, the education platform can provide more comprehensive and deep learning support for students.
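A small sketch of the labeling rule just described follows, under the assumption that answer behavior tracks inherit the label and confidence of the homework answering behavior they belong to, while non-answer behavior tracks receive a preset label and a preset confidence (the 0.5 here is only the example value mentioned above, not a prescribed constant).

```python
# Hypothetical sketch: assigning reference labels and confidences to answer
# behavior tracks vs. non-answer behavior tracks.
PRESET_NON_ANSWER_CONFIDENCE = 0.5  # the "setting parameter value" (example only)

def reference_label_for_track(track: dict):
    if track["type"] == "answer":
        # inherit the first knowledge application label and confidence of the
        # homework answering behavior this track belongs to
        return track["first_label"], track["first_confidence"]
    # non-answer behavior track: preset ("setting") label and parameter value
    return track["preset_label"], PRESET_NON_ANSWER_CONFIDENCE

tracks = [
    {"type": "answer", "first_label": "factorization - failure", "first_confidence": 0.7},
    {"type": "answer", "first_label": "root formula - success", "first_confidence": 0.95},
    {"type": "non_answer", "preset_label": "course review"},
]
print([reference_label_for_track(t) for t in tracks])
```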
In one possible implementation, step S122 may include:
Step S1221, extracting reference knowledge application labels and corresponding reference label confidence levels in the data of each answer behavior based on the answer track, performing weight fusion on the reference label confidence levels corresponding to the at least one reference knowledge application label according to the answer behavior weight values of the answer behavior logic chain network to which the at least one reference knowledge application label belongs, and updating the reference label confidence levels of the reference knowledge application labels.
Step S1222, selecting one reference knowledge application label with updated reference label confidence meeting the set label selection rule from the generated reference knowledge application labels as a second knowledge application label of the answer track.
For example, the mathematical homework activity of student Bob on an intelligent educational platform may be used as an example to explain how this process is performed.
Assume that student Bob presents different answering behavior and non-answering behavior tracks when completing a series of mathematical homework questions, each non-answering behavior track having a corresponding reference knowledge application tag and reference tag confidence. Then this information can be used to determine which are his weak points, providing him with targeted assistance.
For example, when solving the quadratic equation x² - 6x + 9 = 0, student Bob first tried completing the square (failed) and then successfully used the root formula. This yields two pieces of answer behavior extraction data, "completing the square - attempt failed" and "root formula - applied successfully", whose reference label confidences are, for example, 0.7 and 0.95 respectively.
The two confidences are then weight-fused based on the answer behavior weights of the answering behavior logic chain networks. If the weight associated with the completing-the-square attempt is low and the weight associated with the root formula is high, then after fusion the confidence of "root formula - applied successfully" may increase while the confidence of "completing the square - attempt failed" may decrease.
After weight fusion, updated reference label confidences are obtained for the answer behavior trajectory that student Bob displayed over the whole process of solving the equation.
A reference knowledge application label whose updated confidence meets the set label selection rule is then chosen from all generated reference knowledge application labels. For example, if the rule is to select the label with the highest confidence, "root formula - applied successfully" may be selected as the second knowledge application label for this answer trajectory.
Through the above steps, a student's learning patterns, strengths and weaknesses can be accurately identified, and personalized feedback and support can be provided accordingly. In student Bob's case, if he frequently fails at completing the square but is always successful with the root formula, the platform can recommend more learning resources on completing the square and provide further root-formula exercises to consolidate his strengths. In this way, student Bob can learn more efficiently, and the education platform can better meet his learning needs.
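The weight fusion and selection rule of steps S1221 and S1222 could look like the sketch below, under the assumptions that fusion is a weighted sum of confidences per label and that the selection rule keeps the label with the highest fused confidence; both choices are illustrative, not prescribed by the patent.

```python
# Hedged sketch of steps S1221/S1222: fuse reference label confidences with the
# answer behavior weights of their networks, then apply a "highest fused
# confidence" selection rule to pick the second knowledge application label.
from collections import defaultdict
from typing import Dict, List, Tuple

def select_second_label(candidates: List[Dict]) -> Tuple[str, float]:
    fused = defaultdict(float)
    for cand in candidates:
        fused[cand["label"]] += cand["network_weight"] * cand["confidence"]  # weight fusion
    return max(fused.items(), key=lambda item: item[1])  # selection rule

candidates = [
    {"label": "root formula: correctly applied", "confidence": 0.95, "network_weight": 0.6},
    {"label": "root formula: correctly applied", "confidence": 0.90, "network_weight": 0.3},
    {"label": "completing the square: not mastered", "confidence": 0.70, "network_weight": 0.1},
]
print(select_second_label(candidates))
# approximately ('root formula: correctly applied', 0.84)
```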
In one possible implementation manner, the step of determining the answer behavior weight of the answer behavior logic chain network includes:
Step A110, for each answer behavior logic chain network, carrying out answer behavior extraction on each first answer sample data in the first answer sample data sequence configured in advance by utilizing the answer behavior logic chain network, and generating answer behavior sample extraction data of each first answer sample data. The first answer sample data sequence comprises a plurality of first answer sample data and answer behavior labeling data respectively corresponding to the plurality of first answer sample data.
For example, the scenario in which student Bob completes a mathematics assignment on an intelligent education platform continues to be used for illustration.
Suppose the platform has a database containing a first answer sample data sequence for a plurality of students (including student Bob) solving a particular type of question. These data sequences record the students' detailed question-solving processes, and each has corresponding answer behavior labeling data, such as correct-answer labels, error type labels, and the like.
Answer behaviors are extracted from each first answer sample data item in the database to generate answer behavior sample extraction data. For example, when Bob solves a quadratic equation, he first attempts completing the square and, after failing, successfully uses the root formula. This behavior sequence is extracted and associated with his answer result.
And step A120, respectively comparing the answer behavior sample extraction data of each first answer sample data with the answer behavior labeling data, and determining an answer behavior extraction effective value of the answer behavior logic chain network based on a comparison result. And the answer behavior extraction effective value is used for reflecting the accuracy of answer behavior extraction of the answer behavior logic chain network.
For example, the answer behavior sample extraction data of student Bob and other students can be compared with the previously manually annotated answer behavior labeling data. If the extraction shows that student Bob first attempted completing the square, failed, and then successfully applied the root formula, and the labeling data reflects the same sequence, this answer behavior extraction may be considered accurate.
By comparing the extracted answer behavior data with the answer behavior labeling data, the extraction accuracy of the answer behavior logic chain network is evaluated and the answer behavior extraction effective value is determined. The effective value is a quantitative index used to measure the accuracy of answer behavior extraction.
And step A130, performing regularized conversion on the answer behavior extraction effective values respectively corresponding to the plurality of answer behavior logic chain networks, so as to generate the answer behavior weight of each answer behavior logic chain network.
For example, different answer behavior logic chain networks may target different question types or knowledge points, so their answer behavior extraction effective values may differ. To convert these effective values into weights that can be used for further analysis, a regularization process is performed: each effective value is converted into a value between 0 and 1 according to a certain algorithm (such as normalization), and this value is the answer behavior weight of the corresponding answer behavior logic chain network. The weights reflect how much the extraction accuracy of each logic chain network should count in subsequent data analysis and learning intervention.
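As a concrete illustration of steps A110-A130, the following Python sketch (a simplification under assumed data shapes, not the patent's algorithm) computes an extraction effective value as the fraction of sample sequences whose extracted behaviors match the labels, and then applies one possible regularized conversion, min-max normalization, to obtain answer behavior weights between 0 and 1. All function names and numbers are hypothetical.

```python
def extraction_effective_value(extracted, labeled):
    """Fraction of sample sequences whose extracted behaviors match the labeled ones."""
    if not labeled:
        return 0.0
    hits = sum(1 for e, l in zip(extracted, labeled) if e == l)
    return hits / len(labeled)

def normalize_to_weights(effective_values):
    """Min-max regularized conversion so each answer behavior weight lies in [0, 1]."""
    lo, hi = min(effective_values), max(effective_values)
    if hi == lo:                     # all networks equally accurate
        return [1.0] * len(effective_values)
    return [(v - lo) / (hi - lo) for v in effective_values]

# Hypothetical effective values for three answer behavior logic chain networks.
values = [0.62, 0.88, 0.75]
print(normalize_to_weights(values))   # -> approximately [0.0, 1.0, 0.5]
```

Min-max scaling is only one choice; any monotone mapping into [0, 1] would fit the "regularized conversion" described above.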
Through the above steps, an answer behavior weight can be determined for each answer behavior logic chain network, reflecting how accurately that network captures students' answer behaviors. This enables a more accurate assessment of a student's learning status and, accordingly, personalized learning advice. For example, if the answer behavior weight associated with completing-the-square questions is low, indicating that the answer behavior extraction accuracy for this part is not high, further training or adjustment of that logic chain network may be required to better support student Bob's learning.
In one possible implementation, step S130 may include:
Step S131, for each generated logic association attribute, performing answer track correlation analysis on two job answer behaviors corresponding to the logic association attribute and each target answer behavior respectively, and generating corresponding target answer behavior combination and correlation results. The correlation result is used for reflecting the correlation parameter values of the two job answering behaviors and the target answering behaviors.
For each generated target answer behavior combination, the following operations are respectively executed to obtain target logic chain data:
Step S132, when a target answer behavior is combined and correlated with a plurality of logic correlation attributes, selecting one logic correlation attribute from the plurality of logic correlation attributes according to the correlation result of each logic correlation attribute, and loading the selected logic correlation attribute into the target logic chain data.
For example, to further illustrate this technical content, consider the mathematics homework activity of student Bob on an intelligent education platform. Assuming that student Bob completes a series of questions related to solving quadratic equations, his answer behavior track on each question can be recorded. This data now needs to be analyzed to build a more complete logic chain of knowledge points and help other students learn this section better.
Student Bob solves two different quadratic equation problems: the first by completing the square and the second by the root formula. These two answer behaviors can be recorded and marked with their logic association attributes, such as "use completing the square" and "use root formula".
The system first analyzes the correlation between these two job answer behaviors of student Bob and the target answer behaviors (i.e., ideal answer behavior tracks) preset in a database. Through this analysis, a target answer behavior combination corresponding to each job answer behavior is generated, along with a parameter value reflecting the correlation between each job answer behavior and the target answer behaviors.
Next, suppose it is found that many students exhibit similar answer behavior trajectories when solving similar quadratic equations, such as turning to the root formula and succeeding after repeated failed attempts at completing the square. If a certain target answer behavior combination (such as successfully solving a quadratic equation) is associated with a plurality of logic association attributes (such as "try completing the square" and "use root formula"), one of them is selected according to the correlation result of each logic association attribute. For example, if the answer behaviors associated with "use root formula" are generally closer to the target answer behavior combination, this logic association attribute may be selected and loaded into the target logic chain data.
In this way, the established target logic chain data can more accurately guide students to adopt a more efficient strategy when encountering a quadratic equation (namely, using the root formula directly) rather than persisting with a less effective method.
By performing correlation analysis on the answer tracks, a target logic chain containing the most effective answer behaviors can be constructed. For student Bob, this means that if he encounters a similar topic in the future, better learning advice will be given based on the previous analysis results. For other students, the platform can use this analysis data to recommend strategies that best fit efficient learning paths, helping them improve their problem-solving ability and ultimately improving overall learning efficiency.
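The selection logic of steps S131-S132 can be sketched as follows in Python; this is a simplified illustration under assumed data shapes, where each target answer behavior combination keeps only the logic association attribute with the strongest correlation result. The dictionary layout, attribute names, and numbers are hypothetical.

```python
def build_target_logic_chain(combinations):
    """combinations: dict mapping a target answer behavior combination to a list of
    (logic_association_attribute, correlation_result) candidate pairs."""
    target_logic_chain = {}
    for combo, candidates in combinations.items():
        # Selection: keep the attribute with the highest correlation parameter value.
        attribute, _ = max(candidates, key=lambda pair: pair[1])
        target_logic_chain[combo] = attribute
    return target_logic_chain

combos = {
    "successfully solve quadratic equation": [
        ("try completing the square", 0.41),
        ("use root formula", 0.87),
    ],
}
print(build_target_logic_chain(combos))
# -> {'successfully solve quadratic equation': 'use root formula'}
```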
In one possible implementation, step S131 may include:
Step S1311, for the two job answer behaviors corresponding to each logic association attribute, determining, based on the answer result of each job answer behavior in the job answer process data and the first knowledge application label of the job answer behavior, the target answer behaviors whose associated logic association attribute is consistent with the set answer result and whose first knowledge application label is the same.
For example, this technical content relates to analyzing correlations between job answer behaviors in order to understand how different answer behaviors are correlated with each other. Again, the example of student Bob completing a mathematical assignment on the intelligent educational platform is used to illustrate these steps.
Assume that the current goal is to analyze student Bob's answering behavior in dealing with two mathematical questions with logically related attributes to see if there is a correlation between these behaviors.
Student Bob previously solved two logically related mathematics problems, such as "find the roots of a quadratic equation" and "analyze the graph of a quadratic function". Both job answer behaviors require some understanding of quadratic equations, and they have logic association attributes.
First, other target answer behaviors that accord with the logic association attribute associated with the preset answer result are determined. This means looking up student Bob's performance on other related questions that also involve quadratic equation knowledge points, and checking whether their first knowledge application labels are the same.
Step S1312, performing answer track correlation analysis on the operation answer behaviors and the target answer behaviors, and determining answer track correlation results of the operation answer behaviors, wherein the answer track correlation results are used for reflecting whether answer track correlations exist between the operation answer behaviors and the target answer behaviors or not.
For example, in addition to solving the quadratic equation and analyzing the quadratic function graph, student Bob also tried other similar topics, such as calculating the extremum of a quadratic function. The answer tracks of student Bob on these questions can therefore be analyzed, including the solution method he selected, the types of mistakes made, the time spent on each question, and other factors, to determine whether he exhibits a consistent answer behavior pattern when solving them.
Step S1313, determining the target answer behavior combination based on the target answer behaviors respectively corresponding to the two job answer behaviors.
For example, based on the two corresponding job answer behaviors (such as solving the quadratic equation and analyzing the quadratic function graph), all related target answer behaviors (such as calculating the extremum of the quadratic function) can be identified and recorded, so as to determine a target answer behavior combination.
Step S1314, determining a correlation result of the logical association attribute based on the number of answer track correlations between the two job answer behaviors and the corresponding target answer behaviors.
Finally, the correlation result of the logic association attribute is determined based on the number of answer track correlations that student Bob exhibits across the logically associated answer behaviors. If student Bob shows a consistent solution strategy and knowledge application in all related questions, these answer behaviors are determined to have a high degree of correlation.
Through the above steps, student Bob's mastery of a specific knowledge point and the internal relations between his answer behaviors can be better understood. For example, if it is found that student Bob adopts a valid strategy and obtains the correct result both when solving the quadratic equation and when calculating the quadratic function extremum, this indicates a deeper understanding of quadratic equations. Conversely, if he exhibits inconsistent answer behaviors on some related questions, additional resources or guidance may be needed to help him improve in this area.
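A minimal Python sketch of steps S1311-S1314 follows, under assumed track representations (dictionaries holding a solution strategy and a correctness flag); the correlation result of a logic association attribute is taken as the proportion of correlated (job answer behavior, target answer behavior) pairs. This is an illustrative simplification, not the patent's actual correlation analysis.

```python
def answer_track_correlated(job_track, target_track):
    """Toy correlation test: same solution strategy and same answer result."""
    return (job_track["strategy"] == target_track["strategy"]
            and job_track["correct"] == target_track["correct"])

def correlation_result(job_behaviors, target_behaviors):
    """Count correlated (job, target) pairs and normalize by the number of pairs."""
    pairs = [(j, t) for j in job_behaviors for t in target_behaviors]
    if not pairs:
        return 0.0
    correlated = sum(1 for j, t in pairs if answer_track_correlated(j, t))
    return correlated / len(pairs)

jobs = [{"strategy": "root formula", "correct": True},
        {"strategy": "root formula", "correct": True}]
targets = [{"strategy": "root formula", "correct": True},
           {"strategy": "completing the square", "correct": False}]
print(correlation_result(jobs, targets))   # -> 0.5
```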
In one possible implementation manner, the logic chain analysis data and the answer behavior logic chain network have unique mapping logic association attributes, and each logic chain analysis data comprises logic association attributes between every two job answer behaviors and corresponding logic association attribute confidence degrees.
Step S132 may include:
Step S1321, when a target answer behavior combination is associated with a plurality of logic association attributes, respectively carrying out weight fusion on the logic association attribute confidences corresponding to logic association attributes belonging to the same logic association attribute, according to the logic association weight of the answer behavior logic chain network corresponding to each logic association attribute and the correlation result of each logic association attribute, and updating the logic association attribute confidence of each logic association attribute.
For example, this embodiment describes how, when faced with a plurality of possible logic association attributes, the one most relevant to the target answer behavior combination is selected and applied to the target logic chain data. This process is illustrated by continuing with the example of student Bob completing a mathematics assignment on the intelligent education platform.
Assume that, by analyzing the various strategies of student Bob and other students for solving quadratic equations, the intelligent education platform has established answer behavior logic chain networks covering the different solution steps. Each step of a solution strategy is regarded as a logic association attribute, and each attribute has an initial confidence reflecting its importance in a successful solution.
Suppose it is found that the logic association attributes common to students when solving quadratic equations include "try completing the square", "use root formula", and "solve graphically". Each logic association attribute corresponds to a particular answer behavior logic chain network and has a correlation parameter value based on the previous analysis results.
When the target answer behavior combination (such as successfully solving a quadratic equation) is associated with these logic association attributes, the confidence of each logic association attribute is updated by combining the weight of the logic chain network corresponding to that attribute with its correlation result. The weights may be derived from the previously described effective values of the answer behavior logic chain networks, and the correlation results come from the answer track correlation analysis. In this way, confidences belonging to the same logic association attribute can be weight-fused and updated.
Step S1322, selecting, from the plurality of logic association attributes, one logic association attribute whose updated logic association attribute confidence accords with a set attribute selection rule as a target logic association attribute of the target answer behavior combination, and loading the target logic association attribute into the target logic chain data.
For example, after weight fusion, a set of updated logic association attribute confidences is obtained. These updated confidences more accurately reflect the correlation between each logic association attribute and the target answer behavior combination. The set attribute selection rule then selects one of the updated logic association attributes. If the rule is to select the attribute with the highest confidence, "use root formula" may be selected because it shows the strongest correlation with successful solving. This target logic association attribute is then loaded into the target logic chain data.
Through this process, the logic association attribute most relevant to successful answer behaviors can be identified and reinforced. For student Bob, this means the platform will recommend that he use the root formula directly when he encounters similar problems in the future, as this is considered the approach most likely to help him solve them successfully. Meanwhile, the process also provides an optimized learning path for other students, helping them master the skill of solving quadratic equations more effectively. Such personalized learning intervention not only improves learning outcomes but also increases students' confidence in their problem-solving methods.
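The following Python sketch illustrates steps S1321-S1322 under assumed inputs; confidences reported for the same logic association attribute are fused with a weight formed from the network's logic association weight and the attribute's correlation result, and the attribute whose updated confidence is highest (one possible selection rule) becomes the target logic association attribute. The fusion formula, names, and numbers are assumptions, not the patented method.

```python
from collections import defaultdict

def select_target_attribute(candidates):
    """candidates: (attribute, confidence, network_logic_weight, correlation_result) tuples."""
    acc = defaultdict(lambda: [0.0, 0.0])  # attribute -> [weighted confidence sum, weight sum]
    for attribute, confidence, weight, correlation in candidates:
        w = weight * correlation           # fusion weight for this report of the attribute
        acc[attribute][0] += w * confidence
        acc[attribute][1] += w
    updated = {a: s / w for a, (s, w) in acc.items() if w > 0}
    # Attribute selection rule assumed here: keep the attribute with the highest updated confidence.
    target = max(updated, key=updated.get)
    return target, updated

candidates = [
    ("try completing the square", 0.60, 0.4, 0.41),
    ("use root formula",          0.90, 0.8, 0.87),
    ("solve graphically",         0.55, 0.5, 0.33),
]
target, confidences = select_target_attribute(candidates)
print(target)   # -> "use root formula"
```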
In one possible implementation manner, the step of determining the logic association weight of the answer behavior logic chain network includes:
And step B110, for each answer behavior logic chain network, performing logic chain analysis on each second answer sample data in the previously configured second answer sample data sequence by utilizing the answer behavior logic chain network, and generating logic association attribute sample extraction data of each second answer sample data. The second answer sample data sequence comprises a plurality of second answer sample data and logic association attribute labeling data respectively corresponding to the plurality of second answer sample data.
For example, the example of a student Bob completing a mathematical task on an intelligent educational platform continues to be used.
It is assumed that the intelligent education platform is preconfigured with a second answer sample data sequence comprising the behaviors of a number of different students (including student Bob) when solving similar mathematics problems, together with expert-provided logic association attribute labeling data for those behaviors. For example, one sample data item may record that a student first tried completing the square when solving a quadratic equation, failed, and then successfully used the root formula to arrive at an answer. The answer behavior logic chain network analyzes this second answer sample data, extracts the logic association attribute in each sample item, and generates logic association attribute sample extraction data. For example, it may identify the logic association attribute "turn to the root formula after completing the square fails".
And step B120, respectively comparing the logic association attribute sample extraction data and the logic association attribute labeling data of each second answer sample data, and determining a logic chain analysis effective value of the answer behavior logic chain network based on the comparison result. The logic chain analysis effective value is used for reflecting the accuracy of the logic chain analysis performed by the answer behavior logic chain network.
For example, student Bob also produces a series of answer behaviors when he completes his homework, and the platform now needs to verify the accuracy of the logic chain analysis. Student Bob's logic association attribute sample extraction data can be compared with the pre-labeled logic association attribute labeling data. If the logic association attributes extracted from student Bob's actual problem-solving process accord with the expert-labeled data, the logic chain analysis can be considered effective, and a logic chain analysis effective value is determined accordingly to reflect the accuracy of the answer behavior logic chain network's logic chain analysis.
And step B130, respectively carrying out regularized conversion on the logic chain analysis effective values respectively corresponding to the plurality of answer behavior logic chain networks to generate logic association weights of each answer behavior logic chain network.
For example, on an intelligent educational platform, there may be multiple logic chain networks that are each responsible for analyzing different types of answering behavior or for different question types.
For each answer behavior logic chain network, the logic chain analysis effective value is subjected to regularized conversion, for example normalized to lie between 0 and 1, so as to generate the logic association weight. These logic association weights indicate how much each logic chain network can be trusted when capturing and analyzing logic association attributes.
Through the above steps, a logic association weight can be determined for each answer behavior logic chain network. This logic association weight helps instruct the platform how to better evaluate and understand students' answer behaviors, thereby providing personalized educational resources and advice. For example, if a particular logic chain network has a high logic association weight when analyzing student Bob's work, the platform will trust that network's analysis results more and may, on that basis, give more suggestions for solving quadratic equations with the root formula. This approach is ultimately intended to help students learn and master knowledge points in the most efficient manner.
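For completeness, steps B110-B130 admit the same style of sketch as the answer behavior weights above; here the logic chain analysis effective value is the fraction of samples whose extracted logic association attributes match the expert labels, and the regularized conversion divides each effective value by their sum (another common normalization besides min-max), so every logic association weight lies between 0 and 1. Names and numbers are illustrative assumptions.

```python
def logic_chain_effective_value(extracted_attrs, labeled_attrs):
    """Fraction of samples whose extracted logic association attributes match the labels."""
    if not labeled_attrs:
        return 0.0
    hits = sum(1 for e, l in zip(extracted_attrs, labeled_attrs) if e == l)
    return hits / len(labeled_attrs)

def logic_association_weights(effective_values):
    """Sum-based regularized conversion: each weight lies in [0, 1] and they sum to 1."""
    total = sum(effective_values)
    if total == 0:
        return [0.0] * len(effective_values)
    return [v / total for v in effective_values]

# Hypothetical effective values for three answer behavior logic chain networks.
print(logic_association_weights([0.70, 0.90, 0.40]))   # -> approximately [0.35, 0.45, 0.20]
```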
Fig. 2 schematically illustrates a learning assessment system 100 that can be used to implement various embodiments described in the present application.
For one embodiment, FIG. 2 shows a learning assessment system 100, the learning assessment system 100 having a plurality of processors 102, a control module (chipset) 104 coupled to one or more of the processor(s) 102, a memory 106 coupled to the control module 104, a non-volatile memory (NVM)/storage device 108 coupled to the control module 104, a plurality of input/output devices 110 coupled to the control module 104, and a network interface 112 coupled to the control module 104.
Processor 102 may include a plurality of single-core or multi-core processors, and processor 102 may include any combination of general-purpose or special-purpose processors (e.g., graphics processors, application processors, baseband processors, etc.). In some alternative embodiments, the learning evaluation system 100 can be used as a server device such as a gateway in the embodiments of the present application.
In some alternative embodiments, the learning evaluation system 100 can include a plurality of computer-readable media (e.g., memory 106 or NVM/storage 108) having instructions 114 and a plurality of processors 102 combined with the plurality of computer-readable media configured to execute the instructions 114 to implement the modules to perform the actions described in this disclosure.
For one embodiment, the control module 104 may include any suitable interface controller to provide any suitable interface to one or more of the processor(s) 102 and/or any suitable management end or component in communication with the control module 104.
The control module 104 may include a memory controller module to provide an interface to the memory 106. The memory controller modules may be hardware modules, software modules, and/or firmware modules.
Memory 106 may be used to load and store data and/or instructions 114 for learning assessment system 100, for example. For one embodiment, memory 106 may comprise any suitable volatile memory, such as a suitable DRAM. In some alternative embodiments, memory 106 may comprise double data rate fourth-generation synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, the control module 104 may include a plurality of input/output controllers to provide interfaces to the NVM/storage 108 and the input/output device(s) 110.
For example, NVM/storage 108 may be used to store data and/or instructions 114. NVM/storage 108 may include any suitable non-volatile memory (e.g., flash memory) and/or may include any suitable non-volatile storage(s).
NVM/storage 108 may include storage resources that are physically part of the management side on which learning evaluation system 100 is installed, or it may be accessible by the device without necessarily being part of the device. For example, NVM/storage 108 may be accessed over a network via input/output device(s) 110.
The input/output device(s) 110 may provide an interface for the learning assessment system 100 to communicate with any other suitable management end, and the input/output device 110 may include a communication component, an audio component, a sensor component, and the like. The network interface 112 may provide an interface for the learning assessment system 100 to communicate over a plurality of networks, and the learning assessment system 100 may communicate wirelessly with components of a wireless network in accordance with any of a plurality of wireless network standards and/or protocols, for example accessing a wireless network based on a communication standard such as WiFi, 2G, 3G, 4G, 5G, or a combination thereof.
For one embodiment, one or more of the processor(s) 102 may be packaged together with logic of a plurality of controllers (e.g., memory controller modules) of the control module 104. For one embodiment, one or more of the processor(s) 102 may be packaged together with logic of multiple controllers of the control module 104 to form a system in package. For one embodiment, one or more of the processor(s) 102 may be integrated on the same die with logic of multiple controllers of the control module 104. For one embodiment, one or more of the processor(s) 102 may be integrated on the same die with logic of multiple controllers of the control module 104 to form a system-on-chip.
In various embodiments, the learning evaluation system 100 may be, but is not limited to being: a desktop computing device or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.), and the like. In various embodiments, the learning evaluation system 100 may have more or fewer components and/or different architectures. For example, in some alternative embodiments, the learning assessment system 100 includes a plurality of cameras, a keyboard, a liquid crystal display screen (including a touch screen display), a non-volatile memory port, a plurality of antennas, a graphics chip, an application specific integrated circuit, and a speaker.
The foregoing has described the present application in detail. Specific examples are used herein to explain the principles and embodiments of the application, and the above description of the embodiments is intended only to help understand the method of the application and its core concepts. Meanwhile, those skilled in the art may make changes to the specific embodiments and the application scope in accordance with the ideas of the present application, and therefore the content of this description should not be construed as limiting the present application.

Claims (8)

1. An intelligent question bank-based job data feedback method, which is characterized by being applied to a learning evaluation system, comprising the following steps:
Utilizing a plurality of answer behavior logic chain networks, carrying out answer behavior extraction and logic chain analysis on the homework answer process data of the target student user according to any answer behavior logic chain network, and generating corresponding answer behavior extraction data and logic chain analysis data; the operation answering process data are operation answering process data which are recorded in a monitoring mode and are related to the intelligent question bank data; the answer behavior extraction data are used for reflecting: the operation answering process data comprises first knowledge application labels which correspond to operation answering behaviors related to the intelligent question library data respectively; the logic chain analysis data is used for reflecting: logic association attributes between every two job answering behaviors;
Generating second knowledge application labels which are respectively corresponding to all answer tracks included by all operation answer behaviors according to the first knowledge application labels respectively corresponding to all operation answer behaviors and answer results respectively in the operation answer process data, and generating target answer behavior data by merging and outputting all associated answer tracks which are matched with the second knowledge application labels into corresponding target answer behaviors;
performing answer track correlation analysis on each generated operation answer behavior and each target answer behavior respectively, determining a target answer behavior combination corresponding to each logic association attribute, selecting one logic association attribute from each logic association attribute associated with the same target answer behavior combination, and loading the logic association attribute into target logic chain data to generate target logic chain data;
Determining weak knowledge paths contained in the homework answering process data according to the target answering behavior data and the target logic chain data, and feeding back information to the target student users based on the weak knowledge paths;
The answer behavior extraction data and the answer behavior logic chain network have unique mapping logic association attributes, each answer behavior extraction data comprises at least one operation answer behavior, a first knowledge application label of each operation answer behavior and a corresponding first label confidence level;
The generating a second knowledge application label corresponding to each answer track included in each operation answer behavior according to the first knowledge application label corresponding to each operation answer behavior and the answer result in the operation answer process data, includes:
For each answer track included in the operation answer process data, generating a reference knowledge application tag and a corresponding reference tag confidence coefficient of the answer track in each answer behavior extraction data according to an answer result of any answer track in the operation answer process data and a first knowledge application tag and a corresponding first tag confidence coefficient of each operation answer behavior in each answer behavior extraction data;
Extracting a reference knowledge application tag and a corresponding reference tag confidence level in the data based on the answer track in each answer behavior, extracting an answer behavior weight of an answer behavior logic chain network corresponding to the data according to each answer behavior, and selecting one reference knowledge application tag from the generated reference knowledge application tags to output as a second knowledge application tag of the answer track;
Each answer behavior extraction data comprises an answer behavior track and a non-answer behavior track, wherein the answer behavior track is an answer track contained in each operation answer behavior, the non-answer behavior track is an answer track which is outside each operation answer behavior and exists in the operation answer process data, the reference knowledge application label of the non-answer behavior track is a set label, and the reference label confidence is a set parameter value;
The generating a reference knowledge application label and a corresponding reference label confidence level of the answer track in each answer behavior extraction data according to the answer result of any answer track in the answer process data and the first knowledge application label and the corresponding first label confidence level of each answer behavior in each answer behavior extraction data comprises:
and for each answer behavior track included in each answer behavior extraction data in the generated plurality of answer behavior extraction data, respectively using a first knowledge application label and a first label confidence coefficient of the operation answer behavior to which the answer behavior track belongs as a reference knowledge application label and a reference label confidence coefficient of the answer behavior track.
2. The method for feeding back operation data based on an intelligent question bank according to claim 1, wherein the selecting, based on the reference knowledge application label and the corresponding reference label confidence level of the answer trajectory in each answer behavior extraction data, according to the answer behavior weight of the answer behavior logic chain network corresponding to each answer behavior extraction data, one reference knowledge application label from the generated reference knowledge application labels to output as the second knowledge application label of the answer trajectory includes:
Extracting reference knowledge application labels and corresponding reference label confidence levels in data based on the answer tracks in each answer behavior, carrying out weight fusion on the reference label confidence levels corresponding to the at least one reference knowledge application label respectively according to answer behavior weight values of an answer behavior logic chain network to which the at least one reference knowledge application label respectively belongs, and updating the reference label confidence levels of the reference knowledge application labels;
And selecting one reference knowledge application label with updated reference label confidence meeting a set label selection rule from the generated reference knowledge application labels as a second knowledge application label of the answer track.
3. The method for feeding back operation data based on intelligent question bank according to claim 1 or 2, wherein the step of determining answer behavior weight of the answer behavior logic chain network comprises:
for each answer behavior logic chain network, carrying out answer behavior extraction on each first answer sample data in a priori configured first answer sample data sequence by utilizing the answer behavior logic chain network, and generating answer behavior sample extraction data of each first answer sample data; the first answer sample data sequence comprises a plurality of first answer sample data and answer behavior labeling data respectively corresponding to the plurality of first answer sample data;
Respectively comparing answer behavior sample extraction data of each first answer sample data with answer behavior labeling data, and determining answer behavior extraction effective values of the answer behavior logic chain network based on comparison results; the effective value extracted by the answering behavior is used for reflecting the accuracy of the answering behavior extraction of the answering behavior logic chain network;
And respectively extracting effective values from answer behaviors corresponding to the answer behavior logic chain networks, and performing regularized conversion to generate answer behavior weights of each answer behavior logic chain network.
4. The method for feeding back job data based on intelligent question library according to claim 1 or 2, wherein the step of performing answer track correlation analysis on each generated job answer behavior and each target answer behavior respectively, determining a target answer behavior combination corresponding to each logic association attribute, and selecting one logic association attribute from each logic association attribute associated with the same target answer behavior combination to load into target logic chain data, and generating target logic chain data comprises the steps of:
For each generated logic association attribute, carrying out answer track correlation analysis on two job answer behaviors corresponding to the logic association attribute and each target answer behavior respectively to generate a corresponding target answer behavior combination and correlation result; the correlation result is used for reflecting correlation parameter values of the two operation answering behaviors and the target answering behaviors respectively;
For each generated target answer behavior combination, the following operations are respectively executed to obtain target logic chain data:
When a plurality of logic association attributes are associated by a target answer behavior combination, selecting one logic association attribute from the plurality of logic association attributes according to the correlation result of each logic association attribute, and loading the logic association attribute into the target logic chain data.
5. The method for feeding back job data based on intelligent question bank according to claim 4, wherein said performing answer track correlation analysis on two job answer behaviors corresponding to the logic association attribute and each target answer behavior respectively to generate corresponding target answer behavior combination and correlation result includes:
For two job answering behaviors corresponding to each logic association attribute, determining target answering behaviors which are consistent with the set answering result association logic association attribute with the job answering behaviors and have the same target answering behaviors based on the answering results of the job answering behaviors in the job answering process data and the first knowledge application labels of the job answering behaviors;
Analyzing the answer track correlation between the operation answer behavior and the target answer behavior, and determining an answer track correlation result of the operation answer behavior, wherein the answer track correlation result is used for reflecting whether the answer track correlation exists between the operation answer behavior and the target answer behavior;
determining the target answer behavior combination based on the target answer behaviors respectively corresponding to the two operation answer behaviors;
And determining a correlation result of the logic correlation attribute based on the number of answer track correlations between the two job answer behaviors and the target answer behaviors respectively corresponding to the two job answer behaviors.
6. The intelligent question library-based job data feedback method according to claim 5, wherein the logic chain analysis data and the answer behavior logic chain network have unique mapping logic association attributes, and each logic chain analysis data comprises logic association attributes between every two job answer behaviors and corresponding logic association attribute confidence degrees;
when a target answer behavior is combined and correlated with a plurality of logic correlation attributes, selecting one logic correlation attribute from the plurality of logic correlation attributes to be loaded into the target logic chain data according to the correlation result of each logic correlation attribute, wherein the method comprises the following steps:
When a target answer behavior combination associates a plurality of logic association attributes, respectively carrying out weight fusion on logic association attribute confidence degrees respectively corresponding to logic association attributes belonging to the same logic association attribute through logic association weights of answer behavior logic chain networks corresponding to each logic association attribute and correlation results of each logic association attribute, and updating the logic association attribute confidence degrees of each logic association attribute;
and selecting one logic association attribute with updated logic association attribute confidence meeting a set attribute selection rule from the plurality of logic association attributes as a target logic association attribute of the target answer behavior combination, and loading the target logic association attribute into the target logic chain data.
7. The method for job data feedback based on intelligent question bank according to claim 6, wherein the step of determining the logic association weight of the answer behavior logic chain network comprises:
For each answer behavior logic chain network, performing logic chain analysis on each second answer sample data in the priori configured second answer sample data sequence by utilizing the answer behavior logic chain network, and generating logic association attribute sample extraction data of each second answer sample data; the second answer sample data sequence comprises a plurality of first answer sample data and logic associated attribute labeling data respectively corresponding to the plurality of first answer sample data;
Respectively comparing the logic association attribute sample extraction data of each first answer sample data with the logic association attribute labeling data, and determining a logic chain analysis effective value of the answer behavior logic chain network based on a comparison result; the logic chain analysis effective value is used for reflecting the accuracy of the answer behavior logic chain network logic chain analysis;
And respectively carrying out regularized conversion on the logic chain analysis effective values respectively corresponding to the plurality of answering behavior logic chain networks to generate logic association weights of each answering behavior logic chain network.
8. A learning assessment system comprising a processor and a machine-readable storage medium having stored therein machine-executable instructions loaded and executed by the processor to implement the intelligent question bank based job data feedback method of any of claims 1-7.
CN202311675200.2A 2023-12-08 2023-12-08 Work data feedback method and learning evaluation system based on intelligent question bank Active CN117557426B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311675200.2A CN117557426B (en) 2023-12-08 2023-12-08 Work data feedback method and learning evaluation system based on intelligent question bank

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311675200.2A CN117557426B (en) 2023-12-08 2023-12-08 Work data feedback method and learning evaluation system based on intelligent question bank

Publications (2)

Publication Number Publication Date
CN117557426A CN117557426A (en) 2024-02-13
CN117557426B true CN117557426B (en) 2024-05-07

Family

ID=89812614

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311675200.2A Active CN117557426B (en) 2023-12-08 2023-12-08 Work data feedback method and learning evaluation system based on intelligent question bank

Country Status (1)

Country Link
CN (1) CN117557426B (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8366449B2 (en) * 2008-08-13 2013-02-05 Chi Wang Method and system for knowledge diagnosis and tutoring
US20120244507A1 (en) * 2011-03-21 2012-09-27 Arthur Tu Learning Behavior Optimization Protocol (LearnBop)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103077178A (en) * 2011-10-26 2013-05-01 财团法人资讯工业策进会 Learning diagnosis and dynamic learning resource recommendation method and system
CN107644572A (en) * 2016-07-21 2018-01-30 上海莘越软件科技有限公司 A kind of tutoring system based on thought process
CN108509439A (en) * 2017-02-24 2018-09-07 上海莘越软件科技有限公司 A kind of Algebra Teaching system
CN109035947A (en) * 2018-08-06 2018-12-18 苏州承儒信息科技有限公司 A kind of working method of the educational system based on step analysis mode
KR20220007193A (en) * 2020-07-10 2022-01-18 주식회사 제네시스랩 Methods, Systems and Computer-Readable Medium for Deriving In-Depth Questions for Automated Evaluation of Interview Videos using Machine Learning Model
CN112000881A (en) * 2020-07-31 2020-11-27 广州未名中智教育科技有限公司 Learning method, system, computer device and storage medium for recommending knowledge
CN111931875A (en) * 2020-10-10 2020-11-13 北京世纪好未来教育科技有限公司 Data processing method, electronic device and computer readable medium
WO2022170985A1 (en) * 2021-02-09 2022-08-18 广州视源电子科技股份有限公司 Exercise selection method and apparatus, and computer device and storage medium
CN113379320A (en) * 2021-07-05 2021-09-10 上海松鼠课堂人工智能科技有限公司 Learning effect evaluation method, device, equipment and storage medium
CN116541538A (en) * 2023-07-06 2023-08-04 广东信聚丰科技股份有限公司 Intelligent learning knowledge point mining method and system based on big data
CN116957867A (en) * 2023-07-10 2023-10-27 华中师范大学 Digital human teacher online teaching service method, electronic equipment and computer readable storage medium
CN117033802A (en) * 2023-10-09 2023-11-10 广东信聚丰科技股份有限公司 Teaching subject pushing method and system based on AI assistance

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Design and Implementation of Decision-Tree-Based CAI Testing Software; Wu Binxin; China Distance Education; 2006-07-10 (07); 66-69 *
Research on a Bayesian-Network-Based Algorithm for Evaluating Learners' Learning Behavior; Zeng Wei; Journal of Taiyuan Normal University (Natural Science Edition); 2018-12-31; Vol. 17 (No. 4); 29-34 *
An Overview of the Application of Knowledge Tracing Models to Mathematical Word Problems; Yu Xiaopeng; Zhao Ya; Information & Communications; 2020-03-15 (03); 205-206 *
Analysis of the Relationships Between Reflective-Impulsive Cognitive Style, Ability Level, Item Difficulty, and Item Response Time in Computerized Adaptive Testing; Lu Hong; Wang Yue; Wang Chao; Liang Yu; Modern Educational Technology; 2020-10-15 (10); 92-98 *

Also Published As

Publication number Publication date
CN117557426A (en) 2024-02-13

Similar Documents

Publication Publication Date Title
US11631338B2 (en) Deep knowledge tracing with transformers
US20130288222A1 (en) Systems and methods to customize student instruction
US20200202226A1 (en) System and method for context based deep knowledge tracing
US20090202969A1 (en) Customized learning and assessment of student based on psychometric models
US20190130511A1 (en) Systems and methods for interactive dynamic learning diagnostics and feedback
CN112052828B (en) Learning ability determining method, learning ability determining device and storage medium
CN111177413A (en) Learning resource recommendation method and device and electronic equipment
RU2010104996A (en) DEVICE, SYSTEM AND METHOD OF ADAPTIVE TEACHING AND TRAINING
CN105373977A (en) Course teaching system and operation method of course teaching system
CN105825454A (en) Self-adaptive learning method and system based on computer terminal, and terminal device
US10410534B2 (en) Modular system for the real time assessment of critical thinking skills
CN115544241B (en) Intelligent pushing method and device for online operation
CN111753846A (en) Website verification method, device, equipment and storage medium based on RPA and AI
Thomas et al. When the tutor becomes the student: Design and evaluation of efficient scenario-based lessons for tutors
CN108595531A (en) Spell training method, system, computer equipment and storage medium
Asselman et al. Evaluating the impact of prior required scaffolding items on the improvement of student performance prediction
CN111951133B (en) Method, device and storage medium for assisting in solving questions
CN117557426B (en) Work data feedback method and learning evaluation system based on intelligent question bank
US11416558B2 (en) System and method for recommending personalized content using contextualized knowledge base
CN113469508B (en) Personalized education management system, method and medium based on data analysis
Sun et al. Current state of learning analytics: a synthesis review based on the combination of activity theory and pedagogy
CN114943032A (en) Information processing method, information processing device, electronic equipment and computer readable storage medium
Zhang et al. Going Online: A Simulated Student Approach for Evaluating Knowledge Tracing in the Context of Mastery Learning.
CN110827178A (en) System and method for making learning plan by artificial intelligence
CN105118341A (en) Network classroom teaching method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant