KR20170004330A - Applicant-customized evaluation and analysis system by grouping the test applicants and the method thereof - Google Patents

Applicant-customized evaluation and analysis system by grouping the test applicants and the method thereof

Info

Publication number
KR20170004330A
KR20170004330A KR1020150094545A KR20150094545A
Authority
KR
South Korea
Prior art keywords
candidates
candidate
test
wrong
test set
Prior art date
Application number
KR1020150094545A
Other languages
Korean (ko)
Inventor
방규선
Original Assignee
(주)지유에듀테인먼트
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by (주)지유에듀테인먼트 filed Critical (주)지유에듀테인먼트
Priority to KR1020150094545A priority Critical patent/KR20170004330A/en
Publication of KR20170004330A publication Critical patent/KR20170004330A/en


Classifications

    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B7/00Electrically-operated teaching apparatus or devices working with questions and answers
    • G09B7/02Electrically-operated teaching apparatus or devices working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by a student

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The present invention relates to a system and method for scoring and analyzing mock tests, and more specifically to a system and method for personalized mock test scoring and analysis through grouping of candidates, which shows a candidate who has taken a mock test the correct answer rate and incorrect answer rate for each question, as well as which questions other candidates got wrong, thereby improving learning efficiency.
A mock test scoring and analysis system according to the present invention includes: a test set database storing a plurality of test sets; a candidate database storing information about the candidates for the test sets; a test application unit for selecting one of the test sets and administering the test; a test scoring unit for scoring the candidates' answers to the selected test set; a correct answer rate calculation unit for calculating, from the scoring results, the correct answer rate and incorrect answer rate for each question of the selected test set; and a correct answer rate information output unit for displaying the per-question correct and incorrect answer rates on each candidate's display device. The correct answer rate calculation unit includes a candidate grouping module for grouping the candidates of the selected test set into a plurality of groups according to their test scores, and a group correct answer rate calculation module for calculating the correct and incorrect answer rates of the candidates in each group; the correct answer rate information output unit displays the per-group correct answer rates on each candidate's display device.

Description

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to a method and system for personalized mock test scoring and analysis through grouping of candidates.

More particularly, the present invention relates to a system and method for personalized mock test scoring and analysis through grouping of candidates, which analyzes the correct and incorrect answer rates per question for the many candidates who have taken a mock test, groups and compares the candidates by score band or level, shows each candidate which questions candidates at his or her own level should not have gotten wrong, and also lets a candidate see which questions the candidates most similar to his or her own learning tendency or answer pattern got wrong in other test sets, thereby improving learning efficiency.

The present invention also relates to a smart learning method that matches candidates with various UHD video contents stored in a server in association with the corresponding mock test questions.

Since the spread of the Internet, learning methods and solutions such as lectures and tests that were conventionally conducted offline have been developed into, and are widely used as, various online learning management systems.

Typically, a range of methods for improving learning efficiency has been developed and is widely used in education, from students interacting with lecturers through online chatting and messaging to using various online contents while watching lecture videos.

In addition, learning management systems have been developed and used that automatically score a learner's answers, display the scoring result, and provide the error rate for each question.

A conventional learning management system according to Korean Patent Laid-Open No. 10-2013-0141232 provides such functions. It comprises a teacher terminal and a learner terminal that transmit and receive information, and a management server that relays and manages information between them. The teacher terminal includes a question information transmission module for transmitting information on the questions presented to the learner, and the learner terminal includes a result information transmission module for transmitting the learner's answers to the presented questions. The management server includes a database unit with a correct answer DB storing the correct answer data for each question, and an analysis unit with an automatic grading module that retrieves the correct answer data via the question information transmitted from the question information transmission module, matches the answers transmitted from the result information transmission module against that stored data, and calculates the learner's score.

However, while this conventional learning management system provides error rate statistics for each question, those statistics cover all candidates together and do not take the learners' levels into account. It therefore provides no information about the error rates, or the wrong answer choices, of learners at the same level as oneself.

Korean Patent Publication No. 10-2013-0141232

The present invention has been made to solve the above problems, and provides a system and method that show a candidate the error rates of candidates whose learning level is similar to his or her own, and which questions those candidates got wrong in other test sets.

The present invention also provides a personalized mock test scoring and analysis system and method through grouping of candidates, in which a candidate can find the other candidates who got the same question wrong, and see which questions those candidates mainly got wrong in other test sets.

A personalized mock test scoring and analysis system through grouping of candidates according to an embodiment of the present invention includes: a test set database storing a plurality of test sets; a candidate database storing information about the candidates for the test sets; a test application unit for selecting one of the test sets and administering the test; a test scoring unit for scoring the candidates' answers to the selected test set; a correct answer rate calculation unit for calculating, from the scoring results, the correct answer rate and incorrect answer rate for each question of the selected test set; and a correct answer rate information output unit for displaying the per-question correct and incorrect answer rates on each candidate's display device. The correct answer rate calculation unit includes a candidate grouping module for grouping the candidates of the selected test set into a plurality of groups according to their test scores, and a group correct answer rate calculation module for calculating the correct and incorrect answer rates of the candidates in each group; the correct answer rate information output unit displays the per-group correct answer rates on each candidate's display device.

In another embodiment of the present invention, the correct answer rate calculation unit further includes a top wrong answer providing module that finds, for each group and each question, the wrong answer choice most frequently selected by the candidates, and the correct answer rate information output unit displays the per-group correct answer rates and top wrong answers on each candidate's display device.

In a preferred embodiment, the candidate grouping module groups the candidates of the selected test set into bands of a predetermined interval in descending order of score.

In a preferred embodiment, the candidate grouping module groups the candidates of the selected test set, in descending order of score, into a 90-point band, an 80-point band, a 70-point band, a 60-point band, and a below-60 band.

In a preferred embodiment, the candidate grouping module groups the candidates on the same basis for every test set according to their scoring results, and the system further includes a same-group wrong answer question providing unit that shows a first candidate who has taken the selected test set the questions, in test sets other than the selected one, that candidates belonging to the same group as the first candidate frequently got wrong.

In a preferred embodiment, the same-group wrong answer question providing unit shows the first candidate the questions, in test sets other than the selected one, whose error rate among candidates belonging to the same group as the first candidate is 50% or higher.

In a preferred embodiment, the system further includes a similar wrong answer question providing unit that finds, among the candidates in the same group as the first candidate for the selected test set, those who got the same questions wrong as the first candidate, and provides the questions those candidates frequently got wrong in other test sets.

In a preferred embodiment, the similar wrong answer question providing unit provides the questions whose error rate is 50% or higher in the other test sets taken by the candidates who, within the first candidate's group for the selected test set, got the same questions wrong as the first candidate.

In a preferred embodiment, the similar wrong answer question providing unit also finds and provides, for those questions with an error rate of 50% or higher, the wrong answer choice those candidates selected most frequently.

In a preferred embodiment, the candidate grouping module groups the candidates of the selected test set, in descending order of score, into the top 3%, top 7%, top 10%, top 20%, and top 40%.

A personalized mock test scoring and analysis method through grouping of candidates according to another aspect of the present invention includes: generating a test set database storing a plurality of test sets; generating a candidate database storing information about the candidates for the test sets; a test application step of selecting one of the test sets and administering the test; a test scoring step of scoring the candidates' answers to the selected test set; a correct answer rate calculation step of calculating, from the scoring results, the correct answer rate and incorrect answer rate for each question of the selected test set; and a correct answer rate information output step of displaying the per-question correct and incorrect answer rates on each candidate's display device. The correct answer rate calculation step includes a candidate grouping step of grouping the candidates into a plurality of groups according to their test scores, and a step of calculating the correct and incorrect answer rates of the candidates in each group; in the output step, the per-group correct answer rates are displayed on each candidate's display device.

In a preferred aspect of the present invention, the correct answer rate calculation step further includes a top wrong answer providing step of finding, for each group and each question, the wrong answer choice most frequently selected by the candidates, and the output step displays the per-group correct answer rates and top wrong answers on each candidate's display device.

In a preferred embodiment, the candidate grouping step groups the candidates of the selected test set into bands of a predetermined interval in descending order of score.

In a preferred embodiment, the candidate grouping step groups the candidates of the selected test set, in descending order of score, into a 90-point band, an 80-point band, a 70-point band, a 60-point band, and a below-60 band.

In a preferred embodiment, the candidate grouping step groups the candidates on the same basis for every test set according to their scoring results, and the method further includes a same-group wrong answer question providing step of showing a first candidate who has taken the selected test set the questions, in test sets other than the selected one, that candidates belonging to the same group as the first candidate frequently got wrong.

In a preferred embodiment, the same-group wrong answer question providing step shows the first candidate the questions, in test sets other than the selected one, whose error rate among candidates belonging to the same group as the first candidate is 50% or higher.

In a preferred embodiment, the method further includes a similar wrong answer question providing step of finding, among the candidates in the same group as the first candidate for the selected test set, those who got the same questions wrong as the first candidate, and providing the questions those candidates frequently got wrong in other test sets.

In a preferred embodiment, the similar wrong answer question providing step provides the questions whose error rate is 50% or higher in the other test sets taken by the candidates who, within the first candidate's group for the selected test set, got the same questions wrong as the first candidate.

In a preferred embodiment, the similar wrong answer question providing step also finds and provides, for those questions with an error rate of 50% or higher, the wrong answer choice those candidates selected most frequently.

In a preferred embodiment, the candidate grouping step groups the candidates of the selected test set, in descending order of score, into the top 3%, top 7%, top 10%, top 20%, and top 40%.

According to the present invention, a candidate can see the error rates of candidates whose learning level is similar to his or her own, and which questions those candidates got wrong in other test sets. This motivates learning more strongly, and lets learners concentrate on the questions that learners at a similar level are particularly prone to get wrong.

FIG. 1 is a diagram showing an example of a conventional learning management system;
FIG. 2 is a diagram showing the configuration of a mock test scoring and analysis system according to an embodiment of the present invention;
FIG. 3 is a diagram showing the configuration of the correct answer rate calculation unit of FIG. 2;
FIG. 4 is a diagram showing an example of a plurality of test sets;
FIG. 5 is a diagram showing an example of a test question set taken by a candidate;
FIG. 6 is a diagram showing an example of selecting a criterion for grouping candidates;
FIG. 7 is a diagram showing an example of displaying per-question correct and incorrect answer rates; and
FIG. 8 is a flowchart showing a mock test scoring and analysis method according to an embodiment of the present invention.

Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings. However, the present invention is not limited to or by these embodiments. Like reference numerals in the drawings denote like elements.

FIG. 2 is a diagram showing the configuration of a mock test scoring and analysis system according to an embodiment of the present invention, FIG. 3 shows the configuration of the correct answer rate calculation unit of FIG. 2, FIG. 4 shows an example of a plurality of test sets, FIG. 5 shows an example of a test question set taken by a candidate, FIG. 6 shows an example of selecting a criterion for grouping candidates, and FIG. 7 shows an example of displaying per-question correct and incorrect answer rates.

Referring to the drawings, a mock test scoring and analysis system according to an embodiment of the present invention includes a test set database 100 storing a plurality of test sets, a candidate database 200 storing information about the candidates for the test sets, a test application unit 300 for selecting one of the test sets and administering the test, a test scoring unit 400 for scoring the candidates' answers to the selected test set, a correct answer rate calculation unit 500 for calculating, from the scoring results, the correct answer rate and incorrect answer rate for each question of the selected test set, and a correct answer rate information output unit 600 for displaying the per-question correct and incorrect answer rates on each candidate's display device.

The correct answer rate calculation unit 500 includes a candidate grouping module 510 for grouping the candidates of the selected test set into a plurality of groups according to their test scores, a group correct answer rate calculation module 520 for calculating the correct and incorrect answer rates of the candidates in each group, and a top wrong answer providing module 530 for finding, for each question, the wrong answer choice most frequently selected by the candidates.

The correct answer rate information output unit 600 displays the per-group correct and incorrect answer rates and top wrong answers on each candidate's display device.

The candidate grouping module 510 groups the candidates of the selected test set into bands of a predetermined interval in descending order of score. For example, the candidate grouping module 510 may group them into a 90-point band, an 80-point band, a 70-point band, a 60-point band, and a below-60 band.

Suppose a first candidate has taken a first test set. The first candidate's answers are scored, and the resulting score determines which group the first candidate belongs to; a score of 68 points, for example, places the first candidate in the 60-point band. Once the group is determined, the group correct answer rate calculation module calculates the correct and incorrect answer rates of the candidates in each of the groups, that is, for the 90-point, 80-point, 70-point, 60-point, and below-60 bands. Next, the top wrong answer providing module finds, for each group, the most frequently selected wrong answer: among the answer choices for a given question, the one picked by the most candidates who got it wrong is selected as the top wrong answer and displayed to the first candidate. Alternatively, if the first candidate belongs to the 60-point band, the system may show only the correct answer rates and top wrong answers of the 60-point band rather than those of the other groups.
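The score-band grouping and per-group rate calculation described above can be sketched in code. The following Python sketch is illustrative only; the function names, band labels, and data layout (`score_group`, `group_correct_rates`, dicts of answers) are assumptions for the example, not part of the patent.

```python
from collections import defaultdict

def score_group(score):
    """Map a total score to its band: 90s, 80s, 70s, 60s, or below60.
    A candidate who scored 68 points, as in the example above, lands in 60s."""
    if score >= 90:
        return "90s"
    if score >= 80:
        return "80s"
    if score >= 70:
        return "70s"
    if score >= 60:
        return "60s"
    return "below60"

def group_correct_rates(candidates, answer_key):
    """candidates: list of {"score": int, "answers": {question: choice}}.
    Returns {band: {question: fraction of that band answering correctly}}."""
    tally = defaultdict(lambda: defaultdict(lambda: [0, 0]))  # band -> q -> [right, total]
    for cand in candidates:
        band = score_group(cand["score"])
        for q, choice in cand["answers"].items():
            right, total = tally[band][q]
            tally[band][q] = [right + (choice == answer_key[q]), total + 1]
    return {band: {q: r / t for q, (r, t) in qs.items()}
            for band, qs in tally.items()}
```

A display unit would then show the first candidate either all bands' rates or only the rates for his or her own band, as described above.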

In addition, the mock test scoring and analysis system according to the embodiment of the present invention improves learning efficiency by using test sets other than the one the candidate has taken: when the first candidate has taken the first test set, it provides the questions that candidates in the same group as the first candidate got wrong in the other test sets.

First, each candidate is grouped by score using the scoring results of the candidates who took the first test set, that is, graded into the 90-point, 80-point, 70-point, and 60-point bands.

Grouping may be done by score band in this way, or the candidates of the first test set may be grouped, in descending order of score, into the top 3%, top 7%, top 10%, top 20%, and top 40%; various grouping criteria are possible.
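The percentile-based alternative (top 3%, 7%, 10%, 20%, 40%) might be applied as in the following sketch. How the cutoffs are applied to ranks is an assumption here, and the names (`percentile_groups`, `bands`) are invented for the example.

```python
def percentile_groups(scores, bands=(0.03, 0.07, 0.10, 0.20, 0.40)):
    """scores: {candidate_id: score}. Returns {candidate_id: band index},
    where band 0 holds the top 3%, band 1 the top 7%, and so on; anyone
    outside the last cutoff gets index len(bands)."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    n = len(ranked)
    assignment = {}
    for rank, cid in enumerate(ranked):
        frac = (rank + 1) / n  # fraction of candidates at or above this rank
        for i, cutoff in enumerate(bands):
            if frac <= cutoff:
                assignment[cid] = i
                break
        else:
            assignment[cid] = len(bands)
    return assignment
```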

If the candidate grouping module groups the candidates of the first test set into the 90-point, 80-point, 70-point, and 60-point bands, then the candidates of the other test sets, such as the second and third test sets, are grouped into the same bands.

The same group wrong answer question providing unit 700 determines which group the first candidate's score on the first test set places him or her in. If the first candidate falls in the 80-point band, it finds the candidates who scored in the 80-point band on a test set other than the first, such as the second test set, and then searches for the questions they most often got wrong.

In the embodiment of the present invention, only the questions that 50% or more of the 80-point-band candidates of the second test set got wrong are found.

Similarly, for the third test set, the 80-point-band candidates are found and the questions that 50% or more of them got wrong are identified.

In this way, the questions that the other candidates in the same group as the first candidate frequently got wrong in other test sets are found.

At this time, along with showing the questions that same-group candidates frequently got wrong in the other test sets, the system can also show which answer choices they selected when they got them wrong. For example, if the correct answer to a question was choice 3 and 50% or more of the same-group candidates got it wrong, the question is shown together with the distribution of the choices those candidates selected: how many got it wrong by picking choice 1, how many by picking choice 2, and so on, displaying the error rate for each choice. If 40 of the 50 same-group candidates got a question wrong and the correct answer was choice 4, with 30 picking choice 1, 8 picking choice 2, and 2 picking choice 3, then the error rates shown for choices 1, 2, and 3 are 75%, 20%, and 5%, respectively.
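The per-choice error rate display in the worked example above (40 of 50 same-group candidates wrong; 30 picked choice 1, 8 picked choice 2, 2 picked choice 3) can be sketched as follows. The function name and data layout are illustrative assumptions.

```python
from collections import Counter

def wrong_choice_rates(picks, correct_choice):
    """picks: the answer choices selected by the same-group candidates who
    got the question wrong. Returns each wrong choice's share of the wrong
    answers; the largest entry is the "top wrong answer"."""
    counts = Counter(p for p in picks if p != correct_choice)
    total_wrong = sum(counts.values())
    return {choice: n / total_wrong for choice, n in counts.items()}

# Worked numbers from the paragraph above: correct answer is choice 4.
picks = [1] * 30 + [2] * 8 + [3] * 2
rates = wrong_choice_rates(picks, correct_choice=4)
# rates[1] = 0.75, rates[2] = 0.20, rates[3] = 0.05
```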

The similar wrong answer question providing unit 800 finds the other candidates who got the same questions wrong as the first candidate, and then finds the questions those candidates got wrong in other test sets. For example, if the first candidate got question 10 of the first test set wrong, it looks for the candidates of the first test set who also got question 10 wrong. If 20 candidates got question 10 wrong, the questions these 20 candidates frequently got wrong in the other test sets are found: assuming all 20 also took the second test set, the questions in the second test set that 50% or more of them got wrong are found and shown to the first candidate.

Here too, along with the questions these 20 candidates frequently got wrong, the system can show which answer choices they selected when they got them wrong. Suppose 10 of the 20 candidates got a certain question wrong and the correct answer was choice 4, with 7 picking choice 1, 2 picking choice 2, and 1 picking choice 3; the error rates displayed for choices 1, 2, and 3 are then 70%, 20%, and 10%, respectively.
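The two-step lookup above (find the candidates who missed the same question, then surface the questions that 50% or more of them missed in another test set) might be sketched as follows. The data layout and all names (`recommend_from_co_missers`, `wrong_by_set`) are assumptions for this example.

```python
def recommend_from_co_missers(q, first_candidate, wrong_by_set,
                              base_set=1, threshold=0.5):
    """wrong_by_set: {set_id: {candidate_id: set of wrong questions}}.
    Returns, per other test set, the questions wrong for >= threshold of
    the candidates who shared question q wrong with first_candidate on
    the base test set."""
    co_missers = {cid for cid, wrong in wrong_by_set[base_set].items()
                  if q in wrong and cid != first_candidate}
    out = {}
    for set_id, records in wrong_by_set.items():
        if set_id == base_set:
            continue
        takers = [cid for cid in co_missers if cid in records]
        if not takers:
            continue
        questions = set().union(*(records[cid] for cid in takers))
        out[set_id] = {p for p in questions
                       if sum(p in records[cid] for cid in takers) / len(takers) >= threshold}
    return out
```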

In the embodiment of the present invention, the candidates whose learning tendency or answer pattern is most similar to the candidate's own are found, so that the candidate can see which questions they got wrong in other test sets; this creates greater interest in learning.

To this end, the candidate grouping module selects the candidates whose wrong questions overlap with the first candidate's wrong questions by a certain ratio or more, and groups them into a similar wrong answer group. In this embodiment the ratio is 50%: candidates whose wrong questions overlap by 50% or more are grouped together. This ratio can be chosen by the learner, who may search for candidates with a similarity of 70% or more, or only those with 90% or more.
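The overlap test described here can be sketched as a simple ratio of shared wrong questions. The 50% default and the names (`similar_wrong_group`, `my_wrong`) are illustrative assumptions.

```python
def similar_wrong_group(my_wrong, others_wrong, threshold=0.5):
    """my_wrong: set of questions the first candidate got wrong.
    others_wrong: {candidate_id: set of wrong questions}.
    Returns the ids whose wrong questions overlap mine by >= threshold."""
    group = set()
    for cid, wrong in others_wrong.items():
        overlap = len(my_wrong & wrong) / len(my_wrong) if my_wrong else 0.0
        if overlap >= threshold:
            group.add(cid)
    return group
```

Raising the threshold to 0.7 or 0.9 narrows the group, matching the learner-selectable ratio described above.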

Alternatively, the candidate grouping module may first group the candidates of the selected test set into score bands in descending order of score, determine the band to which the first candidate belongs, and then, from among the candidates in that band, select those whose wrong questions overlap with the first candidate's by a certain ratio or more and group them into the similar wrong answer group.

In other words, rather than considering all candidates who took a specific test set, the similar wrong answer group may be formed only from the candidates who belong to the same score band as the first candidate.

It is also possible to group not merely the candidates who got the same questions wrong, but the candidates who, at a high ratio, selected the same wrong answer choices as the first candidate. This finds candidates even more similar in learning pattern, level, or tendency, and groups them separately into a similar choice selection group.

Using this grouping information, the same group wrong answer question providing unit 700 can show the first candidate, who took the first test set, the questions in other test sets, such as the second test set, that 50% or more of the other candidates in the similar wrong answer group or the similar choice selection group got wrong.

Through this varied grouping, a learner can find the candidates whose learning pattern or tendency is most similar to his or her own, see which questions they got wrong in other test sets, and study just those questions. Most learners take a strong interest in other candidates at a similar learning level or with a similar learning tendency, so with the grouping of the present invention, students study the questions such candidates got wrong with greater interest and concentration.

In the embodiment of the present invention, by analyzing the test results of the other candidates who took the same test, the system provides a variety of analyses that show the candidate which questions determine the score gap between groups and which questions the candidate should not have gotten wrong at his or her own level, giving direction and motivation to learning.

To this end, the system includes a group-determining problem analysis unit and a per-question wrong answerer analysis unit.

The group-determining problem analysis unit calculates, for each question, the number or rate of wrong answers in each group, and the difference between the groups.

This will be described in more detail with reference to Table 1 below.

Table 1 (500 candidates in total; 127 in the 80-point band, 95 in the 90-point band)

Question       Wrong in 80-point band    Wrong in 90-point band    Difference
Question 1              45                        36                    9
Question 7              68                        23                   45
Question 9              43                        23                   20
Question 10             10                         7                    3
Question 15              7                         5                    2
Question 16              7                         3                    4
Question 18              2                         1                    1
Question 20              3                         1                    2
Question 23             23                         0                   23
Question 24              2                         0                    2
Question 26              2                         1                    1
Question 28              6                         0                    6
Question 39              3                         0                    3

Table 1 compares the questions that the 80-point-band candidates got wrong with those that the 90-point-band candidates got wrong. For example, on question 23, all of the 90-point-band candidates were right, but many of the 80-point-band candidates were wrong. Question 23 is therefore a question that determines whether a candidate scores in the 80-point or the 90-point band.

In the case of question 7, only 23 of the 90-point-band candidates got it wrong, while 68 of the 80-point-band candidates did; this large gap means question 7 likewise separates the 80-point band from the 90-point band.

In the same way, by showing the difference in the number of wrong candidates between score bands, one can analyze which questions discriminate between them. Furthermore, instead of differences in the number of wrong candidates, the correct answer rate for each question can be shown as a percentage for each group, the differences in those percentages compared, and the questions listed in descending order of difference.

In this way, when the wrong problems of the candidates belonging to each score-bracket group are analyzed, the types and patterns of wrong answers for each score group can be identified, and the problems that determine group membership can be found.

The problem sorting anomaly analyzing unit calculates and displays, for each problem that a candidate got wrong in the specific test set, the average score of the correct answerers and the average score of the wrong answerers.

Calculating, for each question a candidate got wrong, the average score of the candidates who answered it correctly and the average score of those who answered it wrongly yields a result such as Table 2 below.

For example, in the case of question 3, the average score of the candidates who answered it correctly was 68.3, lower than my score of 79: candidates scoring below me still got it right, so this is a problem that should not have been wrong at my level. For questions 7, 11, and 33, the average scores of the correct answerers and the wrong answerers are close to each other, so these can be judged as problems that anyone at this level could plausibly miss. For question 14, the average score of the correct answerers was 92.6 while the average score of the wrong answerers was 77.2: most candidates at my level got it wrong, and mainly candidates with a higher level of learning got it right. This problem can therefore be regarded as one that determines the score difference between a candidate who scored 79 and candidates who scored 90 or more.

[Table 2]  (my score: 79 points)

Question I got wrong (points)   Avg. score of correct answerers   Avg. score of wrong answerers
         3 (3 points)                        68.3                             52.7
         7 (3 points)                        77.7                             73.0
        11 (3 points)                        78.7                             65.9
        14 (4 points)                        92.6                             77.2
        24 (3 points)                        87.2                             79.1
        33 (2 points)                        79.3                             74.3
        40 (2 points)                        84.4                             80.3

Using this analysis from the problem sorting anomaly analyzing unit, a candidate can see, for each problem he or she got wrong, whether it is a problem that should not have been missed at his or her level of learning, a problem hard enough that it could plausibly be missed at that level, or a problem that higher-scoring candidates answered correctly and that therefore accounts for the score gap.
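The calculation behind Table 2 can be sketched as follows, assuming each candidate's total score and per-question correctness are available; the names are illustrative, not from the patent.

```python
def answerer_averages(scores, answered_correctly):
    """Average total score of candidates who got a question right vs wrong.

    scores: candidate id -> total test score.
    answered_correctly: candidate id -> True if that candidate answered
    the question correctly.  Returns (avg_correct, avg_wrong); either
    side may be None when empty (e.g. everyone got the question right).
    """
    right = [scores[c] for c, ok in answered_correctly.items() if ok]
    wrong = [scores[c] for c, ok in answered_correctly.items() if not ok]
    avg = lambda xs: round(sum(xs) / len(xs), 1) if xs else None
    return avg(right), avg(wrong)

scores = {"a": 90, "b": 85, "c": 79, "d": 70}
result = answerer_averages(scores, {"a": True, "b": True, "c": False, "d": False})
# correct answerers average (90+85)/2 = 87.5; wrong answerers (79+70)/2 = 74.5
```

Comparing the two averages to the candidate's own score reproduces the judgment made in the text for questions 3 and 14.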

Hereinafter, the operation of the mock test scoring and problem analysis system according to the embodiment of the present invention will be described.

FIG. 8 is a flowchart showing a scoring and problem analysis method according to an embodiment of the present invention.

Referring to FIG. 8, the scoring and problem analysis method according to an exemplary embodiment of the present invention includes: generating a test set database storing a plurality of test sets (S100); generating a candidate database storing information about the test candidates of the test sets (S200); a test application step of selecting one of the test sets and administering it (S300); a test scoring step of scoring the candidates who took the selected test set (S400); a correct/wrong answer rate calculation step of calculating, using the scoring result, the correct answer rate and the wrong answer rate for each question of the selected test set (S500); and a correct/wrong answer rate information output step of displaying the calculated rates on the display device of each candidate (S600).

The correct/wrong answer rate calculation step (S500) includes: a candidate grouping step (S510) of grouping the test candidates of the selected test set into a plurality of groups according to their test scores; a group-wise rate calculation step (S520) of calculating the correct and wrong answer rates of the candidates in each group; and a top wrong answer providing step (S530) of finding, for each group, the wrong answers most frequently selected by its candidates.

In the correct/wrong answer rate information output step (S600), the display unit displays the correct/wrong answer rate and the top wrong answer for each group.

In the candidate grouping step (S510), the test candidates of the selected test set are grouped into predetermined intervals in descending score order. For example, they can be grouped into 90-point, 80-point, 70-point, and 60-point-and-below brackets.
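A minimal sketch of step S510's score-bracket grouping follows; the bracket boundaries are one plausible reading of "90 points, 80 points, 70 points, 60 points, and below", and all names are illustrative.

```python
def score_bracket(score):
    """Map a test score to the bracket labels used in step S510:
    90s, 80s, 70s, and 60s-and-below (a sketch of one possible rule)."""
    if score >= 90:
        return "90s"
    if score >= 80:
        return "80s"
    if score >= 70:
        return "70s"
    return "60s_and_below"

def group_candidates(scores):
    """Group candidate ids by their score bracket."""
    groups = {}
    for candidate, score in scores.items():
        groups.setdefault(score_bracket(score), []).append(candidate)
    return groups

groups = group_candidates({"first": 68, "second": 91, "third": 85})
# the first candidate (68 points) falls in the 60s-and-below group
```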

Suppose a first candidate has taken the first test set; the first candidate's examination is then scored. The resulting score determines the group to which the first candidate belongs: a score of 68 points, for example, places the first candidate in the 60-point group.

Once the group of the first candidate is determined, the group-wise rate calculation step (S520) calculates the correct or wrong answer rate of the candidates in each of the plurality of groups, that is, for the 90-point, 80-point, 70-point, and 60-point-and-below groups. Next, in the top wrong answer providing step (S530), the most frequently selected wrong answer is found for each group: among the answer choices for a specific problem, the choice that was most often selected incorrectly is taken as the top wrong answer, determined group by group. Alternatively, if the first candidate belongs to the 60-point group, the system may show only the correct answer rates and top wrong answers of the 60-point group, rather than those of the other groups.
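Step S530's search for the top wrong answer of a group can be sketched as below; the data shapes are assumptions for illustration.

```python
from collections import Counter

def top_wrong_choice(responses, correct_choice):
    """Find the wrong answer choice most often selected for one question.

    responses: list of choice numbers picked by the candidates of one
    score group.  Returns (choice, count), or None if nobody was wrong.
    """
    wrong = Counter(c for c in responses if c != correct_choice)
    if not wrong:
        return None
    return wrong.most_common(1)[0]

# Hypothetical 60-point-group responses to a question whose answer is choice 4:
top = top_wrong_choice([1, 1, 4, 2, 1, 3, 4], correct_choice=4)
# choice 1 was the most frequently selected wrong answer (3 candidates)
```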

Further, in the mock test scoring and analysis method according to the embodiment of the present invention, the candidate's preparation is improved by using test sets other than the one the candidate took: when the first candidate has taken the first test set, the problems that candidates belonging to the same group as the first candidate got wrong in the other test sets are provided.

First, the candidates who took the first test set are scored, and each candidate is grouped by the result; in other words, the candidates are graded into 90-point, 80-point, 70-point, and 60-point brackets.

Of course, grouping can be done by score bracket in this way, or the test takers of the first test set can be grouped in order of top 3%, top 7%, top 10%, top 20%, and top 40%; grouping can be done in various ways according to the grouping criterion.
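The percentile alternative can be sketched as follows. The tie-breaking and remainder handling are assumptions the patent does not specify; names are illustrative.

```python
def percentile_groups(scores, cutoffs=(0.03, 0.07, 0.10, 0.20, 0.40)):
    """Group candidates by rank percentile (top 3%, 7%, 10%, 20%, 40%),
    one alternative to fixed score brackets.  A sketch: ties are broken
    by sort order and the remainder goes into a final 'rest' group."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    n = len(ranked)
    groups, taken = {}, 0
    for cut in cutoffs:
        upto = max(taken, round(n * cut))
        groups[f"top_{int(cut * 100)}%"] = ranked[taken:upto]
        taken = upto
    groups["rest"] = ranked[taken:]
    return groups

scores = {f"c{i}": 100 - i for i in range(100)}  # c0 highest ... c99 lowest
groups = percentile_groups(scores)
# top 3% holds the 3 best candidates, top 7% the next 4, and so on
```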

In the candidate grouping step, if the candidates of the first test set have been grouped into 90-point, 80-point, 70-point, and 60-point brackets, then the candidates of the other test sets, such as the second and third test sets, are likewise grouped into 90-point, 80-point, 70-point, and 60-point brackets.

In the same-group wrong-answer problem providing step (S540), the score group to which the first candidate belongs in the first test set is determined. If the first candidate belongs to the 80-point group as a result of scoring, candidates who scored in the 80-point bracket in a test set other than the first, i.e. the second test set, are found, and then the problems they most often got wrong are searched for.

In the embodiment of the present invention, only the problems that more than 50% of the candidates who scored in the 80-point bracket of the second test set got wrong are found.

Similarly, for the third test set, the candidates who scored in the 80-point bracket are found, and the problems that more than 50% of them got wrong are identified.

In this way, the problems that the other candidates belonging to the same group as the first candidate frequently got wrong in other test sets are found.
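The 50%-threshold search over another test set can be sketched as below, assuming per-question wrong counts for the same-group candidates have been tallied; names and figures are illustrative.

```python
def hard_problems_for_group(group_wrong_counts, group_size, threshold=0.5):
    """Problems from another test set that at least `threshold` of the
    same-group candidates got wrong (50% in the described embodiment).

    group_wrong_counts: question -> number of same-group candidates of
    the other test set who missed it.
    """
    return sorted(q for q, n in group_wrong_counts.items()
                  if n / group_size >= threshold)

# Hypothetical 80-point group of the second test set: 12 candidates.
hard = hard_problems_for_group({3: 7, 8: 6, 15: 2}, group_size=12)
# questions 3 (7/12) and 8 (6/12) cross the 50% line; question 15 (2/12) does not
```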

At this time, besides showing the problems that the same-group candidates in the other test set frequently got wrong, the system can also show which answer choices they selected when they got them wrong. In other words, if the correct answer to a problem was choice 3 and more than 50% of the same-group candidates got it wrong, this problem is shown together with the distribution of the wrong choices: whether most wrong answerers picked choice 1, or most picked choice 2, and so on. The error rate for each selected choice is displayed for the problem. For example, if 40 of the 50 candidates in the same group got a problem wrong whose correct answer was choice 4, and 30 of them picked choice 1, 8 picked choice 2, and 2 picked choice 3, then the error rate for choice 1 is 75%, for choice 2 is 20%, and for choice 3 is 5%; these per-choice error rates of the same-group candidates are displayed.
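The per-choice error rates can be computed directly from the wrong-choice tallies; this sketch reproduces the worked numbers in the text (40 of 50 same-group candidates wrong; 30 picked choice 1, 8 picked choice 2, 2 picked choice 3).

```python
def choice_error_rates(wrong_choice_counts, total_wrong):
    """Share of wrong answerers that picked each wrong choice, as a percent.

    wrong_choice_counts: choice number -> how many wrong answerers picked it.
    total_wrong: total number of candidates who got the problem wrong.
    """
    return {choice: round(100 * n / total_wrong, 1)
            for choice, n in wrong_choice_counts.items()}

rates = choice_error_rates({1: 30, 2: 8, 3: 2}, total_wrong=40)
# choice 1 -> 75.0%, choice 2 -> 20.0%, choice 3 -> 5.0%
```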

In the similar wrong-answer problem providing step (S550), the candidates who got the same problems wrong as the first candidate are found, and then the problems that those candidates got wrong in the other test sets are identified. That is, if the first candidate got problem 10 of the first test set wrong, the candidates of the first test set who also got problem 10 wrong are found. If 20 candidates of the first test set got problem 10 wrong, the problems that these 20 candidates frequently got wrong in the other test sets are found. For example, assuming all 20 of them also took the second test set, the problems in the second test set that 50% or more of the 20 got wrong are found and shown to the first candidate.

At this time, besides showing the problems that the 20 candidates frequently got wrong, the system can also show which answer choices they selected when wrong. For instance, if 10 of the 20 got a problem wrong whose correct answer was choice 4, and 7 of them picked choice 1, 2 picked choice 2, and 1 picked choice 3, then the error rate for choice 1 is 70%, for choice 2 is 20%, and for choice 3 is 10%; these per-choice error rates are displayed.

In the candidate grouping step, candidates whose wrong problems coincide with the first candidate's wrong problems at or above a certain ratio are selected and grouped into a similar wrong group. Here the ratio defaults to 50%: candidates whose wrong problems coincide with the first candidate's by 50% or more are grouped into the similar wrong group. This ratio can be selected by the learner, who may instead search only for candidates with a similarity ratio of 70% or more, or of 90% or more.
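One plausible reading of the similarity ratio is sketched below; the patent does not pin the formula down, so the overlap measure and all names are assumptions for illustration.

```python
def wrong_overlap_ratio(my_wrong, other_wrong):
    """Fraction of the first candidate's wrong problems that another
    candidate also got wrong (one plausible reading of the 'similarity
    ratio'; the source text does not specify the exact formula)."""
    if not my_wrong:
        return 0.0
    return len(set(my_wrong) & set(other_wrong)) / len(set(my_wrong))

def similar_wrong_group(my_wrong, others_wrong, threshold=0.5):
    """Candidates whose overlap with my wrong problems meets the
    learner-selected threshold (default 50%, per the text)."""
    return [cand for cand, wrong in others_wrong.items()
            if wrong_overlap_ratio(my_wrong, wrong) >= threshold]

group = similar_wrong_group({7, 10, 23},
                            {"a": {7, 10}, "b": {23}, "c": {7, 10, 23}})
# "a" (2/3) and "c" (3/3) qualify at the 50% default; "b" (1/3) does not
```

Raising the threshold to 0.7 or 0.9 implements the learner-selected 70% and 90% searches mentioned above.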

Alternatively, in the candidate grouping step, the test candidates of the selected test set are first grouped into score intervals in descending score order, the interval group to which the first candidate belongs is determined, and then, among the candidates in that group, those whose wrong problems coincide with the first candidate's at or above a certain ratio are selected and grouped into the similar wrong group.

That is, rather than selecting from all candidates who took a specific test set, it is also possible to select, from among the candidates in the same score interval as oneself, those with similar wrong problems, and group them into the similar wrong group.

It is also possible to group separately those candidates who not only got the same problems wrong but also chose the same wrong answer choices as oneself at a high ratio. This makes it possible to find candidates even closer to one's own learning pattern, learning level, or tendencies, and to group them into a similar choice selection group.

In the same-group wrong-answer problem providing step, the first candidate who took the first test set can be shown the problems that more than 50% of the other candidates belonging to the similar wrong group or the similar choice selection group got wrong in other test sets, such as the second test set.

The applicant-customized mock test scoring and problem analysis method through grouping of candidates according to the embodiment of the present invention further includes a group determination problem analysis step and a problem sorting anomaly analysis step.

The group determination problem analysis step calculates, for each question, the correct-answer rate of each group, and then calculates the difference in correct-answer rates between the groups for each question.

The problem sorting anomaly analysis step calculates and displays, for each problem that each candidate got wrong in the specific test set, the average score of the correct answerers and the average score of the wrong answerers.

A detailed description of the group determination problem analysis step and the problem sorting anomaly analysis step has already been given in the description of the group determination problem analyzing unit and the problem sorting anomaly analyzing unit, and is omitted here.

The applicant-customized mock test scoring and problem analysis method through grouping of candidates according to the embodiment of the present invention may further search for and provide, for each problem a candidate got wrong in the selected test set, UHD video content with high relevance to the correct answer, using keywords contained in the correct answer content.

After various UHD video contents related to the test subject are stored on a server, the video with the highest relevance among the contents stored on the server is found using the keywords in the correct answer and provided to the candidate.
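The patent does not specify the relevance measure, so the sketch below uses a simple keyword-overlap score as a stand-in; the video names and descriptions are hypothetical.

```python
def rank_videos_by_keywords(keywords, videos):
    """Rank stored video contents by how many answer-explanation
    keywords appear in each video's description (a simple overlap
    score; the source text does not specify the relevance measure).

    videos: video id -> description text.
    """
    def score(description):
        words = set(description.lower().split())
        return sum(1 for k in keywords if k.lower() in words)
    return sorted(videos, key=lambda v: score(videos[v]), reverse=True)

videos = {
    "vid_a": "quadratic equation roots discriminant",
    "vid_b": "geometry triangle angles",
}
ranked = rank_videos_by_keywords(["quadratic", "discriminant"], videos)
# vid_a matches both keywords and ranks first
```

A production system would more likely use full-text search or embedding similarity, but the selection logic, keywords from the correct answer matched against stored content, is the same.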

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it is to be understood that the invention is not limited to the disclosed embodiments, but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the invention.

100: test set database 200: candidate database
300: Test preparation part 400: Test scoring part
500: positive correct ratio calculation unit 600: positive correct ratio output unit
700: same-group wrong-answer problem providing unit 800:

Claims (32)

In a mock test scoring and problem analysis system,
A test set database storing a plurality of test sets;
A candidate database storing information about a test candidate of the test sets;
A test application unit for selecting one of the test sets and administering the test;
A test scoring unit for scoring the candidates for the simulated exam of the selected test set;
A positive correct answer rate calculation unit for calculating a correct answer rate and an incorrect answer rate for each question of the selected test set using the scoring result of the test scoring unit;
And a correct response rate information output unit for displaying information on the percentage of correct answers and the percentage of incorrect answers for each problem calculated by the positive correct rate calculation unit on a display device of each candidate,
Wherein the positive error rate calculation unit comprises:
A candidate grouping module for grouping the test candidates of the selected test set into a plurality of groups according to a test score according to the scoring of the test scoring unit; And
And a group correct positive rate calculation module for calculating a correct rate and a false rate of the candidates for each of the plurality of groups,
Wherein the positive error rate information output unit displays a positive error rate for each group on a display device of each of the candidates.
The system according to claim 1,
Wherein the positive error rate calculation unit comprises:
And a top wrong answer providing module for finding and providing, for each group, the wrong answers most frequently selected by the candidates of that group,
Wherein the positive error rate information output unit displays the positive error rate and the top wrong answer for each group on a display device of each of the candidates.
The system according to claim 1 or 2,
Wherein the candidate grouping module groups the test candidates of the selected test set into predetermined score intervals in descending score order.

The system of claim 3,
Wherein the candidate grouping module groups the test candidates of the selected test set into 90-point, 80-point, 70-point, and 60-point-and-below brackets in descending score order, in the applicant-customized mock test scoring and problem analysis system through grouping of candidates.
The system according to claim 1 or 2,
Wherein the candidate grouping module selects candidates whose wrong problems coincide with a first candidate's wrong problems at or above a certain ratio and groups them into a similar wrong group, in the applicant-customized mock test scoring and problem analysis system through grouping of candidates.
The system according to claim 1 or 2,
Wherein the candidate grouping module groups the test candidates of the selected test set into score intervals in descending score order, determines the interval group to which a first candidate belongs, and selects, from among the candidates belonging to that group, candidates whose wrong problems coincide with the first candidate's at or above a certain ratio, grouping them into a similar wrong group, in the applicant-customized mock test scoring and problem analysis system through grouping of candidates.
The system of claim 5,
Wherein the candidate grouping module groups, from among the candidates belonging to the similar wrong group, those who selected the same wrong answer choices as the first candidate at or above a certain ratio into a similar choice selection group, in the applicant-customized mock test scoring and problem analysis system through grouping of candidates.
The system according to claim 1 or 2,
Wherein the candidate grouping module groups the candidates of each test set on the same basis according to the scoring results of the candidates for the plurality of test sets,
The system further comprising a same-group wrong-answer problem providing unit which shows, to a first candidate who has taken the selected test set, the test problems with a high error rate among the candidates belonging to the same group as the first candidate in test sets other than the selected test set.
The system of claim 8,
Wherein the same-group wrong-answer problem providing unit
Shows, to the first candidate who has taken the selected test set, the test problems with an error rate of 50% or more among the candidates belonging to the same group as the first candidate in test sets other than the selected test set, in the applicant-customized mock test scoring and problem analysis system through grouping of candidates.
The system according to claim 1 or 2,
Further comprising a similar wrong-answer problem providing unit which provides, to a first candidate who has taken the selected test set, the problems with a high error rate, in test sets other than the selected test set, among the candidates who got the same problems wrong as the first candidate in the selected test set.
The system of claim 10,
Wherein the similar wrong-answer problem providing unit
Provides, to the first candidate who has taken the selected test set, the problems with an error rate of 50% or more, in test sets other than the selected test set, among the candidates who got the same problems wrong as the first candidate in the selected test set.
The system of claim 11,
Wherein the similar wrong-answer problem providing unit
Also finds and provides, for the problems with an error rate of 50% or more, the wrong answers most frequently selected by those candidates, in the applicant-customized mock test scoring and problem analysis system through grouping of candidates.
The system of claim 2,
Wherein the candidate grouping module groups the test candidates of the selected test set in order of top 3%, top 7%, top 10%, top 20%, and top 40% by score, in the applicant-customized mock test scoring and problem analysis system through grouping of candidates.
The system according to claim 1,
Further comprising a group determination problem analyzing unit which calculates, for each question, the correct answer rate of each group and the difference in correct answer rates between the groups, in the applicant-customized mock test scoring and problem analysis system through grouping of candidates.
The system according to claim 1,
Wherein the differences in correct answer rates between the groups are displayed in order of problem, in the applicant-customized mock test scoring and problem analysis system through grouping of candidates.
The system according to claim 1,
Further comprising a problem sorting anomaly analyzing unit which calculates and displays the average score of the correct answerers and the average score of the wrong answerers for each problem that each candidate got wrong in the selected test set, in the applicant-customized mock test scoring and problem analysis system through grouping of candidates.

In an applicant-customized mock test scoring and problem analysis method through grouping of candidates,
Generating a test set database storing a plurality of test sets;
Generating a candidate database storing information about a test candidate of the test sets;
A test application step of selecting one of the test sets and administering the test;
A test scoring step of scoring the candidates for the simulated exam of the selected test set;
Calculating a correct answer rate and a wrong answer rate for each question of the selected test set using the scoring result of the test scoring step;
And a correct answer rate information output step of displaying information on a correct answer rate and an incorrect answer rate for each question calculated by the positive answer rate calculation unit on a display device of each candidate,
Wherein the positive correct ratio calculation step comprises:
A candidate grouping step of grouping the test candidates of the selected test set into a plurality of groups according to a test score according to the scoring of the test scoring unit; And
And a group-wise rate calculation step of calculating the correct answer rate and the wrong answer rate of the candidates for each of the plurality of groups,
And displaying the positive error rate for each group on the display device of each candidate in the step of outputting the positive error rate information.
18. The method of claim 17,
Wherein the positive correct ratio calculation step comprises:
And a top wrong answer providing step of finding and providing, for each group, the wrong answers most frequently selected by the candidates of that group,
Wherein the positive error rate information output step displays the positive error rate and the top wrong answer for each group on the display device of each candidate.
The method according to claim 17 or 18,
Wherein the candidate grouping step groups the test candidates of the selected test set into predetermined score intervals in descending score order, in the applicant-customized mock test scoring and problem analysis method through grouping of candidates.
20. The method of claim 19,
Wherein the candidate grouping step groups the test candidates of the selected test set into 90-point, 80-point, 70-point, and 60-point-and-below brackets in descending score order, in the applicant-customized mock test scoring and problem analysis method through grouping of candidates.
The method according to claim 17 or 18,
Wherein the candidate grouping step selects candidates whose wrong problems coincide with a first candidate's wrong problems at or above a certain ratio and groups them into a similar wrong group, in the applicant-customized mock test scoring and problem analysis method through grouping of candidates.
The method according to claim 17 or 18,
Wherein the candidate grouping step groups the test candidates of the selected test set into score intervals in descending score order, determines the interval group to which a first candidate belongs, and selects, from among the candidates belonging to that group, candidates whose wrong problems coincide with the first candidate's at or above a certain ratio, grouping them into a similar wrong group, in the applicant-customized mock test scoring and problem analysis method through grouping of candidates.
22. The method of claim 21,
Wherein the candidate grouping step groups, from among the candidates belonging to the similar wrong group, those who selected the same wrong answer choices as the first candidate at or above a certain ratio into a similar choice selection group, in the applicant-customized mock test scoring and problem analysis method through grouping of candidates.
The method according to claim 17 or 18,
Wherein the candidate grouping step groups the candidates of each test set on the same basis according to the scoring results of the candidates for the plurality of test sets,
The method further comprising a same-group wrong-answer problem providing step of showing, to a first candidate who has taken the selected test set, the test problems with a high error rate among the candidates belonging to the same group as the first candidate in test sets other than the selected test set.
25. The method of claim 24,
In the same-group wrong-answer problem providing step,
The first candidate who has taken the selected test set is shown the test problems with an error rate of 50% or more among the candidates belonging to the same group as the first candidate in test sets other than the selected test set, in the applicant-customized mock test scoring and problem analysis method through grouping of candidates.
The method according to claim 17 or 18,
Further comprising a similar wrong-answer problem providing step of providing, to a first candidate who has taken the selected test set, the problems with a high error rate, in test sets other than the selected test set, among the candidates who got the same problems wrong as the first candidate in the selected test set.
27. The method of claim 26,
In the similar wrong-answer problem providing step,
The first candidate who has taken the selected test set is provided with the problems with an error rate of 50% or more, in test sets other than the selected test set, among the candidates who got the same problems wrong as the first candidate in the selected test set.
28. The method of claim 27,
In the similar wrong-answer problem providing step,
The wrong answers most frequently selected by those candidates are also found and provided for the problems with an error rate of 50% or more, in the applicant-customized mock test scoring and problem analysis method through grouping of candidates.
19. The method of claim 18,
Wherein the candidate grouping step groups the test candidates of the selected test set in order of top 3%, top 7%, top 10%, top 20%, and top 40% by score, in the applicant-customized mock test scoring and problem analysis method through grouping of candidates.
18. The method of claim 17,
And a group determination problem analysis step of calculating, for each question, the correct answer rate of each group and the difference in correct answer rates between the groups, in the applicant-customized mock test scoring and problem analysis method through grouping of candidates.
18. The method of claim 17,
An applicant-customized mock test scoring and problem analysis method through grouping of candidates, wherein the differences in correct answer rates between the groups are displayed in order of problem.
18. The method of claim 17,
Further comprising a problem sorting anomaly analysis step of calculating and displaying the average score of the correct answerers and the average score of the wrong answerers for each problem that each candidate got wrong in the selected test set, in the applicant-customized mock test scoring and problem analysis method through grouping of candidates.




KR1020150094545A 2015-07-02 2015-07-02 Applicant-customized evaluation and analysis system by grouping the test applicants and the method thereof KR20170004330A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150094545A KR20170004330A (en) 2015-07-02 2015-07-02 Applicant-customized evaluation and analysis system by grouping the test applicants and the method thereof


Publications (1)

Publication Number Publication Date
KR20170004330A true KR20170004330A (en) 2017-01-11

Family

ID=57832744

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150094545A KR20170004330A (en) 2015-07-02 2015-07-02 Applicant-customized evaluation and analysis system by grouping the test applicants and the method thereof

Country Status (1)

Country Link
KR (1) KR20170004330A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102163704B1 (en) * 2019-07-11 2020-10-08 (주)오앤이교육 An optimal learning path presentation system and method by analyzing knowledge state of learners
KR20220026286A (en) * 2020-08-25 2022-03-04 태그하이브 주식회사 Method of providing contents controlled dynamic difficulty and server performing the same

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130141232A1 (en) 2002-05-04 2013-06-06 Richman Technology Corporation System for real time security monitoring



Scherman et al. Constructing benchmarks for monitoring purposes: Evidence from South Africa

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E90F Notification of reason for final refusal
E601 Decision to refuse application