CN110471936A - Hybrid SQL automatic scoring method - Google Patents
Hybrid SQL automatic scoring method
- Publication number
- CN110471936A CN110471936A CN201910763451.3A CN201910763451A CN110471936A CN 110471936 A CN110471936 A CN 110471936A CN 201910763451 A CN201910763451 A CN 201910763451A CN 110471936 A CN110471936 A CN 110471936A
- Authority
- CN
- China
- Prior art keywords
- answer
- key
- topic
- sql
- syntax tree
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/242—Query formulation
- G06F16/2433—Query languages
- G06F16/2445—Data retrieval commands; View definitions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2455—Query execution
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/28—Databases characterised by their database models, e.g. relational or object models
- G06F16/284—Relational databases
- G06F16/285—Clustering or classification
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Databases & Information Systems (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computational Linguistics (AREA)
- Mathematical Physics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The present invention discloses a hybrid SQL automatic scoring method. Correct answers are identified by comparing execution results, which expands the reference-answer set of each question; the syntactic or textual difference between each submitted SQL answer and the reference answers is then computed, and on this basis all SQL answers are scored. The invention reduces the teacher's workload of exhaustively enumerating many different reference answers and of marking SQL answers by hand, grades automatically from the submitted answer and each reference answer, reduces the subjective errors that manual marking may introduce, and is applicable to teaching and examination settings.
Description
Technical field
The present invention relates to automatic grading in teaching and examination settings, and more particularly to a hybrid SQL automatic scoring method.
Background art
Existing SQL assessment systems fall broadly into two kinds: static assessment and dynamic assessment.
A static assessment method uses the semantic or syntactic information of SQL to identify the differences between the answer a student submits and the reference answer the teacher provides. Depending on what the calculation is based on in the SQL program, static methods can be divided into two broad classes: attribute-counting methods and structure metrics. Attribute-counting methods were the first program-similarity techniques researchers proposed; as the name suggests, they judge the similarity of programs by counting key attributes in the program code, essentially tallying how often certain attributes occur. Attribute counting only inspects surface attributes of a program and does not take its structural and semantic features into account. Structure metrics divide into string-based metrics, tree-based metrics, and so on. A string-based similarity metric, as the name suggests, converts a program into a string and judges the similarity of program code by matching those strings; a tree-based metric converts program code into a tree and infers code similarity from the similarity of the trees.
A dynamic assessment method executes the student's answer and the teacher's reference answer separately on test data and judges the final score by comparing the execution results. Under this strategy, as soon as the two execution results differ, the submitted answer is judged wrong. Most dynamic methods can therefore only produce the binary result "correct" or "wrong" and cannot give a wider range of evaluation results.
Existing SQL automatic assessment systems have the following technical shortcomings. (1) Static assessment uses the semantic or syntactic information of SQL to identify the differences between the student's answer and the teacher's reference answer. Because SQL itself is highly expressive, a question often has many different correct answers; yet in the vast majority of real teaching or examination settings, the teacher provides only one reference answer per SQL question. When only a few reference answers are available, scoring against them can cause serious misjudgments: even correct answers that merely differ from the reference answers may be treated as wrong. (2) In a system based on dynamic assessment, as soon as the execution result of the student's answer differs from that of the reference answer, the answer is scored 0 instead of receiving partial credit in proportion to how correct it is. This grading standard clearly does not meet the requirements of exam marking and teaching practice. (3) Dynamic assessment can only handle answers that actually execute on the test data; for answers that cannot execute or even compile, it can give no evaluation result at all.
Summary of the invention
The purpose of the present invention is to provide a hybrid SQL automatic scoring method.
The technical solution adopted by the present invention is as follows:
A hybrid SQL automatic scoring method, comprising the following steps:
Step 1: enter the SQL question information, which includes the problem description, the data model, the test data set, the reference answers, and the total score.
Step 2: execute the answer the student submits on the test data set and classify the answer according to the execution result. If execution throws an exception or a compile error, the answer is classified as "not executable". If the answer compiles, its execution result is further compared with that of the reference answer; if the two results are identical the answer is classified as "correct", otherwise as "executable".
Step 3: apply the assessment appropriate to each type of answer.
Step 3-1: when an answer is classified as "correct", add it to the reference-answer set and award the corresponding score.
Step 3-2: when an answer is classified as "executable", use syntax analysis to convert the answer and every element of the question's reference-answer set into syntax trees, assess the answer from the similarity between the syntax trees, and award a proportional score.
Step 3-3: when an answer classified as "not executable" is assessed, compute the similarity between the answer and each element of the question's reference-answer set with a textual difference method, and score according to the comparison.
Further, in step 1 the problem description states the technical problem posed over a specific data model; the data model gives the examinee the meta-information of the database model relevant to the SQL question, from which the examinee writes the SQL statement; the test data set is an independent database or data file containing a number of data records relevant to the question; the reference answer is the answer the teacher prepares in advance for each SQL question; and the total score is the score obtained when the examinee solves the question correctly.
Further, the data model may be presented in different ways, such as a data definition language or an entity-relationship (E-R) diagram.
Further, the total score is a number between 0 and 100.
Further, when an answer is classified as "executable", the specific scoring steps in step 3-2 are:
Step 3-2-1: for an "executable" answer SA and a reference answer RAi of the question, convert SA and RAi into the corresponding syntax trees SS and CSi by syntax analysis, and compute their syntactic similarity Si by formula (1), in which |SS ∩ CSi| denotes the number of nodes in the intersection of syntax trees SS and CSi, |CSi| denotes the number of nodes of syntax tree CSi, |SS − CSi| denotes the number of nodes in the difference of SS and CSi, and |SS| denotes the number of nodes of syntax tree SS.
Note that, because every syntax tree has a root node with no actual syntactic meaning, the root node must first be excluded from the node operations above.
Step 3-2-2: for a question Q with n reference answers, compute the syntactic similarity between SA and each reference answer RAi separately, and take the product of the maximum of these similarities and the total score as the final score Score, computed by formula (2), in which totalScore denotes the total score of the question.
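The formula images were not captured in this text. Formula (2) is fully determined by the sentence above; formula (1) is not, so the Si shown below is only one plausible reconstruction built from exactly the four node counts the text defines, an assumption rather than the patent's actual formula:

```latex
% Formula (2), as stated in the text: the final score is the
% total score times the maximum similarity over all references.
\mathrm{Score} = \mathrm{totalScore} \times \max_{1 \le i \le n} S_i

% Formula (1) was not captured in this text. One plausible form
% over the four quantities the text defines (an assumption):
S_i = \frac{|SS \cap CS_i|}{|CS_i|}
      \cdot \left(1 - \frac{|SS - CS_i|}{|SS|}\right)
```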
Further, the specific steps of step 3-3 are:
Step 3-3-1: for a "not executable" submitted answer SA and a reference answer RAi of the question, compute the textual similarity between SA and RAi;
Step 3-3-2: for a question Q with n reference answers, compute the textual similarity between SA and each reference answer separately, and take the product of the maximum similarity and the total score as the final score.
Further, in step 3-3-1 or step 3-3-2 the textual similarity between the submitted answer and a reference answer is computed with a text-similarity metric such as the edit distance (Levenshtein distance) algorithm or the Hamming distance algorithm.
By adopting the above technical solution, the invention has the following advantages: 1) different correct answers can be identified from their execution results, which effectively avoids the adverse effect of variation in how SQL statements are written and expands the question's reference-answer set accordingly; 2) using syntax-based and text-based analysis, the answer a student submits can be compared comprehensively against all reference answers, yielding a fairer score.
The technical solution of the invention identifies correct answers by comparing execution results, expands the reference-answer set of each question, then computes the syntactic or textual difference between each submitted SQL answer and the reference answers, and on this basis scores all SQL answers. The invention reduces the teacher's workload of exhaustively enumerating many different reference answers and of marking SQL answers by hand, grades automatically from the submitted answer and each reference answer, reduces the subjective errors that manual marking may introduce, and is applicable to teaching and examination settings.
Detailed description of the invention
The present invention is described in further detail below in conjunction with the drawings and specific embodiments.
Fig. 1 is a flow diagram of the hybrid SQL automatic scoring method of the invention;
Fig. 2 is the syntax tree SS corresponding to Ans2;
Fig. 3 is the syntax tree CS1 corresponding to the first element of the reference-answer set;
Fig. 4 is the syntax tree CS2 corresponding to the second element of the reference-answer set.
Specific embodiment
As shown in Figs. 1 to 4, the invention discloses a hybrid SQL automatic scoring method comprising the following steps:
Step 1: enter the SQL question information, which includes the problem description, the data model, the test data set, the reference answers, and the total score.
Further, and specifically, in step 1 the problem description states the technical problem posed over a specific data model; the data model gives the examinee the meta-information of the database model relevant to the SQL question and may be presented in different ways, such as a data definition language or an entity-relationship diagram, from which the examinee writes the SQL statement; the test data set is an independent database or data file containing a number of data records relevant to the question; the reference answer is the answer the teacher prepares in advance for each SQL question; the total score is the score obtained when the examinee solves the question correctly, a number between 0 and 100.
For example, suppose the question is set as follows:
(1) Problem description: "display the total number of persons born between 1935 and 1940";
(2) Test data set: a relational database containing a data table person; the person table holds a number of test records, with sample data as shown in Table 1:
Table 1. Test data in the person data table

| ID | First_name | Last_name | Year_born |
| --- | --- | --- | --- |
| 2 | Leonardo | DiCaprio | 1974 |
| 4 | Billy | Zane | 1966 |
| 5 | Kathy | Bates | 1940 |
| 6 | Michael | Ford | 1933 |
| 7 | Russell | Carpenter | 1971 |

(3) Data model: CREATE TABLE "person" ("id" INTEGER NOT NULL, "first_name" CHAR(255), "last_name" CHAR(255), "year_born" INTEGER, PRIMARY KEY ("id"));
(4) Reference answer: "select count(*) from person where year_born >= 1935 and year_born <= 1940";
(5) Total score: full marks, 100 points.
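The question information entered in step 1 can be sketched as a plain record; the field names below are illustrative, not prescribed by the patent:

```python
# Illustrative question record for Step 1. Field names and the
# file name "person.db" are assumptions, not from the patent.
question = {
    "description": "display the total number of persons born "
                   "between 1935 and 1940",
    "data_model": 'CREATE TABLE "person" ("id" INTEGER NOT NULL, '
                  '"first_name" CHAR(255), "last_name" CHAR(255), '
                  '"year_born" INTEGER, PRIMARY KEY ("id"))',
    "test_data_set": "person.db",   # an independent database or data file
    "reference_answers": ["select count(*) from person "
                          "where year_born >= 1935 and year_born <= 1940"],
    "total_score": 100,             # a number between 0 and 100
}
```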
Step 2: execute the answer the student submits on the test data set and classify the answer according to the execution result. If execution throws an exception or a compile error, the answer is classified as "not executable". If the answer compiles, its execution result is further compared with that of the reference answer; if the two results are identical the answer is classified as "correct", otherwise as "executable".
For example, consider the answers submitted by the following three students:
Ans1: select count(*) from person where year_born in (1935, 1936, 1937, 1938, 1939, 1940);
Ans2: select * from person where year_born in (1935, 1940);
Ans3: select count(*) from person where year_born between (1935, 1940);
Ans1, Ans2 and Ans3 are executed separately on the given test data set. Ans1's execution result is identical to that of the reference answer, so Ans1 is classified as "correct". Ans2's execution result differs from the reference answer's, so Ans2 is classified as "executable". Executing Ans3 makes the compiler throw an exception, so Ans3 is classified as "not executable".
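The classification of step 2 can be sketched against an in-memory SQLite copy of the Table 1 test data set; the function name `classify` and the use of SQLite are illustrative assumptions, not the patent's implementation:

```python
import sqlite3

# Build the Table 1 test data set in an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE person (id INTEGER NOT NULL, '
             'first_name CHAR(255), last_name CHAR(255), '
             'year_born INTEGER, PRIMARY KEY (id))')
conn.executemany("INSERT INTO person VALUES (?,?,?,?)", [
    (2, "Leonardo", "DiCaprio", 1974), (4, "Billy", "Zane", 1966),
    (5, "Kathy", "Bates", 1940), (6, "Michael", "Ford", 1933),
    (7, "Russell", "Carpenter", 1971)])

REFERENCE = ("select count(*) from person "
             "where year_born >= 1935 and year_born <= 1940")

def classify(answer_sql):
    """Step 2: run the answer on the test data set. An error means
    "not executable"; a result matching the reference answer means
    "correct"; any other result means "executable"."""
    try:
        submitted = conn.execute(answer_sql).fetchall()
    except sqlite3.Error:
        return "not executable"
    expected = conn.execute(REFERENCE).fetchall()
    return "correct" if submitted == expected else "executable"
```

Running the three example answers through `classify` reproduces the classifications described above.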
Step 3: apply the assessment appropriate to each type of answer.
Step 3-1: when an answer is classified as "correct", add it to the reference-answer set and award the corresponding score.
For example, Ans1, classified as "correct", is added to the reference-answer set, which then contains two elements: {"select count(*) from person where year_born >= 1935 and year_born <= 1940", "select count(*) from person where year_born in (1935, 1936, 1937, 1938, 1939, 1940)"}.
According to the score set for the question, an answer classified as "correct" receives full marks; Ans1 is therefore awarded 100% × 100 = 100.
Step 3-2: when an answer is classified as "executable", use syntax analysis to convert the answer and every element of the question's reference-answer set into syntax trees, assess the answer from the similarity between the syntax trees, and award a proportional score. Further, the specific scoring steps in step 3-2 are:
Step 3-2-1: for an "executable" answer SA and a reference answer RAi of the question, convert SA and RAi into the corresponding syntax trees SS and CSi by syntax analysis, and compute their syntactic similarity Si by formula (1), in which |SS ∩ CSi| denotes the number of nodes in the intersection of syntax trees SS and CSi, |CSi| denotes the number of nodes of syntax tree CSi, |SS − CSi| denotes the number of nodes in the difference of SS and CSi, and |SS| denotes the number of nodes of syntax tree SS.
Note that, because every syntax tree has a root node with no actual syntactic meaning, the root node must first be excluded from the node operations above.
Step 3-2-2: for a question Q with n reference answers, compute the syntactic similarity between SA and each reference answer RAi separately, and take the product of the maximum of these similarities and the total score as the final score Score, computed by formula (2), in which totalScore denotes the total score of the question.
For example, Ans2 (select * from person where year_born in (1935, 1940)) and the reference-answer set {"select count(*) from person where year_born >= 1935 and year_born <= 1940", "select count(*) from person where year_born in (1935, 1936, 1937, 1938, 1939, 1940)"} can be converted by syntax analysis into the three syntax trees SS, CS1 and CS2 shown in Figs. 2 to 4.
First, formula (1) is used to compute the similarity of SS with CS1 and with CS2, giving 0.198 and 0.647 respectively. By formula (2), the final score of Ans2 is 100 × 0.647 = 64.7. Compared with the reference answer the teacher originally gave, the newly added reference answer is much more similar to Ans2, so scoring Ans2 by its similarity to the newly added reference answer is also more reasonable.
Step 3-3: when an answer classified as "not executable" is assessed, compute the similarity between the answer and each element of the question's reference-answer set with a textual difference method, and score according to the comparison.
Further, the specific steps of step 3-3 are:
Step 3-3-1: for a "not executable" submitted answer SA and a reference answer RAi of the question, compute the textual similarity between SA and RAi;
Step 3-3-2: for a question Q with n reference answers, compute the textual similarity between SA and each reference answer separately, and take the product of the maximum similarity and the total score as the final score.
Further, in step 3-3-1 or step 3-3-2 the textual similarity between the submitted answer and a reference answer is computed with the edit distance (Levenshtein distance) algorithm or the Hamming distance algorithm.
For example, for Ans3 (select count(*) from person where year_born between (1935, 1940)) and the reference-answer set {"select count(*) from person where year_born >= 1935 and year_born <= 1940", "select count(*) from person where year_born in (1935, 1936, 1937, 1938, 1939, 1940)"}, the similarity of Ans3 to each of the two reference answers can be computed with the normalized edit-distance formula shown in formula (3), in which LevDist(SA, RA) denotes the edit distance between answer SA and reference answer RA, and length(RA) denotes the character length of reference answer RA.
Computing the similarity of Ans3 to the two reference answers with formula (3) gives 0.574 and 0.632. Taking the maximum and multiplying it by the question's score gives Ans3 a score of 0.632 × 100 = 63.2.
By adopting the above technical solution, the invention has the following advantages: 1) different correct answers can be identified from their execution results, which effectively avoids the adverse effect of variation in how SQL statements are written and expands the question's reference-answer set accordingly; 2) using syntax-based and text-based analysis, the answer a student submits can be compared comprehensively against all reference answers, yielding a fairer score.
The technical solution of the invention identifies correct answers by comparing execution results, expands the reference-answer set of each question, then computes the syntactic or textual difference between each submitted SQL answer and the reference answers, and on this basis scores all SQL answers. The invention reduces the teacher's workload of exhaustively enumerating many different reference answers and of marking SQL answers by hand, grades automatically from the submitted answer and each reference answer, reduces the subjective errors that manual marking may introduce, and is applicable to teaching and examination settings.
Claims (7)
1. A hybrid SQL automatic scoring method, characterized in that it comprises the following steps:
Step 1: enter the SQL question information, which includes the problem description, the data model, the test data set, the reference answers and the total score;
Step 2: execute the answer the student submits on the test data set and classify the answer according to the execution result: if execution throws an exception or a compile error, the answer is classified as "not executable"; if the answer compiles, its execution result is further compared with that of the reference answer, and if the two results are identical the answer is classified as "correct", otherwise as "executable";
Step 3: apply the assessment appropriate to each type of answer:
Step 3-1: when an answer is classified as "correct", add it to the reference-answer set of the question, thereby expanding that set, and award the corresponding score; according to the score set for the question, an answer classified as "correct" receives full marks;
Step 3-2: when an answer is classified as "executable", use syntax analysis to convert the answer and every element of the question's reference-answer set into syntax trees, assess the answer from the similarity between the syntax trees, and award a proportional score;
Step 3-3: when an answer classified as "not executable" is assessed, compute the similarity between the answer and each element of the question's reference-answer set with a textual difference method, and score according to the comparison.
2. The hybrid SQL automatic scoring method according to claim 1, characterized in that: in step 1 the problem description states the technical problem posed over a specific data model; the data model gives the examinee the meta-information of the database model relevant to the SQL question, from which the examinee writes the SQL statement; the test data set is an independent database or data file containing a number of data records relevant to the question; the reference answer is the answer the teacher prepares in advance for each SQL question; and the total score is the score obtained when the examinee solves the question correctly.
3. The hybrid SQL automatic scoring method according to claim 1, characterized in that: the data model is presented in different ways, such as a data definition language or an entity-relationship diagram.
4. The hybrid SQL automatic scoring method according to claim 2, characterized in that: the total score is a number between 0 and 100.
5. The hybrid SQL automatic scoring method according to claim 1, characterized in that the specific scoring steps in step 3-2 when an answer is classified as "executable" are:
Step 3-2-1: for an "executable" answer SA and a reference answer RAi of the question, convert SA and RAi into the corresponding syntax trees SS and CSi by syntax analysis, and compute their syntactic similarity Si by formula (1), in which |SS ∩ CSi| denotes the number of nodes in the intersection of syntax trees SS and CSi, |CSi| denotes the number of nodes of syntax tree CSi, |SS − CSi| denotes the number of nodes in the difference of SS and CSi, and |SS| denotes the number of nodes of syntax tree SS;
Step 3-2-2: for a question Q with n reference answers, compute the syntactic similarity between SA and each reference answer RAi separately, and take the product of the maximum of these similarities and the total score as the final score Score, computed by formula (2), in which totalScore denotes the total score of the question.
6. The hybrid SQL automatic scoring method according to claim 1, characterized in that the specific steps of step 3-3 are:
Step 3-3-1: for a "not executable" submitted answer SA and a reference answer RAi of the question, compute the textual similarity between SA and RAi;
Step 3-3-2: for a question Q with n reference answers, compute the textual similarity between SA and each reference answer separately, and take the product of the maximum similarity and the total score as the final score.
7. The hybrid SQL automatic scoring method according to claim 6, characterized in that: in step 3-3-1 or step 3-3-2 the textual similarity between the submitted answer and the reference answer is computed by an edit distance algorithm or a Hamming distance algorithm.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910763451.3A CN110471936B (en) | 2019-08-19 | 2019-08-19 | Hybrid SQL automatic scoring method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910763451.3A CN110471936B (en) | 2019-08-19 | 2019-08-19 | Hybrid SQL automatic scoring method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110471936A true CN110471936A (en) | 2019-11-19 |
CN110471936B CN110471936B (en) | 2022-06-07 |
Family
ID=68511871
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910763451.3A Active CN110471936B (en) | 2019-08-19 | 2019-08-19 | Hybrid SQL automatic scoring method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110471936B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110852653A (en) * | 2019-11-22 | 2020-02-28 | 成都国腾实业集团有限公司 | Automatic scoring system applied to computer programming questions |
CN111652595A (en) * | 2020-06-15 | 2020-09-11 | 南京倍时佳信息科技有限公司 | Training system for enterprise management consultation |
CN111737424A (en) * | 2020-02-21 | 2020-10-02 | 北京沃东天骏信息技术有限公司 | Question matching method, device, equipment and storage medium |
CN112132420A (en) * | 2020-09-04 | 2020-12-25 | 广西大学 | SQL query-oriented refinement scoring method |
CN114357038A (en) * | 2022-02-25 | 2022-04-15 | 北京贝壳时代网络科技有限公司 | Structured query language sentence display method and electronic equipment |
CN117520522A (en) * | 2023-12-29 | 2024-02-06 | 华云天下(南京)科技有限公司 | Intelligent dialogue method and device based on combination of RPA and AI and electronic equipment |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101593243A (en) * | 2008-05-26 | 2009-12-02 | 北京智慧东方信息技术有限公司 | A kind of examining method of Word operation questions |
CN106780224A (en) * | 2017-02-27 | 2017-05-31 | 牡丹江师范学院 | A kind of Modeling Teaching of Mathematics learning system |
CN106846088A (en) * | 2016-12-22 | 2017-06-13 | 福建工程学院 | A kind of Method of Commodity Recommendation of the product electric business website that disappears soon |
US20180336640A1 (en) * | 2017-05-22 | 2018-11-22 | Insurance Zebra Inc. | Rate analyzer models and user interfaces |
CN109213999A (en) * | 2018-08-20 | 2019-01-15 | 成都佳发安泰教育科技股份有限公司 | A kind of subjective item methods of marking |
CN109740473A (en) * | 2018-12-25 | 2019-05-10 | 东莞市七宝树教育科技有限公司 | A kind of image content automark method and system based on marking system |
CN110096702A (en) * | 2019-04-22 | 2019-08-06 | 安徽省泰岳祥升软件有限公司 | A kind of subjective item methods of marking and device |
-
2019
- 2019-08-19 CN CN201910763451.3A patent/CN110471936B/en active Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101593243A (en) * | 2008-05-26 | 2009-12-02 | 北京智慧东方信息技术有限公司 | An examination method for Word operation questions |
CN106846088A (en) * | 2016-12-22 | 2017-06-13 | 福建工程学院 | A commodity recommendation method for fast-moving consumer goods e-commerce websites |
CN106780224A (en) * | 2017-02-27 | 2017-05-31 | 牡丹江师范学院 | A mathematical modeling teaching and learning system |
US20180336640A1 (en) * | 2017-05-22 | 2018-11-22 | Insurance Zebra Inc. | Rate analyzer models and user interfaces |
CN109213999A (en) * | 2018-08-20 | 2019-01-15 | 成都佳发安泰教育科技股份有限公司 | A subjective-question scoring method |
CN109740473A (en) * | 2018-12-25 | 2019-05-10 | 东莞市七宝树教育科技有限公司 | An automatic image-content marking method and system based on a scoring system |
CN110096702A (en) * | 2019-04-22 | 2019-08-06 | 安徽省泰岳祥升软件有限公司 | A subjective-question scoring method and device |
Non-Patent Citations (1)
Title |
---|
Li Shaofang (李少芳): "Design and Implementation of a University Score Management *** Based on the ASP.NET Platform", Journal of Huangshan University (黄山学院学报) * |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110852653A (en) * | 2019-11-22 | 2020-02-28 | 成都国腾实业集团有限公司 | Automatic scoring system applied to computer programming questions |
CN111737424A (en) * | 2020-02-21 | 2020-10-02 | 北京沃东天骏信息技术有限公司 | Question matching method, device, equipment and storage medium |
CN111652595A (en) * | 2020-06-15 | 2020-09-11 | 南京倍时佳信息科技有限公司 | Training system for enterprise management consultation |
CN112132420A (en) * | 2020-09-04 | 2020-12-25 | 广西大学 | SQL query-oriented refinement scoring method |
CN112132420B (en) * | 2020-09-04 | 2023-11-28 | 广西大学 | SQL query-oriented refinement scoring method |
CN114357038A (en) * | 2022-02-25 | 2022-04-15 | 北京贝壳时代网络科技有限公司 | Structured query language sentence display method and electronic equipment |
CN117520522A (en) * | 2023-12-29 | 2024-02-06 | 华云天下(南京)科技有限公司 | Intelligent dialogue method and device based on combination of RPA and AI and electronic equipment |
CN117520522B (en) * | 2023-12-29 | 2024-03-22 | 华云天下(南京)科技有限公司 | Intelligent dialogue method and device based on combination of RPA and AI and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN110471936B (en) | 2022-06-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110471936A (en) | A hybrid SQL automatic scoring method | |
Ahadi et al. | Students' syntactic mistakes in writing seven different types of SQL queries and its application to predicting students' success | |
CN103823794B (en) | An automated question-setting method for query-style short-answer items in English reading comprehension tests | |
CN105824802A (en) | Method and device for acquiring knowledge graph vectoring expression | |
US20180039865A1 (en) | Analog circuit fault mode classification method | |
CN109213999A (en) | A subjective-question scoring method | |
CN105912625A (en) | Linked data oriented entity classification method and system | |
CN105389583A (en) | Image classifier generation method, and image classification method and device | |
CN107145514B (en) | Chinese sentence pattern classification method based on decision tree and SVM mixed model | |
US20140317032A1 (en) | Systems and Methods for Generating Automated Evaluation Models | |
Necşulescu et al. | Reading between the lines: Overcoming data sparsity for accurate classification of lexical relationships | |
CN102023921A (en) | An automatic grading method and device for structured query language (SQL) programs | |
CN105468468A (en) | A data error correction method and apparatus for question answering systems | |
CN105512132A (en) | Method and system for intelligent evaluation | |
CN103034627A (en) | Method and device for calculating sentence similarity and method and device for machine translation | |
KR20050093765A (en) | Automated evaluation of overly repetitive word use in an essay | |
CN108717459A (en) | A mobile-application defect localization method oriented to user review information | |
Wang et al. | Combining dynamic and static analysis for automated grading sql statements | |
CN112132420A (en) | SQL query-oriented refinement scoring method | |
Ghofrani et al. | A conceptual framework for clone detection using machine learning | |
CN117194258A (en) | Method and device for evaluating large code model | |
CN110164216A (en) | An SQL Online Judge system | |
CN112528011B (en) | An open-ended mathematics exercise correction method, system and device driven by multiple data sources | |
Seiler et al. | Comparing traceability through information retrieval, commits, interaction logs, and tags | |
Pal et al. | MultiTabQA: Generating tabular answers for multi-table question answering |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||