WO2022252643A1 - Neural-network-based knowledge mastery level evaluation method and apparatus, and related device - Google Patents

Neural-network-based knowledge mastery level evaluation method and apparatus, and related device

Info

Publication number
WO2022252643A1
WO2022252643A1 (PCT/CN2022/072291)
Authority
WO
WIPO (PCT)
Prior art keywords
answer
information
state information
neural network
Prior art date
Application number
PCT/CN2022/072291
Other languages
English (en)
French (fr)
Inventor
陈聪
舒畅
胡忆云
陈又新
Original Assignee
平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2022252643A1


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 — Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 — Querying
    • G06F16/3331 — Query processing
    • G06F16/334 — Query execution
    • G06F16/3344 — Query execution using natural language analysis
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 — Handling natural language data
    • G06F40/30 — Semantic analysis
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 — Computing arrangements based on biological models
    • G06N3/02 — Neural networks
    • G06N3/04 — Architecture, e.g. interconnection topology
    • G06N3/044 — Recurrent networks, e.g. Hopfield networks

Definitions

  • the present application relates to the technical field of artificial intelligence, and in particular to a method, device, computer equipment and storage medium for assessing knowledge mastery based on a neural network.
  • the online education platform evaluates the knowledge level of each user, and then recommends corresponding learning courses for each user based on the evaluation results.
  • the way to evaluate the user's knowledge level is mainly to have the user answer a set of test papers with fixed test questions, to evaluate the user's knowledge level according to the obtained test results, and finally to recommend learning courses to the user according to the evaluation results.
  • because users are evaluated only by using a set of test papers with fixed test questions, higher-level users may all obtain the same evaluation results, so the knowledge level of higher-level users cannot be judged accurately, which makes it impossible for the online education platform to accurately recommend learning courses corresponding to the user's knowledge level.
  • the embodiments of the present application provide a neural network-based method, device, computer equipment, and storage medium for assessing knowledge mastery, so as to accurately evaluate the target user's knowledge mastery in preferred fields.
  • the embodiment of the present application provides a neural network-based knowledge mastery evaluation method, including:
  • the current level of the target user in the preferred field is determined.
  • the embodiment of the present application also provides a neural network-based knowledge mastery evaluation device, including:
  • the first obtaining module is used to obtain the current answer record of the target user;
  • an extraction module configured to extract information from the constituent elements of the current answer record to obtain answer status information;
  • the second obtaining module is used to obtain the next answer record of the target user, and perform information extraction for each component element of the next answer record to obtain temporary state information;
  • a fusion update module configured to use the temporary state information to fuse and update the answer state information based on the long-short-term memory neural network, to obtain updated answer state information;
  • a judging module configured to judge whether the updated answer status information meets the preset update termination condition; if so, use the updated answer status information as the target status information; if not, return to obtain the next answer record, perform information extraction for each component element of the next answer record, and continue to execute the step of obtaining temporary state information;
  • a determining module configured to determine the current level of the target user in the preferred field based on the target state information and a preset evaluation method.
  • an embodiment of the present application further provides a computer device, including a memory, a processor, and computer-readable instructions stored in the memory and operable on the processor, wherein the processor, when executing the computer-readable instructions, implements the steps of the neural-network-based knowledge mastery evaluation method.
  • the current level of the target user in the preferred field is determined.
  • the embodiment of the present application also provides a computer-readable storage medium storing computer-readable instructions which, when executed by a processor, implement the steps of the neural-network-based knowledge mastery evaluation method.
  • the current level of the target user in the preferred field is determined.
  • the neural-network-based knowledge mastery evaluation method, device, computer equipment and storage medium proposed in this application can determine the current level of the target user in the preferred field more accurately.
  • FIG. 1 is an exemplary system architecture diagram to which the present application can be applied;
  • Fig. 2 is a flow chart of an embodiment of the neural network-based knowledge mastery evaluation method of the present application;
  • Fig. 3 is a schematic structural diagram of an embodiment of a neural network-based knowledge mastery evaluation device according to the present application.
  • Fig. 4 is a schematic structural diagram of an embodiment of a computer device according to the present application.
  • a system architecture 100 may include terminal devices 101 , 102 , 103 , a network 104 and a server 105 .
  • the network 104 is used as a medium for providing communication links between the terminal devices 101 , 102 , 103 and the server 105 .
  • Network 104 may include various connection types, such as wired links, wireless communication links, or fiber optic cables, among others.
  • Users can use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages and the like.
  • Terminal devices 101, 102, 103 can be various electronic devices that have a display screen and support web browsing, including but not limited to smartphones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop portable computers, desktop computers, and so on.
  • the server 105 may be a server that provides various services, such as a background server that provides support for pages displayed on the terminal devices 101 , 102 , 103 .
  • the neural network-based knowledge mastery evaluation method provided in the embodiment of the present application is executed by the server, and accordingly, the neural network-based knowledge mastery evaluation device is set in the server.
  • terminal devices, networks and servers in Fig. 1 are only illustrative. According to implementation requirements, there may be any number of terminal devices, networks, and servers.
  • the terminal devices 101, 102, and 103 in the embodiment of the present application may specifically correspond to application systems in actual production.
  • FIG. 2 shows a neural network-based knowledge mastery evaluation method provided by the embodiment of the present application.
  • the application of the method to the server in FIG. 1 is taken as an example for illustration, and the details are as follows:
  • to assess the target user, some basic test questions are extracted from the question bank and sent to the target user. After the target user answers, the answers to the test questions are sent to the server through the network transmission protocol, and the server associates the target user's answers with the question content, question field and reference answer in the question bank to generate the current answer record.
  • the current answer record refers to the answer record of the target user collected by the server this time, specifically including but not limited to question content, question field, reference answer and user answer.
  • the above-mentioned topic fields, topic content and reference answers are pre-stored in the question bank, and they are associated with user answers through the topic identification, so as to reduce network transmission data and improve data interaction efficiency.
  • the question identifier is a character string used to uniquely identify the test question, specifically, it may be composed of at least one of numbers, words, English letters and other symbols.
  • S202 Perform information extraction on the constituent elements of the current answer record to obtain answer status information.
  • the answer status information is a representation of the target user's corresponding ability in each topic field. Specifically, it can be obtained in the following way: perform semantic identification based on the topic content, reference answer and user answer of the current answer record, determine the degree of similarity between the reference answer and the user answer, and then determine the answer status information based on that degree of similarity.
  • the information extraction on the constituent elements of the current answer record specifically includes: extracting the topic content, topic field, reference answer and user answer from the current answer record; first segmenting the topic content, topic field, reference answer and user answer to obtain their word segments; then tagging the information content of each word segment; and finally classifying the tagged word segments and performing feature extraction with the neural network according to the classification results, so as to obtain the semantic association information among the topic content, topic field, reference answer and user answer.
  • S203 Obtain the next answer record of the target user, and perform information extraction on each component element of the next answer record to obtain temporary state information.
  • the temporary state information is the semantic correlation information between the question content, question field, reference answer and user answer recorded in the next answer record.
  • the temporary state information is used to fuse and update the answering state information to obtain updated answering state information.
  • the long-short-term memory (LSTM) neural network is a special recurrent neural network that can learn long-term dependencies.
  • the long-short-term memory neural network includes a unit state and gates. The unit state is used to store information, and information can be passed to the next unit state in chronological order. The gates include a forgetting layer, an input layer and an output layer: the forgetting layer is used to judge whether information needs to be passed on, the input layer is used to update the previous unit state according to the result of the forgetting layer, and the output layer is used to decide which information in the unit state needs to be output.
  • the unit state transmits the answer status information to the next unit state, where the gates fuse the temporary state information into the answer status information to obtain the updated answer status information. Through this fusion, the updated answer status information contains elements of both the temporary state information and the previous answer status information. As the updated answer status information is continuously updated in time order, the constituent elements of each of the current user's temporary state information are retained chronologically, so that analyzing and evaluating the updated answer status information allows the target user's knowledge mastery to be judged accurately.
  • S205: Determine whether the updated answer status information meets the preset update termination condition. If yes, use the updated answer status information as the target status information; if not, return to obtain the user's next answer record, perform information extraction on each constituent element of the next answer record, and continue with the step of obtaining temporary state information.
  • the preset update termination condition is as follows: among N consecutive test questions in the updated answer status information, if the number of times that the semantic similarity between the semantic information of the topic content and the reference answer and the semantic information of the topic content and the user's answer is greater than or equal to a preset first threshold reaches a preset second threshold, the updated answer status information is used as the target status information; otherwise, return to obtain the user's next answer record and perform information extraction on each constituent element of that record to obtain temporary state information. The preset first threshold and preset second threshold can be obtained by analyzing historical experience data.
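The termination check described above can be sketched in Python; the function name and the list of per-question similarities are illustrative assumptions, not details from the application.

```python
def should_terminate(similarities, n, first_threshold, second_threshold):
    """Return True when, among the last n consecutive test questions, the
    number of questions whose semantic similarity (between the
    content/reference-answer information and the content/user-answer
    information) reaches first_threshold is at least second_threshold."""
    if len(similarities) < n:
        return False  # not enough consecutive questions yet
    window = similarities[-n:]  # the N most recent questions
    hits = sum(1 for s in window if s >= first_threshold)
    return hits >= second_threshold
```

When the check fails, the procedure loops back to fetch the next answer record, as in step S205.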
  • S206 Determine the current level of the target user in the preferred field based on the target status information and the preset evaluation method.
  • the target user's preferred field is determined first, and then the level of the target user in the preferred field is determined.
  • the following method can be used to determine the preferred field: count the number of test questions in each topic field; take the topic fields whose number of test questions exceeds a preset threshold as the fields to be analyzed; based on the target state information, determine the average evaluation score of each field to be analyzed; and from all the fields to be analyzed, select the one with the largest average evaluation score as the preferred field of the target user.
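As a hedged sketch of the preferred-field selection above: the record format of (field, score) pairs and all names below are assumptions for illustration.

```python
from collections import defaultdict

def preferred_field(records, min_count):
    """records: (topic_field, evaluation_score) pairs drawn from the
    target state information. Fields with more than min_count test
    questions become candidates; the candidate with the highest average
    evaluation score is returned as the preferred field."""
    by_field = defaultdict(list)
    for field, score in records:
        by_field[field].append(score)
    # keep only fields whose question count exceeds the preset threshold
    candidates = {f: sum(v) / len(v)
                  for f, v in by_field.items() if len(v) > min_count}
    return max(candidates, key=candidates.get) if candidates else None
```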
  • the current level of the target user in the preferred field is determined as follows: when there are M test questions of the preferred field in the target state information, each of the M test questions carries a preset score b.
  • the answer state information is obtained by extracting information from the constituent elements of the current answer record; the next answer record of the target user is obtained, and information is extracted from each of its constituent elements to obtain the temporary state information; based on the long-short-term memory neural network, the temporary status information is used to fuse and update the answer status information to obtain the updated answer status information; whether the updated answer status information meets the preset update termination condition is judged, and if so, the updated answer status information is used as the target status information; based on the target status information and the preset evaluation method, the current level of the target user in the preferred field can be determined more accurately.
  • in step S202, information is extracted from the constituent elements of the current answer record, and obtaining the answer status information includes steps S2020 to S2022:
  • for example, denote the element splicing sequence as P, the topic content as P1, the topic field as P2, the reference answer as P3, and the user answer as P4.
  • the Bert model is an NLP (Natural Language Processing) model.
  • the answering status information includes data information used to represent the subject area and the difficulty coefficient corresponding to the item area.
  • the Bert model is used to obtain, from the element splicing sequence, the semantic information between the topic content of the corresponding topic field and the reference answer, and the semantic information between that topic content and the user's answer, and to calculate the correlation between the topic content and the reference answer and between the topic content and the user's answer.
  • from this, the data information used to characterize the topic field and the difficulty coefficient corresponding to the topic field is obtained; the difficulty coefficient is used to represent the target user's degree of knowledge mastery in the topic field.
  • it is easy to understand that the higher the accuracy rate, the better the target user's knowledge mastery in the topic field, and correspondingly, the smaller the difficulty coefficient.
  • the element splicing sequence is obtained by concatenating the topic content, topic field, reference answer and user answer, and the Bert model is used to extract the element splicing sequence to obtain the data information used to represent the topic field and the difficulty coefficient corresponding to the topic field, which can effectively reflect the target user's difficulty coefficient in the topic field.
  • step S2020 the step of splicing the topic content, topic field, reference answer, and user answer to obtain an element splicing sequence includes:
  • the content of the question, the field of the question, the reference answer and the user's answer are spliced using separators to obtain a spliced sequence of elements.
  • the separators include but are not limited to spaces, commas, enumeration commas and semicolons. Assume the element splicing sequence is P, the topic content is P1, the topic field is P2, the reference answer is P3 and the user's answer is P4; P is obtained by joining P1, P2, P3 and P4 with the separator.
  • the separator is a space.
  • the question content, question field, reference answer and user answer are separated by separators, which ensures that the finally extracted answer status information can effectively reflect the difficulty coefficient of the target user in the question field.
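A minimal sketch of building the element splicing sequence, assuming the space separator from the example above (the function name is illustrative):

```python
def splice_elements(p1, p2, p3, p4, sep=" "):
    """Concatenate topic content (P1), topic field (P2), reference
    answer (P3) and user answer (P4) into the element splicing
    sequence P, separated by sep."""
    return sep.join([p1, p2, p3, p4])
```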
  • in step S2022, the step of using the Bert model to extract the element splicing sequence to obtain the answer status information is detailed as follows:
  • according to the semantic similarity and the preset threshold, the data information used to represent the difficulty coefficient corresponding to the topic field in the answer status information is obtained.
  • the semantic similarity between the first semantic information and the second semantic information is determined by means of cosine similarity, Manhattan distance, Euclidean distance and the like.
  • if the semantic similarity is greater than or equal to the preset threshold, it means that, under the same topic content, the user's answer is consistent with the reference answer; otherwise the answer is counted as wrong, so the error rate can be used as the data information characterizing the difficulty coefficient corresponding to the topic field.
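The similarity comparison and the error-rate-based difficulty coefficient can be sketched as below; the vector representations and the exact difficulty formula are assumptions, since the application only names cosine similarity among the possible measures.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two semantic vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def difficulty_coefficient(similarities, threshold):
    """Answers whose similarity reaches the threshold count as correct;
    the error rate over a topic field serves as its difficulty
    coefficient (higher accuracy -> smaller coefficient)."""
    wrong = sum(1 for s in similarities if s < threshold)
    return wrong / len(similarities)
```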
  • the element splicing sequence is obtained by concatenating the topic content, topic field, reference answer and user answer, and the Bert model is used to extract the element splicing sequence to obtain the data information used to represent the topic field and the difficulty coefficient corresponding to the topic field, which can effectively reflect the target user's difficulty coefficient in the topic field.
  • in step S204, the step of using the long-short-term memory neural network and the temporary state information to fuse and update the answer state information to obtain the updated answer state information includes steps S2041 to S2044:
  • S2041. The temporary state information and the answering state information are calculated and processed according to the following formula (1) to obtain the candidate information:
  • i_t = σ(w_i · [h_(t-1), x_t] + b_i)  (1)
  • where i_t is the candidate information, σ is the activation function, w_i is the weight matrix, h_(t-1) is the answer status information at time t-1, x_t is the answer status information at time t (that is, the above-mentioned temporary state information), and b_i is the bias value.
  • S2042. The temporary state information and the answering state information are calculated and processed according to the following formula (2) to obtain the forgotten information:
  • f_t = σ(w_f · [h_(t-1), x_t] + b_f)  (2)
  • where f_t is the forgotten information, σ is the activation function, w_f is the weight matrix, h_(t-1) is the answer status information at time t-1, x_t is the answer status information at time t (that is, the above-mentioned temporary state information), and b_f is the bias value.
  • S2043. The temporary state information and the answering state information are calculated and processed according to the following formula (3) to obtain the candidate update information:
  • c̃_t = tanh(w_c · [h_(t-1), x_t] + b_c)  (3)
  • where c̃_t is the candidate update information, tanh is the hyperbolic tangent function, w_c is the weight matrix, h_(t-1) is the answer status information at time t-1, x_t is the answer status information at time t (that is, the above-mentioned temporary state information), and b_c is the bias value.
  • S2044. The candidate information, forgotten information, candidate update information and previous answer state information are calculated and processed according to the following formula (4) to obtain the updated answer state information:
  • c_t = f_t ⊙ c_(t-1) + i_t ⊙ c̃_t  (4)
  • where ⊙ denotes element-wise multiplication, c_t is the updated answer status information at time t, i_t is the candidate information, c̃_t is the candidate update information, f_t is the forgotten information, and c_(t-1) is the answer status information at time t-1.
  • the candidate information, forgotten information and candidate update information are calculated in the preset manner to obtain the updated answer state information, which records the target user's previous answer state information and is beneficial for determining the target user's current level in the topic field according to the updated answer status information.
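One fusion-update step following formulas (1)–(4) can be sketched with NumPy as below; the weight shapes and the concatenation [h_(t-1), x_t] follow the standard LSTM formulation and are assumptions here, not values from the application.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_fuse(h_prev, x_t, c_prev, w_i, w_f, w_c, b_i, b_f, b_c):
    """Fuse the temporary state x_t into the answer state, returning the
    updated answer state c_t per formulas (1)-(4)."""
    hx = np.concatenate([h_prev, x_t])   # [h_(t-1), x_t]
    i_t = sigmoid(w_i @ hx + b_i)        # (1) candidate information
    f_t = sigmoid(w_f @ hx + b_f)        # (2) forgotten information
    c_tilde = np.tanh(w_c @ hx + b_c)    # (3) candidate update information
    c_t = f_t * c_prev + i_t * c_tilde   # (4) updated answer state
    return c_t
```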
  • step S206 based on the target state information and the preset evaluation method, the step of determining the current level of the target user in the preferred field includes S2061 to S2062:
  • S2061. Based on the target state information and the preset evaluation method, determine the target user's preferred field and the target user's evaluation score in the preferred field.
  • S2062. Determine the current level of the target user in the preferred field by using the evaluation score of the preferred field and the preset level mapping relationship.
  • the preset level mapping relationship is the correspondence between the evaluation score and the preset level, where the preset levels can be obtained by analyzing historical experience data. For example, assume the number of test questions the target user completes in the preferred field is 100, the total score is 100, and the evaluation score obtained by the target user in the preferred field is x.
  • for the lowest score range, the corresponding level is pass;
  • 60 ≤ x < 70: the corresponding level is general;
  • 70 ≤ x < 90: the corresponding level is good;
  • x ≥ 90: the corresponding level is excellent.
  • the preferred field of the target user and the target user's evaluation score in that field are determined through the target state information and the preset evaluation method; then, according to the evaluation score of the preferred field and the preset level mapping relationship, the current level of the target user in the preferred field is determined. This makes it possible to determine the target user's degree of knowledge mastery in the preferred field more accurately and to recommend suitable learning courses for the target user according to that mastery.
  • when the target state information at least includes the data information used to characterize the topic domain and the difficulty coefficient corresponding to the topic domain, the step of determining the preferred field of the target user and the target user's evaluation score in the preferred field based on the target state information and the preset evaluation method includes S20620 to S20622:
  • S20620 Input the target state information into a preset neural network model, where the preset neural network model includes a domain classifier and a grade classifier.
  • the domain classifier and the grade classifier can be implemented by using a softmax normalized multi-classification function.
  • the domain classifier obtains the data information related to the topic domain in the target state information, normalizes it with the softmax multi-classification function, and performs domain classification according to the normalized result; for example, if the normalized result is 1, the field is determined to be the target field, otherwise a non-target field.
  • the grade classifier obtains the data information of the difficulty coefficient corresponding to the topic field in the target state information, normalizes it with the softmax multi-classification function, and classifies according to the normalized result; for example, if the normalized result is 1, the level is determined to be the target level, otherwise a non-target level.
  • S20621 Use the preset neural network to perform domain classification processing and grade classification processing on the target state information, and obtain domain classification results and grade classification results.
  • for example, if the domain classification result of the target user is the grammar domain and the grade classification result is excellent, then the target user's preferred field is the grammar domain and the target user's grade is excellent; the target user's evaluation score of 92 is then obtained according to the mapping relationship between the grade in the grammar domain and the evaluation score.
  • the target state information is input into a preset neural network model that includes a domain classifier and a grade classifier; the preset neural network performs domain classification processing and grade classification processing on the target state information to obtain the domain classification result and grade classification result; the target user's preferred domain and the target user's evaluation score in that domain are determined from these results, so that the target user's knowledge mastery in the preferred domain can be judged more accurately and suitable learning courses can be recommended to the target user.
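A hedged sketch of the softmax-based domain/grade classification described above; the logits and class labels below are illustrative assumptions, since the application does not publish model weights.

```python
import math

def softmax(logits):
    """Numerically stable softmax normalization."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits, labels):
    """Pick the label with the highest normalized probability, as the
    domain (or grade) classifier would."""
    probs = softmax(logits)
    return labels[probs.index(max(probs))]
```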
  • FIG. 3 shows a functional block diagram of a neural network-based knowledge mastery evaluation device corresponding to the neural network-based knowledge mastery evaluation method of the above-mentioned embodiment.
  • the neural network-based knowledge mastery evaluation device includes a first acquisition module 30 , an extraction module 31 , a second acquisition module 32 , a fusion update module 33 , a judgment module 34 and a determination module 35 .
  • the detailed description of each functional module is as follows:
  • the first acquiring module 30 is configured to acquire the current answer record of the target user.
  • the extraction module 31 is used to extract information from the constituent elements of the current answer record to obtain answer status information.
  • the second acquisition module 32 is configured to acquire the next answer record of the target user, and perform information extraction for each component element of the next answer record to obtain temporary state information.
  • the fusion update module 33 is configured to fuse and update the answer state information by using the temporary state information based on the long-short-term memory neural network to obtain updated answer state information.
  • the judging module 34 is used to judge whether the updated answer status information meets the preset update termination condition; if yes, use the updated answer status information as the target status information; if not, return to obtain the user's next answer record, perform information extraction on each component element of the next answer record, and continue to execute the step of obtaining temporary state information.
  • the determining module 35 is configured to determine the current level of the target user in the preferred field based on the target state information and the preset evaluation method.
  • the extraction module 31 includes:
  • the first splicing unit is configured to splice the content of the question, the field of the question, the reference answer and the user's answer to obtain an element splicing sequence.
  • the first input unit is used to input the element splicing sequence into the Bert model.
  • the first extraction unit is configured to use the Bert model to extract the spliced sequence of elements to obtain answer status information, where the answer status information includes data information used to represent the subject domain and the difficulty coefficient corresponding to the subject domain.
  • the first splicing unit includes:
  • the second splicing unit is configured to splice the content of the question, the field of the question, the reference answer and the user's answer using a separator to obtain a splicing sequence of elements.
  • the first extraction unit includes:
  • the first information acquisition unit is used to obtain the first semantic information according to the content of the question and the reference answer.
  • the second information obtaining unit is used to obtain the second semantic information according to the content of the question and the user's answer.
  • a similarity calculation unit configured to calculate the semantic similarity between the first semantic information and the second semantic information.
  • the third information acquisition unit is used to obtain the data information used to represent the difficulty coefficient corresponding to the topic field in the answer status information according to the semantic similarity and the preset threshold.
  • the fusion update module 33 includes:
  • the first calculation unit is used to input the temporary state information and the answering state information to the input layer of the long-short-term memory neural network for calculation and processing to obtain candidate information.
  • the second calculation unit is used to input the temporary state information and the answering state information to the forgetting layer in the long-short-term memory neural network for calculation and processing to obtain the forgetting information.
  • the third calculation unit is used for inputting the temporary state information and the answering state information to the output layer of the long-short memory neural network for calculation and processing to obtain candidate update information.
  • the fourth calculation unit is used to calculate the answer status information, candidate information, forgotten information and candidate update information in a preset manner to obtain updated answer status information.
  • the determining module 35 includes:
  • the first determining unit is configured to determine the target user's preferred field and the target user's evaluation score in the preferred field based on the target state information and a preset evaluation method.
  • the second determining unit is configured to determine the current level of the target user in the preferred field through the evaluation score of the preferred field and the preset level mapping relationship.
  • the first determination unit includes:
  • the second input unit is used to input the target state information into a preset neural network model, and the preset neural network model includes a domain classifier and a grade classifier;
  • the classification unit is used to perform domain classification processing and grade classification processing on the target state information by using a preset neural network to obtain domain classification results and grade classification results;
  • the third determining unit is configured to determine the preferred field of the target user and the evaluation score of the target user in the preferred field according to the field classification result and the level classification result.
  • Each module in the above-mentioned neural network-based knowledge mastery evaluation device can be fully or partially realized by software, hardware and combinations thereof.
  • the above-mentioned modules can be embedded in or independent of the processor in the computer device in the form of hardware, and can also be stored in the memory of the computer device in the form of software, so that the processor can invoke and execute the corresponding operations of the above-mentioned modules.
  • FIG. 4 is a block diagram of the basic structure of the computer device in this embodiment.
  • the computer device 4 includes a memory 41, a processor 42 and a network interface 43 communicatively connected to each other through a system bus. It should be pointed out that the figure only shows the computer device 4 with the memory 41, the processor 42 and the network interface 43, but it should be understood that implementing all the shown components is not required; more or fewer components may be implemented instead. Those skilled in the art will understand that the computer device here is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes but is not limited to microprocessors, application-specific integrated circuits (Application Specific Integrated Circuit, ASIC), programmable gate arrays (Field-Programmable Gate Array, FPGA), digital signal processors (Digital Signal Processor, DSP), embedded devices, etc.
  • the computer device may be a computing device such as a desktop computer, a notebook, a palmtop computer or a cloud server.
  • the computer device can perform human-computer interaction with the user through a keyboard, mouse, remote control, touch panel or voice control device.
  • the memory 41 includes at least one type of readable storage medium, and the readable storage medium includes flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disc, etc.
  • the memory 41 may be an internal storage unit of the computer device 4 , such as a hard disk or memory of the computer device 4 .
  • the memory 41 can also be an external storage device of the computer device 4, such as a plug-in hard disk equipped on the computer device 4, a smart memory card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, flash memory card (Flash Card), etc.
  • the memory 41 may also include both an internal storage unit of the computer device 4 and an external storage device thereof.
  • the memory 41 is generally used to store the operating system and various application software installed in the computer device 4 , such as computer-readable instructions for evaluating knowledge mastery based on neural networks.
  • the memory 41 can also be used to temporarily store various types of data that have been output or will be output.
  • the processor 42 may, in some embodiments, be a central processing unit (Central Processing Unit, CPU), controller, microcontroller, microprocessor, or other data processing chip. The processor 42 is generally used to control the overall operation of the computer device 4. In this embodiment, the processor 42 is configured to execute the computer-readable instructions stored in the memory 41 or to process data, for example, to execute the computer-readable instructions for neural-network-based knowledge mastery evaluation.
  • the network interface 43 may include a wireless network interface or a wired network interface, and the network interface 43 is generally used to establish a communication connection between the computer device 4 and other electronic devices.
  • the present application also provides another implementation, namely a computer-readable storage medium, which may be non-volatile or volatile. The computer-readable storage medium stores an interface display program executable by at least one processor, so that the at least one processor executes the steps of the above-mentioned neural-network-based knowledge mastery evaluation method.
  • the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation.
  • the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disc) and contains several instructions to cause a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, etc.) to execute the methods described in the various embodiments of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

This application relates to the field of artificial intelligence, and discloses a neural-network-based knowledge mastery level evaluation method, apparatus, computer device and storage medium. The method includes: extracting information from the constituent elements of a current answer record to obtain answer state information; obtaining the target user's next answer record and extracting information from each constituent element of the next answer record to obtain temporary state information; based on a long short-term memory neural network, using the temporary state information to fuse and update the answer state information to obtain updated answer state information; judging whether the updated answer state information meets a preset update termination condition, and if so, taking the updated answer state information as target state information; and determining the target user's current level in a preferred domain based on the target state information and a preset evaluation method. This application can determine the target user's knowledge mastery level in the preferred domain more accurately.

Description

Neural-network-based knowledge mastery level evaluation method, apparatus and related device
This application claims priority to the Chinese patent application filed with the China National Intellectual Property Administration on June 1, 2021, with application number 202110611216.1 and the invention title "Neural-network-based knowledge mastery level evaluation method, apparatus and related device", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of artificial intelligence, and in particular to a neural-network-based knowledge mastery level evaluation method, apparatus, computer device and storage medium.
Background
With the spread of the Internet, traditional teaching methods have changed and online education has emerged. To provide users with better educational services, before online education begins, the online education platform evaluates each user's knowledge level and then recommends corresponding learning courses for each user according to the evaluation result.
In the process of implementing this application, the inventors realized that existing methods have at least the following problems:
At present, a user's knowledge level is mainly evaluated by having the user answer a test paper with a fixed set of questions, evaluating the user's knowledge level according to the obtained test results, and finally recommending corresponding learning courses according to the evaluation result. However, users with a relatively high knowledge level may obtain the same evaluation result, which makes it impossible to accurately judge the knowledge level of such users, so that the online education platform cannot accurately recommend learning courses matching the user's knowledge level.
Summary
Embodiments of this application provide a neural-network-based knowledge mastery level evaluation method, apparatus, computer device and storage medium, so as to accurately evaluate a target user's knowledge mastery level in a preferred domain.
To solve the above technical problem, an embodiment of this application provides a neural-network-based knowledge mastery level evaluation method, including:
obtaining a current answer record of a target user;
extracting information from the constituent elements of the current answer record to obtain answer state information;
obtaining a next answer record of the target user, and extracting information from each constituent element of the next answer record to obtain temporary state information;
based on a long short-term memory neural network, using the temporary state information to fuse and update the answer state information to obtain updated answer state information;
judging whether the updated answer state information meets a preset update termination condition; if so, taking the updated answer state information as target state information; if not, returning to the step of obtaining the user's next answer record and extracting information from each constituent element of the next answer record to obtain temporary state information, and continuing execution;
determining the target user's current level in a preferred domain based on the target state information and a preset evaluation method.
To solve the above technical problem, an embodiment of this application further provides a neural-network-based knowledge mastery level evaluation apparatus, including:
a first acquisition module, configured to obtain a current answer record of a target user;
an extraction module, configured to extract information from the constituent elements of the current answer record to obtain answer state information;
a second acquisition module, configured to obtain the target user's next answer record and extract information from each constituent element of the next answer record to obtain temporary state information;
a fusion update module, configured to use the temporary state information to fuse and update the answer state information based on a long short-term memory neural network, to obtain updated answer state information;
a judgment module, configured to judge whether the updated answer state information meets a preset update termination condition; if so, take the updated answer state information as target state information; if not, return to the step of obtaining the user's next answer record and extracting information from each constituent element of the next answer record to obtain temporary state information, and continue execution;
a determining module, configured to determine the target user's current level in a preferred domain based on the target state information and a preset evaluation method.
To solve the above technical problem, an embodiment of this application further provides a computer device, including a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, where the processor, when executing the computer-readable instructions, implements the following steps of the neural-network-based knowledge mastery level evaluation method:
obtaining a current answer record of a target user;
extracting information from the constituent elements of the current answer record to obtain answer state information;
obtaining a next answer record of the target user, and extracting information from each constituent element of the next answer record to obtain temporary state information;
based on a long short-term memory neural network, using the temporary state information to fuse and update the answer state information to obtain updated answer state information;
judging whether the updated answer state information meets a preset update termination condition; if so, taking the updated answer state information as target state information; if not, returning to the step of obtaining the user's next answer record and extracting information from each constituent element of the next answer record to obtain temporary state information, and continuing execution;
determining the target user's current level in a preferred domain based on the target state information and a preset evaluation method.
To solve the above technical problem, an embodiment of this application further provides a computer-readable storage medium storing computer-readable instructions which, when executed by a processor, implement the following steps of the neural-network-based knowledge mastery level evaluation method:
obtaining a current answer record of a target user;
extracting information from the constituent elements of the current answer record to obtain answer state information;
obtaining a next answer record of the target user, and extracting information from each constituent element of the next answer record to obtain temporary state information;
based on a long short-term memory neural network, using the temporary state information to fuse and update the answer state information to obtain updated answer state information;
judging whether the updated answer state information meets a preset update termination condition; if so, taking the updated answer state information as target state information; if not, returning to the step of obtaining the user's next answer record and extracting information from each constituent element of the next answer record to obtain temporary state information, and continuing execution;
determining the target user's current level in a preferred domain based on the target state information and a preset evaluation method.
The neural-network-based knowledge mastery level evaluation method, apparatus, computer device and storage medium proposed in this application determine the target user's current level in the preferred domain more accurately.
Brief Description of the Drawings
To explain the technical solutions of the embodiments of this application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a diagram of an exemplary system architecture to which this application can be applied;
FIG. 2 is a flowchart of an embodiment of the neural-network-based knowledge mastery level evaluation method of this application;
FIG. 3 is a schematic structural diagram of an embodiment of the neural-network-based knowledge mastery level evaluation apparatus according to this application;
FIG. 4 is a schematic structural diagram of an embodiment of the computer device according to this application.
Detailed Description
Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field of this application. The terms used in the specification herein are only for the purpose of describing specific embodiments and are not intended to limit this application. The terms "include" and "have" and any variations thereof in the specification, claims and the above description of the drawings are intended to cover non-exclusive inclusion. The terms "first", "second", etc. in the specification, claims or the above drawings are used to distinguish different objects, not to describe a specific order.
Reference to an "embodiment" herein means that a specific feature, structure or characteristic described in connection with the embodiment may be included in at least one embodiment of this application. The appearance of this phrase in various places in the specification does not necessarily refer to the same embodiment, nor to an independent or alternative embodiment mutually exclusive of other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
The technical solutions in the embodiments of this application will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are part of the embodiments of this application, not all of them. Based on the embodiments of this application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of this application.
Referring to FIG. 1, the system architecture 100 may include terminal devices 101, 102 and 103, a network 104 and a server 105. The network 104 is the medium providing communication links between the terminal devices 101, 102, 103 and the server 105, and may include various connection types, such as wired or wireless communication links or fiber-optic cables.
Users may use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send messages, etc.
The terminal devices 101, 102, 103 may be various electronic devices that have a display screen and support web browsing, including but not limited to smartphones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop computers, desktop computers, and the like.
The server 105 may be a server providing various services, for example a background server supporting the pages displayed on the terminal devices 101, 102, 103.
It should be noted that the neural-network-based knowledge mastery level evaluation method provided by the embodiments of this application is executed by the server; correspondingly, the neural-network-based knowledge mastery level evaluation apparatus is arranged in the server.
It should be understood that the numbers of terminal devices, networks and servers in FIG. 1 are merely illustrative. There may be any number of terminal devices, networks and servers according to implementation needs; the terminal devices 101, 102, 103 in the embodiments of this application may specifically correspond to application systems in actual production.
Referring to FIG. 2, FIG. 2 shows a neural-network-based knowledge mastery level evaluation method provided by an embodiment of this application, described by taking the application of the method to the server in FIG. 1 as an example, detailed as follows:
S201: Obtain a current answer record of a target user.
Specifically, to assess the target user, this embodiment first extracts several basic test questions from a question bank and sends them to the target user. After the target user answers, the answers to the test questions are sent to the server through a network transmission protocol, and the server associates the target user's answers with the question content, question domain and reference answer in the question bank to generate the current answer record.
The current answer record refers to the answer record of the target user collected by the server this time, and specifically includes but is not limited to the question content, question domain, reference answer and user answer.
As an optional manner, the question domain, question content and reference answer are pre-stored in the question bank and are associated with the user answer through a question identifier, so as to reduce network transmission data and improve the efficiency of data interaction. The question identifier is a character string used to uniquely identify a test question, and may specifically be composed of at least one of digits, Chinese characters, English letters and other symbols.
Further, after the question domain, question content, reference answer and user answer are obtained, the above elements are spliced to obtain the current answer record.
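As a minimal illustration of the association step described above — looking up the question content, question domain and reference answer by the unique question identifier and attaching the user answer — the following Python sketch uses a plain dictionary as a stand-in for the question bank. All names here (such as `build_answer_record`) are illustrative, not from the source.

```python
def build_answer_record(question_bank, question_id, user_answer):
    # Look up the stored question content, domain and reference answer
    # by the unique question identifier, then attach the user's answer.
    item = question_bank[question_id]
    return {
        "question_id": question_id,
        "content": item["content"],
        "domain": item["domain"],
        "reference_answer": item["reference"],
        "user_answer": user_answer,
    }
```

In a real deployment the bank would live in persistent storage and only the identifier and the user answer would travel over the network, which is the data-saving point the text makes.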
S202: Extract information from the constituent elements of the current answer record to obtain answer state information.
Specifically, the answer state information represents the target user's ability in each question domain, and may be obtained as follows: perform semantic recognition based on the question content, reference answer and user answer of the current answer record, determine the degree of similarity between the reference answer and the user answer, and then determine the answer state information based on that similarity.
Further, extracting information from the constituent elements of the current answer record specifically includes: extracting the question content, question domain, reference answer and user answer from the current answer record; first segmenting the extracted question content, question domain, reference answer and user answer; then performing word segmentation on the segmented elements and annotating the word segments with information content; and finally classifying the annotated word segments and performing feature extraction with a neural network according to the classification result, to obtain the semantic association information among the question content, question domain, reference answer and user answer.
S203: Obtain the target user's next answer record, and extract information from each constituent element of the next answer record to obtain temporary state information.
Specifically, the temporary state information is the semantic association information among the question content, question domain, reference answer and user answer of the next answer record.
S204: Based on a long short-term memory neural network, use the temporary state information to fuse and update the answer state information, to obtain updated answer state information.
Specifically, a long short-term memory neural network is a special recurrent neural network capable of learning long-term dependencies; it includes a cell state and gates. The cell state stores information and passes it to the next cell state in temporal order. The gates include a forgetting layer, an input layer and an output layer: the forgetting layer judges whether information should continue to be passed on, the input layer updates the previous cell state according to the result of the forgetting layer, and the output layer decides which information in the cell state should be output. In this solution, the cell state passes the answer state information to the next cell state, where the gates fuse the temporary state information into the answer state information to obtain the updated answer state information. Through this fusion update, the updated answer state information contains the constituent elements of both the temporary state information and the answer state information, is continuously updated in temporal order, and retains the constituent elements of each piece of the current user's temporary state information in chronological order, so that after the updated answer state information is analyzed and evaluated, the target user's knowledge mastery level can be accurately judged.
S205: Judge whether the updated answer state information meets a preset update termination condition; if so, take the updated answer state information as target state information; if not, return to the step of obtaining the user's next answer record and extracting information from each constituent element of the next answer record to obtain temporary state information, and continue execution.
Specifically, the preset update termination condition is: when there are N consecutive test questions in the updated answer state information, and among these N consecutive test questions the number of times the semantic similarity between the semantic information of the question content and the reference answer and the semantic information of the question content and the user answer is greater than or equal to a preset first threshold is itself greater than or equal to a preset second threshold, the updated answer state information is taken as the target state information; otherwise, return to obtaining the user's next answer record and extracting information from each constituent element of the next answer record to obtain temporary state information. The preset first threshold and preset second threshold may be obtained by analyzing historical experience data.
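The termination check described above can be sketched as a small predicate over the per-question similarity history. This is a hedged illustration: the function name and the way similarities are stored are assumptions for the sketch, not taken from the source.

```python
def should_terminate(similarities, n, first_threshold, second_threshold):
    # similarities: per-question semantic similarities in answering order.
    # Inspect the most recent n consecutive questions; stop updating when
    # the count of similarities >= first_threshold reaches second_threshold.
    if len(similarities) < n:
        return False
    recent = similarities[-n:]
    hits = sum(1 for s in recent if s >= first_threshold)
    return hits >= second_threshold
```

Both thresholds would, per the text, be tuned from historical experience data rather than fixed constants.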
S206: Determine the target user's current level in a preferred domain based on the target state information and a preset evaluation method.
Specifically, based on the target state information and the preset evaluation method, the target user's preferred domain is determined first, and then the target user's level in the preferred domain is determined.
Optionally, the preferred domain may be determined as follows: count the number of test questions in each question domain; take the question domains whose question count exceeds a preset threshold as domains to be analyzed; determine the average evaluation score of each domain to be analyzed based on the target state information; and from all domains to be analyzed, select the one with the highest average evaluation score as the target user's preferred domain.
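The optional preferred-domain selection just described (filter by question count, then pick the highest average score) can be sketched as follows; the data layout (`domain_scores` as a dict of per-question score lists) is an assumption made for the illustration.

```python
def preferred_domain(domain_scores, count_threshold):
    # domain_scores: {domain: [per-question evaluation scores]}
    # Keep only domains with enough answered questions, then pick the
    # domain whose average score is highest.
    candidates = {
        domain: sum(scores) / len(scores)
        for domain, scores in domain_scores.items()
        if len(scores) > count_threshold
    }
    return max(candidates, key=candidates.get) if candidates else None
```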
Further, the target user's current level in the preferred domain is determined as follows: when there are M test questions of the preferred domain in the target state information, with a preset score b for each test question, and the number of times the semantic similarity between the semantic information of the question content and the reference answer and the semantic information of the question content and the user answer is greater than or equal to the preset first threshold is a, the target user's evaluation score is determined as L = a * b, and the target user's current level in the preferred domain is then determined according to this evaluation score.
In this embodiment, answer state information is obtained by extracting information from the constituent elements of the current answer record; the target user's next answer record is obtained and information is extracted from each of its constituent elements to obtain temporary state information; based on the long short-term memory neural network, the temporary state information is used to fuse and update the answer state information to obtain updated answer state information; whether the updated answer state information meets the preset update termination condition is judged, and if so, the updated answer state information is taken as target state information. Based on the target state information and the preset evaluation method, the target user's current level in the preferred domain can be determined more accurately.
In some optional implementations of this embodiment, in step S202, extracting information from the constituent elements of the current answer record to obtain answer state information includes S2020 to S2022:
S2020: Splice the question content, question domain, reference answer and user answer to obtain an element splicing sequence.
Specifically, assuming the element splicing sequence is P, the question content is P1, the question domain is P2, the reference answer is P3 and the user answer is P4, the element splicing sequence can be expressed as P = [P1P2P3P4].
S2021: Input the element splicing sequence into the Bert model.
Specifically, the Bert model is an NLP (Natural Language Processing) model.
S2022: Use the Bert model to perform extraction on the element splicing sequence to obtain the answer state information, where the answer state information includes data information representing the question domain and the difficulty coefficient corresponding to the question domain.
Specifically, the Bert model is used to obtain, from the element splicing sequence, the semantic information of the question content and the reference answer and the semantic information of the question content and the user answer for the corresponding question domain; the semantic similarity between these two pieces of semantic information is calculated; and the number of times the semantic similarity is greater than or equal to a preset threshold and the number of times it is below the threshold are recorded. These counts serve as the data information representing the question domain and the difficulty coefficient corresponding to the question domain. The difficulty coefficient represents the target user's knowledge mastery level in that question domain. In this embodiment, an accuracy rate is determined from the user answers and the reference answers; understandably, the higher the accuracy rate, the better the target user's knowledge mastery in that question domain and, correspondingly, the smaller the difficulty coefficient.
In this embodiment, the question content, question domain, reference answer and user answer are spliced to obtain the element splicing sequence, and the Bert model is used to extract from the element splicing sequence the data information representing the question domain and the corresponding difficulty coefficient, which can effectively reflect the target user's difficulty coefficient in the question domain.
In some optional implementations of this embodiment, in step S2020, the step of splicing the question content, question domain, reference answer and user answer to obtain an element splicing sequence includes:
splicing the question content, question domain, reference answer and user answer using a separator, to obtain the element splicing sequence.
Specifically, the separator includes but is not limited to a space, a comma, an enumeration comma or a semicolon. Assuming the element splicing sequence is P, the question content is P1, the question domain is P2, the reference answer is P3, the user answer is P4 and the separator is a space, the element splicing sequence can be expressed as P = [P1 P2 P3 P4].
In this embodiment, separating the question content, question domain, reference answer and user answer with a separator ensures that the finally extracted answer state information can effectively reflect the target user's difficulty coefficient in the question domain.
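The separator-based splicing P = [P1 P2 P3 P4] reduces to a simple join; the sketch below is an assumption-level illustration (the function name is invented, and a real Bert pipeline would typically also use model-specific special tokens around the segments).

```python
def splice_elements(content, domain, reference, user_answer, sep=" "):
    # Join the four elements with a separator so that downstream
    # processing can still tell the element boundaries apart.
    return sep.join([content, domain, reference, user_answer])
```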
In some optional implementations of this embodiment, in step S2022, the step of using the Bert model to perform extraction on the element splicing sequence to obtain the answer state information, where the answer state information includes data information representing the question domain and the corresponding difficulty coefficient, is detailed as follows:
S20220: Obtain first semantic information according to the question content and the reference answer.
S20221: Obtain second semantic information according to the question content and the user answer.
S20222: Calculate the semantic similarity between the first semantic information and the second semantic information.
S20223: Obtain, according to the semantic similarity and a preset threshold, the data information in the answer state information representing the difficulty coefficient corresponding to the question domain.
In this embodiment, cosine similarity, Manhattan distance, Euclidean distance or similar measures are used to determine the semantic similarity between the first semantic information and the second semantic information.
Specifically, if the semantic similarity is greater than or equal to the preset threshold, the reference answer and the user answer under the same question content are consistent; otherwise they are inconsistent. The target user's answer accuracy rate and error rate can be judged accordingly, which is why this can serve as the data information representing the difficulty coefficient corresponding to the question domain.
In this embodiment, the question content, question domain, reference answer and user answer are spliced to obtain the element splicing sequence, and the Bert model is used to extract from the element splicing sequence the data information representing the question domain and the corresponding difficulty coefficient, which can effectively reflect the target user's difficulty coefficient in the question domain.
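The similarity-and-threshold logic of S20222 and S20223 can be sketched with cosine similarity, one of the measures the text names. The semantic vectors here are assumed to come from the Bert model; everything else (function names, the 0.8 default threshold) is illustrative.

```python
import math

def cosine_similarity(v1, v2):
    # Standard cosine similarity between two dense vectors.
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    return dot / (n1 * n2)

def is_answer_consistent(first_semantic, second_semantic, threshold=0.8):
    # The reference answer and the user answer count as consistent when
    # the similarity of their semantic information reaches the threshold.
    return cosine_similarity(first_semantic, second_semantic) >= threshold
```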
In some optional implementations of this embodiment, in step S204, the step of using the temporary state information to fuse and update the answer state information based on the long short-term memory neural network, to obtain updated answer state information, includes S2041 to S2044:
S2041: Input the temporary state information and the answer state information into the input layer of the long short-term memory neural network for calculation, to obtain candidate information.
Specifically, the candidate information is obtained by processing the temporary state information and the answer state information according to the following equation (1):
i_t = σ(W_i · [h_{t-1}, x_t] + b_i)  (1)
where i_t is the candidate information, σ is the activation function, W_i is a weight matrix, h_{t-1} is the answer state information at time t-1, x_t is the answer state information at time t (i.e., the above temporary state information), and b_i is a bias value.
S2042: Input the temporary state information and the answer state information into the forgetting layer of the long short-term memory neural network for calculation, to obtain forgetting information.
Specifically, the forgetting information is obtained by processing the temporary state information and the answer state information according to the following equation (2):
f_t = σ(W_f · [h_{t-1}, x_t] + b_f)  (2)
where f_t is the forgetting information, σ is the activation function, W_f is a weight matrix, h_{t-1} is the answer state information at time t-1, x_t is the answer state information at time t (i.e., the above temporary state information), and b_f is a bias value.
S2043: Input the temporary state information and the answer state information into the output layer of the long short-term memory neural network for calculation, to obtain candidate update information.
Specifically, the candidate update information is obtained by processing the temporary state information and the answer state information according to the following equation (3):
C̃_t = tanh(W_c · [h_{t-1}, x_t] + b_c)  (3)
where C̃_t is the candidate update information, tanh is the hyperbolic tangent function, W_c is a weight matrix, h_{t-1} is the answer state information at time t-1, x_t is the answer state information at time t (i.e., the above temporary state information), and b_c is a bias value.
S2044: Calculate the answer state information, the candidate information, the forgetting information and the candidate update information in a preset manner, to obtain updated answer state information.
Specifically, the updated answer state information is obtained by processing the answer state information, candidate information, forgetting information and candidate update information according to the following equation (4):
c_t = f_t * c_{t-1} + i_t * C̃_t  (4)
where c_t is the updated answer state information at time t, i_t is the candidate information, C̃_t is the candidate update information, f_t is the forgetting information, and c_{t-1} is the answer state information at time t-1.
In this embodiment, calculating the answer state information, candidate information, forgetting information and candidate update information in the preset manner to obtain the updated answer state information enables the updated answer state information to record the target user's previous answer state information, which helps determine the target user's current level in the question domain from the updated answer state information.
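Equations (1) through (4) can be exercised end to end with a scalar sketch of the cell update, where the concatenation [h_{t-1}, x_t] collapses to a weighted sum of two scalars. This is a minimal, assumption-level illustration of the arithmetic, not an implementation of the patented system; real weights would be matrices learned during training.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_update(h_prev, x_t, c_prev, w_i, b_i, w_f, b_f, w_c, b_c):
    # Scalar version of equations (1)-(4); z stands in for [h_{t-1}, x_t].
    z = h_prev + x_t
    i_t = sigmoid(w_i * z + b_i)        # eq. (1): candidate information
    f_t = sigmoid(w_f * z + b_f)        # eq. (2): forgetting information
    c_tilde = math.tanh(w_c * z + b_c)  # eq. (3): candidate update information
    c_t = f_t * c_prev + i_t * c_tilde  # eq. (4): fused, updated state
    return c_t
```

With all weights and biases zero, both gates evaluate to 0.5 and the candidate to 0, so the previous state is simply halved — a quick sanity check of the gate arithmetic.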
In some optional implementations of this embodiment, in step S206, the step of determining the target user's current level in the preferred domain based on the target state information and the preset evaluation method includes S2061 to S2062:
S2061: Based on the target state information and the preset evaluation method, determine the target user's preferred domain and the target user's evaluation score in the preferred domain.
Specifically, the preset evaluation method is: when there are M test questions of the preferred domain in the target state information, with a preset score b for each test question, if the number of times the semantic similarity between the semantic information of the question content and the reference answer and the semantic information of the question content and the user answer is greater than or equal to the preset first threshold is a, then the target user's evaluation score is L = a * b.
S2062: Determine the target user's current level in the preferred domain through the evaluation score in the preferred domain and a preset level mapping relationship.
Specifically, the preset level mapping relationship is a correspondence between evaluation scores and preset levels, where the preset levels may be obtained by analyzing historical experience data. For example, assuming the target user completes 100 test questions in the preferred domain with a total score of 100, and the target user's evaluation score in the preferred domain is x: when 0 ≤ x ≤ 60, the corresponding level is pass; when 60 < x ≤ 70, the level is average; when 70 < x < 90, the level is good; and when x ≥ 90, the level is excellent.
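The scoring rule L = a * b and the example level mapping above can be written out directly. The score bands and level names follow the worked example in the text; the function names are illustrative.

```python
def evaluation_score(hit_count, per_question_score):
    # L = a * b: number of questions whose semantic similarity reached
    # the preset first threshold, times the preset per-question score.
    return hit_count * per_question_score

def level_for_score(x):
    # Example mapping from the text (100 questions, total score 100).
    if x >= 90:
        return "excellent"
    if x > 70:
        return "good"
    if x > 60:
        return "average"
    return "pass"
```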
In this embodiment, the target user's preferred domain and evaluation score in the preferred domain are determined through the target state information and the preset evaluation method, and the target user's current level in the preferred domain is then determined from the evaluation score in the preferred domain and the preset level mapping relationship. This allows the target user's knowledge mastery level in the preferred domain to be determined more accurately, and suitable learning courses to be recommended to the target user according to that mastery level.
In some optional implementations of this embodiment, in step S2061, the target state information includes at least data information representing the question domain and the corresponding difficulty coefficient, and the step of determining the target user's preferred domain and the target user's evaluation score in the preferred domain based on the target state information and the preset evaluation method includes S20620 to S20622:
S20620: Input the target state information into a preset neural network model, where the preset neural network model includes a domain classifier and a grade classifier.
Specifically, the domain classifier and the grade classifier may be implemented with a softmax normalized multi-class function. The domain classifier obtains the data information related to the question domain from the target state information, normalizes it with the softmax function, and performs domain classification according to the normalized result; for example, if the normalized result is 1, the domain is determined to be the target domain, otherwise a non-target domain. The grade classifier obtains the data information of the difficulty coefficient corresponding to the question domain from the target state information, normalizes it with the softmax function, and performs grade classification according to the normalized result; for example, if the normalized result is 1, the grade is determined to be the target grade, otherwise a non-target grade.
S20621: Use the preset neural network to perform domain classification and grade classification on the target state information, to obtain a domain classification result and a grade classification result.
S20622: Determine the target user's preferred domain and the target user's evaluation score in the preferred domain according to the domain classification result and the grade classification result.
Specifically, assuming the target user's domain classification result is the grammar domain, the grade classification result is excellent and the evaluation score is 92, then the target user's grade in the grammar domain is obtained as excellent according to the mapping between the domain classification result and the grade classification result, and the target user's evaluation score of 92 is obtained according to the mapping between the grade in the grammar domain and the evaluation score.
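The two softmax heads over the same target-state features can be sketched as below. The weights, label sets and function names are all assumptions for illustration; in the described system they would be learned parameters of the preset neural network model.

```python
import math

def softmax(logits):
    # Numerically stable softmax normalization.
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(target_state, domain_weights, grade_weights, domains, grades):
    # Two softmax heads over the same target-state feature vector:
    # one picks the preferred domain, the other picks the grade.
    def head(weights):
        logits = [sum(w * x for w, x in zip(row, target_state))
                  for row in weights]
        probs = softmax(logits)
        return probs.index(max(probs))
    return domains[head(domain_weights)], grades[head(grade_weights)]
```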
In this embodiment, the target state information is input into the preset neural network model, which includes a domain classifier and a grade classifier; domain classification and grade classification are performed on the target state information with the preset neural network to obtain a domain classification result and a grade classification result; and the target user's preferred domain and evaluation score in the preferred domain are determined from these results. This allows the target user's knowledge mastery level in the preferred domain to be judged more accurately and suitable learning courses to be recommended to the target user.
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of this application.
FIG. 3 shows a functional block diagram of a neural-network-based knowledge mastery level evaluation apparatus corresponding one-to-one to the neural-network-based knowledge mastery level evaluation method of the above embodiments. As shown in FIG. 3, the apparatus includes a first acquisition module 30, an extraction module 31, a second acquisition module 32, a fusion update module 33, a judgment module 34 and a determining module 35. The functional modules are described in detail as follows:
the first acquisition module 30, configured to obtain a current answer record of a target user.
the extraction module 31, configured to extract information from the constituent elements of the current answer record to obtain answer state information.
the second acquisition module 32, configured to obtain the target user's next answer record and extract information from each constituent element of the next answer record to obtain temporary state information.
the fusion update module 33, configured to use the temporary state information to fuse and update the answer state information based on a long short-term memory neural network, to obtain updated answer state information.
the judgment module 34, configured to judge whether the updated answer state information meets a preset update termination condition; if so, take the updated answer state information as target state information; if not, return to the step of obtaining the user's next answer record and extracting information from each constituent element of the next answer record to obtain temporary state information, and continue execution.
the determining module 35, configured to determine the target user's current level in the preferred domain based on the target state information and a preset evaluation method.
Optionally, the extraction module 31 includes:
a first splicing unit, configured to splice the question content, the question domain, the reference answer and the user answer to obtain an element splicing sequence.
a first input unit, configured to input the element splicing sequence into the Bert model.
a first extraction unit, configured to use the Bert model to perform extraction on the element splicing sequence to obtain answer state information, where the answer state information includes data information representing the question domain and the difficulty coefficient corresponding to the question domain.
Optionally, the first splicing unit includes:
a second splicing unit, configured to splice the question content, the question domain, the reference answer and the user answer using a separator to obtain the element splicing sequence.
Optionally, the first extraction unit includes:
a first information acquisition unit, configured to obtain first semantic information according to the question content and the reference answer.
a second information acquisition unit, configured to obtain second semantic information according to the question content and the user answer.
a similarity calculation unit, configured to calculate the semantic similarity between the first semantic information and the second semantic information.
a third information acquisition unit, configured to obtain, according to the semantic similarity and a preset threshold, the data information in the answer state information representing the difficulty coefficient corresponding to the question domain.
Optionally, the fusion update module 33 includes:
a first calculation unit, configured to input the temporary state information and the answer state information into the input layer of the long short-term memory neural network for calculation, to obtain candidate information.
a second calculation unit, configured to input the temporary state information and the answer state information into the forgetting layer of the long short-term memory neural network for calculation, to obtain forgetting information.
a third calculation unit, configured to input the temporary state information and the answer state information into the output layer of the long short-term memory neural network for calculation, to obtain candidate update information.
a fourth calculation unit, configured to calculate the answer state information, the candidate information, the forgetting information and the candidate update information in a preset manner, to obtain updated answer state information.
Optionally, the determining module 35 includes:
a first determining unit, configured to determine the target user's preferred domain and the target user's evaluation score in the preferred domain based on the target state information and a preset evaluation method.
a second determining unit, configured to determine the target user's current level in the preferred domain through the evaluation score in the preferred domain and a preset level mapping relationship.
Optionally, the first determining unit includes:
a second input unit, configured to input the target state information into a preset neural network model, where the preset neural network model includes a domain classifier and a grade classifier;
a classification unit, configured to perform domain classification and grade classification on the target state information using the preset neural network, to obtain a domain classification result and a grade classification result;
a third determining unit, configured to determine the target user's preferred domain and the target user's evaluation score in the preferred domain according to the domain classification result and the grade classification result.
For the specific limitations of the neural-network-based knowledge mastery level evaluation apparatus, reference may be made to the above limitations of the neural-network-based knowledge mastery level evaluation method, which will not be repeated here. Each module in the above apparatus may be implemented in whole or in part by software, hardware or a combination thereof. The above modules may be embedded in or independent of the processor of the computer device in hardware form, or stored in the memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
To solve the above technical problem, an embodiment of this application further provides a computer device. Referring to FIG. 4, FIG. 4 is a block diagram of the basic structure of the computer device of this embodiment.
The computer device 4 includes a memory 41, a processor 42 and a network interface 43 communicatively connected to each other through a system bus. It should be pointed out that the figure only shows the computer device 4 with the memory 41, the processor 42 and the network interface 43, but it should be understood that implementing all the shown components is not required; more or fewer components may be implemented instead. Those skilled in the art will understand that the computer device here is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes but is not limited to microprocessors, application-specific integrated circuits (Application Specific Integrated Circuit, ASIC), programmable gate arrays (Field-Programmable Gate Array, FPGA), digital signal processors (Digital Signal Processor, DSP), embedded devices, etc.
The computer device may be a computing device such as a desktop computer, a notebook, a palmtop computer or a cloud server. The computer device may perform human-computer interaction with the user through a keyboard, mouse, remote control, touch panel or voice control device.
The memory 41 includes at least one type of readable storage medium, including flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disc, etc. In some embodiments, the memory 41 may be an internal storage unit of the computer device 4, such as its hard disk or internal memory. In other embodiments, the memory 41 may also be an external storage device of the computer device 4, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card or a flash card (Flash Card) equipped on the computer device 4. Of course, the memory 41 may also include both the internal storage unit of the computer device 4 and its external storage device. In this embodiment, the memory 41 is generally used to store the operating system and various application software installed on the computer device 4, such as the computer-readable instructions for neural-network-based knowledge mastery level evaluation. In addition, the memory 41 may also be used to temporarily store various types of data that have been output or are to be output.
The processor 42 may in some embodiments be a central processing unit (Central Processing Unit, CPU), controller, microcontroller, microprocessor or other data processing chip. The processor 42 is generally used to control the overall operation of the computer device 4. In this embodiment, the processor 42 is used to run the computer-readable instructions stored in the memory 41 or to process data, for example to run the computer-readable instructions for neural-network-based knowledge mastery level evaluation.
The network interface 43 may include a wireless or wired network interface, and is generally used to establish communication connections between the computer device 4 and other electronic devices.
This application also provides another implementation, namely a computer-readable storage medium, which may be non-volatile or volatile. The computer-readable storage medium stores an interface display program executable by at least one processor, so that the at least one processor executes the steps of the above neural-network-based knowledge mastery level evaluation method.
From the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of this application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored on a storage medium (such as ROM/RAM, magnetic disk, optical disc), including several instructions to cause a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, etc.) to execute the methods described in the embodiments of this application.
Obviously, the embodiments described above are only part of the embodiments of this application, not all of them. The drawings show preferred embodiments of this application but do not limit its patent scope. This application may be implemented in many different forms; rather, these embodiments are provided so that the understanding of the disclosure of this application is more thorough and comprehensive. Although this application has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions recorded in the foregoing specific implementations or make equivalent replacements for some of the technical features. Any equivalent structure made using the contents of the specification and drawings of this application, applied directly or indirectly in other related technical fields, likewise falls within the patent protection scope of this application.

Claims (20)

  1. A neural-network-based knowledge mastery level evaluation method, wherein the method comprises:
    obtaining a current answer record of a target user;
    extracting information from the constituent elements of the current answer record to obtain answer state information;
    obtaining a next answer record of the target user, and extracting information from each constituent element of the next answer record to obtain temporary state information;
    based on a long short-term memory neural network, using the temporary state information to fuse and update the answer state information to obtain updated answer state information;
    judging whether the updated answer state information meets a preset update termination condition; if so, taking the updated answer state information as target state information; if not, returning to the step of obtaining the user's next answer record and extracting information from each constituent element of the next answer record to obtain temporary state information, and continuing execution;
    determining the target user's current level in a preferred domain based on the target state information and a preset evaluation method.
  2. The neural-network-based knowledge mastery level evaluation method according to claim 1, wherein the constituent elements of the current answer record include question content, question domain, reference answer and user answer, and the step of extracting information from the constituent elements of the current answer record to obtain answer state information includes:
    splicing the question content, the question domain, the reference answer and the user answer to obtain an element splicing sequence;
    inputting the element splicing sequence into a Bert model;
    using the Bert model to perform extraction on the element splicing sequence to obtain the answer state information, where the answer state information includes data information representing the question domain and the difficulty coefficient corresponding to the question domain.
  3. The neural-network-based knowledge mastery level evaluation method according to claim 2, wherein the step of splicing the question content, the question domain, the reference answer and the user answer to obtain an element splicing sequence includes:
    splicing the question content, the question domain, the reference answer and the user answer using a separator, to obtain the element splicing sequence.
  4. The neural-network-based knowledge mastery level evaluation method according to claim 2, wherein the step of using the Bert model to perform extraction on the element splicing sequence to obtain the answer state information, where the answer state information includes data information representing the question domain and the difficulty coefficient corresponding to the question domain, includes:
    obtaining first semantic information according to the question content and the reference answer;
    obtaining second semantic information according to the question content and the user answer;
    calculating the semantic similarity between the first semantic information and the second semantic information;
    obtaining, according to the semantic similarity and a preset threshold, the data information in the answer state information representing the difficulty coefficient corresponding to the question domain.
  5. The neural-network-based knowledge mastery level evaluation method according to claim 1, wherein the step of using the temporary state information to fuse and update the answer state information based on the long short-term memory neural network, to obtain updated answer state information, includes:
    inputting the temporary state information and the answer state information into the input layer of the long short-term memory neural network for calculation, to obtain candidate information;
    inputting the temporary state information and the answer state information into the forgetting layer of the long short-term memory neural network for calculation, to obtain forgetting information;
    inputting the temporary state information and the answer state information into the output layer of the long short-term memory neural network for calculation, to obtain candidate update information;
    calculating the answer state information, the candidate information, the forgetting information and the candidate update information in a preset manner, to obtain the updated answer state information.
  6. The neural-network-based knowledge mastery level evaluation method according to claim 1, wherein the step of determining the target user's current level in the preferred domain based on the target state information and the preset evaluation method includes:
    determining the target user's preferred domain and the target user's evaluation score in the preferred domain based on the target state information and the preset evaluation method;
    determining the target user's current level in the preferred domain through the evaluation score in the preferred domain and a preset level mapping relationship.
  7. The neural-network-based knowledge mastery level evaluation method according to claim 6, wherein the target state information includes at least data information representing the question domain and the difficulty coefficient corresponding to the question domain, and the step of determining the target user's preferred domain and the target user's evaluation score in the preferred domain based on the target state information and the preset evaluation method includes:
    inputting the target state information into a preset neural network model, where the preset neural network model includes a domain classifier and a grade classifier;
    using the preset neural network to perform domain classification and grade classification on the target state information, to obtain a domain classification result and a grade classification result;
    determining the target user's preferred domain and the target user's evaluation score in the preferred domain according to the domain classification result and the grade classification result.
  8. A neural-network-based knowledge mastery level evaluation apparatus, comprising:
    a first acquisition module, configured to obtain a current answer record of a target user;
    an extraction module, configured to extract information from the constituent elements of the current answer record to obtain answer state information;
    a second acquisition module, configured to obtain the target user's next answer record and extract information from each constituent element of the next answer record to obtain temporary state information;
    a fusion update module, configured to use the temporary state information to fuse and update the answer state information based on a long short-term memory neural network, to obtain updated answer state information;
    a judgment module, configured to judge whether the updated answer state information meets a preset update termination condition; if so, take the updated answer state information as target state information; if not, return to the step of obtaining the user's next answer record and extracting information from each constituent element of the next answer record to obtain temporary state information, and continue execution;
    a determining module, configured to determine the target user's current level in the preferred domain based on the target state information and a preset evaluation method.
  9. A computer device, comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, wherein the processor, when executing the computer-readable instructions, implements the following steps of a neural-network-based knowledge mastery level evaluation method:
    obtaining a current answer record of a target user;
    extracting information from the constituent elements of the current answer record to obtain answer state information;
    obtaining a next answer record of the target user, and extracting information from each constituent element of the next answer record to obtain temporary state information;
    based on a long short-term memory neural network, using the temporary state information to fuse and update the answer state information to obtain updated answer state information;
    judging whether the updated answer state information meets a preset update termination condition; if so, taking the updated answer state information as target state information; if not, returning to the step of obtaining the user's next answer record and extracting information from each constituent element of the next answer record to obtain temporary state information, and continuing execution;
    determining the target user's current level in a preferred domain based on the target state information and a preset evaluation method.
  10. The computer device according to claim 9, wherein the constituent elements of the current answer record include question content, question domain, reference answer and user answer, and the step of extracting information from the constituent elements of the current answer record to obtain answer state information includes:
    splicing the question content, the question domain, the reference answer and the user answer to obtain an element splicing sequence;
    inputting the element splicing sequence into a Bert model;
    using the Bert model to perform extraction on the element splicing sequence to obtain the answer state information, where the answer state information includes data information representing the question domain and the difficulty coefficient corresponding to the question domain.
  11. The computer device according to claim 10, wherein the step of splicing the question content, the question domain, the reference answer and the user answer to obtain an element splicing sequence includes:
    splicing the question content, the question domain, the reference answer and the user answer using a separator, to obtain the element splicing sequence.
  12. The computer device according to claim 10, wherein the step of using the Bert model to perform extraction on the element splicing sequence to obtain the answer state information, where the answer state information includes data information representing the question domain and the difficulty coefficient corresponding to the question domain, includes:
    obtaining first semantic information according to the question content and the reference answer;
    obtaining second semantic information according to the question content and the user answer;
    calculating the semantic similarity between the first semantic information and the second semantic information;
    obtaining, according to the semantic similarity and a preset threshold, the data information in the answer state information representing the difficulty coefficient corresponding to the question domain.
  13. The computer device according to claim 9, wherein the step of using the temporary state information to fuse and update the answer state information based on the long short-term memory neural network, to obtain updated answer state information, includes:
    inputting the temporary state information and the answer state information into the input layer of the long short-term memory neural network for calculation, to obtain candidate information;
    inputting the temporary state information and the answer state information into the forgetting layer of the long short-term memory neural network for calculation, to obtain forgetting information;
    inputting the temporary state information and the answer state information into the output layer of the long short-term memory neural network for calculation, to obtain candidate update information;
    calculating the answer state information, the candidate information, the forgetting information and the candidate update information in a preset manner, to obtain the updated answer state information.
  14. The computer device according to claim 9, wherein the step of determining the target user's current level in the preferred domain based on the target state information and the preset evaluation method includes:
    determining the target user's preferred domain and the target user's evaluation score in the preferred domain based on the target state information and the preset evaluation method;
    determining the target user's current level in the preferred domain through the evaluation score in the preferred domain and a preset level mapping relationship.
  15. A computer-readable storage medium storing computer-readable instructions, wherein the computer-readable instructions, when executed by a processor, implement the following steps of a neural-network-based knowledge mastery level evaluation method:
    obtaining a current answer record of a target user;
    extracting information from the constituent elements of the current answer record to obtain answer state information;
    obtaining a next answer record of the target user, and extracting information from each constituent element of the next answer record to obtain temporary state information;
    based on a long short-term memory neural network, using the temporary state information to fuse and update the answer state information to obtain updated answer state information;
    judging whether the updated answer state information meets a preset update termination condition; if so, taking the updated answer state information as target state information; if not, returning to the step of obtaining the user's next answer record and extracting information from each constituent element of the next answer record to obtain temporary state information, and continuing execution;
    determining the target user's current level in a preferred domain based on the target state information and a preset evaluation method.
  16. The computer-readable storage medium according to claim 15, wherein the constituent elements of the current answer record include question content, question domain, reference answer and user answer, and the step of extracting information from the constituent elements of the current answer record to obtain answer state information includes:
    splicing the question content, the question domain, the reference answer and the user answer to obtain an element splicing sequence;
    inputting the element splicing sequence into a Bert model;
    using the Bert model to perform extraction on the element splicing sequence to obtain the answer state information, where the answer state information includes data information representing the question domain and the difficulty coefficient corresponding to the question domain.
  17. The computer-readable storage medium according to claim 16, wherein the step of splicing the question content, the question domain, the reference answer and the user answer to obtain an element splicing sequence includes:
    splicing the question content, the question domain, the reference answer and the user answer using a separator, to obtain the element splicing sequence.
  18. The computer-readable storage medium according to claim 16, wherein the step of using the Bert model to perform extraction on the element splicing sequence to obtain the answer state information, where the answer state information includes data information representing the question domain and the difficulty coefficient corresponding to the question domain, includes:
    obtaining first semantic information according to the question content and the reference answer;
    obtaining second semantic information according to the question content and the user answer;
    calculating the semantic similarity between the first semantic information and the second semantic information;
    obtaining, according to the semantic similarity and a preset threshold, the data information in the answer state information representing the difficulty coefficient corresponding to the question domain.
  19. The computer-readable storage medium according to claim 15, wherein the step of using the temporary state information to fuse and update the answer state information based on the long short-term memory neural network, to obtain updated answer state information, includes:
    inputting the temporary state information and the answer state information into the input layer of the long short-term memory neural network for calculation, to obtain candidate information;
    inputting the temporary state information and the answer state information into the forgetting layer of the long short-term memory neural network for calculation, to obtain forgetting information;
    inputting the temporary state information and the answer state information into the output layer of the long short-term memory neural network for calculation, to obtain candidate update information;
    calculating the answer state information, the candidate information, the forgetting information and the candidate update information in a preset manner, to obtain the updated answer state information.
  20. The computer-readable storage medium according to claim 15, wherein the step of determining the target user's current level in the preferred domain based on the target state information and the preset evaluation method includes:
    determining the target user's preferred domain and the target user's evaluation score in the preferred domain based on the target state information and the preset evaluation method;
    determining the target user's current level in the preferred domain through the evaluation score in the preferred domain and a preset level mapping relationship.
PCT/CN2022/072291 2021-06-01 2022-01-17 Neural-network-based knowledge mastery level evaluation method, apparatus and related device WO2022252643A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110611216.1A CN113220847B (zh) 2021-06-01 Neural-network-based knowledge mastery level evaluation method, apparatus and related device
CN202110611216.1 2021-06-01

Publications (1)

Publication Number Publication Date
WO2022252643A1 true WO2022252643A1 (zh) 2022-12-08

Family

ID=77082374

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/072291 WO2022252643A1 (zh) 2021-06-01 2022-01-17 Neural-network-based knowledge mastery level evaluation method, apparatus and related device

Country Status (2)

Country Link
CN (1) CN113220847B (zh)
WO (1) WO2022252643A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113220847B (zh) 2021-06-01 2024-06-21 Ping An Technology (Shenzhen) Co., Ltd. Neural-network-based knowledge mastery level evaluation method, apparatus and related device
CN114529436B (zh) 2022-02-10 2023-01-10 Zhuhai Readboy Software Technology Co., Ltd. Knowledge point mastery level evaluation method, apparatus and medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111159419A (zh) * 2019-12-09 2020-05-15 Zhejiang Normal University Graph-convolution-based knowledge tracing data processing method, system and storage medium
CN111582694A (zh) * 2020-04-29 2020-08-25 Tencent Technology (Shenzhen) Co., Ltd. Learning evaluation method and apparatus
WO2020205048A1 (en) * 2019-03-29 2020-10-08 Microsoft Technology Licensing, Llc Ontology entity type detection from tokenized utterance
CN113220847A (zh) * 2021-06-01 2021-08-06 Ping An Technology (Shenzhen) Co., Ltd. Neural-network-based knowledge mastery level evaluation method, apparatus and related device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10963789B2 (en) * 2016-11-28 2021-03-30 Conduent Business Services, Llc Long-term memory networks for knowledge extraction from text and publications
CN108346030A (zh) * 2017-12-29 2018-07-31 Beijing Beisen Cloud Computing Co., Ltd. Computer-adaptive ability test method and apparatus
US11308211B2 (en) * 2019-06-18 2022-04-19 International Business Machines Corporation Security incident disposition predictions based on cognitive evaluation of security knowledge graphs
CN110489454A (zh) * 2019-07-29 2019-11-22 Beijing Dami Technology Co., Ltd. Adaptive evaluation method, apparatus, storage medium and electronic device
CN111274411A (zh) * 2020-01-22 2020-06-12 Pactera Technology Co., Ltd. Course recommendation method, apparatus, electronic device and readable storage medium
CN111428021B (zh) * 2020-06-05 2023-05-30 Ping An International Smart City Technology Co., Ltd. Machine-learning-based text processing method, apparatus, computer device and medium
CN112527821A (zh) * 2020-12-09 2021-03-19 Dalian Neusoft Education Technology Group Co., Ltd. Student Bloom mastery evaluation method, system and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020205048A1 (en) * 2019-03-29 2020-10-08 Microsoft Technology Licensing, Llc Ontology entity type detection from tokenized utterance
CN111159419A (zh) * 2019-12-09 2020-05-15 Zhejiang Normal University Graph-convolution-based knowledge tracing data processing method, system and storage medium
CN111582694A (zh) * 2020-04-29 2020-08-25 Tencent Technology (Shenzhen) Co., Ltd. Learning evaluation method and apparatus
CN113220847A (zh) * 2021-06-01 2021-08-06 Ping An Technology (Shenzhen) Co., Ltd. Neural-network-based knowledge mastery level evaluation method, apparatus and related device

Also Published As

Publication number Publication date
CN113220847A (zh) 2021-08-06
CN113220847B (zh) 2024-06-21

Similar Documents

Publication Publication Date Title
WO2021121198A1 (zh) Semantic-similarity-based entity relation extraction method, apparatus, device and medium
WO2021218028A1 (zh) Artificial-intelligence-based interview content refinement method, apparatus, device and medium
WO2022252643A1 (zh) Neural-network-based knowledge mastery level evaluation method, apparatus and related device
WO2021139247A1 (zh) Method, apparatus, device and storage medium for constructing a medical-domain knowledge graph
WO2022174491A1 (zh) Artificial-intelligence-based medical record quality control method, apparatus, computer device and storage medium
CN111930792B (zh) Data resource labeling method, apparatus, storage medium and electronic device
WO2018171295A1 (zh) Method, apparatus, terminal and computer-readable storage medium for tagging articles
CN112052424B (zh) Content review method and apparatus
WO2022267454A1 (zh) Method, apparatus, device and storage medium for analyzing text
CN110826315B (zh) Method for identifying the timeliness of short text using a neural network system
CN115730597A (zh) Multi-level semantic intent recognition method and related device
CN114817478A (zh) Text-based question answering method, apparatus, computer device and storage medium
CN112966509B (zh) Text quality evaluation method, apparatus, storage medium and computer device
CN114398466A (zh) Semantic-recognition-based complaint analysis method, apparatus, computer device and medium
CN114385694A (zh) Data processing method, apparatus, computer device and storage medium
WO2021174814A1 (zh) Answer verification method for crowdsourcing tasks, apparatus, computer device and storage medium
CN113626576A (zh) Relation feature extraction method in distant supervision, apparatus, terminal and storage medium
CN113988085B (zh) Text semantic similarity matching method, apparatus, electronic device and storage medium
CN112364649B (zh) Named entity recognition method, apparatus, computer device and storage medium
CN110276001B (zh) Inventory page recognition method, apparatus, computing device and medium
CN115238077A (zh) Artificial-intelligence-based text analysis method, apparatus, device and storage medium
CN114218393A (zh) Data classification method, apparatus, device and storage medium
CN111523318A (zh) Chinese phrase analysis method, system, storage medium and electronic device
WO2024098282A1 (zh) Geometry problem-solving method, apparatus, device and storage medium
CN117077656B (zh) Argumentation relation mining method, apparatus, medium and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22814696

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22814696

Country of ref document: EP

Kind code of ref document: A1