CN111161578A - Learning interaction method and device and terminal equipment


Info

Publication number
CN111161578A
Authority
CN
China
Prior art keywords
learning
question
error
answering
answer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010009658.4A
Other languages
Chinese (zh)
Other versions
CN111161578B (en)
Inventor
李滨何
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TCL China Star Optoelectronics Technology Co Ltd
Original Assignee
Shenzhen China Star Optoelectronics Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen China Star Optoelectronics Technology Co Ltd filed Critical Shenzhen China Star Optoelectronics Technology Co Ltd
Priority to CN202010009658.4A
Publication of CN111161578A
Application granted
Publication of CN111161578B
Legal status: Active

Classifications

    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B7/00 - Electrically-operated teaching apparatus or devices working with questions and answers
    • G09B7/02 - Electrically-operated teaching apparatus or devices working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by a student
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 - Services
    • G06Q50/20 - Education
    • G06Q50/205 - Education administration or guidance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G06V10/225 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition based on a marking or identifier characterising the area
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40 - Document-oriented image-based pattern recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40 - Document-oriented image-based pattern recognition
    • G06V30/41 - Analysis of document content
    • G06V30/418 - Document matching, e.g. of document images

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Educational Technology (AREA)
  • Multimedia (AREA)
  • Educational Administration (AREA)
  • Strategic Management (AREA)
  • Artificial Intelligence (AREA)
  • Tourism & Hospitality (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • General Business, Economics & Management (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The application is applicable to the technical field of education and provides a learning interaction method, an apparatus, and a terminal device. The learning interaction method includes: recording each answering step as the user answers the current test question while the learning device is in a learning test mode; searching for a reference answer matching the current test question, and determining, based on the reference answer, whether a potential error exists in each answering step; and, when a potential error exists in a first answering step, displaying a corresponding visual marker on the first answering step according to the error type of the potential error, where the first answering step is any one of the answering steps. With the method and device, the user can learn efficiently by focusing on the answering steps that contain potential errors.

Description

Learning interaction method and device and terminal equipment
Technical Field
The application belongs to the technical field of education, and particularly relates to a learning interaction method, a learning interaction device and terminal equipment.
Background
At present, learning tests are often used to check whether a user has grasped a knowledge point, so that the teaching direction and pace can be adjusted in time. However, a traditional learning test generally judges only whether the user's final answer is right or wrong, and the degree to which the user has grasped a knowledge point cannot be accurately judged from the final answer alone, which makes it difficult to improve the user's learning efficiency in a targeted way.
Disclosure of Invention
In order to overcome the problems in the related art, embodiments of the present application provide a learning interaction method, apparatus, and terminal device.
The application is realized by the following technical scheme:
in a first aspect, an embodiment of the present application provides a learning interaction method, including:
recording each answering step when the user answers the current test question when the learning equipment is in a learning test mode;
searching a reference answer matched with the current test question, and determining whether potential errors exist in each question answering step based on the reference answer;
under the condition that potential errors exist in the first question answering step, displaying corresponding visual marks to the first question answering step according to the error types of the potential errors; wherein the first answering step is any one of the answering steps.
In a possible implementation manner of the first aspect, the error types of the potential errors are multiple, and each error type corresponds to one visual marker;
the displaying, when a potential error exists in the first answering step, a corresponding visual marker on the first answering step according to the error type of the potential error comprises:
in the case that the first answering step has a potential error, determining the error type of the potential error;
and marking the first answering step based on the visual mark corresponding to the error type.
In a possible implementation manner of the first aspect, each error type corresponds to a color cursor, and the marking the first question answering step based on the visual marker corresponding to the error type includes:
and marking the first answering step by adopting a cursor with a color corresponding to the error type.
In a possible implementation manner of the first aspect, each error type corresponds to a cursor in a shape, and the marking the first question answering step based on the visual marker corresponding to the error type includes:
and marking the first answering step by adopting a cursor with a shape corresponding to the error type.
In a possible implementation manner of the first aspect, each error type corresponds to a color, and the marking the first question answering step based on the visual marker corresponding to the error type includes:
and highlighting the first answering step by adopting the color corresponding to the error type.
In a possible implementation manner of the first aspect, the recording, when the learning device is in the learning test mode, of each answering step taken by the user on the current test question includes:
detecting whether the learning device is in a learning test mode;
acquiring voice information input by a user under the condition that the learning equipment is in a learning test mode;
recognizing user intention according to the voice information;
and when it is identified that the user's intention is to record each answering step of the current test question, controlling a camera to photograph each answering step as the user answers the current test question.
In a possible implementation manner of the first aspect, the method further includes:
obtaining, from a wrong-question bank, a pre-stored answering step with the same error type as the first answering step, wherein the wrong-question bank stores a plurality of pre-stored answering steps and their corresponding error types, and each pre-stored answering step corresponds to one question;
and displaying the target question corresponding to the pre-stored answering step, all the answering steps of the target question, and the correction result.
In a second aspect, an embodiment of the present application provides a learning interaction device, including:
the answer step recording module is used for recording each answer step when the user answers the current test question when the learning equipment is in a learning test mode;
a latent error determining module, configured to search for a reference answer matching the current test question, and determine whether a latent error exists in each question answering step based on the reference answer;
the visual marking module is used for displaying corresponding visual marks to the first question answering step according to the error types of the potential errors under the condition that the potential errors exist in the first question answering step; wherein the first answering step is any one of the answering steps.
In a third aspect, an embodiment of the present application provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the learning interaction method according to any one of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the learning interaction method according to any one of the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer program product, which, when running on a terminal device, causes the terminal device to execute the learning interaction method according to any one of the above first aspects.
It is understood that the beneficial effects of the second aspect to the fifth aspect can be referred to the related description of the first aspect, and are not described herein again.
Compared with the prior art, the embodiment of the application has the advantages that:
according to the embodiments of the application, when the learning device is in the learning test mode, each answering step taken by the user on the current test question is recorded; a reference answer matching the current test question is searched for, and whether a potential error exists in each answering step is determined based on the reference answer; and when a potential error exists in a first answering step, a visual marker corresponding to the error type of the potential error is displayed on the first answering step. In this way, any answering step with a potential error is marked with a visual marker indicating its error type, so the user can learn efficiently by focusing on the answering steps with potential errors.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the specification.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and other drawings can be obtained from them by those of ordinary skill in the art without inventive effort.
Fig. 1 is a schematic view of an application scenario of a learning interaction method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a learning interaction method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a learning interaction method according to an embodiment of the present application;
fig. 4 is a schematic flowchart of a learning interaction method according to an embodiment of the present application;
fig. 5 is a schematic flowchart of a learning interaction method according to an embodiment of the present application;
fig. 6 is a schematic flowchart of a learning interaction method according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a learning interaction device according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a learning interaction device according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a terminal device provided in an embodiment of the present application;
fig. 10 is a schematic structural diagram of a learning machine to which the learning interaction method provided in the embodiment of the present application is applied.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
At present, learning tests are often used to check whether a user has grasped a knowledge point, so that the teaching direction and pace can be adjusted in time. However, a traditional learning test generally judges only whether the user's final answer is right or wrong, and the degree to which the user has grasped a knowledge point cannot be accurately judged from the final answer alone, which makes it difficult to improve the user's learning efficiency in a targeted way. The inventor found that learning becomes more efficient when the user studies in a targeted way according to the answering steps taken on the test questions.
In view of the above problems, in the learning interaction method of the embodiments of the application, when the learning device is in the learning test mode, each answering step taken by the user on the current test question is recorded; a reference answer matching the current test question is searched for, and whether a potential error exists in each answering step is determined based on the reference answer; and when an answering step contains a potential error, a visual marker is displayed on that step according to the error type of the potential error. In this way, any answering step with a potential error is marked with a visual marker corresponding to its error type, so the user can learn efficiently by focusing on the answering steps with potential errors.
For example, the embodiments of the present application can be applied to the exemplary scenario shown in fig. 1. In this scenario, when the user 10 learns through the learning device 20, the learning device 20 may alert the user 10 to potentially incorrect answering steps based on the individual steps the user takes in answering the test questions. Specifically, when the learning device 20 is in the learning test mode, each answering step of the user on the current test question is recorded, a reference answer matching the current test question is searched for, and whether a potential error exists in each answering step is determined based on the reference answer; if a potential error exists in a certain answering step, a visual marker is displayed on that step according to the error type of the potential error.
The learning interaction method of the present application is described in detail below with reference to fig. 1.
Fig. 2 is a schematic flow chart of a learning interaction method according to an embodiment of the present application, and with reference to fig. 2, the learning interaction method is described in detail as follows:
in step 101, when the learning device is in the learning test mode, the answering steps of the user for answering the current test question are recorded.
The learning device may be an electronic device such as a learning tablet, a learning mobile phone, a learning machine, a family education machine, or a point-reading machine; the embodiments of the present application do not limit the specific type of the learning device.
For example, the learning device may provide a learning test mode in which the user can select test questions for testing. While the user answers the current test question, each answering step can be captured by a camera of the learning device and thereby recorded.
In some embodiments, the user may put the learning device into the learning test mode via a display interface of the learning device or a dedicated learning-test-mode button provided on the learning device.
In step 102, a reference answer matching the current test question is searched, and whether a potential error exists in each answering step is determined based on the reference answer.
A database of the learning device pre-stores a plurality of preset test questions and their corresponding reference answers. Specifically, a reference answer may consist of a reference procedure for each answering step of the corresponding preset test question.
For example, each test question may correspond to a unique question identifier, and a reference answer matching the current test question may be determined from the database by the question identifier. For example, the question identifier may be a question number of each test question, and the learning device may determine a corresponding question number according to the test question selected by the user, further match a preset test question corresponding to the question number from the database, and determine a corresponding reference answer according to the preset test question.
In a possible implementation manner, the process of determining whether a potential error exists in each answering step based on the reference answer may be: comparing each answering step of the current test question with the corresponding answering step in the reference answer one by one, thereby determining whether each answering step contains a potential error. For example, when an answering step of the current test question is inconsistent with the corresponding answering step in the reference answer, it is determined that this answering step contains a potential error.
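As a non-limiting illustration of this one-by-one comparison, the following Python sketch looks up the reference answer by question identifier and flags each mismatching step. All names here, including `find_potential_errors` and the in-memory `REFERENCE_ANSWERS` table, are hypothetical stand-ins rather than anything prescribed by the patent:

```python
from typing import Dict, List

# Hypothetical in-memory stand-in for the learning device's database:
# question identifier -> ordered list of reference answering steps.
REFERENCE_ANSWERS: Dict[str, List[str]] = {
    "Q42": ["2x + 4 = 10", "2x = 6", "x = 3"],
}

def find_potential_errors(question_id: str, answer_steps: List[str]) -> List[int]:
    """Return indices of answering steps that differ from the matching
    reference answer; these are the steps flagged as potential errors."""
    reference = REFERENCE_ANSWERS[question_id]  # lookup by question identifier
    flagged = []
    for i, step in enumerate(answer_steps):
        # A step is potentially erroneous when it has no counterpart in
        # the reference answer or does not match the corresponding step.
        if i >= len(reference) or step.strip() != reference[i].strip():
            flagged.append(i)
    return flagged

# The second step miscalculates, so indices 1 and 2 are flagged.
print(find_potential_errors("Q42", ["2x + 4 = 10", "2x = 8", "x = 4"]))
```

Running the example prints `[1, 2]`: the miscalculated step is flagged, as is the final answer that inherits the error.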
In step 103, in the case that a potential error exists in the first question answering step, a corresponding visual marker is displayed to the first question answering step according to the error type of the potential error.
Wherein the first answering step is any one of the answering steps.
In some embodiments, the learning device may compare each answering step of the current test question with a corresponding answering step in the reference answers one by one, and determine the first answering step with potential errors. It should be noted that, for one test question, the first question answering step may be any question answering step of the test question, or may be any two or more question answering steps of the test question, which is not limited in this application.
Illustratively, there are multiple error types for potential errors, and each error type may correspond to one visual marker. For example, the error types may include technical errors, logical errors, and empirical errors: a technical error may be a calculation or writing error, a logical error may be a mismatch between the answering step and the information given in the test question, and an empirical error may be a repetition of a certain historical error.
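The three error types could be represented in code roughly as follows. The classification heuristics are illustrative assumptions only, since the description names the types but does not fix a detection algorithm:

```python
from enum import Enum, auto
from typing import Set

class ErrorType(Enum):
    TECHNICAL = auto()   # e.g. a calculation or writing error
    LOGICAL = auto()     # step does not match the test question information
    EMPIRICAL = auto()   # step repeats a known historical error

def classify_error(step: str, reference_step: str,
                   historical_errors: Set[str]) -> ErrorType:
    """Illustrative heuristic only; the patent names the three error
    types but does not prescribe a classification algorithm."""
    if step in historical_errors:
        return ErrorType.EMPIRICAL
    # If the step still shares tokens with the reference step, treat the
    # mismatch as a technical slip rather than a reasoning error.
    if set(step.split()) & set(reference_step.split()):
        return ErrorType.TECHNICAL
    return ErrorType.LOGICAL
```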
Referring to fig. 3, in some embodiments, step 103 may specifically include:
in step 1031, in the case that there is a potential error in the first answering step, determining an error type of the potential error.
In step 1032, the first answering step is marked based on the visual mark corresponding to the error type.
The visual marker can be realized by cursors of different colors and/or different shapes, by highlighting the first answering step, or in other ways. In any case, the visual marker draws the user's attention to the first answering step and indicates its error type.
In a possible implementation manner, each error type corresponds to a color cursor, and step 1032 may specifically be: and marking the first answering step by adopting a cursor with a color corresponding to the error type.
Different error types can correspond to cursors of different colors; displaying a cursor of the color corresponding to the error type at the position of the first answering step lets the user notice both the answering step with a potential error and its error type.
For example, a red cursor may correspond to a technical error, a blue cursor to a logical error, and a green cursor to an empirical error. When the error type of the first answering step is a technical error, a red cursor is displayed at the position of the first answering step, and from the red cursor the user can identify the potentially erroneous answering step and its error type.
In a possible implementation manner, each error type corresponds to a cursor in a shape, and step 1032 may specifically be: and marking the first answering step by adopting a cursor with a shape corresponding to the error type.
Displaying a cursor of the shape corresponding to the error type at the position of the first answering step likewise lets the user notice the answering step with a potential error and its error type.
For example, a quadrilateral cursor may correspond to a technical error, a triangular cursor to a logical error, and a linear cursor to an empirical error. When the error type of the first answering step is a technical error, a quadrilateral cursor is displayed at the position of the first answering step, and from it the user can identify the potentially erroneous answering step and its error type.
In a possible implementation manner, each error type corresponds to a color, and step 1032 may specifically be: and highlighting the first answering step by adopting the color corresponding to the error type.
Different error types can also correspond to different colors; highlighting the first answering step in the color corresponding to the error type lets the user notice the answering step with a potential error and its error type.
Illustratively, red may correspond to a technical error, blue to a logical error, and green to an empirical error. When the error type of the first answering step is a technical error, the first answering step is highlighted in red, and from the red highlighting the user can identify the potentially erroneous answering step and its error type.
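A minimal sketch of how a rendering layer might map an error type to the three marking variants (color cursor, shaped cursor, or highlight), assuming the color and shape examples above and reusing the hypothetical `ErrorType` enum from the earlier sketch:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VisualMarker:
    cursor_color: str   # variant 1: cursor of a corresponding color
    cursor_shape: str   # variant 2: cursor of a corresponding shape
    highlight: str      # variant 3: highlight color for the step text

# Illustrative mapping following the examples in the description; any of
# the three fields can drive the marking, depending on which variant the
# device implements.
MARKERS = {
    ErrorType.TECHNICAL: VisualMarker("red", "quadrilateral", "red"),
    ErrorType.LOGICAL:   VisualMarker("blue", "triangle", "blue"),
    ErrorType.EMPIRICAL: VisualMarker("green", "line", "green"),
}

def mark_step(step_index: int, error_type: ErrorType) -> None:
    """Stand-in for the device's display call."""
    m = MARKERS[error_type]
    print(f"step {step_index}: {m.cursor_color} {m.cursor_shape} cursor, "
          f"highlight {m.highlight}")
```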
Fig. 4 is a schematic flowchart of a learning interaction method according to an embodiment of the present application, and referring to fig. 4, based on the embodiment shown in fig. 2, the learning interaction method may further include:
in step 104, a pre-stored answer step with the same error type as the first answer step is obtained in an error database.
The wrong question bank stores a plurality of pre-stored answer steps and corresponding error types, and each pre-stored answer step corresponds to one question. The error database in this step may be a part of the database, or may be independent of the database.
Illustratively, the wrong-question bank stores a plurality of questions; among the pre-stored answering steps of each question, at least one is a wrong answering step, and each wrong answering step also corresponds to a correction result, as shown in Table 1. In Table 1, M1 is a positive integer not greater than N1, M2 is a positive integer not greater than N2, M3 is a positive integer not greater than N3, and correction result 1 is the corrected version of the M1-th pre-stored answering step of question 1.
TABLE 1. Wrong-question bank information

| Question | Pre-stored answering steps | Wrong answering step | Error type | Correction result |
| --- | --- | --- | --- | --- |
| Question 1 | N1 pre-stored answering steps | the M1-th pre-stored answering step | Error type 2 | Correction result 1 |
| Question 2 | N2 pre-stored answering steps | the M2-th pre-stored answering step | Error type 1 | Correction result 2 |
| Question 3 | N3 pre-stored answering steps | the M3-th pre-stored answering step | Error type 3 | Correction result 3 |
| ... | ... | ... | ... | ... |
In step 105, the target question corresponding to the pre-stored answering step, all the answering steps of the target question and the correction result are displayed.
Specifically, a pre-stored answering step with the same error type as determined in step 103 can be found in the wrong-question bank. For example, if the error type of the first answering step obtained in step 103 is error type 1, the M2-th pre-stored answering step, whose error type is error type 1, is obtained from the wrong-question bank, and the target question is then determined to be question 2 based on that pre-stored answering step. Finally, all the answering steps of question 2 and correction result 2 are displayed for the user to review and correct.
Optionally, when more than two answering steps of one test question contain potential errors, more than two pre-stored answering steps can be matched from the wrong-question bank, and the question and correction result corresponding to each pre-stored answering step can then be displayed for the user's reference.
For example, if the current test question has two first answering steps whose error types are error type 1 and error type 3 respectively, the M2-th pre-stored answering step with error type 1 and the M3-th pre-stored answering step with error type 3 are obtained from the wrong-question bank, and the target questions are determined to be question 2 and question 3 based on these two pre-stored answering steps. Finally, all the answering steps of question 2 together with correction result 2, and all the answering steps of question 3 together with correction result 3, are displayed for the user's reference.
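A hedged sketch of steps 104-105 follows, with a hypothetical `WRONG_BANK` structure mirroring Table 1; the numbered error types of the table are replaced here by the named types assumed in the earlier sketch:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class WrongBankEntry:
    question_id: str          # one question per pre-stored answering step
    steps: List[str]          # all pre-stored answering steps of the question
    wrong_step_index: int     # which pre-stored step is the wrong one
    error_type: ErrorType
    correction: str           # correction result for the wrong step

# Hypothetical contents mirroring Table 1.
WRONG_BANK: List[WrongBankEntry] = [
    WrongBankEntry("question 2", ["step 1", "step 2", "step 3"], 1,
                   ErrorType.TECHNICAL, "correction result 2"),
    WrongBankEntry("question 3", ["step 1", "step 2"], 0,
                   ErrorType.LOGICAL, "correction result 3"),
]

def display_similar_questions(error_type: ErrorType) -> None:
    """Steps 104-105: fetch every pre-stored answering step whose error
    type matches the first answering step, then show the target question,
    all of its answering steps, and the correction result."""
    for entry in (e for e in WRONG_BANK if e.error_type == error_type):
        print(entry.question_id, entry.steps, entry.correction)
```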
Fig. 5 is a schematic flowchart of a learning interaction method according to an embodiment of the present application, and referring to fig. 5, the step 101 may specifically include:
in step 201, it is detected whether the learning device is in a learning quiz mode.
As an alternative embodiment, the user may make the learning device enter the learning test mode through a virtual button in a display interface of the learning device or a specific physical button disposed on the learning device, and when it is detected that the corresponding button is triggered, it is determined that the learning device is in the learning test mode.
In step 202, voice information input by a user is acquired in a case where the learning device is in a learning quiz mode.
In step 203, the user intention is recognized according to the voice information.
The user can control the learning device to perform corresponding functions by voice. For example, the voice information may convey the user's intention, which may indicate that each answering step of the current test question should be photographed.
In step 204, when the user's intention is recognized as recording each answering step of the current test question, the camera is controlled to photograph each answering step as the user answers the current test question.
For example, when it is recognized that the user intends to record each answering step of the current test question, the camera may be controlled to adjust its shooting angle and photograph each answering step of the current test question. For example, the camera may be a pop-up camera.
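The voice-driven recording flow of steps 201-204 might look like the following sketch; `device`, its methods, and the intent string are hypothetical placeholders, since the patent does not name a concrete API:

```python
def handle_learning_test_mode(device) -> None:
    """Sketch of steps 201-204 under the assumptions stated above."""
    if not device.in_learning_test_mode():        # step 201
        return
    audio = device.capture_voice()                # step 202
    intent = device.recognize_intent(audio)       # step 203 (ASR + intent)
    if intent == "record_answering_steps":        # step 204
        device.camera.adjust_shooting_angle()
        device.camera.shoot_each_answering_step()
```

Gating the recording on both the test mode and an explicit voice intent mirrors the description: the camera is engaged only after the user's intention has been recognized.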
Fig. 6 is a schematic flowchart of a learning interaction method according to an embodiment of the present application, and with reference to fig. 6, the learning interaction method is described in detail as follows:
in step 301, it is detected whether the learning device is in a learning test mode.
In step 302, in the case that the learning device is in the learning test mode, voice information input by the user is acquired.
In step 303, a user intent is identified based on the speech information.
In step 304, when it is recognized that the user intends to record each answering step of the current test question, the camera is controlled to photograph each answering step as the user answers the current test question.
In step 305, a reference answer matching the current test question is searched, and whether a potential error exists in each answering step is determined based on the reference answer.
In step 306, in the case that there is a potential error in the first answering step, the error type of the potential error is determined.
In step 307, where each error type corresponds to a cursor of a color, the first answering step is marked with the cursor of the color corresponding to the error type.
In step 308, where each error type corresponds to a cursor of a shape, the first answering step is marked with the cursor of the shape corresponding to the error type.
In step 309, where each error type corresponds to a color, the first answering step is highlighted in the color corresponding to the error type.
In step 310, a pre-stored answering step with the same error type as the first answering step is obtained from the wrong-question bank.
In step 311, the target question corresponding to the pre-stored answering step, all the answering steps of the target question and the correction result are displayed.
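Tying the pieces together, a compact end-to-end sketch of the flow of fig. 6, reusing the hypothetical helpers from the earlier sketches (again an illustration under those assumptions, not the patent's prescribed implementation):

```python
def learning_interaction_flow(device, question_id: str) -> None:
    """End-to-end sketch of steps 301-311 under the assumptions above."""
    handle_learning_test_mode(device)                     # steps 301-304
    steps = device.recorded_answering_steps()             # camera output
    reference = REFERENCE_ANSWERS[question_id]
    for i in find_potential_errors(question_id, steps):   # step 305
        ref_step = reference[i] if i < len(reference) else ""
        error_type = classify_error(steps[i], ref_step,
                                    historical_errors=set())  # step 306
        mark_step(i, error_type)                          # steps 307-309
        display_similar_questions(error_type)             # steps 310-311
```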
According to the above learning interaction method, when the learning device is in the learning test mode, each answering step taken by the user on the current test question is recorded; a reference answer matching the current test question is searched for, and whether a potential error exists in each answering step is determined based on the reference answer; and when a potential error exists in the first answering step, a visual marker corresponding to the error type of the potential error is displayed on the first answering step. Any answering step with a potential error is thus visually marked with its error type, so the user can learn efficiently by focusing on the answering steps with potential errors.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 7 shows a block diagram of a learning interaction apparatus provided in the embodiment of the present application, which corresponds to the learning interaction method described in the foregoing embodiment, and only shows portions related to the embodiment of the present application for convenience of description.
Referring to fig. 7, the learning interaction apparatus in the embodiment of the present application may include a answering step recording module 401, a potential error determination module 402, and a visual marking module 403.
The answer step recording module 401 is configured to record each answer step when the user answers the current test question when the learning device is in the learning test mode;
a latent error determining module 402, configured to search a reference answer matching the current test question, and determine whether a latent error exists in each question answering step based on the reference answer;
a visual marking module 403, configured to, when a potential error exists in the first question answering step, display a corresponding visual mark to the first question answering step according to the error type of the potential error; wherein the first answering step is any one of the answering steps.
Optionally, the error types of the potential errors are multiple, and each error type corresponds to one visual marker; referring to fig. 8, based on the embodiment shown in fig. 7, the visual marker module 403 may include:
an error type determining unit 4031, configured to determine, in a case that there is a potential error in the first answer step, an error type of the potential error;
a marking unit 4032, configured to mark the first answer step based on the visual mark corresponding to the error type.
Optionally, each error type corresponds to a color cursor, and the marking unit 4032 may be specifically configured to:
and marking the first answering step by adopting a cursor with a color corresponding to the error type.
Optionally, each error type corresponds to a cursor with a shape, and the marking unit 4032 may be specifically configured to:
and marking the first answering step by adopting a cursor with a shape corresponding to the error type.
Optionally, each error type corresponds to a color, and the labeling unit 4032 may specifically be configured to:
and highlighting the first answering step by adopting the color corresponding to the error type.
As an implementation manner, the answering step recording module 401 may include:
a detection unit 4011 configured to detect whether the learning apparatus is in a learning test mode;
an obtaining unit 4012, configured to obtain, when the learning apparatus is in a learning test mode, voice information input by a user;
an identifying unit 4013 configured to identify a user intention according to the voice information;
and the shooting unit 4014 is configured to control the camera to shoot each answer step when the user answers the current test question when recognizing that the user intends to record each answer step of the current test question.
Referring to fig. 8, in some embodiments, the learning interaction apparatus may further include:
an obtaining module 404, configured to obtain, from a wrong-question bank, a pre-stored answering step with the same error type as the first answering step, wherein the wrong-question bank stores a plurality of pre-stored answering steps and their corresponding error types, and each pre-stored answering step corresponds to one question;
and a display module 405, configured to display the target question corresponding to the pre-stored answering step, all the answering steps of the target question, and the correction result.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
An embodiment of the present application further provides a terminal device. Referring to fig. 9, the terminal device 500 may include: at least one processor 510, a memory 520, and a computer program stored in the memory 520 and executable on the at least one processor 510. When executing the computer program, the processor 510 implements the steps in any of the above-described method embodiments, such as steps 101 to 103 in the embodiment shown in fig. 2. Alternatively, when executing the computer program, the processor 510 implements the functions of the modules/units in the above-described apparatus embodiments, for example, the functions of the modules 401 to 403 shown in fig. 7.
Illustratively, the computer program may be divided into one or more modules/units, which are stored in the memory 520 and executed by the processor 510 to accomplish the present application. The one or more modules/units may be a series of computer program segments capable of performing specific functions, which are used to describe the execution of the computer program in the terminal device 500.
Those skilled in the art will appreciate that fig. 9 is merely an example of a terminal device and is not limiting and may include more or fewer components than shown, or some components may be combined, or different components such as input output devices, network access devices, buses, etc.
The Processor 510 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 520 may be an internal storage unit of the terminal device, or may be an external storage device of the terminal device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like. The memory 520 is used for storing the computer programs and other programs and data required by the terminal device. The memory 520 may also be used to temporarily store data that has been output or is to be output.
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, the buses in the figures of the present application are not limited to only one bus or one type of bus.
The learning interaction method provided by the embodiment of the application can be applied to terminal devices such as a learning machine, a computer, a wearable device, a vehicle-mounted device, a tablet computer, a notebook computer, a netbook, a Personal Digital Assistant (PDA), an Augmented Reality (AR)/Virtual Reality (VR) device, and a mobile phone, and the embodiment of the application does not limit the specific type of the terminal device at all.
Taking the terminal device as a learning machine as an example. Fig. 10 is a block diagram illustrating a partial structure of a learning machine provided in an embodiment of the present application. Referring to fig. 10, the learning machine includes: a communication circuit 610, a memory 620, an input unit 630, a display unit 640, an audio circuit 650, a wireless fidelity (WiFi) module 660, a processor 670, and a power supply 680. Those skilled in the art will appreciate that the learning machine configuration shown in fig. 10 does not constitute a limitation of the learning machine and may include more or fewer components than shown, or some components may be combined, or a different arrangement of components.
The following specifically describes each component of the learning machine with reference to fig. 10:
the communication circuit 610 may be used for receiving and transmitting signals during a message transmission or communication process, and in particular, receives and processes an image sample transmitted by the image capturing device to the processor 670; in addition, the image acquisition instruction is sent to the image acquisition device. Typically, the communication circuit includes, but is not limited to, an antenna, at least one Amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the communication circuit 610 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), and the like.
The memory 620 may be used to store software programs and modules, and the processor 670 executes various functional applications and data processing of the learning machine by operating the software programs and modules stored in the memory 620. The memory 620 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the learning machine, and the like. Further, the memory 620 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The input unit 630 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the learning machine. Specifically, the input unit 630 may include a touch panel 631 and other input devices 632. The touch panel 631, also referred to as a touch screen, may collect touch operations of a user (e.g., operations of the user on the touch panel 631 or near the touch panel 631 by using any suitable object or accessory such as a finger or a stylus) thereon or nearby, and drive the corresponding connection device according to a preset program. Alternatively, the touch panel 631 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 670, and can receive and execute commands sent by the processor 670. In addition, the touch panel 631 may be implemented using various types, such as resistive, capacitive, infrared, and surface acoustic wave. The input unit 630 may include other input devices 632 in addition to the touch panel 631. In particular, other input devices 632 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 640 may be used to display information input by the user or information provided to the user and various menus of the learning machine, and to project the avatar model of the target user transmitted from other learning machines. The display unit 640 may include a display panel 641 and a projection device, and optionally, the display panel 641 may be configured by a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch panel 631 can cover the display panel 641, and when the touch panel 631 detects a touch operation thereon or nearby, the touch panel is transmitted to the processor 670 to determine the type of the touch event, and then the processor 670 provides a corresponding visual output on the display panel 641 according to the type of the touch event. Although in fig. 10, the touch panel 631 and the display panel 641 are two independent components to implement the input and output functions of the learning machine, in some embodiments, the touch panel 631 and the display panel 641 may be integrated to implement the input and output functions of the learning machine.
The audio circuit 650 may provide an audio interface between the user and the learning machine. The audio circuit 650 may transmit the electrical signal converted from received audio data to a speaker, which converts it into an audio signal for output; on the other hand, the microphone converts collected sound signals into electrical signals, which the audio circuit 650 receives and converts into audio data; the audio data is then processed by the processor 670 and output to the memory 620 for further processing.
WiFi belongs to short-distance wireless transmission technology, and the learning machine can help a user to receive and send e-mails, browse webpages, access streaming media and the like through the WiFi module 660, and provides wireless broadband internet access for the user. Although fig. 10 shows the WiFi module 660, it is understood that it does not belong to the essential constitution of the learning machine, and may be omitted entirely as needed within the scope not changing the essence of the invention.
The processor 670 is the control center for the learning machine, and is connected to various parts of the whole learning machine by various interfaces and lines, and performs various functions of the learning machine and processes data by running or executing software programs and/or modules stored in the memory 620 and calling up the data stored in the memory 620, thereby performing overall monitoring of the learning machine. Alternatively, processor 670 may include one or more processing units; alternatively, processor 670 may integrate an application processor that handles primarily the operating system, user interface, and applications, etc., and a modem processor that handles primarily wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 670.
The learning machine also includes a power supply 680 (e.g., a battery) for powering the various components, where the power supply 680 may be logically coupled to the processor 670 via a power management system, such that the power management system may be used to manage charging, discharging, and power consumption.
In addition, although not shown, the learning machine may further include a bluetooth module or the like, which is not described herein.
An embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program can implement the steps in the embodiments of the learning interaction method.
The embodiment of the application provides a computer program product which, when run on a mobile terminal, causes the mobile terminal to implement the steps in the embodiments of the learning interaction method.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and can implement the steps of the embodiments of the methods described above when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer readable medium may include at least: any entity or device capable of carrying computer program code to a photographing apparatus/terminal apparatus, a recording medium, computer Memory, Read-Only Memory (ROM), random-access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium. Such as a usb-disk, a removable hard disk, a magnetic or optical disk, etc. In certain jurisdictions, computer-readable media may not be an electrical carrier signal or a telecommunications signal in accordance with legislative and patent practice.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the above-described apparatus/network device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above embodiments are intended only to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present application, and are intended to be included within the scope of protection of the present application.

Claims (10)

1. A learning interaction method, comprising:
when a learning device is in a learning test mode, recording each answering step performed by a user in answering a current test question;
searching for a reference answer that matches the current test question, and determining, based on the reference answer, whether a potential error exists in each answering step;
in the case that a potential error exists in a first answering step, displaying a corresponding visual mark for the first answering step according to an error type of the potential error; wherein the first answering step is any one of the answering steps.
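As an editorial illustration of claim 1 (not part of the claims, and not the patent's actual implementation), the Python sketch below assumes each answering step arrives as a recognized text line and uses a naive string comparison in place of whatever matching the device actually performs; every name in it is hypothetical.

```python
# Hypothetical sketch of the claim-1 flow; all names are illustrative.
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional

class ErrorType(Enum):
    CALCULATION = "calculation"        # e.g. an arithmetic slip
    TRANSCRIPTION = "transcription"    # e.g. a term miscopied from the previous line
    CONCEPT = "concept"                # e.g. a wrong formula or rule applied

@dataclass
class AnswerStep:
    index: int
    content: str                       # recognized text of one recorded step
    error: Optional[ErrorType] = None  # set when a potential error is found

def classify_error(actual: str, expected: str) -> ErrorType:
    """Placeholder heuristic; a real device might use a solver or a trained model."""
    return ErrorType.CALCULATION if any(c.isdigit() for c in actual) else ErrorType.CONCEPT

def check_answer_steps(steps: List[AnswerStep], reference: List[str]) -> List[AnswerStep]:
    """Compare each recorded step with the matching line of the reference answer
    and attach an error type wherever the step deviates."""
    for step, expected in zip(steps, reference):
        if step.content.replace(" ", "") != expected.replace(" ", ""):
            step.error = classify_error(step.content, expected)
    return steps
```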
2. The learning interaction method of claim 1, wherein the potential errors are of multiple error types, and each error type corresponds to one visual mark;
wherein the displaying, in the case that a potential error exists in the first answering step, of the corresponding visual mark for the first answering step according to the error type of the potential error comprises:
in the case that a potential error exists in the first answering step, determining the error type of the potential error;
and marking the first answering step with the visual mark corresponding to the error type.
3. The learning interaction method of claim 2, wherein each error type corresponds to a cursor of one color, and the marking of the first answering step with the visual mark corresponding to the error type comprises:
marking the first answering step with a cursor whose color corresponds to the error type.
4. The learning interaction method of claim 2, wherein each error type corresponds to a cursor of one shape, and the marking of the first answering step with the visual mark corresponding to the error type comprises:
marking the first answering step with a cursor whose shape corresponds to the error type.
5. The learning interaction method of claim 2, wherein each error type corresponds to one color, and the marking of the first answering step with the visual mark corresponding to the error type comprises:
highlighting the first answering step in the color corresponding to the error type.
6. The learning interaction method of claim 1, wherein the recording, when the learning device is in the learning test mode, of each answering step performed by the user in answering the current test question comprises:
detecting whether the learning device is in the learning test mode;
acquiring voice information input by the user in the case that the learning device is in the learning test mode;
recognizing a user intention from the voice information;
and, when the user intention is recognized as recording each answering step for the current test question, controlling a camera to photograph each answering step as the user answers the current test question.
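Claim 6 amounts to a listen-then-trigger loop. The sketch below stubs out speech recognition, intent parsing, and camera control; none of these stubs or method names come from the patent.

```python
def recognize_speech(audio: bytes) -> str:
    """Stub for an ASR engine; a real device would call an on-device or cloud recognizer."""
    raise NotImplementedError

def parse_intent(text: str) -> str:
    """Crude keyword matcher standing in for real intent recognition."""
    return "record_answer_steps" if "record" in text.lower() else "unknown"

def run_test_mode_loop(device) -> None:
    """While the device reports the learning test mode, turn the user's voice
    into an intent and start the camera on a record intent."""
    while device.in_learning_test_mode():                 # detect the mode
        text = recognize_speech(device.capture_voice())   # acquire and recognize voice
        if parse_intent(text) == "record_answer_steps":
            device.start_camera_capture()                 # photograph each answering step
```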
7. The learning interaction method of any one of claims 1 to 6, further comprising:
obtaining, from an error question bank, a pre-stored answering step having the same error type as the first answering step, wherein the error question bank stores a plurality of pre-stored answering steps and their corresponding error types, and each pre-stored answering step corresponds to one question;
and displaying the target question corresponding to the pre-stored answering step, all answering steps of the target question, and the correction result.
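Claim 7's error question bank can be pictured as a store keyed by error type. The schema below is an assumption for illustration (reusing the hypothetical ErrorType from the sketch under claim 1); the claim only requires that each stored answering step carries an error type and maps to one question.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class BankEntry:
    question: str        # the target question
    steps: List[str]     # all answering steps stored for that question
    correction: str      # the correction result to display

# Hypothetical bank contents, keyed by error type.
ERROR_BANK: Dict[ErrorType, List[BankEntry]] = {
    ErrorType.CALCULATION: [
        BankEntry("Compute 3 x 4 + 2",
                  ["3 x 4 = 14", "14 + 2 = 16"],
                  "3 x 4 = 12, so 12 + 2 = 14"),
    ],
}

def fetch_similar_mistakes(error: ErrorType) -> List[BankEntry]:
    """Return stored entries whose error type matches the marked step's type."""
    return ERROR_BANK.get(error, [])
```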
8. A learning interaction device, comprising:
an answering step recording module, configured to record, when a learning device is in a learning test mode, each answering step performed by a user in answering a current test question;
a potential error determining module, configured to search for a reference answer that matches the current test question and to determine, based on the reference answer, whether a potential error exists in each answering step;
and a visual marking module, configured to display, in the case that a potential error exists in a first answering step, a corresponding visual mark for the first answering step according to the error type of the potential error; wherein the first answering step is any one of the answering steps.
9. A terminal device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 7.
CN202010009658.4A 2020-01-06 2020-01-06 Learning interaction method and device and terminal equipment Active CN111161578B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010009658.4A CN111161578B (en) 2020-01-06 2020-01-06 Learning interaction method and device and terminal equipment

Publications (2)

Publication Number Publication Date
CN111161578A true CN111161578A (en) 2020-05-15
CN111161578B CN111161578B (en) 2022-03-11

Family

ID=70561497

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010009658.4A Active CN111161578B (en) 2020-01-06 2020-01-06 Learning interaction method and device and terminal equipment

Country Status (1)

Country Link
CN (1) CN111161578B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113051300A (en) * 2021-03-05 2021-06-29 深圳市鹰硕技术有限公司 Online learning method and device based on learning partner matching
CN114549248A (en) * 2022-02-22 2022-05-27 广州起祥科技有限公司 Error cause analysis method and device and electronic equipment

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0744087A (en) * 1993-07-27 1995-02-14 Kibi Syst Kk Computer aided learning system
CN1474300A (en) * 2002-08-06 2004-02-11 无敌科技股份有限公司 Method for teaching Chinese in computer writing mode
CN1776724A (en) * 2005-11-25 2006-05-24 南京师范大学 Network-based engineering drawing automatic judging method
CN101101706A (en) * 2006-07-05 2008-01-09 香港理工大学 Chinese writing study machine and Chinese writing study method
CN101739868A (en) * 2008-11-19 2010-06-16 中国科学院自动化研究所 Automatic evaluation and diagnosis method of text reading level for oral test
US20160180742A1 (en) * 2013-08-13 2016-06-23 Postech Academy-Industry Foundation Preposition error correcting method and device performing same
CN104464404A (en) * 2013-09-19 2015-03-25 卡西欧计算机株式会社 Voice learning support apparatus and voice learning support method
CN106709830A (en) * 2015-08-13 2017-05-24 马正方 Knowledge-point-structure-based question bank system
CN107016132A (en) * 2017-05-19 2017-08-04 广东小天才科技有限公司 A kind of online exam pool quality improving method, system and terminal device
CN107346618A (en) * 2017-06-01 2017-11-14 广西昌成科技有限公司 A writing exercise system and method
CN108230797A (en) * 2017-12-04 2018-06-29 颜厥护 A kind of interactive assisted learning method and system
CN109493652A (en) * 2018-11-05 2019-03-19 广州南洋理工职业学院 Practicing teaching system based on VR technology
CN109493666A (en) * 2019-01-23 2019-03-19 广东小天才科技有限公司 A kind of learning interaction method and facility for study

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
燕鹏超 (Yan Pengchao): "Design and Implementation of a Knowledge-Graph-Based Intelligent Assessment ***", China Master's Theses Full-text Database, Information Science and Technology Series *

Also Published As

Publication number Publication date
CN111161578B (en) 2022-03-11

Similar Documents

Publication Publication Date Title
CN111060514B (en) Defect detection method and device and terminal equipment
CN104852885B (en) Method, device and system for verifying verification code
CN111368934A (en) Image recognition model training method, image recognition method and related device
CN109063583A (en) A kind of learning method and electronic equipment based on read operation
CN107885346A (en) A kind of candidate's words recommending method, terminal and computer-readable recording medium
CN111161578B (en) Learning interaction method and device and terminal equipment
CN108195390A (en) A kind of air navigation aid, device and mobile terminal
CN107770729A (en) Signal intensity reminding method and Related product
CN111104967B (en) Image recognition network training method, image recognition device and terminal equipment
CN104615663A (en) File sorting method and device and terminal
CN112989148A (en) Error correction word ordering method and device, terminal equipment and storage medium
CN106959859A (en) The call method and device of system call function
CN106791153A (en) Using PUSH message classifying indication method, device and mobile terminal
CN113940033B (en) User identification method and related product
WO2019056324A1 (en) Method for suggesting related term, mobile terminal, and computer readable storage medium
CN111160174B (en) Network training method, head orientation recognition method, device and terminal equipment
CN110796096B (en) Training method, device, equipment and medium for gesture recognition model
CN111738354A (en) Automatic recognition training method, system, storage medium and computer equipment
CN107329584A (en) A kind of word input processing method, mobile terminal and computer-readable recording medium
CN108270660A (en) The quickly revert method and device of message
CN110825291B (en) Data processing method, data processing device and computer equipment
CN107613109B (en) Input method of mobile terminal, mobile terminal and computer storage medium
CN106851023B (en) Method and equipment for quickly making call and mobile terminal
CN113138702B (en) Information processing method, device, electronic equipment and storage medium
CN107507143A (en) A kind of image restoring method and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant