CN111081082B - Dictation intelligent control method based on user intention and electronic equipment - Google Patents


Info

Publication number: CN111081082B
Application number: CN201910622863.5A
Authority: CN (China)
Prior art keywords: dictation, audio, user, target, reported
Legal status: Active (an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN111081082A
Inventor: 周林
Current assignee: Guangdong Genius Technology Co Ltd
Original assignee: Guangdong Genius Technology Co Ltd
Application filed by Guangdong Genius Technology Co Ltd
Priority to CN201910622863.5A
Publication of application CN111081082A; application granted; publication of grant CN111081082B

Classifications

    • G PHYSICS
      • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
        • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
          • G09B5/00 Electrically-operated educational appliances
            • G09B5/04 Electrically-operated educational appliances with audible presentation of the material to be studied
          • G09B7/00 Electrically-operated teaching apparatus or devices working with questions and answers
            • G09B7/02 Electrically-operated teaching apparatus or devices working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by a student
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06F ELECTRIC DIGITAL DATA PROCESSING
          • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
            • G06F3/16 Sound input; Sound output
              • G06F3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback
      • G10 MUSICAL INSTRUMENTS; ACOUSTICS
        • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
          • G10L15/00 Speech recognition
            • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Educational Technology (AREA)
  • Educational Administration (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Acoustics & Sound (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the invention relates to the technical field of electronic equipment, and discloses a dictation intelligent control method based on user intention and an electronic device. The method comprises: collecting voice corpora generated by a user in a dictation mode, parsing the corpora to obtain the user's current intention (which indicates the user's dictation instruction), and detecting the user's dictation handwriting; and, according to the corresponding relation between the reported (already read-aloud) dictation audio and the dictation handwriting, acquiring a target dictation audio matched with the current intention and reading it aloud. This realizes flexible control of dictation by the user, helps improve the intelligence of the electronic device in dictation read-aloud, and improves the user experience.

Description

Dictation intelligent control method based on user intention and electronic equipment
Technical Field
The invention relates to the technical field of electronic equipment, in particular to a dictation intelligent control method based on user intention and electronic equipment.
Background
At present, when dictation read-aloud is performed with an electronic device (such as a home tutoring machine), a designated voice command usually has to be set in the device in advance to control dictation. For example, the designated command "next" must be set so that the device reads out the next dictation audio. In practice, if the user forgets the designated command, or remembers it only partially, the device cannot be controlled normally during dictation. Existing electronic devices therefore lack intelligence in dictation read-aloud, and the user experience is poor.
Disclosure of Invention
The embodiment of the invention discloses a dictation intelligent control method based on user intention and electronic equipment, which are used for realizing dictation intelligent control, improving the interactivity between a user and the electronic equipment and improving the user experience.
The first aspect of the embodiment of the invention discloses a dictation intelligent control method based on user intention, which comprises the following steps:
collecting voice corpora generated by a user in a dictation mode, analyzing the voice corpora to obtain the current intention of the user, and detecting the dictation handwriting of the user; the current intent indicates a dictation indication of a user;
and, according to the corresponding relation between the reported dictation audio and the dictation handwriting, acquiring a target dictation audio matched with the current intention and reading it aloud.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, after obtaining the target dictation audio adapted to the current intention and reading the target dictation audio according to the correspondence between the read dictation audio and the dictation handwriting, the method further includes:
judging whether the target dictation audio belongs to the reported dictation audio or not;
when the target dictation audio belongs to the reported dictation audio, judging whether the dictation content corresponding to the target dictation audio is a polyphone (a character with more than one pronunciation);
when the dictation content corresponding to the target dictation audio is polyphone, acquiring word analysis information of the dictation content corresponding to the target dictation audio;
and converting the word analysis information into analysis voice and outputting the analysis voice.
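Taken together, the four steps form a small decision routine: only a target audio that has already been read aloud, and whose dictation content is a polyphone, triggers the extra analysis-and-speak step. A minimal Python sketch; the `reported_audio` set, the `polyphone_table` lookup and the `tts` callable are illustrative assumptions, not parts of the patent:

```python
def handle_repeat_request(target, reported_audio, polyphone_table, tts):
    """Decide whether a repeat-read request needs a polyphone explanation.

    target          -- identifier of the dictation audio the user asked about
    reported_audio  -- set of audio identifiers already read aloud
    polyphone_table -- hypothetical map: audio id -> word-analysis text,
                       present only when the content is a polyphone
    tts             -- hypothetical callable converting text to speech
    """
    # Step 1: only already-read audio qualifies.
    if target not in reported_audio:
        return None
    # Step 2: only polyphonic content qualifies.
    analysis = polyphone_table.get(target)
    if analysis is None:
        return None
    # Steps 3-4: fetch the word-analysis info and output it as speech.
    return tts(analysis)
```

The early returns mirror the order of the judgement steps in the claim.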
As an optional implementation manner, in the first aspect of this embodiment of the present invention, the method further includes:
when the target dictation audio belongs to the reported dictation audio, acquiring the reported times of the target dictation audio;
judging whether the reported times exceed preset times or not;
when the reported times exceed the preset times, executing the step of judging whether the dictation content corresponding to the target dictation audio is polyphone;
and when the reported times do not exceed the preset times, counting and updating the reported times of the target dictation audio.
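This optional count-gating step can be sketched as follows; the `reported_counts` dictionary and the default threshold of 3 are illustrative assumptions:

```python
def gate_on_repeat_count(target, reported_counts, preset=3):
    """Run the polyphone check only after enough repeat reads.

    reported_counts -- dict mapping audio id -> times already read aloud
    preset          -- the 'preset times' threshold; 3 is an assumed value
    Returns True when the polyphone judgement step should run; otherwise
    counts-and-updates the reported times and returns False.
    """
    times = reported_counts.get(target, 0)
    if times > preset:
        return True
    # Not over the threshold yet: e.g. a count of 2 becomes 3.
    reported_counts[target] = times + 1
    return False
```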
As an optional implementation manner, in the first aspect of the embodiment of the present invention, before collecting speech corpora generated by a user in a dictation mode, parsing the speech corpora to obtain a current intention of the user, and detecting dictation handwriting of the user, the method further includes:
after the dictation mode is started, outputting dictation condition description information, wherein the dictation condition description information is used for indicating the type of the dictation purpose, and the type of the dictation purpose is dictation exercise, dictation operation (i.e., homework), or dictation examination;
the method further comprises the following steps:
when the reported times exceed the preset times, acquiring current positioning information and acquiring the dictation condition description information input by a user when dictation starts;
and when the current positioning information indicates that the user is located in a non-school place and the dictation purpose type indicated by the dictation condition description information is the dictation exercise, executing the step of judging whether the dictation content corresponding to the target dictation audio is polyphone.
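This embodiment combines three gates before explaining a polyphone: the repeat count, the user's location, and the declared dictation purpose. A hedged sketch, with the label values (`"school"`, `"dictation exercise"`) as assumed placeholders:

```python
def should_explain_polyphone(times_reported, preset, location, purpose):
    """Combine the three gates of this embodiment.

    location -- label assumed to be derived from current positioning info
    purpose  -- dictation purpose type from the condition description info
    """
    return (
        times_reported > preset              # repeated often enough
        and location != "school"             # user is in a non-school place
        and purpose == "dictation exercise"  # practice, not homework/exam
    )
```

The location check keeps the device from giving away answers during an in-school dictation test.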
As an optional implementation manner, in the first aspect of the embodiment of the present invention, before collecting speech corpora generated by a user in a dictation mode, parsing the speech corpora to obtain a current intention of the user, and detecting dictation handwriting of the user, the method further includes:
after the dictation mode is started, receiving a connection request sent by an online teacher end to establish connection;
receiving dictation data pushed by the online teacher end, wherein the dictation data comprises a plurality of dictation audios;
the acquiring and reading a target dictation audio matched with the current intention according to the corresponding relation between the reported dictation audio and the dictation handwriting comprises the following steps:
acquiring a target dictation audio matched with the current intention from the dictation data and reading according to the corresponding relation between the reported dictation audio and the dictation handwriting;
after the target dictation audio matched with the current intention is obtained and read according to the corresponding relation between the reported dictation audio and the dictation handwriting, the method further comprises the following steps:
obtaining the dictation progress and dictation state description of a user according to the current intention, the target dictation audio and the reported dictation audio;
and sending the current intention, the dictation progress of the user and the dictation state description to the online teacher end so that the online teacher end completes dictation condition description of the user on a teaching system according to the current intention, the dictation progress of the user and the dictation state description.
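The feedback pushed to the online teacher end can be pictured as a small structured payload. The patent only requires that the current intention, the dictation progress and a dictation state description be sent; the field names and formats below are illustrative assumptions:

```python
def build_teacher_report(current_intent, target_audio, reported_audio, all_audio):
    """Assemble the feedback pushed to the online teacher end.

    reported_audio -- set of audio ids already read aloud
    all_audio      -- list of all dictation audio ids pushed by the teacher
    """
    progress = "{}/{}".format(len(reported_audio), len(all_audio))
    state = ("repeating an already-read item"
             if target_audio in reported_audio
             else "moving on to a new item")
    return {
        "current_intent": current_intent,
        "dictation_progress": progress,
        "dictation_state": state,
    }
```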
A second aspect of an embodiment of the present invention discloses an electronic device, which may include:
the voice collecting unit is used for collecting voice corpora generated by a user in a dictation mode and analyzing the voice corpora to obtain the current intention of the user; the current intent indicates a dictation indication of a user;
the handwriting detection unit is used for simultaneously detecting the dictation handwriting of the user in the dictation mode;
and the dictation reading unit is used for acquiring a target dictation audio matched with the current intention and reading according to the corresponding relation between the reported dictation audio and the dictation handwriting.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the electronic device further includes: the polyphone detection unit, which is used for judging, after the dictation reading unit acquires and reads the target dictation audio matched with the current intention according to the corresponding relation between the reported dictation audio and the dictation handwriting, whether the target dictation audio belongs to the reported dictation audio; judging, when the target dictation audio belongs to the reported dictation audio, whether the dictation content corresponding to the target dictation audio is a polyphone; acquiring, when it is a polyphone, the word analysis information of the corresponding dictation content; and converting the word analysis information into analysis voice and outputting it.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the electronic device further includes:
the reading time detection unit, which is used for acquiring the number of times the target dictation audio has been read aloud when the target dictation audio belongs to the reported dictation audio; judging whether the reported times exceed a preset number; triggering, when the reported times exceed the preset number, the polyphone detection unit to execute the step of judging whether the dictation content corresponding to the target dictation audio is a polyphone; and counting and updating the reported times of the target dictation audio when they do not exceed the preset number.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the electronic device further includes:
the output unit, which is used for outputting dictation condition description information after the electronic device starts the dictation mode and before the voice collection unit collects the voice corpora generated by the user in the dictation mode and the handwriting detection unit detects the dictation handwriting of the user; the dictation condition description information indicates the type of the dictation purpose, which is dictation exercise, dictation operation, or dictation examination;
the dictation state detection unit, which is used for acquiring current positioning information and the dictation condition description information input by the user when dictation starts, when the reading time detection unit judges that the reported times exceed the preset number; and for triggering the polyphone detection unit to execute the step of judging whether the dictation content corresponding to the target dictation audio is a polyphone, when the current positioning information indicates that the user is in a non-school place and the dictation purpose type indicated by the dictation condition description information is the dictation exercise.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the electronic device further includes: a connection unit and a feedback unit;
the connection unit is used for receiving a connection request sent by an online teacher end to establish a connection after the electronic device starts the dictation mode and before the voice collection unit collects the voice corpora generated by the user in the dictation mode and the handwriting detection unit detects the dictation handwriting of the user; and for receiving dictation data pushed by the online teacher end, wherein the dictation data comprises a plurality of dictation audios;
the manner in which the dictation reading unit acquires a target dictation audio matched with the current intention and reads it according to the corresponding relation between the reported dictation audio and the dictation handwriting is specifically:
the dictation reading unit is used for acquiring a target dictation audio matched with the current intention from the dictation data and reading according to the corresponding relation between the reported dictation audio and the dictation handwriting;
the feedback unit is used for obtaining the dictation progress and dictation state description of the user according to the current intention, the target dictation audio and the reported dictation audio, after the dictation reading unit acquires and reads the target dictation audio matched with the current intention according to the corresponding relation between the reported dictation audio and the dictation handwriting; and for sending the current intention, the dictation progress and the dictation state description to the online teacher end, so that the online teacher end completes the dictation condition description of the user on a teaching system accordingly.
A third aspect of an embodiment of the present invention discloses an electronic device, which may include:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute the dictation intelligent control method based on the user intention disclosed by the first aspect of the embodiment of the invention.
A fourth aspect of the embodiments of the present invention discloses a computer-readable storage medium storing a computer program, where the computer program enables a computer to execute a dictation intelligent control method based on user intention disclosed in the first aspect of the embodiments of the present invention.
A fifth aspect of embodiments of the present invention discloses a computer program product, which, when run on a computer, causes the computer to perform some or all of the steps of any one of the methods of the first aspect.
A sixth aspect of the embodiments of the present invention discloses an application publishing platform, where the application publishing platform is configured to publish a computer program product, and the computer program product, when running on a computer, causes the computer to perform part or all of the steps of any one of the methods in the first aspect.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
in the embodiment of the invention, after the dictation mode is started, the voice corpora generated by the user during dictation are collected and parsed to obtain the user's current intention, which indicates the user's dictation instruction at the current state and time point; the user's dictation handwriting is detected synchronously; and the dictation audio of the target dictation content matched with the current intention is obtained and output according to the corresponding relation between the reported dictation audio and the dictation handwriting. It can be seen that, by implementing the embodiment of the present invention, the dictation instruction given by the user's voice can be obtained through parsing during dictation, and the target dictation content mentioned in the instruction can be identified from the corresponding relation between the reported dictation audio and the dictation handwriting, so that the matching dictation audio can be read aloud. This avoids the drawback of the conventional technology that dictation instructions must be set in advance, realizes flexible control of dictation by the user, helps improve the intelligence of the electronic device in dictation, and improves the interactivity between the user and the electronic device.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic flow chart of a dictation intelligent control method based on user intention according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart illustrating a dictation intelligent control method based on user's intention according to another embodiment of the present disclosure;
FIG. 3 is a schematic flow chart illustrating a dictation intelligent control method based on user's intention according to another embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure;
FIG. 5 is a schematic structural diagram of an electronic device according to another embodiment of the disclosure;
FIG. 6 is a schematic structural diagram of an electronic device according to another embodiment of the disclosure;
fig. 7 is a schematic structural diagram of an electronic device according to still another embodiment of the disclosure.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "comprises" and "comprising," and any variations thereof, of embodiments of the present invention are intended to cover non-exclusive inclusions, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiment of the invention discloses a dictation intelligent control method based on user intentions, which is used for realizing dictation intelligent control, improving the interactivity between a user and electronic equipment and improving the user experience. The embodiment of the invention also correspondingly discloses the electronic equipment.
The electronic device according to the embodiment of the present invention includes, but is not limited to, a home tutoring machine, a tablet computer, and the like, and the operating system of the electronic device may include, but is not limited to, the Android operating system, the iOS operating system, the Symbian operating system, the BlackBerry operating system, the Windows Phone 8 operating system, and the like. The technical solution of the present invention will be described in detail through specific embodiments from the perspective of the electronic device.
Example one
Referring to fig. 1, fig. 1 is a schematic flow chart of a dictation intelligent control method based on user's intention according to an embodiment of the present invention; as shown in fig. 1, the dictation intelligent control method based on user intention may include:
101. the electronic equipment collects voice corpora generated by a user in a dictation mode, analyzes the voice corpora to obtain the current intention of the user, and the current intention indicates dictation indication of the user.
The electronic equipment in the embodiment of the invention can be used for assisting a user in completing dictation exercises, dictation operations, dictation examinations and the like, and realizing dictation intellectualization, wherein the dictation modes provided by the electronic equipment comprise dictation exercises, dictation operations, dictation examinations and the like.
During dictation, the user may issue a specific instruction by voice, read along with the dictation, speak out the stroke order while writing, or describe information related to the dictation content (e.g., ask how a word is written). The electronic device collects these sounds as the voice corpora generated by the user in the dictation mode, and obtains the user's current intention (the intention at the current state and time point) from them. For example, after the electronic device reads "world", the user gives the specific instruction "reread", and the electronic device parses the user's intention as re-reading "world". For another example, after the electronic device reads "world", the user's voice "how to write the word" is detected, and the intention parsed by the electronic device is likewise to re-read "world".
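The two example utterances both resolve to the same intention. As a toy illustration only, a keyword-rule parser can map such utterances to intents; a real device would use a trained natural-language-understanding model, and the rules below are assumptions:

```python
# Toy rule-based intent parsing for the utterances in the example above.
# A real device would use a trained natural-language-understanding model;
# these keyword rules are illustrative assumptions only.
INTENT_RULES = [
    ("reread", ["reread", "read again", "how to write"]),
    ("next",   ["next"]),
]

def parse_intent(utterance):
    text = utterance.lower()
    for intent, keywords in INTENT_RULES:
        if any(k in text for k in keywords):
            return intent
    return "unknown"
```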
Optionally, the electronic device may enter the dictation state when the user starts the dictation mode. Specifically, the electronic device monitors whether voice information used by the user to start the dictation mode is received, and starts the dictation mode when such voice information is detected. In this embodiment, the electronic device has a voice control function, so the user can start the dictation mode by voice, which improves the intelligence of the device, frees both hands, and improves the user experience.
102. The electronic device detects the dictation handwriting of the user.
Wherein steps 101 and 102 are performed in real time in a dictation mode.
When the dictation mode is started, the electronic device turns on its front-facing camera, captures images of the user writing, and analyzes the images to obtain the dictation handwriting.
103. And the electronic equipment acquires the target dictation audio matched with the current intention and reads the target dictation audio according to the corresponding relation between the reported dictation audio and the dictation handwriting.
It can be understood that, after each new dictation audio is read aloud, the electronic device records it among the reported dictation audio, so that the reported and unreported dictation audio can be checked at any time. The corresponding relation between the reported dictation audio and the dictation handwriting reflects whether the user has written down everything that was read aloud, and which reported dictation audio has not yet been written.
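This bookkeeping amounts to two records and a difference between them. A minimal sketch, with class and method names that are illustrative rather than taken from the patent:

```python
class DictationLedger:
    """Track which read-aloud items have matching handwriting.

    Class and method names are illustrative, not taken from the patent.
    """

    def __init__(self):
        self.reported = []    # audio ids in read-aloud order
        self.written = set()  # audio ids recognised from the handwriting

    def record_read(self, audio_id):
        if audio_id not in self.reported:
            self.reported.append(audio_id)

    def record_written(self, audio_id):
        self.written.add(audio_id)

    def unwritten(self):
        # Reported-but-unwritten items: candidates for a re-read.
        return [a for a in self.reported if a not in self.written]
```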
As an optional implementation manner, the electronic device obtains the target dictation audio matched with the current intention according to the corresponding relation between the reported dictation audio and the dictation handwriting, and obtains the dictation progress sent by an opponent end (another electronic device) competing with the user. It then judges whether the target dictation audio belongs to the reported dictation audio. If it does, the device further judges whether the progress obtained from the opponent end indicates that the competitor has requested the next (unreported) dictation audio. If so, the device sends the opponent end a notification message indicating that the next dictation audio should be postponed; the notification makes the opponent end wait and adds the corresponding reward points there (points are awarded to whoever finishes first), and the device obtains the target dictation audio and reads it aloud. If the opponent's progress indicates that the competitor has likewise postponed the next dictation audio, the device simply obtains the target dictation audio and reads it. If the target dictation audio does not belong to the reported dictation audio (that is, it is the next dictation audio), the device checks whether a notification message from the opponent end has been received. If a received notification indicates that the next dictation audio should be postponed, the device waits and adds the corresponding reward points (awarded for finishing first) until the opponent's progress indicates the next dictation audio, and then obtains and reads it (the two devices read the next dictation audio synchronously). If no notification is detected and the opponent's progress indicates the next dictation audio, the device obtains and reads it. In this embodiment, the user can hold dictation competitions with other users: each uses an electronic device, the same dictation audio is read synchronously on both devices, and the user who finishes writing first obtains the corresponding reward points, while the other user's points are not deducted. The competition mode thus raises the user's interest in dictation and improves learning efficiency.
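Stripped of networking, the race logic above reduces to a small decision on two flags. This is a simplified rendering of the embodiment's branches; all names, message strings and the shape of the return value are assumptions, and point totals and message transport are abstracted away:

```python
def race_action(local_wants_repeat, peer_wants_next):
    """Simplified decision table for the dictation race above.

    local_wants_repeat -- the local target audio was already read aloud
    peer_wants_next    -- the opponent's progress asks for the next,
                          not-yet-reported dictation audio
    Returns (local_action, message_to_peer, local_gets_bonus).
    """
    if local_wants_repeat:
        if peer_wants_next:
            # Ask the faster opponent to hold; they finished first and
            # earn the bonus on their side while we reread our target.
            return ("reread_target", "hold_next", False)
        return ("reread_target", None, False)
    if peer_wants_next:
        # Both sides ready: the next audio is read synchronously.
        return ("read_next", None, False)
    # We finished first: wait for the opponent and collect the bonus.
    return ("wait_for_peer", None, True)
```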
By implementing this embodiment, the dictation instruction given by the user's voice can be obtained by parsing during dictation, and the target dictation content mentioned in the instruction can be identified from the corresponding relation between the reported dictation audio and the dictation handwriting, so that the matching dictation audio can be read aloud. This overcomes the defect that dictation instructions must be set in advance in the conventional technology, realizes flexible control of dictation by the user, helps improve the intelligence of the electronic device in dictation, and improves the interactivity between the user and the electronic device.
Example two
Referring to fig. 2, fig. 2 is a schematic flow chart of a dictation intelligent control method based on user intention according to another embodiment of the present invention; as shown in fig. 2, the dictation intelligent control method based on user intention may include:
201. the electronic equipment collects voice corpora generated by a user in a dictation mode, analyzes the voice corpora to obtain the current intention of the user, and the current intention indicates dictation indication of the user.
202. The electronic device detects the dictation handwriting of the user.
Wherein steps 201 and 202 are performed in real time in a dictation mode.
203. And the electronic equipment acquires the target dictation audio matched with the current intention and reads the target dictation audio according to the corresponding relation between the reported dictation audio and the dictation handwriting.
As an optional implementation, the electronic device compares the reported dictation audios with the dictation handwriting one by one, records any reported dictation audio for which no corresponding handwriting exists, takes the recorded audio as the target dictation audio, and acquires it for reading. In this embodiment, the electronic device re-reads the dictation content that the user has not yet written. Further, if the comparison shows that the number of reported dictation audios equals the number of handwriting items, the target dictation audio that the user has asked to have read is determined from the current intent and read aloud; this can also be understood as the case in which the dictation instruction indicated by the current intent is to read the next dictation audio.
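The comparison in step 203 can be sketched as follows. This is a minimal illustration only; the data structures, field names, and text-based matching are assumptions, not part of the original disclosure:

```python
# Hypothetical sketch of step 203: find reported dictation audios that have
# no corresponding dictation handwriting, and take them as targets to re-read.
def select_target_audios(reported_audios, handwriting_items):
    # Each audio is assumed to carry its expected text; each handwriting item
    # carries the recognized text the user actually wrote.
    written = {h["text"] for h in handwriting_items}
    # Audios whose expected text was never written are re-read first.
    missing = [a for a in reported_audios if a["text"] not in written]
    if missing:
        return missing
    # Counts match: fall back to the audio named by the user's current intent.
    return []

reported = [{"text": "world"}, {"text": "ripe"}]
written = [{"text": "world"}]
print(select_target_audios(reported, written))  # [{'text': 'ripe'}]
```

When every reported audio has matching handwriting, the empty result signals that the current intent (e.g. "read the next word") should decide the target instead.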
204. The electronic device judges whether the target dictation audio belongs to the reported dictation audio; if so, go to step 205; if not, the flow ends.
As an optional implementation, when the target dictation audio belongs to the reported dictation audio, the number of times it has been read aloud is acquired, and it is judged whether this read count exceeds a preset number. If it does, go to step 205; otherwise, update the read count of the target dictation audio. In this embodiment, if the target dictation audio has already been reported, i.e. the user has requested a repeat, and the read count exceeds the preset number (for example, 3), the user has evidently failed to catch the target dictation audio, and the device may go on to judge whether the content is a polyphone. If the count does not exceed the preset number, polyphone detection is skipped for now and only the count is updated: for example, a read count of 2 is updated to 3, i.e. 1 is added to the previous count to obtain the new count.
Furthermore, the electronic device is provided with a counter dedicated to updating the read count. Before judging whether the read count exceeds the preset number, the device reads the current count value from the counter; this value is the read count, and the judgment step is then executed. Accordingly, updating the read count of the target dictation audio specifically means that the electronic device triggers the counter to update its count.
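The counter and threshold logic of step 204 can be sketched as below. The threshold value and the class interface are assumptions for illustration; the embodiment only specifies "a preset number, for example, 3":

```python
# Hypothetical sketch of the repeat counter: only when a target audio has
# already been read the preset number of times does the device proceed to
# polyphone analysis (step 205); otherwise it just updates the count.
PRESET_LIMIT = 3  # assumed threshold taken from the embodiment's example

class ReadCounter:
    def __init__(self):
        self.counts = {}  # audio id -> times reported

    def should_analyze(self, audio_id):
        count = self.counts.get(audio_id, 0)
        if count >= PRESET_LIMIT:          # read count exceeds preset -> 205
            return True
        self.counts[audio_id] = count + 1  # otherwise: count update only
        return False

c = ReadCounter()
results = [c.should_analyze("ripe") for _ in range(5)]
print(results)  # [False, False, False, True, True]
```

The first three repeats only increment the dedicated counter; from the fourth request onward the device moves on to polyphone detection.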
205. The electronic device judges whether the dictation content corresponding to the target dictation audio is a polyphone; if yes, go to step 206; if not, the flow ends.
Here, a polyphone is dictation content with more than one pronunciation, such as the character translated as 'ripe' (熟).
206. The electronic equipment acquires word analysis information of the dictation content corresponding to the target dictation audio.
207. The electronic device converts the word analysis information into analysis speech and outputs it.
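Steps 205 to 207 can be sketched together as follows. The polyphone lookup table and the print-based "text-to-speech" stand-in are assumptions; a real device would consult a pronunciation lexicon and a TTS engine:

```python
# Minimal sketch of steps 205-207: if the dictation content is a polyphone,
# fetch its word analysis information and output it as analysis speech.
POLYPHONES = {  # hypothetical lookup table, not from the original disclosure
    "熟": "read shú in '成熟' (mature); read shóu colloquially in '熟了' (ripe)",
}

def analyze_and_speak(content, tts=print):
    analysis = POLYPHONES.get(content)
    if analysis is None:
        return None   # not a polyphone: the flow ends (step 205, "no")
    tts(analysis)     # step 207: output the analysis as speech
    return analysis

analyze_and_speak("熟")
```

Passing a real TTS callback in place of `print` would turn the same flow into audible output.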
As an alternative embodiment, before performing steps 201 and 202 the electronic device further performs the following step: after the dictation mode is started, dictation condition description information is output. This information indicates the type of dictation purpose, which is dictation practice, a dictation assignment, or a dictation test. After the dictation mode is started, the user may be asked to input the dictation condition description information; the user's dictation purpose can then be learned from it, so as to determine whether prompts may be given during dictation.
Further, the electronic device may further perform the steps of:
when the read count exceeds the preset number, acquiring the current positioning information and the dictation condition description information input by the user when the dictation started;
and when the current positioning information indicates that the user is in a non-school location and the dictation purpose indicated by the description information is dictation practice, executing the step of judging whether the dictation content corresponding to the target dictation audio is a polyphone.
According to this embodiment, whether a dictation prompt may be given to the user is further determined from the user's location and the user's own description of the dictation, so that appropriate prompts can be given and learning efficiency improved.
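The gating rule just described can be sketched in a few lines. The string labels for locations and purposes are illustrative assumptions:

```python
# Minimal sketch of the prompt-gating rule: a polyphone prompt is only
# considered when the user is outside school and the stated dictation
# purpose is practice (never during an assignment or a test).
def may_prompt(location, purpose):
    return location != "school" and purpose == "practice"

print(may_prompt("home", "practice"))    # True
print(may_prompt("school", "practice"))  # False
print(may_prompt("home", "test"))        # False
```

In the described flow, this check runs only after the read count has already exceeded the preset number.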
It can be seen that with the above embodiment, after the target dictation audio is read aloud, if it is among the reported dictation audios, the device further judges whether the corresponding dictation content is a polyphone; if it is, word analysis information for that content is acquired and output to the user, helping the user understand the dictation content as quickly as possible and improving dictation efficiency.
Example three
Referring to fig. 3, fig. 3 is a schematic flow chart of a dictation intelligent control method based on user intention according to another embodiment of the present invention; as shown in fig. 3, the dictation intelligent control method based on user intention may include:
301. After starting the dictation mode, the electronic device receives a connection request sent by the online teacher end and establishes a connection.
It should be noted that in this embodiment of the present invention, an online teacher supervises the user until the dictation is completed.
302. The electronic equipment receives dictation data pushed by an online teacher end, wherein the dictation data comprises a plurality of dictation audios.
The online teacher end pushes the dictation data to the electronic device; after receiving it, the electronic device outputs a dictation start prompt, and after a preset time acquires a dictation audio from the dictation data and reads it aloud.
303. The electronic device collects voice corpora generated by the user in dictation mode and analyzes them to obtain the user's current intent; the current intent indicates the user's dictation instruction.
304. The electronic device detects the dictation handwriting of the user.
Wherein steps 303 and 304 are performed in real time in a dictation mode.
305. According to the correspondence between the reported dictation audio and the dictation handwriting, the electronic device acquires from the dictation data the target dictation audio matching the current intent and reads it aloud.
306. The electronic device obtains the user's dictation progress and dictation state description according to the current intent, the target dictation audio, and the reported dictation audio.
307. The electronic device sends the current intent, the user's dictation progress, and the dictation state description to the online teacher end, so that the teacher end can complete the description of the user's dictation situation on the teaching system accordingly.
The dictation state description describes the user's performance on a given dictation audio. For example, the description "'world' — read twice" means that the dictation content 'world' was read aloud twice, which indirectly reflects the user's mastery of that content.
Completing the description of the user's dictation situation on the teaching system lets the teacher gain a comprehensive view of the user's learning condition and provide targeted tutoring afterwards, which helps improve the user's results.
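The report sent to the teacher end in steps 306 and 307 can be sketched as a simple serialized payload. The field names and JSON format are assumptions for illustration; the disclosure does not specify a wire format:

```python
import json

# Hypothetical sketch of steps 306-307: package the current intent, the
# dictation progress, and the per-word dictation state for the teacher end.
def build_report(intent, reported_count, total, states):
    return json.dumps({
        "intent": intent,                         # user's current intent
        "progress": f"{reported_count}/{total}",  # dictation progress
        "states": states,   # per-word state, e.g. {"world": "read twice"}
    }, ensure_ascii=False)

print(build_report("repeat last word", 5, 10, {"world": "read twice"}))
```

On the teacher end, such a payload would be parsed and written into the teaching system's record of the user's dictation situation.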
It can be seen that by implementing the above embodiment, the online teacher end can supervise the user's dictation practice, dictation assignments, dictation tests, and so on online, which improves the intelligent interactivity of teaching; the user's dictation situation is recorded on the teaching system in real time, so that the teacher can teach in a targeted manner, which helps improve the user's results.
Example four
Referring to fig. 4, fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure; as shown in fig. 4, the electronic device may include:
a voice collection unit 410, configured to collect voice corpora generated by the user in dictation mode and analyze them to obtain the user's current intent, the current intent indicating the user's dictation instruction;
a handwriting detection unit 420, configured to simultaneously detect the user's dictation handwriting in dictation mode;
and a dictation reading unit 430, configured to acquire, according to the correspondence between the reported dictation audio and the dictation handwriting, the target dictation audio matching the current intent and read it aloud.
As an optional implementation, the electronic device further includes a start-up unit configured to monitor whether voice information used by the user to start the dictation mode is received, and to start the dictation mode when it is. In this embodiment the electronic device has a voice control function, so the user can start the dictation mode by voice, which improves the intelligence of the device, frees both hands, and improves the user experience.
As an optional implementation, the dictation reading unit 430 is specifically configured to acquire the target dictation audio matching the current intent according to the correspondence between the reported dictation audio and the dictation handwriting, and to acquire the dictation progress sent by an opponent terminal (another electronic device) competing with the user. It judges whether the target dictation audio belongs to the reported dictation audio. If it does, it further judges whether the dictation progress acquired from the opponent terminal indicates that the competitor wants to acquire the next (not yet reported) dictation audio. If so, it sends the opponent terminal a notification message indicating that acquisition of the next dictation audio is to be suspended; the notification causes the opponent terminal to wait and add the corresponding reward points (points are awarded for finishing first), and the unit then acquires the target dictation audio and reads it. If instead the opponent's progress indicates that the competitor has also suspended acquiring the next audio, the unit simply acquires the target dictation audio and reads it. If the target dictation audio does not belong to the reported dictation audio (i.e. it is the next dictation audio), the unit detects whether a notification message from the opponent terminal has been received. If a received notification indicates that acquisition of the next audio is to be suspended, the unit waits and adds the corresponding reward points (awarded for finishing first) until the opponent's progress indicates acquiring the next audio, then acquires and reads it (the electronic device and the opponent terminal read the next dictation audio synchronously). If no notification has been detected and the opponent's progress indicates acquiring the next audio, the unit acquires the next dictation audio and reads it. In this embodiment, the user can hold dictation competitions with other competitors: each uses his or her own electronic device, the same dictation audio is read aloud synchronously on both devices, the user who finishes writing first obtains the corresponding reward points, and the other user is not penalized. The competition format thus increases the user's interest in dictation and improves learning efficiency.
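The two-device coordination rule above can be condensed into a small decision function. This is a hedged sketch under stated assumptions: the message names and return labels are illustrative, and the real devices would exchange these decisions over a network link:

```python
# Hypothetical sketch of the competition rule: decide what this device does
# given whether its target is a re-read, whether the opponent wants the next
# audio, and whether a "suspend" notice has been received from the opponent.
def decide(target_is_reread, opponent_wants_next, wait_notice_received):
    # Case 1: this device must re-read an already-reported audio.
    if target_is_reread:
        if opponent_wants_next:
            # Ask the opponent to pause; it earns points for finishing first.
            return ("send_wait_notice", "reread_target")
        return (None, "reread_target")
    # Case 2: this device is ready for the next (unreported) audio.
    if wait_notice_received:
        # Opponent is still re-reading: wait, earning points for being first.
        return (None, "wait_then_sync_next")
    return (None, "read_next_in_sync")

print(decide(True, True, False))   # ('send_wait_notice', 'reread_target')
print(decide(False, False, True))  # (None, 'wait_then_sync_next')
```

Both devices applying this rule keep the same dictation audio playing synchronously while still allowing either side to request repeats.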
By implementing this electronic device, the dictation instruction given by the user's voice can be obtained by analysis during dictation, and the target dictation content referred to by the instruction can be further identified from the correspondence between the reported dictation audio and the dictation handwriting, so that the dictation audio matching the target dictation content can be read aloud. This overcomes the drawback of conventional techniques, in which dictation instructions must be set in advance, enables the user to control the dictation flexibly, improves the intelligence of the electronic device with respect to dictation, and enhances the interactivity between the user and the electronic device.
Example five
Referring to fig. 5, fig. 5 is a schematic structural diagram of an electronic device according to another embodiment of the disclosure; the electronic device shown in fig. 5 is optimized based on the electronic device shown in fig. 4, and the electronic device shown in fig. 5 further includes:
a polyphone detection unit 510, configured to judge, after the dictation reading unit 430 has acquired and read the target dictation audio matching the current intent according to the correspondence between the reported dictation audio and the dictation handwriting, whether the target dictation audio belongs to the reported dictation audio; when it does, to judge whether the corresponding dictation content is a polyphone; when it is, to acquire word analysis information for that content; and to convert the word analysis information into analysis speech and output it.
As an optional implementation, the dictation reading unit 430 may specifically compare the reported dictation audios with the dictation handwriting one by one, record any reported dictation audio for which no corresponding handwriting exists, take the recorded audio as the target dictation audio, and acquire it for reading. In this embodiment, the electronic device re-reads the dictation content that the user has not yet written. Further, if the comparison shows that the number of reported dictation audios equals the number of handwriting items, the target dictation audio that the user has asked to have read is determined from the current intent and read aloud; this can also be understood as the case in which the dictation instruction indicated by the current intent is to read the next dictation audio.
Further, in the electronic device shown in fig. 5, the electronic device further includes:
a read-count detection unit 520, configured to acquire the read count of the target dictation audio when it belongs to the reported dictation audio; to judge whether the read count exceeds the preset number; when it does, to trigger the polyphone detection unit to execute the step of judging whether the corresponding dictation content is a polyphone; and when it does not, to update the read count of the target dictation audio.
an output unit 530, configured to output dictation condition description information after the electronic device starts the dictation mode and before the voice collection unit 410 collects the user's voice corpora and the handwriting detection unit 420 detects the user's dictation handwriting; the dictation condition description information indicates the type of dictation purpose, which is dictation practice, a dictation assignment, or a dictation test;
a dictation state detection unit 540, configured to obtain current positioning information and obtain dictation condition description information input by a user when the number of times of reading exceeds a preset number, when the number of times of reading detection unit 520 determines that the number of times of reading exceeds the preset number of times; and when the current positioning information indicates that the user is located in a non-school place and the dictation purpose type indicated by the dictation condition description information is the dictation exercise, triggering the polyphonic detection unit to execute the step of judging whether the dictation content corresponding to the target dictation audio is polyphonic.
Further, the electronic device is provided with a counter dedicated to updating the read count. Before judging whether the read count exceeds the preset number, the read-count detection unit 520 reads the current count value from the counter; this value is the read count, and the judgment step is then executed. Accordingly, updating the read count of the target dictation audio specifically means that the read-count detection unit 520 triggers the counter to update its count.
Example six
Referring to fig. 6, fig. 6 is a schematic structural diagram of an electronic device according to another embodiment of the disclosure; the electronic device shown in fig. 6 is optimized based on the electronic device shown in fig. 4, and the electronic device shown in fig. 6 further includes: a connection unit 610 and a feedback unit 620.
The connection unit 610 is configured, after the electronic device starts the dictation mode and before the voice collection unit 410 collects the user's voice corpora and the handwriting detection unit 420 detects the user's dictation handwriting, to receive a connection request sent by the online teacher end and establish a connection, and to receive the dictation data pushed by the online teacher end, the dictation data comprising a plurality of dictation audios.
the method for acquiring and reading the target dictation audio matched with the current intention according to the corresponding relationship between the dictated dictation audio and the dictation handwriting by the dictation reading unit 430 specifically comprises the following steps:
the dictation reading unit 430 is configured to obtain a target dictation audio matched with the current intention from the dictation data according to a corresponding relationship between the recorded dictation audio and the dictation handwriting, and read the target dictation audio;
the feedback unit 620 is configured to, after the dictation reading unit 430 obtains a target dictation audio matched with the current intention and reads the target dictation audio according to a corresponding relationship between the reported dictation audio and the dictation handwriting, obtain a dictation progress and a dictation state description of the user according to the current intention, the target dictation audio and the reported dictation audio; and sending the current intention, the dictation progress of the user and the dictation state description to the online teacher end so that the online teacher end completes dictation condition description of the user on a teaching system according to the current intention, the dictation progress of the user and the dictation state description.
By implementing the electronic device shown in fig. 6, the online teacher end can supervise the user's dictation practice, dictation assignments, dictation tests, and so on online, which improves the intelligent interactivity of teaching; the user's dictation situation is recorded on the teaching system in real time, so that the teacher can teach in a targeted manner, which helps improve the user's results.
Example seven
Referring to fig. 7, fig. 7 is a schematic structural diagram of an electronic device according to another embodiment of the disclosure. The electronic device shown in fig. 7 may include at least one processor 710 (such as a CPU), a memory 720, and a communication bus 730 used to establish communication connections between these components. The memory 720 may be a high-speed RAM memory or a non-volatile memory, such as at least one disk memory, and may optionally be at least one storage device located remotely from the processor 710. The processor 710 may be combined with the electronic device described in figs. 4 to 6; a set of program codes is stored in the memory 720, and the processor 710 calls the program codes stored in the memory 720 to perform the following operations:
collecting voice corpora generated by a user in a dictation mode, analyzing the voice corpora to obtain the current intention of the user, and detecting the dictation handwriting of the user; the current intent indicates a dictation indication of a user; and acquiring a target dictation audio matched with the current intention and reading according to the corresponding relation between the reported dictation audio and the dictation handwriting.
As an alternative embodiment, the processor 710 is further configured to perform the following operations:
after acquiring a target dictation audio matched with the current intention and reading according to the corresponding relation between the reported dictation audio and the dictation handwriting, judging whether the target dictation audio belongs to the reported reading dictation audio; when the target dictation audio belongs to the reported dictation audio, judging whether the dictation content corresponding to the target dictation audio is polyphone; when the dictation content corresponding to the target dictation audio is polyphone, acquiring word analysis information of the dictation content corresponding to the target dictation audio; and converting the word analysis information into analysis voice and outputting the analysis voice.
As an alternative embodiment, the processor 710 is further configured to perform the following operations:
when the target dictation audio belongs to the reported dictation audio, acquiring the reported times of the target dictation audio; judging whether the reported times exceed preset times; and when the reported times exceed the preset times, executing the step of judging whether the dictation content corresponding to the target dictation audio is polyphone; and when the number of reported times does not exceed the preset number, counting and updating the number of reported times of the target dictation audio.
As an alternative embodiment, the processor 710 is further configured to perform the following operations:
collecting voice corpora generated by a user in a dictation mode, analyzing the voice corpora to obtain the current intention of the user, and outputting dictation condition description information before detecting the dictation handwriting of the user and after starting the dictation mode, wherein the dictation condition description information is used for indicating the type of a dictation purpose, and the type of the dictation purpose is dictation exercise or dictation operation or a dictation test;
when the reported number of times exceeds the preset number of times, acquiring current positioning information and acquiring dictation condition description information input by a user at the beginning of dictation; and when the current positioning information indicates that the user is located in a non-school place and the dictation purpose type indicated by the dictation condition description information is the dictation exercise, executing the step of judging whether the dictation content corresponding to the target dictation audio is polyphone.
As an alternative embodiment, the processor 710 is further configured to perform the following operations:
collecting voice corpora generated by a user in a dictation mode, analyzing the voice corpora to obtain the current intention of the user, and receiving a connection request sent by an online teacher end to establish connection before detecting the dictation handwriting of the user and after starting the dictation mode; receiving dictation data pushed by the online teacher end, wherein the dictation data comprises a plurality of dictation audios;
acquiring a target dictation audio matched with the current intention from the dictation data and reading according to the corresponding relation between the reported dictation audio and the dictation handwriting;
after a target dictation audio matched with the current intention is obtained and read according to the corresponding relation between the reported dictation audio and the dictation handwriting, the dictation progress and the dictation state explanation of the user are obtained according to the current intention, the target dictation audio and the reported reading dictation audio; and sending the current intention, the dictation progress of the user and the dictation state description to the online teacher end so that the online teacher end completes dictation condition description of the user on a teaching system according to the current intention, the dictation progress of the user and the dictation state description.
The embodiment of the invention also discloses a computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute the dictation intelligent control method based on user intention disclosed in figs. 1 to 3.
An embodiment of the present invention further discloses a computer program product, which, when running on a computer, causes the computer to execute part or all of the steps of any one of the methods disclosed in fig. 1 to 3.
An embodiment of the present invention further discloses an application publishing platform, where the application publishing platform is configured to publish a computer program product, where when the computer program product runs on a computer, the computer is enabled to execute part or all of the steps of any one of the methods disclosed in fig. 1 to fig. 3.
It will be understood by those skilled in the art that all or part of the steps in the methods of the embodiments described above may be implemented by a program instructing the relevant hardware. The program may be stored in a computer-readable storage medium, where the storage medium includes Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disc memory, magnetic disk memory, tape memory, or any other computer-readable medium that can be used to carry or store data.
The dictation intelligent control method based on user intention and the electronic device disclosed in the embodiments of the present invention have been described in detail above. Specific examples have been used herein to explain the principles and implementations of the invention, and the description of the embodiments is only intended to help in understanding the method and its core idea. Meanwhile, those skilled in the art may make changes to the specific implementations and the application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

1. A dictation intelligent control method based on user intention is characterized by comprising the following steps:
collecting voice corpora generated by a user in a dictation mode, analyzing the voice corpora to obtain the current intention of the user, and detecting the dictation handwriting of the user; the current intent indicates a dictation indication of a user;
acquiring a target dictation audio matched with the current intention and reading according to the corresponding relation between the reported dictation audio and the dictation handwriting;
the acquiring and reading a target dictation audio matched with the current intention according to the corresponding relation between the reported dictation audio and the dictation handwriting comprises the following steps:
acquiring a target dictation audio matched with the current intention and acquiring a dictation progress sent by an opponent terminal competing with a user according to the corresponding relation between the reported dictation audio and the dictation handwriting;
judging whether the target dictation audio belongs to reported dictation audio or not;
if the target dictation audio belongs to the reported dictation audio, judging whether the dictation progress indicates that the opponent terminal is to acquire the next dictation audio;
if so, sending the opponent terminal a notification message indicating that acquisition of the next dictation audio is to be suspended, and acquiring the target dictation audio and reading it, wherein the notification message causes the opponent terminal to wait and add the corresponding reward points;
if not, that is, when the dictation progress indicates that the opponent terminal has suspended acquiring the next dictation audio, acquiring the target dictation audio and reading it.
2. The method according to claim 1, wherein after the target dictation audio matched with the current intention is acquired and read aloud according to the correspondence between the reported dictation audio and the dictation handwriting, the method further comprises:
judging whether the target dictation audio belongs to the reported dictation audio;
when the target dictation audio belongs to the reported dictation audio, judging whether the dictation content corresponding to the target dictation audio is a polyphonic word;
when the dictation content corresponding to the target dictation audio is a polyphonic word, acquiring word analysis information for the dictation content corresponding to the target dictation audio;
and converting the word analysis information into analysis speech and outputting the analysis speech.
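A toy sketch of the polyphonic-word follow-up in claim 2, assuming a lookup table for the word analysis text and a caller-supplied text-to-speech callback; both the table and the `speak` callback are illustrative stand-ins, not the patent's data or API.

```python
# Toy analysis table mapping a polyphonic word key to its analysis text.
# A real system would hold pronunciations, meanings, and example phrases.
POLYPHONE_ANALYSIS = {
    "hai2/huan2": "first reading: 'still'; second reading: 'to return'",
}

def explain_if_polyphone(word_key, audio_id, reported_audio, speak):
    """After a repeated read-aloud, output spoken analysis for polyphones.

    word_key: key identifying the dictation content of the target audio
    audio_id: id of the target dictation audio
    reported_audio: set of audio ids already read aloud
    speak: callback that converts analysis text to speech and outputs it
    Returns True when analysis speech was produced.
    """
    if audio_id not in reported_audio:
        return False           # first reading: no extra explanation yet
    analysis = POLYPHONE_ANALYSIS.get(word_key)
    if analysis is None:
        return False           # not a polyphonic word
    speak(analysis)            # convert the word analysis information
    return True                # into analysis speech and output it
```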
3. The method according to claim 2, further comprising:
when the target dictation audio belongs to the reported dictation audio, acquiring the number of times the target dictation audio has been reported;
judging whether the number of reported times exceeds a preset number;
when the number of reported times exceeds the preset number, executing the step of judging whether the dictation content corresponding to the target dictation audio is a polyphonic word;
and when the number of reported times does not exceed the preset number, counting and updating the number of reported times of the target dictation audio.
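The repeat-count gate of claim 3 might look like this in outline; the class name and the default threshold are illustrative assumptions.

```python
from collections import Counter

class RepeatGate:
    """Tracks how many times each dictation audio has been read aloud and
    decides when the polyphonic-word check should fire (claim-3 sketch)."""

    def __init__(self, preset_times=2):
        self.preset_times = preset_times
        self.reported_times = Counter()

    def on_reported(self, audio_id):
        """Count and update the reported times for this audio; return True
        once the count exceeds the preset number, i.e. when the step of
        judging whether the content is a polyphonic word should run."""
        self.reported_times[audio_id] += 1
        return self.reported_times[audio_id] > self.preset_times
```

With `preset_times=2`, the explanation is only triggered from the third read-aloud of the same audio onward.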
4. The method according to claim 3, wherein before the voice corpora generated by the user in the dictation mode are collected and analyzed to obtain the current intention of the user and the dictation handwriting of the user is detected, the method further comprises:
after the dictation mode is started, outputting dictation condition description information, wherein the dictation condition description information indicates a dictation purpose type, the dictation purpose type being a dictation exercise, dictation homework, or a dictation examination;
the method further comprising:
when the number of reported times exceeds the preset number, acquiring current positioning information and acquiring the dictation condition description information input by the user when the dictation started;
and when the current positioning information indicates that the user is located in a non-school place and the dictation purpose type indicated by the dictation condition description information is a dictation exercise, executing the step of judging whether the dictation content corresponding to the target dictation audio is a polyphonic word.
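The context gate of claim 4 combines the repeat count, the positioning information, and the dictation purpose type. A minimal sketch, in which the place names and the purpose-type strings are assumed values, not the patent's encoding:

```python
# Places treated as "school" for the purposes of the gate (assumed data).
SCHOOL_PLACES = {"school", "classroom"}

def allow_polyphone_check(reported_times, preset_times, location, purpose_type):
    """The extra polyphonic-word explanation is offered only when the word
    has been repeated more than the preset number of times, the user is in
    a non-school place, and the session is a dictation exercise (not
    homework or an examination)."""
    return (reported_times > preset_times
            and location not in SCHOOL_PLACES
            and purpose_type == "exercise")
```

The intent of the gate is that hints stay disabled during graded work or in-class dictation, where they would defeat the purpose of the test.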
5. The method according to claim 1, wherein before the voice corpora generated by the user in the dictation mode are collected and analyzed to obtain the current intention of the user and the dictation handwriting of the user is detected, the method further comprises:
after the dictation mode is started, receiving a connection request sent by an online teacher end and establishing a connection;
receiving dictation data pushed by the online teacher end, wherein the dictation data comprises a plurality of dictation audios;
wherein the acquiring of a target dictation audio matched with the current intention and the reading of it aloud according to the correspondence between the reported dictation audio and the dictation handwriting comprises:
acquiring a target dictation audio matched with the current intention from the dictation data and reading it aloud according to the correspondence between the reported dictation audio and the dictation handwriting;
and wherein after the target dictation audio matched with the current intention is acquired and read aloud according to the correspondence between the reported dictation audio and the dictation handwriting, the method further comprises:
obtaining a dictation progress and a dictation state description of the user according to the current intention, the target dictation audio, and the reported dictation audio;
and sending the current intention, the dictation progress of the user, and the dictation state description to the online teacher end, so that the online teacher end completes a dictation condition description of the user on a teaching system according to the current intention, the dictation progress of the user, and the dictation state description.
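The feedback message of claim 5 bundles the current intention, the dictation progress, and a dictation state description for the online teacher end. A sketch of such a payload, where the JSON framing and every field name are assumptions rather than the patent's protocol:

```python
import json

def build_teacher_report(current_intention, reported_audio, total_audios):
    """Assemble the progress/state message sent to the online teacher end
    (claim-5 sketch; field names and JSON framing are illustrative)."""
    done = len(reported_audio)
    return json.dumps({
        "current_intention": current_intention,
        "dictation_progress": f"{done}/{total_audios}",   # e.g. "3/10"
        "dictation_state": "finished" if done == total_audios else "in_progress",
    })
```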
6. An electronic device, comprising:
a voice collecting unit, configured to collect voice corpora generated by a user in a dictation mode and analyze the voice corpora to obtain a current intention of the user, wherein the current intention indicates a dictation instruction of the user;
a handwriting detection unit, configured to simultaneously detect the dictation handwriting of the user in the dictation mode; and
a dictation reading unit, configured to acquire a target dictation audio matched with the current intention and read it aloud according to the correspondence between the reported dictation audio and the dictation handwriting;
wherein the manner in which the dictation reading unit acquires the target dictation audio matched with the current intention and reads it aloud according to the correspondence between the reported dictation audio and the dictation handwriting is specifically:
acquiring a target dictation audio matched with the current intention, and acquiring a dictation progress sent by an opponent terminal competing with the user, according to the correspondence between the reported dictation audio and the dictation handwriting; judging whether the target dictation audio belongs to the reported dictation audio; if the target dictation audio belongs to the reported dictation audio, judging whether the dictation progress indicates that the opponent terminal is about to acquire a next dictation audio; if the judgment result is yes, sending to the opponent terminal a notification message instructing it to suspend acquiring the next dictation audio, and acquiring the target dictation audio and reading it aloud, wherein the notification message causes the opponent terminal to wait and to accrue corresponding reward points; and if the judgment result is no, that is, if the dictation progress indicates that the opponent terminal has suspended acquiring the next dictation audio, acquiring the target dictation audio and reading it aloud.
7. The electronic device according to claim 6, further comprising:
a polyphone detection unit, configured to: after the dictation reading unit acquires the target dictation audio matched with the current intention and reads it aloud according to the correspondence between the reported dictation audio and the dictation handwriting, judge whether the target dictation audio belongs to the reported dictation audio; when the target dictation audio belongs to the reported dictation audio, judge whether the dictation content corresponding to the target dictation audio is a polyphonic word; when the dictation content corresponding to the target dictation audio is a polyphonic word, acquire word analysis information for the dictation content corresponding to the target dictation audio; and convert the word analysis information into analysis speech and output the analysis speech.
8. The electronic device according to claim 7, further comprising:
a reported-times detection unit, configured to: when the target dictation audio belongs to the reported dictation audio, acquire the number of times the target dictation audio has been reported; judge whether the number of reported times exceeds a preset number; when the number of reported times exceeds the preset number, trigger the polyphone detection unit to execute the step of judging whether the dictation content corresponding to the target dictation audio is a polyphonic word; and when the number of reported times does not exceed the preset number, count and update the number of reported times of the target dictation audio.
9. The electronic device according to claim 8, further comprising:
an output unit, configured to output dictation condition description information after the electronic device starts the dictation mode and before the voice collecting unit collects the voice corpora generated by the user in the dictation mode and the handwriting detection unit simultaneously detects the dictation handwriting of the user in the dictation mode, wherein the dictation condition description information indicates a dictation purpose type, the dictation purpose type being a dictation exercise, dictation homework, or a dictation examination; and
a dictation state detection unit, configured to: when the reported-times detection unit judges that the number of reported times exceeds the preset number, acquire current positioning information and acquire the dictation condition description information input by the user; and when the current positioning information indicates that the user is located in a non-school place and the dictation purpose type indicated by the dictation condition description information is a dictation exercise, trigger the polyphone detection unit to execute the step of judging whether the dictation content corresponding to the target dictation audio is a polyphonic word.
10. The electronic device according to claim 6, further comprising a connection unit and a feedback unit, wherein:
the connection unit is configured to: after the electronic device starts the dictation mode and before the voice collecting unit collects the voice corpora generated by the user in the dictation mode and the handwriting detection unit simultaneously detects the dictation handwriting of the user in the dictation mode, receive a connection request sent by an online teacher end and establish a connection; and receive dictation data pushed by the online teacher end, wherein the dictation data comprises a plurality of dictation audios;
the manner in which the dictation reading unit acquires the target dictation audio matched with the current intention and reads it aloud according to the correspondence between the reported dictation audio and the dictation handwriting is specifically:
acquiring the target dictation audio matched with the current intention from the dictation data and reading it aloud according to the correspondence between the reported dictation audio and the dictation handwriting;
and the feedback unit is configured to: after the dictation reading unit acquires the target dictation audio matched with the current intention and reads it aloud according to the correspondence between the reported dictation audio and the dictation handwriting, obtain a dictation progress and a dictation state description of the user according to the current intention, the target dictation audio, and the reported dictation audio; and send the current intention, the dictation progress of the user, and the dictation state description to the online teacher end, so that the online teacher end completes a dictation condition description of the user on a teaching system according to the current intention, the dictation progress of the user, and the dictation state description.
CN201910622863.5A 2019-07-11 2019-07-11 Dictation intelligent control method based on user intention and electronic equipment Active CN111081082B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910622863.5A CN111081082B (en) 2019-07-11 2019-07-11 Dictation intelligent control method based on user intention and electronic equipment


Publications (2)

Publication Number Publication Date
CN111081082A CN111081082A (en) 2020-04-28
CN111081082B true CN111081082B (en) 2022-04-29

Family

ID=70310457

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910622863.5A Active CN111081082B (en) 2019-07-11 2019-07-11 Dictation intelligent control method based on user intention and electronic equipment

Country Status (1)

Country Link
CN (1) CN111081082B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5864805A (en) * 1996-12-20 1999-01-26 International Business Machines Corporation Method and apparatus for error correction in a continuous dictation system
CN103366611A (en) * 2012-03-27 2013-10-23 希伯仑股份有限公司 A teaching system using hand-held control apparatuses and an operating system
CN204496731U (en) * 2015-01-19 2015-07-22 王功成 A kind of Voice command dictation device
CN106125905A (en) * 2016-06-13 2016-11-16 广东小天才科技有限公司 Dictation control method, device and system
CN107256652A (en) * 2017-05-27 2017-10-17 国家电网公司 Training on electric power Comprehensive Control experience system
CN109213893A (en) * 2018-07-27 2019-01-15 阿里巴巴集团控股有限公司 A kind of word display methods and device based on pronunciation
CN109460209A (en) * 2018-12-20 2019-03-12 广东小天才科技有限公司 Control method for dictation and reading progress and electronic equipment
CN109599108A (en) * 2018-12-17 2019-04-09 广东小天才科技有限公司 Dictation auxiliary method and dictation auxiliary device
CN109634416A (en) * 2018-12-12 2019-04-16 广东小天才科技有限公司 Intelligent control method for dictation, newspaper and reading and terminal equipment
CN109887349A (en) * 2019-04-12 2019-06-14 广东小天才科技有限公司 Dictation auxiliary method and device
CN109960809A (en) * 2019-03-27 2019-07-02 广东小天才科技有限公司 Method for generating dictation content and electronic equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8744848B2 (en) * 2010-04-23 2014-06-03 NVQQ Incorporated Methods and systems for training dictation-based speech-to-text systems using recorded samples
US8775175B1 (en) * 2012-06-01 2014-07-08 Google Inc. Performing dictation correction
EP2983125A1 (en) * 2014-08-04 2016-02-10 Tata Consultancy Services Limited System and method for recommending services to a customer
CN108986564B (en) * 2018-06-21 2021-08-24 广东小天才科技有限公司 Reading control method based on intelligent interaction and electronic equipment


Also Published As

Publication number Publication date
CN111081082A (en) 2020-04-28

Similar Documents

Publication Publication Date Title
CN106971009B (en) Voice database generation method and device, storage medium and electronic equipment
WO2019169914A1 (en) Method and device for voice testing
CN108053839B (en) Language exercise result display method and microphone equipment
CN108986564B (en) Reading control method based on intelligent interaction and electronic equipment
CN110111761B (en) Method for real-time following musical performance and related product
CN111081084B (en) Method for broadcasting dictation content and electronic equipment
CN109410984B (en) Reading scoring method and electronic equipment
CN109086431B (en) Knowledge point consolidation learning method and electronic equipment
CN104205215A (en) Automatic realtime speech impairment correction
CN112598961A (en) Piano performance learning method, electronic device and computer readable storage medium
US20170076626A1 (en) System and Method for Dynamic Response to User Interaction
CN111077996A (en) Information recommendation method based on point reading and learning equipment
KR102060229B1 (en) Method for assisting consecutive interpretation self study and computer readable medium for performing the method
CN111417014A (en) Video generation method, system, device and storage medium based on online education
CN108899011B (en) Voice function testing method, device and system of air conditioner
CN111081082B (en) Dictation intelligent control method based on user intention and electronic equipment
WO2018074023A1 (en) Word learning support device, word learning support program, word learning support method
CN109271480B (en) Voice question searching method and electronic equipment
CN111028591B (en) Dictation control method and learning equipment
CN108039081B (en) Robot teaching evaluation method and device
CN111081227B (en) Recognition method of dictation content and electronic equipment
CN111026839B (en) Method for detecting mastering degree of dictation word and electronic equipment
JP6225077B2 (en) Learning state monitoring terminal, learning state monitoring method, learning state monitoring terminal program
CN113489846A (en) Voice interaction testing method, device, equipment and computer storage medium
CN111079486A (en) Method for starting dictation detection and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant