CN111638783A - Man-machine interaction method and electronic equipment - Google Patents

Man-machine interaction method and electronic equipment

Info

Publication number
CN111638783A
CN111638783A (application CN202010420768.XA)
Authority
CN
China
Prior art keywords
user
action
target
image
learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010420768.XA
Other languages
Chinese (zh)
Inventor
崔颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Genius Technology Co Ltd
Original Assignee
Guangdong Genius Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Genius Technology Co Ltd filed Critical Guangdong Genius Technology Co Ltd
Priority to CN202010420768.XA priority Critical patent/CN111638783A/en
Publication of CN111638783A publication Critical patent/CN111638783A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 - Machine learning
    • G06N20/10 - Machine learning using kernel methods, e.g. support vector machines [SVM]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107 - Static hand or arm
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 - Electrically-operated educational appliances
    • G09B5/06 - Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B5/065 - Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Business, Economics & Management (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A man-machine interaction method and an electronic device are provided. The method comprises the following steps: outputting a first action instruction in a target language; acquiring a collected user image, and identifying a limb action included in the user image; and when the limb action is identified to match the first action instruction, outputting feedback information for indicating correctness. By implementing the embodiments of the application, the electronic device can issue an action instruction in the target language, and the user can make the corresponding limb action after understanding the meaning of the action instruction; when recognizing that the limb action performed by the user matches the action instruction, the electronic device outputs feedback information for indicating correctness, which can improve the user's sense of participation in learning the target language, mobilize the user's enthusiasm for learning, and improve learning efficiency.

Description

Man-machine interaction method and electronic equipment
Technical Field
The invention relates to the technical field of human-computer interaction, in particular to a human-computer interaction method and electronic equipment.
Background
At present, some electronic devices such as home education machines and learning tablets can help users learn languages. These electronic devices generally provide functions such as reading aloud, translation, and word search. Although they make language learning more convenient, their interaction with users is monotonous, resulting in low learning efficiency.
Disclosure of Invention
The embodiment of the application discloses a man-machine interaction method and electronic equipment, which can improve learning efficiency.
The embodiment of the application discloses a man-machine interaction method in a first aspect, and the method comprises the following steps:
outputting a first action instruction in a target language;
acquiring a collected user image, and identifying limb actions included in the user image;
and when the limb action is identified to be matched with the first action instruction, outputting feedback information for indicating correctness.
As an optional implementation manner, in the first aspect of the embodiment of the present application, the first motion instruction is an instruction pointing to a target object;
the identifying a limb motion included in the user image includes:
when the limb movement is judged to be the pointing gesture, identifying an object pointed by the pointing gesture;
when the limb action is identified to be matched with the first action instruction, outputting feedback information for indicating correctness, wherein the feedback information comprises:
when the limb movement is judged to be the pointing gesture, and the object pointed by the pointing gesture is matched with the target object, outputting feedback information for indicating correctness.
As an optional implementation manner, in the first aspect of this embodiment of the present application, the acquiring a captured user image includes:
when the target object is a face organ, controlling a camera to shoot the user image at a first focal length;
when the target object is a non-facial organ, controlling a camera to shoot the user image at a second focal length; the second focal length is less than the first focal length.
As an optional implementation manner, in the first aspect of this embodiment of the present application, before the outputting the first action instruction in the target language, the method further includes:
acquiring a shot book image;
identifying first target content from the book image;
generating the first action instruction corresponding to the first target content.
As an optional implementation manner, in the first aspect of this embodiment of the present application, before the generating the first action instruction corresponding to the first target content, the method further includes:
acquiring a user identity, and inquiring a learning record corresponding to the user identity;
and if the learning record includes the pronunciation learning of the target content, executing the step of generating the first action command corresponding to the target content.
As an optional implementation manner, in the first aspect of this embodiment of the present application, the method further includes:
acquiring user voice input in a target language;
recognizing a second action instruction included in the user voice;
and controlling the virtual model displayed on the screen to execute the action or the form change corresponding to the second action instruction.
As an optional implementation manner, in the first aspect of this embodiment of the present application, before the acquiring the input user speech, the method further includes:
acquiring a shot book image;
identifying second target content from the book image;
displaying a standard action corresponding to the second target content;
and, the acquiring the user speech input in the target language includes:
and acquiring the user voice describing the standard action in the target language.
As an optional implementation manner, in the first aspect of this embodiment of the present application, before the displaying the standard action corresponding to the second target content, the method further includes:
acquiring a user identity, and inquiring a learning record corresponding to the user identity;
and if the learning record comprises paraphrase learning of the target content, executing the step of displaying the standard action corresponding to the target content.
A second aspect of an embodiment of the present application discloses an electronic device, including:
the first output unit is used for outputting a first action instruction in a target language;
the system comprises an acquisition unit, a display unit and a control unit, wherein the acquisition unit is used for acquiring an acquired user image and identifying limb actions included in the user image;
and the second output unit is used for outputting feedback information for indicating correctness when the limb action is identified to be matched with the first action instruction.
A third aspect of the embodiments of the present application discloses an electronic device, including:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute any one of the methods disclosed in the first aspect of the embodiments of the present application.
A fourth aspect of the present invention discloses a computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute any one of the methods disclosed in the first aspect of the embodiments of the present application.
A fifth aspect of embodiments of the present application discloses a computer program product, which, when run on a computer, causes the computer to perform any one of the methods disclosed in the first aspect of embodiments of the present application.
Compared with the prior art, the embodiment of the application has the following beneficial effects:
the electronic equipment can send out action instructions in the target language, and the user can make corresponding limb actions after understanding the meaning of the action instructions; when recognizing that the limb action executed by the user is matched with the action instruction, the electronic equipment outputs feedback information for indicating correctness, so that the participation sense of the user in learning the target language can be improved, the learning enthusiasm of the user is mobilized, and the learning efficiency is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is an exemplary diagram of an application scenario of a human-computer interaction method disclosed in an embodiment of the present application;
FIG. 2 is a flow chart of a human-computer interaction method disclosed in an embodiment of the present application;
FIG. 3A is a diagram illustrating an example of an application scenario of another method for human-computer interaction disclosed in an embodiment of the present application;
FIG. 3B is a diagram illustrating an example of an application scenario of another method for human-computer interaction disclosed in an embodiment of the present application;
FIG. 4 is a flowchart illustrating a human-computer interaction method disclosed in an embodiment of the present application;
FIG. 5 is an exemplary diagram of an application scenario of another human-computer interaction method disclosed in an embodiment of the present application;
FIG. 6 is a flow chart illustrating another exemplary human-computer interaction method disclosed in embodiments of the present application;
fig. 7 is a schematic structural diagram of an electronic device disclosed in an embodiment of the present application;
fig. 8 is a schematic structural diagram of another electronic device disclosed in an embodiment of the present application;
fig. 9 is a schematic structural diagram of another electronic device disclosed in the embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It is to be noted that the terms "comprises" and "comprising" and any variations thereof in the examples and figures of the present application are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The embodiment of the application discloses a human-computer interaction method and electronic equipment, which can improve learning efficiency and are described in detail below respectively.
An application scenario of the human-computer interaction method disclosed in the embodiment of the present application is introduced below. Referring to fig. 1, fig. 1 is a diagram illustrating an application scenario of a human-computer interaction method disclosed in an embodiment of the present application. The electronic device 10 shown in fig. 1 may be an electronic device such as a smart phone, a tablet computer, a family education machine, a learning machine, and the like, and is not particularly limited. The electronic device may be provided with components such as a screen 11, a camera 12, a speaker 13, a microphone 14, etc. The screen 11 may be used to display text information, video pictures, or other text or image information; camera 12 may be used to capture images including a user; speaker 13 may be used to play audio and microphone 14 may be used to capture sounds made by the user. The electronic device 10 may interact with a user based on one or more of the components described above.
Referring to fig. 2, fig. 2 is a flowchart illustrating a man-machine interaction method according to an embodiment of the present disclosure, where the method shown in fig. 2 may be applied to the electronic device shown in fig. 1. As shown in fig. 2, the human-computer interaction method may include the following steps:
210. and outputting the first action instruction in the target language.
In the embodiments of the present application, the target language may be a language that the user needs to learn; it may be the user's native language or a second language other than the native language. For example, the target language may include, but is not limited to: Chinese, English, French, and so on. For convenience of description, the following description uses English as the target language, which should not be construed as limiting the target language.
In the embodiments of the present application, the first action instruction may be generated according to a learning target that the user needs to learn. Learning targets that a user needs to learn may include, but are not limited to: action words (verbs), object names, adjectives, and prepositions.
For example, when the learning target that the user needs to learn is an action word (e.g., "stand up", "sit down", "jump", etc.), the first action instruction may be an instruction instructing the user to perform the action corresponding to the action word; when the learning target that the user needs to learn is an object name (e.g., "pen", "nose", etc.), the first action instruction may be an instruction instructing the user to point to the target object corresponding to the object name (e.g., "show me your pen", "touch your nose", etc.); when the learning target to be learned by the user is an object name and the object name is an animal name, the first action instruction may be an instruction instructing the user to imitate the animal corresponding to the animal name; when the content that the user needs to learn is an adjective (e.g., "tall", "long", etc.), the first action instruction may be an instruction (e.g., "be tall", "be long") instructing the user to perform a preset action corresponding to the adjective; when the content that the user needs to learn is a preposition (e.g., "in", "out"), the first action instruction may be an instruction (e.g., "jump in", "jump out") instructing the user to perform a preset action corresponding to the preposition.
In addition, in the embodiments of the present application, the first action instruction may be output in the target language in voice or text form. Outputting the first action instruction by voice exercises the user's listening ability, while outputting it as text exercises the user's reading comprehension.
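As a non-limiting illustration of step 210, the following sketch shows how a first action instruction could be generated from a learning target and output by voice or text. The template table, the output stubs, and the names INSTRUCTION_TEMPLATES and output_first_instruction are assumptions made for illustration, not part of the disclosed implementation.

```python
# Illustrative sketch only: the template table and print-based output stand in
# for the device's real instruction generator and its TTS / screen output.
INSTRUCTION_TEMPLATES = {
    "action_word": "{w}",             # e.g. "stand up"  -> "stand up"
    "object_name": "touch your {w}",  # e.g. "nose"      -> "touch your nose"
    "adjective":   "be {w}",          # e.g. "tall"      -> "be tall"
    "preposition": "jump {w}",        # e.g. "in"        -> "jump in"
}

def output_first_instruction(learning_target: str, target_type: str, by_voice: bool = True) -> str:
    """Generate the first action instruction for a learning target and output it."""
    instruction = INSTRUCTION_TEMPLATES[target_type].format(w=learning_target)
    if by_voice:
        print(f"[speaker] {instruction}")   # a real device would call its TTS engine here
    else:
        print(f"[screen]  {instruction}")   # or render the text on the screen
    return instruction

if __name__ == "__main__":
    output_first_instruction("tall", "adjective")            # -> "be tall"
    output_first_instruction("nose", "object_name", False)   # -> "touch your nose"
```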
220. And acquiring the acquired user image, and identifying the limb action included in the user image.
In this embodiment of the application, after the electronic device outputs the first motion instruction, the user may respond to the first motion instruction to perform a corresponding limb motion. In the process of executing the limb action by the user, the user image can be collected through a camera of the electronic equipment. And performing image recognition on the acquired user image, and extracting limb actions from the user image.
When performing image recognition on the collected user image, skeleton features of the human body can be extracted from the user image, so that the limb action performed by the user is recognized from the skeleton features. Alternatively, the user image may be input to a pre-trained recognition model, and the limb action included in the user image is recognized by the pre-trained recognition model. The pre-trained recognition model may be a support vector machine (SVM) machine learning model, a convolutional neural network (CNN) machine learning model, or the like. The recognition model can be obtained by training the machine learning model with action images whose action types are labeled in advance. The more action types the labeled action images cover, the more action types the recognition model can recognize.
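The following sketch illustrates, under stated assumptions, how a pre-trained recognition model such as an SVM could classify a limb action from skeleton features. It assumes the skeleton keypoints have already been extracted from the user image by a separate pose-estimation step; the random features and labels are placeholders that only demonstrate the interface, not a usable model.

```python
# Sketch under assumptions: keypoint extraction is done elsewhere; the training
# data below are random placeholders.
import numpy as np
from sklearn.svm import SVC

def train_action_recognizer(keypoint_features, action_labels):
    """Train an SVM on (n_samples, n_features) skeleton features labeled by action type."""
    model = SVC(kernel="rbf")
    model.fit(keypoint_features, action_labels)
    return model

def recognize_limb_action(model, user_keypoints):
    """Return the action label predicted for one user image's skeleton features."""
    return model.predict(np.asarray(user_keypoints).reshape(1, -1))[0]

# Placeholder training data: 30 samples of 17 keypoints x (x, y) coordinates.
rng = np.random.default_rng(0)
features = rng.normal(size=(30, 34))
labels = ["raise_arm", "point", "jump"] * 10
recognizer = train_action_recognizer(features, labels)
print(recognize_limb_action(recognizer, features[0]))
```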
In the embodiments of the present application, as an optional implementation manner, the camera of the electronic device may be a camera with an adjustable focal length, and the focal length of the camera may be adjusted according to the first action instruction, so that the limb action included in the user image collected by the camera is clear and complete. This can improve the success rate of recognizing the limb action from the user image and reduce misjudgments that the limb action performed by the user does not match the first action instruction due to poor image quality.
Further optionally, when the first motion instruction is an instruction pointing to a target object, acquiring a captured user image includes:
when the target object is a face organ, controlling the camera to shoot a user image at a first focal length;
when the target object is a non-face organ, controlling the camera to shoot the user image at a second focal length; wherein, the non-facial organs can be other body parts except for facial organs such as hands, feet, head, etc.; alternatively, the non-facial organ may be an object other than a body part, such as a pen or a book; the second focal length is smaller than the first focal length.
That is, when the first action instruction instructs the user to point to a facial organ such as the eyes, ears, mouth, or nose, the camera is controlled to capture the user image with a larger focal length, so that the face occupies a larger proportion of the user image and the facial organs are clearer; when the user performs a limb action pointing to a facial organ according to the first action instruction, the facial organ the user points to can then be recognized more accurately from such an image. When the first action instruction instructs the user to point to a non-facial target object, the camera is controlled to capture the user image with a smaller focal length, so that the user image can include the complete torso and/or limbs, and the pointing gesture toward the non-facial object can be recognized more accurately.
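A minimal sketch of this focal-length selection follows. The set of facial organs and the two focal-length values are illustrative assumptions; a real device would drive the camera zoom through its camera API rather than returning a number.

```python
# Minimal sketch of the focal-length selection; values and organ set are assumptions.
FACIAL_ORGANS = {"eye", "ear", "mouth", "nose"}

def choose_focal_length(target_object, first_focal_length=85.0, second_focal_length=28.0):
    """Longer focal length for facial organs (face fills the frame);
    shorter focal length otherwise (whole torso/limbs stay in view)."""
    if target_object in FACIAL_ORGANS:
        return first_focal_length    # zoom in: the facial organ must be clear
    return second_focal_length       # zoom out: capture the complete body or held object

print(choose_focal_length("nose"))   # 85.0 (first focal length)
print(choose_focal_length("pen"))    # 28.0 (second focal length)
```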
230. And when the limb action is identified to be matched with the first action instruction, outputting feedback information for indicating correctness.
In the embodiment of the application, when it is recognized that the limb action performed by the user is the same as or similar to the action corresponding to the first action instruction, it may be determined that the limb action matches the first action instruction.
Optionally, when the first action instruction is an instruction pointing to a target object, the action corresponding to the first action instruction may be a pointing gesture. After it is determined that the limb action performed by the user is the same as or similar to the pointing gesture, it is further determined whether the object pointed to by the user's pointing gesture matches the target object indicated by the first action instruction; if the object pointed to by the pointing gesture matches the target object, it can be determined that the limb action performed by the user matches the first action instruction.
In addition, in the embodiments of the present application, the feedback information indicating correctness may be output through one or a combination of text, voice, image, animation, video, and the like. Optionally, text or voice such as "correct" or "you got it right" can be output; or an image or animation of a check mark can be displayed on the screen; or a virtual model such as a virtual figure or cartoon character displayed on the screen can be controlled to perform the action corresponding to the first action instruction. By outputting feedback information indicating correctness, the user receives positive feedback of having answered correctly, which establishes the link of "target-language input - target-language understanding - action expression" and deepens the user's understanding of the target language.
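The following hedged sketch combines the matching check and the feedback output of step 230. It assumes the limb-action recognizer and, for pointing instructions, the pointed-object recognizer have already produced string labels upstream, which is a simplification of the image-recognition pipeline described above.

```python
# Hedged sketch of the matching check in step 230; label names are illustrative.
from typing import Optional

def check_and_give_feedback(recognized_action: str,
                            expected_action: str,
                            pointed_object: Optional[str] = None,
                            target_object: Optional[str] = None) -> bool:
    """Output positive feedback when the limb action matches the first action
    instruction, including the pointed-object check for pointing instructions."""
    if recognized_action != expected_action:
        return False
    if target_object is not None and pointed_object != target_object:
        return False                  # pointing gesture, but aimed at the wrong object
    print("Congratulations, you got it right!")   # text, voice, or a check-mark animation
    return True

check_and_give_feedback("raise_arm", "raise_arm")                      # "be tall" case
check_and_give_feedback("point", "point",
                        pointed_object="nose", target_object="nose")   # "touch your nose" case
```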
For example, please refer to fig. 3A together. Fig. 3A is an exemplary diagram of an application scenario of another human-computer interaction method disclosed in the embodiments of the present application. As shown in fig. 3A, the electronic device plays the first action instruction "be tall" in English, the user performs the limb action shown in fig. 3A, and the electronic device may capture the user image through the camera. Assuming that the preset action corresponding to the adjective "tall" is raising the arms upward, and the limb action in the collected user image is the same as raising the arms upward, it can be determined that the limb action performed by the user matches the first action instruction, and the electronic device outputs the feedback "Congratulations, you got it right!".
For another example, please refer to fig. 3B together. Fig. 3B is an exemplary diagram of an application scenario of another human-computer interaction method disclosed in the embodiments of the present application. As shown in fig. 3B, the electronic device displays the first action instruction "touch your nose" on the screen; this first action instruction is an instruction pointing to the "nose", and the nose is the target object. The user performs the limb action shown in fig. 3B, and the electronic device may capture the user image through the camera. Assuming that the pointing gesture is "extending the index finger", and the index finger can be recognized in the collected user image, it can be determined that the limb action performed by the user is the pointing gesture; further, it can be recognized from the user image that the object pointed to by the index finger is the nose, so it can be determined that the object pointed to by the user's pointing gesture matches the target object, and the electronic device outputs the feedback "Congratulations, you got it right!".
Therefore, by implementing the method provided by the above embodiment, the electronic device may output the action instruction in the target language, and after receiving the action instruction, the user may perform the corresponding limb action according to the understanding of the user on the action instruction. The electronic equipment can recognize the limb actions executed by the user through the collected user images, and can output feedback information for indicating correctness when recognizing that the limb actions are matched with the action instructions, so that the participation sense of the user in learning the target language can be improved, the learning enthusiasm of the user is mobilized, and the learning efficiency is improved.
In addition, when the method provided by the above embodiment is implemented, the electronic device can play the role of the teacher in the Total Physical Response (TPR) teaching method when the TPR teaching method is used to learn the target language. The TPR teaching method takes "listening to commands and doing actions" as its main form of teaching organization; in the teaching process, the teacher generally gives commands and the students complete the corresponding actions. The method provided by the above embodiment enables the user to independently use the TPR teaching method for language learning in an environment outside the classroom (such as at home), thereby providing convenience for language learning.
Alternatively, the man-machine interaction method shown in fig. 2 may be a method executed by a functional module (e.g., a language learning application) in the electronic device, which provides the TPR teaching method. As another alternative, the functional module in the electronic device providing the TPR teaching method may also perform a human-computer interaction method as shown in fig. 4. As shown in fig. 4, the human-computer interaction method may include the steps of:
410. the voice of the user input in the target language is acquired.
In the embodiments of the present application, the user can speak in the target language, and the content of what is spoken can be related to the content that the user needs to learn. The electronic device can collect the sound made by the user through the microphone, thereby obtaining the input user voice.
In this embodiment, as an optional implementation manner, before performing step 410, the electronic device may also display a prompt content to prompt the user to input the user voice in the target language. Alternatively, the prompt content may be a standard action related to the learning objective that the user needs to learn. After the standard action is displayed, the user can describe the standard action by using the target language, and the electronic equipment can collect the sound emitted by the user when describing the standard action, so that the voice of the user input in the target language is obtained. Alternatively, the content of the prompt may be a word, phrase, or sentence corresponding to a learning target that the user needs to learn, which is displayed in text form.
For example, a standard "raise the arms upward" action may be displayed on the screen of the electronic device. When the user sees the standard action displayed on the screen, if the user correctly understands its meaning as "tall" and recalls that the English expression for tall is "be tall", the user can describe the standard action as "be tall" and read it aloud in English, and the electronic device can acquire the user voice input by the user in English. Alternatively, "tall" may be displayed on the screen of the electronic device; when the user sees "tall" and knows the pronunciation corresponding to "tall", the user may read "be tall" aloud in English, and the electronic device can acquire the user voice input by the user in English.
By implementing the above embodiment, before acquiring the user voice input in the target language, the standard action is output as a prompt to help the user establish a link of "action expression-understanding of the target language-output of the target language", so as to also deepen the understanding of the user on the target language.
420. A second action instruction included in the user speech is recognized.
In the embodiments of the present application, the second action instruction is an instruction issued by the user. Optionally, it may be recognized whether the user voice includes a keyword related to the learning target that the user needs to learn. If the user voice is recognized to contain the keyword, a second action instruction can be generated according to the keyword; or, the sentence containing the keyword in the user voice can be used directly as the second action instruction. Specific examples of the second action instruction are similar to those of the first action instruction described above and are not repeated here.
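A minimal sketch of step 420 follows, assuming the microphone audio has already been transcribed to text by a speech-recognition engine. The keyword set and the sentence-splitting rule are illustrative simplifications of extracting a second action instruction from the user voice.

```python
# Minimal sketch of step 420; the keyword set stands in for the user's current
# learning targets, and the transcript is assumed to come from a speech recognizer.
LEARNING_TARGET_KEYWORDS = {"tall", "nose", "jump"}

def extract_second_instruction(transcript):
    """Return the sentence containing a learning-target keyword, or None."""
    for sentence in transcript.lower().split("."):
        if any(keyword in sentence for keyword in LEARNING_TARGET_KEYWORDS):
            return sentence.strip()   # the sentence itself is used as the instruction
    return None

print(extract_second_instruction("Be tall."))        # -> "be tall"
print(extract_second_instruction("Good morning."))   # -> None
```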
430. And controlling the virtual model displayed on the screen to execute the action or the form change corresponding to the second action instruction.
In the embodiments of the present application, the virtual model may be a two-dimensional or three-dimensional virtual model, and may include, but is not limited to: a virtual figure, a cartoon character, or content that is related to the target the user needs to learn and is displayed in the target language (e.g., a word, phrase, or sentence displayed in the target language).
Optionally, when the learning target that the user needs to learn is an adjective and the second action instruction is an instruction instructing the user to execute a preset action corresponding to the adjective, the virtual model displayed on the screen may be controlled to execute a form change corresponding to the second action instruction.
Further optionally, after the second action instruction included in the user voice is identified in the step 420 and before the step 430 is executed, a keyword corresponding to a learning target that the user needs to learn and included in the second action instruction may also be acquired; and outputting and displaying a virtual model corresponding to the spelling or the font of the keyword in the target language in the screen.
Exemplarily, please refer to fig. 5. Fig. 5 is an exemplary diagram of an application scenario of another human-computer interaction method disclosed in the embodiments of the present application. As shown in fig. 5, the user reads "be tall" in English, the word "tall" is the learning target (i.e., the above-mentioned keyword) that the user needs to learn, and the virtual model displayed on the screen of the electronic device is a graphic of the English letters "TALL". The electronic device collects the user voice through the microphone, recognizes the second action instruction "be tall" from the user voice, and controls the virtual model "TALL" displayed on the screen to grow taller. By observing the virtual model displayed on the screen, the user can better associate the pronunciation, spelling, and meaning of "tall", which helps the user memorize the pronunciation and spelling of "tall" through visual memory, thereby improving the user's learning efficiency.
That is, by implementing the method provided by the above embodiment, the user can output the action command in the target language; if the pronunciation of the user when the user outputs the action instruction is correct, the electronic equipment can recognize the action instruction sent by the user from the collected user voice, so that the virtual model displayed in the screen can be controlled to execute the action or the form change corresponding to the action instruction, the user can be helped to establish the pronunciation, shape and meaning link of the target language, and the learning efficiency is improved.
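As a rough illustration of step 430, the sketch below maps a recognized second action instruction either to an animated action or to a morphological change of an on-screen virtual model, echoing the "TALL" example above. The VirtualModel class and the scale-based change stand in for a real 2D/3D rendering engine and are assumptions, not the patent's implementation.

```python
# Rough illustration of step 430; only the mapping from instruction to model
# behavior is shown, not the rendering itself.
class VirtualModel:
    def __init__(self, name):
        self.name = name
        self.height_scale = 1.0

    def grow_taller(self, factor=1.5):
        self.height_scale *= factor                      # morphological change
        print(f"{self.name} is now drawn at {self.height_scale:.1f}x height")

    def perform(self, action):
        print(f"{self.name} performs action: {action}")  # animated action

MODEL_REACTIONS = {
    "be tall": lambda m: m.grow_taller(),    # adjective -> morphological change
    "jump":    lambda m: m.perform("jump"),  # action word -> action
}

model = VirtualModel("TALL")                 # e.g. a graphic of the letters "TALL"
instruction = "be tall"                      # the recognized second action instruction
if instruction in MODEL_REACTIONS:
    MODEL_REACTIONS[instruction](model)
```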
Optionally, in this embodiment of the application, when learning the learning target, the electronic device may execute the human-computer interaction method shown in fig. 2 and 4, so as to help the user to consolidate understanding of the learning target from listening, speaking, and reading. The details will be described below. Referring to fig. 6, fig. 6 is a schematic flow chart of another human-computer interaction method disclosed in the embodiment of the present application. As shown in fig. 6, the human-computer interaction method may include the steps of:
610. and acquiring the shot book image.
612. First target content and second target content are identified from the book image.
In the embodiment of the application, the electronic device can control the camera to shoot pages of books such as textbooks, drawing books and exercise books, so as to obtain book images.
In the embodiment of the present application, the first target content and the second target content may be the same content or different contents. When the first target content and the second target content are the same content, executing the human-computer interaction method shown in fig. 6 may help the user perform reinforcement learning for the same learning target. When the first target content and the second target content are different contents, executing the human-computer interaction method shown in fig. 6 can help the user to perform targeted listening, speaking and reading training for different learning targets.
For convenience of description, the following description will describe the man-machine interaction method shown in fig. 6 by taking the first target content and the second target content as the same content as an example.
Optionally, when the book is printed, a special identification code may be added to the paper page. Illustratively, the identification code may be a two-dimensional code printed in a header or footer, or may be a code printed in a special paint that is not visible to the user. The single identification code can correspond to at least one target content which needs to be learned by the user; accordingly, identifying the target content from the book image may include: and identifying the identification code from the book image, and inquiring target content corresponding to the identification code from a local database or a server.
Optionally, when reading the book, the user may also use a finger or a pen to indicate the target content to be learned. Accordingly, identifying the target content from the book image may include: recognizing an object such as a finger or a pen for pointing from the book image, and recognizing a pointing position of the finger or the pen in the image; further, the target content corresponding to the pointing position in the image is identified.
That is, in the embodiments of the present application, the target content that the user needs to learn can be identified from the book the user is reading, and the following steps are performed to help the user learn the target content through the TPR teaching method. The user does not need to manually set a learning target on the electronic device, so tedious setup steps can be omitted and the threshold for use is lowered; younger users in particular can use the electronic device for language learning more conveniently.
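The identification-code path could look roughly like the following sketch, which decodes a QR code from the book image with OpenCV and looks the result up in a placeholder table standing in for the local database or server. The patent also allows codes printed in invisible ink and the finger/pen pointing path, neither of which is shown here; the mapping table is an assumption.

```python
# Sketch of the identification-code path only; CODE_TO_TARGETS stands in for a
# local database or server query, and its contents are illustrative.
import cv2

CODE_TO_TARGETS = {"book1-page12": ["tall", "nose"]}   # illustrative mapping

def identify_target_content(book_image_path):
    """Decode a code printed on the page and look up the target content it maps to."""
    image = cv2.imread(book_image_path)
    data, _points, _raw = cv2.QRCodeDetector().detectAndDecode(image)
    return CODE_TO_TARGETS.get(data, [])                # empty list if no code recognized
```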
614. And generating a first action instruction corresponding to the first target content, and outputting the first action instruction in the target language.
616. And acquiring the acquired user image, and identifying the limb action included in the user image.
618. And when the limb action is identified to be matched with the first action instruction, outputting feedback information for indicating correctness.
Illustratively, the target content on the book is the English word "tall", and the electronic device may perform step 614 to read the first action instruction "be tall" aloud in English. When the user hears "be tall", the user makes the corresponding limb action. The electronic device performs steps 616 to 618, captures the user image while the user is doing the action, recognizes from the user image that the limb action performed by the user is raising the arms upward, determines that it matches the action instruction "be tall", and then outputs the feedback "Congratulations, you got it right!".
In the embodiments of the present application, the electronic device plays the role of the instruction issuer and the user plays the role of the instruction executor in steps 614 to 618, which trains the user's listening ability and/or reading ability. To further train the user's spoken language ability, the following steps 620 to 626 may be performed.
620. Displaying a standard action corresponding to the second target content.
622. User speech describing a standard action in a target language is obtained.
624. A second action instruction included in the user speech is recognized.
626. And controlling the virtual model displayed on the screen to execute the action or the form change corresponding to the second action instruction.
For example, when performing step 620, the electronic device may control the cartoon character displayed on the screen to perform the standard action of raising the arms upward. When the user sees the standard action, if the user has interacted with the electronic device through steps 614 to 618 and remembers that the action corresponding to "be tall" is raising the arms upward, the user can read out "be tall"; the electronic device can collect the user voice and recognize the second action instruction "be tall" from the user voice, so that the other virtual model "TALL" displayed on the screen can be controlled to grow taller.
In the embodiments of the present application, the user plays the role of the instruction issuer and the electronic device plays the role of the instruction executor in steps 620 to 626, which encourages the user to speak the target language boldly and helps train the user's spoken language ability.
It should be noted that, in some other possible embodiments, after the target content is identified from the book image in step 612, steps 620 to 626 may be performed first to train the user's spoken language ability and to help the user establish the correspondence between the pronunciation and the meaning of the target content through the standard action and the action or morphological change of the virtual model; then steps 614 to 618 are performed, in which the electronic device issues an instruction to check the user's learning result from steps 620 to 626.
As an alternative embodiment, after the above step 612 is executed to identify the target content from the book image, the following steps may be further executed:
and acquiring a user identity. The user identity may be a credential indicating the user identity, and may include but is not limited to: user account, mobile phone number, mailbox address, identity card number. In the embodiment of the present application, the manner of obtaining the user identity may include, but is not limited to: acquiring biological characteristic information such as fingerprints, faces, voiceprints and the like input by a user, and inquiring a user identity corresponding to the acquired biological characteristic information; or displaying a login interface in the screen, and acquiring the user account input in the login interface when the login is successful.
And inquiring the learning record corresponding to the user identity. In the embodiments of the present application, when the user uses functions related to language learning on the electronic device (such as translation, click-to-read, and question searching), a learning record can be generated according to the user's usage of these functions. For example, when the user uses the translation function while logged in to the current user account, and the translated object is the word "tall", an entry about the paraphrase learning of "tall" can be generated in the learning record corresponding to the current user account; or, when the user uses the click-to-read function while logged in to the current user account, and the object of the click-to-read is the word "lose", an entry about the pronunciation learning of "lose" can be generated in the learning record corresponding to the current user account.
If the learning record includes pronunciation learning of the first target content, steps 620 to 626 above are triggered. In the embodiments of the present application, if the user has already learned the pronunciation of the first target content, steps 620 to 626 can encourage the user to speak the first target content aloud and help the user understand its paraphrase, thereby helping the user understand the first target content comprehensively from the three aspects of sound, form, and meaning.
If the learning record includes paraphrase learning of the second target content, the above steps 614-618 are triggered. In the embodiment of the present application, if the user has learned the paraphrase of the second target content, the user may be assisted in consolidating the understanding of the paraphrase of the second target content through the above-mentioned steps 614 to 618, and assisted in learning the pronunciation or spelling of the second target content by outputting the instruction related to the target content.
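The branching on the learning record described in the last two paragraphs can be summarized by the following sketch. The record format (a set of (word, skill) pairs) and the default branch taken when the record contains neither entry are assumptions made for illustration.

```python
# Sketch of the learning-record branching; record format and default are assumptions.
def choose_flow(learning_record, target_word):
    """Pick which interaction flow to run for a target word."""
    if (target_word, "pronunciation") in learning_record:
        return "speaking flow (steps 620-626): the user issues the instruction"
    if (target_word, "paraphrase") in learning_record:
        return "listening flow (steps 614-618): the device issues the instruction"
    return "listening flow (steps 614-618)"   # assumed default, not stated in the patent

record = {("tall", "pronunciation"), ("nose", "paraphrase")}
print(choose_flow(record, "tall"))
print(choose_flow(record, "nose"))
```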
Therefore, by implementing the method provided by the embodiment, the target content required to be learned by the user can be identified from the book being read by the user, the learning target does not need to be manually set on the electronic equipment, the complicated setting steps can be omitted, and the use threshold of the user is reduced. And the user can also play the roles of an instruction sender and an instruction executor respectively, and the user can be helped to comprehensively learn the target content from three aspects of sound, form and meaning through different role transformation, so that the understanding of the user on the target content is deepened.
An electronic device disclosed in an embodiment of the present application is described in detail below. Referring to fig. 7, fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure. As shown in fig. 7, the electronic device may include:
the first output unit 701 is configured to output a first action instruction in a target language. In the embodiments of the present application, the first action instruction may be generated according to a learning target that the user needs to learn. Learning targets that a user needs to learn may include, but are not limited to: action words (verbs), object names, adjectives, and prepositions. The first output unit 701 may output the first action instruction in the target language in voice or text form. Outputting the first action instruction by voice exercises the user's listening ability, while outputting it as text exercises the user's reading comprehension.
An obtaining unit 702, configured to obtain a collected user image, and identify a limb motion included in the user image. In this embodiment of the application, the obtaining unit 702 may be specifically configured to collect a user image through a camera of an electronic device, and perform image recognition on the collected user image, so as to extract a limb motion from the user image. When the collected user image is subjected to image recognition, the skeleton characteristics of the human body can be extracted from the user image, so that the limb actions executed by the user are recognized according to the skeleton characteristics. Alternatively, the user image may be input to a recognition model trained in advance, and the limb movement included in the user image may be recognized by the recognition model trained in advance.
In this embodiment, as an optional implementation manner, the camera of the electronic device may be a camera with an adjustable focal length. The manner in which the acquiring unit 702 acquires the collected user image may include: the acquiring unit 702 adjusts the focal length of the camera according to the first action instruction, so that the limb action included in the user image collected by the camera is clear and complete, which can improve the success rate of recognizing the limb action from the user image and reduce misjudgments that the limb action performed by the user does not match the first action instruction due to poor image quality.
Further optionally, the obtaining unit 702 may be configured to control the camera to capture the user image at the first focal length when the first action instruction is an instruction pointing to the target object and the target object is a face organ; when the first action instruction is an instruction pointing to a target object and the target object is a non-face organ, controlling the camera to shoot a user image at a second focal length; wherein the second focal length is smaller than the first focal length.
And a second output unit 703, configured to output feedback information indicating correctness when it is recognized that the limb movement matches the first movement instruction.
In this embodiment, as an optional implementation manner, when the first motion instruction is an instruction pointing to a target object, the second output unit 703 may be configured to output feedback information indicating that the object is correct when it is determined that the limb motion is a pointing gesture and the object pointed by the pointing gesture matches the target object. That is, when the first motion command is a command pointing to the target object, it needs to be determined whether the object pointed by the pointing gesture matches the target object, in addition to determining whether the body motion performed by the user is the same as or similar to the pointing gesture.
By implementing the electronic device provided by this embodiment, the electronic device can output an action instruction in the target language; after receiving the action instruction, the user can make the corresponding limb action according to his or her understanding of the action instruction; the electronic device can recognize the limb action, and when determining that the limb action matches the action instruction, output feedback information for indicating correctness. This can improve the user's sense of participation in learning the target language, mobilize the user's enthusiasm for learning, and improve learning efficiency.
Optionally, please refer to fig. 8, and fig. 8 is a schematic structural diagram of another electronic device disclosed in the embodiment of the present application. The electronic device shown in fig. 8 is optimized from the electronic device shown in fig. 7. In the electronic device shown in fig. 8:
the acquiring unit 702 may be further configured to acquire a user voice input in a target language;
and, the electronic device shown in fig. 8 may further include:
a display unit 704 for recognizing a second action instruction included in the user voice and controlling the virtual model displayed in the screen to perform an action or a morphological change corresponding to the second action instruction. Alternatively, the display unit 704 may control the virtual model displayed on the screen to execute the morphological change corresponding to the second action instruction when the learning target to be learned by the user is an adjective and the second action instruction is an instruction instructing the user to execute a preset action corresponding to the adjective.
Further optionally, the electronic device shown in fig. 8 may further include:
the identification unit 705 is configured to acquire a captured book image and identify the first target content and/or the second target content from the book image. Wherein the first target content and the second target content may be the same or different content.
Optionally, before the first output unit 701 outputs the first action command in the target language, the first output unit may further perform an operation of generating the first action command according to the first target content identified by the identification unit 705;
alternatively, the display unit 704 may be further configured to acquire the second target content identified by the identification unit 705, and display a standard action corresponding to the second target content. Accordingly, the manner of the above-mentioned acquiring unit 702 for acquiring the user voice input in the target language may include: an obtaining unit 702 is configured to obtain a user voice describing a standard action in a target language.
Still further optionally, the electronic device shown in fig. 8 may further include:
the query unit 706 is configured to obtain a user identity, and query a learning record corresponding to the user identity; when the learning record includes the pronunciation learning of the first target content identified by the identification unit 705, triggering the first output unit 701 to execute an operation of generating a first action instruction according to the first target content identified by the identification unit 705 and outputting the first action instruction in the target language; and when paraphrase learning of the second target content identified by the identification unit 705 is included in the learning record, triggering the display unit 704 to perform an operation of displaying a standard action corresponding to the second target content.
Therefore, by implementing the electronic device provided by the embodiment, the user can play the role of an instruction executor and the role of an instruction issuer, so that the user can be encouraged to speak the target language boldly, and the training of the spoken language ability of the user is facilitated. The electronic equipment can identify the learning target required to be learned by the user from the book read by the user, the learning target does not need to be manually set on the electronic equipment, complicated setting steps can be omitted, and the use threshold of the user is reduced.
Referring to fig. 9, fig. 9 is a schematic structural diagram of another electronic device disclosed in the embodiment of the present application. As shown in fig. 9, the electronic device may include:
a memory 901 in which executable program code is stored;
a processor 902 coupled to a memory 901;
the processor 902 calls the executable program code stored in the memory 901 to execute any one of the human-computer interaction methods disclosed in the embodiments of the present application.
The processor 902 may include one or more processing cores. The processor 902 connects various parts within the entire electronic device using various interfaces and lines, and performs various functions of the terminal device and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 901 and calling data stored in the memory 901. Optionally, the processor 902 may be implemented in hardware using at least one of a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), and a Programmable Logic Array (PLA). The processor 902 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU is used for rendering and drawing display content; and the modem is used to handle wireless communication. It can be understood that the modem may not be integrated into the processor 902 and may instead be implemented by a separate communication chip.
The memory 901 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 901 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 901 may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments described above, and the like. The stored data area may also store data created by the electronic device in use, and the like.
It should be noted that the electronic device shown in fig. 9 may further include components, which are not shown, such as a power supply, an input key, a camera, a speaker, a screen, a microphone, an RF circuit, a Wi-Fi module, a bluetooth module, and a sensor, which are not described in detail in this embodiment.
The embodiment of the application discloses a computer-readable storage medium which stores a computer program, wherein the computer program enables a computer to execute any one of the human-computer interaction methods disclosed in the embodiment of the application.
Embodiments of the present application disclose a computer program product comprising a non-transitory computer readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform any of the human-computer interaction methods disclosed in embodiments of the present application.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Those skilled in the art should also appreciate that the embodiments described in this specification are exemplary embodiments, and that the actions and modules involved are not necessarily required in order to practice the invention.
In the various embodiments of the present invention, it should be understood that the magnitude of the sequence numbers of the processes described above does not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
The integrated unit, if implemented as a software functional unit and sold or used as a stand-alone product, may be stored in a computer-accessible memory. Based on such understanding, the part of the technical solution of the present invention that in essence contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a memory and includes several requests for causing a computer device (which may be a personal computer, a server, a network device, or the like, and may specifically be a processor in the computer device) to execute all or part of the steps of the methods in the embodiments of the present invention.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing the relevant hardware. The program may be stored in a computer-readable storage medium, including a Read-Only Memory (ROM), a Random Access Memory (RAM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), a One-Time Programmable Read-Only Memory (OTPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical disk storage, magnetic disk storage, magnetic tape storage, or any other computer-readable medium capable of carrying or storing data.
The human-computer interaction method and the electronic device disclosed in the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and embodiments of the present invention, and the above description of the embodiments is only intended to help understand the method and its core idea. Meanwhile, a person skilled in the art may make changes to the specific embodiments and the scope of application according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

1. A human-computer interaction method, characterized in that the method comprises:
outputting a first action instruction in a target language;
acquiring a collected user image, and identifying a limb action included in the user image;
and when the limb action is identified to be matched with the first action instruction, outputting feedback information for indicating correctness.
2. The method of claim 1, wherein the first action instruction is an instruction directed to a target object;
the identifying a limb action included in the user image comprises:
when the limb action is determined to be a pointing gesture, identifying the object pointed to by the pointing gesture;
and the outputting feedback information for indicating correctness when the limb action is identified to be matched with the first action instruction comprises:
when the limb action is determined to be a pointing gesture and the object pointed to by the pointing gesture matches the target object, outputting the feedback information for indicating correctness.
3. The method of claim 2, wherein said acquiring an acquired user image comprises:
when the target object is a facial organ, controlling a camera to shoot the user image at a first focal length;
and when the target object is a non-facial organ, controlling the camera to shoot the user image at a second focal length, wherein the second focal length is smaller than the first focal length.
4. A method according to any one of claims 1-3, wherein prior to said outputting a first action instruction in a target language, the method further comprises:
acquiring a shot book image;
identifying first target content from the book image;
generating the first action instruction corresponding to the first target content.
5. The method of claim 4, wherein prior to the generating the first action instruction corresponding to the first target content, the method further comprises:
acquiring a user identity, and querying a learning record corresponding to the user identity;
and if the learning record includes pronunciation learning of the first target content, executing the step of generating the first action instruction corresponding to the first target content.
6. The method of claim 1, further comprising:
acquiring user voice input in a target language;
recognizing a second action instruction included in the user voice;
and controlling a virtual model displayed on a screen to perform an action or a form change corresponding to the second action instruction.
7. The method of claim 6, wherein prior to the acquiring the user voice input in the target language, the method further comprises:
acquiring a shot book image;
identifying second target content from the book image;
displaying a standard action corresponding to the second target content;
and the acquiring the user voice input in the target language comprises:
and acquiring the user voice describing the standard action in the target language.
8. The method of claim 7, wherein prior to the displaying the standard action corresponding to the second target content, the method further comprises:
acquiring a user identity, and querying a learning record corresponding to the user identity;
and if the learning record comprises paraphrase learning of the second target content, executing the step of displaying the standard action corresponding to the second target content.
9. An electronic device, comprising:
the first output unit is used for outputting a first action instruction in a target language;
the acquisition unit is used for acquiring a collected user image and identifying a limb action included in the user image;
and the second output unit is used for outputting feedback information for indicating correctness when the limb action is identified to be matched with the first action instruction.
10. An electronic device, comprising:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute the human-computer interaction method of any one of claims 1 to 8.
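As an informal illustration of claims 2 and 3 above, the following sketch shows one way the focal-length choice and the pointing-gesture check could be expressed. The focal-length values, the facial-organ list, and the function names are invented for illustration and are not specified by the patent.

```python
# Informal illustration of claims 2 and 3 (assumptions, not the patent's code):
# a longer focal length frames a facial organ tightly, a shorter one frames the
# body, and the answer counts as correct only when a pointing gesture points at
# the target object.

FACE_ORGANS = {"nose", "mouth", "ear", "eye"}
FIRST_FOCAL_LENGTH_MM = 50   # tighter framing, used for facial organs
SECOND_FOCAL_LENGTH_MM = 24  # wider framing, smaller than the first focal length

def select_focal_length(target_object: str) -> int:
    """Claim 3: choose the shooting focal length based on the target object."""
    return FIRST_FOCAL_LENGTH_MM if target_object in FACE_ORGANS else SECOND_FOCAL_LENGTH_MM

def is_correct_pointing(limb_action: str, pointed_object: str | None,
                        target_object: str) -> bool:
    """Claim 2: positive feedback only if the action is a pointing gesture and
    the pointed object matches the target object."""
    return limb_action == "pointing" and pointed_object == target_object

if __name__ == "__main__":
    print(select_focal_length("nose"))                        # 50
    print(select_focal_length("apple"))                       # 24
    print(is_correct_pointing("pointing", "apple", "apple"))  # True
```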
CN202010420768.XA 2020-05-18 2020-05-18 Man-machine interaction method and electronic equipment Pending CN111638783A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010420768.XA CN111638783A (en) 2020-05-18 2020-05-18 Man-machine interaction method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010420768.XA CN111638783A (en) 2020-05-18 2020-05-18 Man-machine interaction method and electronic equipment

Publications (1)

Publication Number Publication Date
CN111638783A true CN111638783A (en) 2020-09-08

Family

ID=72328016

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010420768.XA Pending CN111638783A (en) 2020-05-18 2020-05-18 Man-machine interaction method and electronic equipment

Country Status (1)

Country Link
CN (1) CN111638783A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104253904A (en) * 2014-09-04 2014-12-31 广东小天才科技有限公司 Method for realizing point-reading learning and smart phone
CN105205454A (en) * 2015-08-27 2015-12-30 深圳市国华识别科技开发有限公司 System and method for capturing target object automatically
CN105872828A (en) * 2016-03-30 2016-08-17 乐视控股(北京)有限公司 Television interactive learning method and device
CN106056989A (en) * 2016-06-23 2016-10-26 广东小天才科技有限公司 Language learning method and device and terminal equipment
CN109446891A (en) * 2018-09-11 2019-03-08 广东智媒云图科技股份有限公司 A kind of interactive learning methods based on image recognition, electronic equipment and storage medium
CN109637207A (en) * 2018-11-27 2019-04-16 曹臻祎 A kind of preschool education interactive teaching device and teaching method
CN110033776A (en) * 2019-03-08 2019-07-19 佛山市云米电器科技有限公司 A kind of virtual image interactive system and method applied to screen equipment
CN110633027A (en) * 2019-09-16 2019-12-31 广东小天才科技有限公司 Point reading implementation method, system, computer equipment and storage medium
CN110688008A (en) * 2019-09-27 2020-01-14 贵州小爱机器人科技有限公司 Virtual image interaction method and device

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113257251A (en) * 2021-05-11 2021-08-13 深圳优地科技有限公司 Robot user identification method, apparatus and storage medium
CN113257251B (en) * 2021-05-11 2024-05-24 深圳优地科技有限公司 Robot user identification method, apparatus and storage medium
CN117636767A (en) * 2023-12-12 2024-03-01 创而新(北京)科技有限公司 Image display method, system, terminal and storage medium of high beam imaging learning machine

Similar Documents

Publication Publication Date Title
CN107773982B (en) Game voice interaction method and device
CN110288077B (en) Method and related device for synthesizing speaking expression based on artificial intelligence
US20110015920A1 (en) Apparatus for chinese language education and method thereof
CN112162628A (en) Multi-mode interaction method, device and system based on virtual role, storage medium and terminal
CN107203953A (en) It is a kind of based on internet, Expression Recognition and the tutoring system of speech recognition and its implementation
CN111290568A (en) Interaction method and device and computer equipment
US20180137778A1 (en) Language learning system, language learning support server, and computer program product
US20120156659A1 (en) Foreign language learning method based on stimulation of long-term memory
CN110162164A (en) A kind of learning interaction method, apparatus and storage medium based on augmented reality
CN110853422A (en) Immersive language learning system and learning method thereof
KR102101496B1 (en) Ar-based writing practice method and program
CN110992222A (en) Teaching interaction method and device, terminal equipment and storage medium
CN108877334B (en) Voice question searching method and electronic equipment
CN110796911A (en) Language learning system capable of automatically generating test questions and language learning method thereof
CN111638783A (en) Man-machine interaction method and electronic equipment
KR20190061191A (en) Speech recognition based training system and method for child language learning
CN111899576A (en) Control method and device for pronunciation test application, storage medium and electronic equipment
CN110019757A (en) Books point reads interaction device and its control method, computer readable storage medium
JP2006068489A (en) Interactive pet robot
CN112163513A (en) Information selection method, system, device, electronic equipment and storage medium
CN113253838A (en) AR-based video teaching method and electronic equipment
WO2020036011A1 (en) Information processing device, information processing method, and program
CN111046674A (en) Semantic understanding method and device, electronic equipment and storage medium
KR20130058840A (en) Foreign language learnning method
RU2479867C2 (en) Linguistic user interface operating method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination