CN111984161A - Control method and device of intelligent robot - Google Patents

Control method and device of intelligent robot

Info

Publication number
CN111984161A
CN111984161A
Authority
CN
China
Prior art keywords
graphical
intelligent robot
graphical components
teaching
type
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010677265.0A
Other languages
Chinese (zh)
Inventor
李庆民
孙传佳
赵保航
梁昊
Current Assignee
Chuangze Intelligent Robot Group Co., Ltd.
Original Assignee
Chuangze Intelligent Robot Group Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Chuangze Intelligent Robot Group Co., Ltd.
Priority to CN202010677265.0A
Publication of CN111984161A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04815 Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04847 Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/451 Execution arrangements for user interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a control method and a control device for an intelligent robot. The method comprises: determining, within the intelligent robot, a plurality of graphical components associated with a target course, wherein the plurality of graphical components are used for guiding the teaching process of the target course; setting execution logic and execution content for each graphical component in the plurality of graphical components; and controlling the intelligent robot, based on the execution logic and the execution content, to teach the course according to the lesson preparation information of the target course. The invention solves the technical problem that existing intelligent robot control schemes support only image-based interaction, resulting in a single interaction mode and a poor user experience.

Description

Control method and device of intelligent robot
Technical Field
The invention relates to the field of intelligent robots, and in particular to a control method and device for an intelligent robot.
Background
Most existing robot teaching is based on single audio or video resources, such as Tang poetry, children's songs, "One Hundred Thousand Whys", literacy exercises, idiom stories, and read-along lessons. Although the content is rich, the form of expression is limited: most robot teaching offers only on-screen display coordinated with sound, and lacks integrated interaction combining listening, speaking, gesture, and movement. The unique advantage of a robot that can both speak and move is not exploited, which greatly reduces the teaching effect.
Existing lesson preparation systems are mainly designed for traditional multimedia equipment, including computers, projectors, and smartphones. Intelligent robots are used only to replace the traditional projector and computer with the robot's own display screen and processor. However, with the rapid development of technology, intelligent robots have reached or approached human-level performance in vision, hearing, speech, and other respects, and a richer and more diverse education and teaching system can be built on this basis.
In the related art, intelligent robot control schemes can only realize image-based interaction and cannot provide interaction involving hearing, action, or expression, so the teaching mode is single and the user experience is poor.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the invention provide a control method and a control device for an intelligent robot, which at least solve the technical problem that existing intelligent robot control schemes support only image-based interaction, resulting in a single interaction mode and a poor user experience.
According to an aspect of an embodiment of the present invention, there is provided a control method of an intelligent robot, comprising: determining, within an intelligent robot, a plurality of graphical components associated with a target course, wherein the plurality of graphical components are used for guiding a teaching process of the target course; setting execution logic and execution content for each graphical component in the plurality of graphical components; and controlling the intelligent robot, based on the execution logic and the execution content, to teach the course according to the lesson preparation information of the target course.
Optionally, determining, within the intelligent robot, the plurality of graphical components associated with the target course comprises: determining a first type of graphical component associated with the target course, wherein the first type of graphical component is used for controlling the intelligent robot to simulate human sensory recognition (the five senses); determining a second type of graphical component associated with the target course, wherein the second type of graphical component is used for controlling the intelligent robot to simulate human expression of intent; determining a third type of graphical component associated with the target course, wherein the third type of graphical component is used for controlling the intelligent robot to adjust the working state of associated devices; and determining the first type, the second type, and the third type of graphical components as the plurality of graphical components.
Optionally, setting the execution logic of each graphical component in the plurality of graphical components comprises: acquiring the starting time at which teaching of the target course begins; setting an association relation and an execution order among the graphical components based on the starting time, wherein the association relation is used for determining the graphical components to be executed synchronously at the same time, and the execution order is used for determining the graphical components to be executed sequentially at different times; and setting the execution logic according to the association relation and the execution order.
Optionally, setting the execution content of each graphical component in the plurality of graphical components comprises: acquiring the teaching progress of the target course and the functional attribute of each graphical component in the plurality of graphical components; and setting the execution content according to the teaching progress and the functional attribute.
Optionally, the method further comprises: acquiring the lesson preparation information from a preset storage area via wireless network communication; acquiring the preset teaching duration of the target course; and calculating the degree of completion of the lesson preparation information within the teaching duration, and performing optimization analysis on the incomplete part of the lesson preparation information.
According to another aspect of the embodiments of the present invention, there is also provided a control apparatus of an intelligent robot, comprising: a determining module, configured to determine, within an intelligent robot, a plurality of graphical components associated with a target course, wherein the plurality of graphical components are used for guiding a teaching process of the target course; a setting module, configured to set execution logic and execution content for each graphical component in the plurality of graphical components; and a control module, configured to control the intelligent robot, based on the execution logic and the execution content, to teach the course according to the lesson preparation information of the target course.
Optionally, the determining module is configured to determine a first type of graphical component associated with the target course, wherein the first type of graphical component is used for controlling the intelligent robot to simulate human sensory recognition (the five senses); determine a second type of graphical component associated with the target course, wherein the second type of graphical component is used for controlling the intelligent robot to simulate human expression of intent; determine a third type of graphical component associated with the target course, wherein the third type of graphical component is used for controlling the intelligent robot to adjust the working state of associated devices; and determine the first type, the second type, and the third type of graphical components as the plurality of graphical components.
Optionally, the setting module is configured to acquire the starting time at which teaching of the target course begins; set an association relation and an execution order among the graphical components based on the starting time, wherein the association relation is used for determining the graphical components to be executed synchronously at the same time, and the execution order is used for determining the graphical components to be executed sequentially at different times; and set the execution logic according to the association relation and the execution order.
Optionally, the setting module is configured to acquire the teaching progress of the target course and the functional attribute of each graphical component in the plurality of graphical components, and set the execution content according to the teaching progress and the functional attribute.
Optionally, the apparatus further comprises a processing module, configured to acquire the lesson preparation information from a preset storage area via wireless network communication; acquire the preset teaching duration of the target course; and calculate the degree of completion of the lesson preparation information within the teaching duration, and perform optimization analysis on the incomplete part of the lesson preparation information.
According to another aspect of the embodiments of the present invention, there is also provided a non-volatile storage medium having a computer program stored therein, wherein the computer program is configured to execute the control method of the intelligent robot described in any one of the above items when running.
According to another aspect of the embodiments of the present invention, there is also provided a processor for executing a program, wherein the program is configured to execute the control method of the intelligent robot described in any one of the above items when running.
According to another aspect of the embodiments of the present invention, there is also provided an electronic apparatus including a memory in which a computer program is stored and a processor configured to execute the computer program to perform the control method of the intelligent robot described in any one of the above.
In the embodiments of the invention, a plurality of graphical components associated with a target course are determined within the intelligent robot, wherein the graphical components are used for guiding the teaching process of the target course; execution logic and execution content are set for each graphical component in the plurality of graphical components; and, based on the execution logic and the execution content, the intelligent robot is controlled to teach the course according to the lesson preparation information of the target course. By controlling the intelligent robot through graphical components, multiple interaction modes are realized and the teaching process of the target course is guided accordingly. Realizing the teaching process through the combination of graphical components achieves the purpose of providing the intelligent robot with multiple interaction modes, thereby enriching the interaction modes of the robot's teaching process, achieving the technical effect of an optimized experience, and solving the problem that the control scheme of the intelligent robot system can only realize image-based interaction, resulting in a single interaction mode and a poor experience.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a flowchart of a control method of an intelligent robot according to embodiment 1 of the present invention;
fig. 2 is a flowchart of an intelligent robot teaching process according to embodiment 2 of the present invention;
fig. 3 is a schematic diagram of a control apparatus of an intelligent robot according to embodiment 3 of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
In accordance with an embodiment of the present invention, there is provided a method embodiment of a control method for an intelligent robot. It is noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system, such as one executing a set of computer-executable instructions, and that, although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different from that given here.
Fig. 1 is a flowchart of a control method of an intelligent robot according to an embodiment of the present invention, as shown in fig. 1, the method including the steps of:
step S102, determining a plurality of graphical components related to a target course in the intelligent robot, wherein the graphical components are used for guiding the teaching process of the target course;
the target course can be a preset course, and the intelligent robot executes a teaching process according to the target course to carry out teaching. The graphical components may include types of graphical components, and the multiple types of graphical components are respectively used for implementing different teaching modes of the target courses, for example, teaching through emotional expression, teaching through limb movement, and the like.
Optionally, determining, within the intelligent robot, the plurality of graphical components associated with the target course includes: determining a first type of graphical component associated with the target course, wherein the first type of graphical component is used for controlling the intelligent robot to simulate human sensory recognition. The first type may include an auditory graphical component, a tactile graphical component, a visual graphical component, and the like, which are respectively used for controlling the intelligent robot to simulate human hearing, touch, and vision.
The auditory graphical component may include execution contents such as "sound heard" and "no sound heard". The "sound heard" functional component may carry the text information recognized from the sound, and the "no sound heard" functional component may set a timeout during which no sound is detected. The intelligent robot is thereby controlled to simulate human hearing and to perform auditory teaching interaction for the target course.
The tactile graphical component may include execution contents such as "touch detected" and "obstacle detected". The "touch detected" graphical functional component may carry the tactile result parameters obtained after tactile recognition, including the touch location and the touch force. The "obstacle detected" graphical functional component may carry the result parameters obtained after recognition, including the obstacle distance and the obstacle position. The intelligent robot is thereby controlled to simulate the human sense of touch and to perform tactile teaching interaction for the target course.
The visual graphical component may include execution contents such as recognizing a human face, recognizing a person, recognizing an animal, and recognizing an object. It may carry the visual result parameters obtained after visual recognition, including the recognized images and the distance, number, and names of the recognized objects. The intelligent robot is thereby controlled to simulate human vision and to perform visual teaching interaction for the target course.
Determining a second type of graphical component associated with the target course, wherein the second type of graphical component is used for controlling the intelligent robot to simulate human expression of intent. The second type may include an action graphical component, a language graphical component, an expression graphical component, and the like, which are respectively used for controlling the robot to simulate human actions, language, and facial expressions.
Specifically, the action graphical component may include: raising the left hand, raising the right hand, lowering the left hand, lowering the right hand, moving forward, moving backward, turning left, turning right, moving to a given point, offering, shaking the head, nodding, hugging, turning the waist left, turning the waist right, stopping motion, resetting the hands, resetting the waist, resetting the head, and the like. The action graphical component can specify motion parameters, including angles and distances, through which the action is realized. The intelligent robot is thereby controlled to simulate human actions and to perform action-based teaching interaction for the target course.
The expression graphical component may include execution contents for expressions such as laughing, waiting, embarrassed, surprised, crying, infatuated, angry, sleeping, speaking, thinking, naughty, smiling, aggrieved, liking, serious, puzzled, frightened, and the like. The intelligent robot is thereby controlled to simulate human facial expressions and to perform expression-based teaching interaction for the target course.
The language graphical component may include execution contents such as playing text and playing audio. These graphical functional components can specify different text and audio data. The intelligent robot is thereby controlled to simulate human language and to perform language-based teaching interaction for the target course.
Determining a third type of graphical component associated with the target course, wherein the third type of graphical component is used for controlling the intelligent robot to adjust the working state of associated devices. The third type may include control instructions for devices involved in teaching the target course, such as light control, air-conditioner control, projector control, and computer control.
Light control includes execution contents such as on, off, and dimming; air-conditioner control includes execution contents such as on, off, mode, increasing/decreasing the fan speed, and increasing/decreasing the temperature; projector control includes execution contents such as on and off; computer control likewise includes execution contents such as on and off. The intelligent robot is thereby controlled to manage the working state of the associated devices and to perform teaching interaction for the target course.
And determining the first type of graphical components, the second type of graphical components and the third type of graphical components as a plurality of graphical components.
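The three component types above can be sketched as a small data model. The following Python sketch is illustrative only; the names (`ComponentType`, `GraphicalComponent`, `components_for_course`) and the example contents are assumptions, not identifiers defined by the patent.

```python
from dataclasses import dataclass, field
from enum import Enum

class ComponentType(Enum):
    SENSORY = 1      # first type: simulates human senses (hearing, touch, vision)
    EXPRESSIVE = 2   # second type: simulates human expression (action, language, facial expression)
    DEVICE = 3       # third type: adjusts the working state of associated devices

@dataclass
class GraphicalComponent:
    name: str
    ctype: ComponentType
    # execution contents this component supports, e.g. "sound heard", "raise left hand"
    contents: list = field(default_factory=list)

def components_for_course(course: str) -> list:
    """Return the graphical components associated with a target course (illustrative)."""
    return [
        GraphicalComponent("auditory", ComponentType.SENSORY, ["sound heard", "no sound heard"]),
        GraphicalComponent("action", ComponentType.EXPRESSIVE, ["raise left hand", "nod"]),
        GraphicalComponent("light", ComponentType.DEVICE, ["on", "off", "dim"]),
    ]
```

Under this sketch, "determining the plurality of graphical components" amounts to collecting one or more components of each of the three types for the given course.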
Step S104, setting the execution logic and the execution content of each graphical component in a plurality of graphical components;
the graphical component comprises a plurality of execution contents, for example, the auditory component comprises execution contents such as hearing sound and not hearing sound, and each execution content corresponds to execution logic for realizing the execution content by the intelligent robot.
Optionally, setting the execution logic of each graphical component in the plurality of graphical components includes: acquiring the starting time at which teaching of the target course begins; setting association relations and an execution order among the graphical components based on the starting time, wherein the association relations are used for determining the graphical components to be executed synchronously at the same time, and the execution order is used for determining the graphical components to be executed sequentially at different times; and setting the execution logic according to the association relations and the execution order.
An association relation between graphical components may be a relation between different graphical components within the teaching process, for example between the language graphical component and the action graphical component. During teaching, the language graphical component may first make a request in speech, and the action graphical component may then perform a "quiet" gesture: for example, placing an index finger at the position of the intelligent robot's mouth, or raising both arms and pressing down slightly, to signal silence.
The execution order of graphical components may be the order in which associated components are executed. In the example above, the language graphical component is executed first and the action graphical component second, so that the meaning is clearly expressed; in the opposite order, the meaning of the action might be unclear. In this case, therefore, the execution order is: the language graphical component first, then the action graphical component.
The execution logic is set according to the association relations and the execution order, and the intelligent robot is controlled through this execution logic, so that graphical components with association relations can be executed in coordination. This enriches the teaching interaction modes of the target course, improves the expression accuracy of the intelligent robot, and further optimizes the experience.
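The association relation (components running synchronously) and execution order (groups running in sequence) described above can be sketched as building a timed plan. This is a hedged sketch: the function name, the plan structure, and the fixed one-second step duration are all assumptions for illustration.

```python
def build_execution_plan(start_time: float, ordered_groups: list) -> list:
    """Build an execution plan from an execution order over groups of
    associated graphical components. Components inside one group share an
    association relation and run synchronously at the same time; successive
    groups run sequentially. The one-second step duration is an assumption."""
    plan = []
    t = start_time
    for group in ordered_groups:
        plan.append({"time": t, "components": sorted(group)})
        t += 1.0  # assumed fixed duration per step
    return plan
```

For the language-then-gesture example above, `build_execution_plan(0.0, [["language"], ["action"]])` schedules the language component first and the action component afterwards, preserving the order that makes the meaning clear.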
Optionally, setting the execution content of each graphical component in the plurality of graphical components includes: acquiring the teaching progress of the target course and the functional attribute of each graphical component; and setting the execution content according to the teaching progress and the functional attribute.
The teaching progress may be the progress of the teaching process of the target course, and the functional attribute of a graphical component may describe the function of each of its execution contents. For example, the language graphical component may execute the text content "Next we will learn the content of the first lesson. Would a child please tell the teacher how to say apple in English?", whose function is to start the content of the first lesson and teach the English word for apple.
Setting the execution content according to the teaching progress and the functional attribute may mean that, at a given node or time of the teaching progress, the execution content of a graphical component is executed when the function required at that node matches the function of that execution content. For example, after the language graphical component has played the text content, the students' answer must be heard; this required function matches the "sound heard" execution content of the auditory graphical component, so the auditory graphical component executes the listening content and acquires the recognized text information of the sound, according to which the subsequent teaching process can continue. Executing the execution contents of the graphical components according to the specific teaching progress makes the teaching process of the target course more reasonable and anthropomorphic, and further improves the intelligent robot's interaction accuracy and the teaching experience.
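The matching of a progress node's required function against the functional attributes of each component's execution contents could look like the following sketch. The attribute strings and the mapping structure are invented for illustration and are not defined in the patent.

```python
def select_execution_contents(required_function: str, components: dict) -> list:
    """Return (component, execution content) pairs whose functional
    attribute matches the function required at the current node of the
    teaching progress. `components` maps a component name to a dict of
    {execution content: functional attribute}."""
    matches = []
    for name, contents in components.items():
        for content, attribute in contents.items():
            if attribute == required_function:
                matches.append((name, content))
    return matches
```

In the example above, after the language component plays its text, the node requires the function "listen", which selects the auditory component's "sound heard" execution content.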
Step S106, controlling the intelligent robot, based on the execution logic and the execution content, to teach the course according to the lesson preparation information of the target course.
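Putting the pieces together, step S106 can be sketched as a loop that walks an execution plan and dispatches each component's execution content drawn from the lesson preparation information. `robot.execute` is a hypothetical interface, and the plan/lesson-preparation shapes are assumptions carried over from the sketches in this description.

```python
def teach_course(robot, plan: list, lesson_prep: dict) -> None:
    """Drive course teaching based on execution logic (the plan) and
    execution content (taken from the lesson preparation information).
    Each plan step lists the components to run synchronously at that time."""
    for step in plan:
        for component in step["components"]:
            content = lesson_prep.get(component)
            if content is not None:
                robot.execute(component, content)  # hypothetical robot interface
```

A real controller would also consume sensory results (e.g. the recognized text from "sound heard") to branch the teaching process, which this sketch omits.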
Through the above steps, a plurality of graphical components associated with the target course are determined within the intelligent robot, wherein the graphical components are used for guiding the teaching process of the target course; execution logic and execution content are set for each graphical component; and, based on the execution logic and the execution content, the intelligent robot is controlled to teach the course according to the lesson preparation information of the target course. By controlling the intelligent robot through graphical components, multiple interaction modes are realized and the teaching process of the target course is guided accordingly. Realizing the teaching process through the combination of graphical components achieves the purpose of providing the intelligent robot with multiple interaction modes, thereby enriching the interaction modes of the robot's teaching process, achieving the technical effect of an optimized experience, and solving the problem that the control scheme of the intelligent robot system can only realize image-based interaction, resulting in a single interaction mode and a poor experience.
Optionally, the method further comprises: acquiring the lesson preparation information from a preset storage area via wireless network communication; acquiring a preset teaching duration for the target course; and calculating the completion degree of the lesson preparation information within the teaching duration, and performing optimization analysis on the uncompleted part of the lesson preparation information.
Using the preset teaching duration and the completion degree of the lesson preparation information within that duration, the uncompleted part of the lesson preparation information is optimized, ensuring that the lesson preparation information of the target course is finished within the preset teaching duration. For example, if the teaching duration is 40 minutes, 35 minutes have already elapsed, the completion degree of the lesson preparation information is 70%, and 6 words remain unlearned, then interaction with the students can be closed and the teaching simplified, ensuring that the lesson preparation content is completed within the class period and the teaching schedule is kept.
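The pacing decision above can be sketched by comparing the completion degree with the fraction of class time already used. The function name and the simple threshold rule are assumptions for illustration; the patent does not specify the exact optimization criterion.

```python
def plan_remaining(total_minutes, elapsed_minutes, completion_ratio):
    """Return "keep_pace" when content progress matches the clock, else "simplify"."""
    time_ratio = elapsed_minutes / total_minutes
    return "keep_pace" if completion_ratio >= time_ratio else "simplify"

# The example from the text: a 40-minute class, 35 minutes elapsed, 70% complete.
decision = plan_remaining(40, 35, 0.70)  # 0.70 < 0.875, so teaching is simplified
```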
The aforementioned graphical components may further include display graphical components, which may include: displaying pictures, displaying videos, displaying animations, displaying PPT, displaying documents, displaying HTML, PPT paging, pausing video, playing video, closing the display, and the like. The intelligent robot is thereby controlled to display the related multimedia and carry out the teaching interaction of the target course, further improving the teaching experience.
Example 2
This embodiment provides a lesson preparation system for intelligent robot teaching, which assists in creating a standardized, reproducible lecture mode that integrates teaching speech, classroom interaction, and knowledge explanation.
In this embodiment, the teaching materials, spoken scripts, and classroom interactions used by a human teacher during a lecture are programmed and standardized through graphical arrangement, forming a set of unified, reproducible teaching scripts. The intelligent teaching robot can then execute teaching tasks according to these scripts, enabling an excellent teaching method to be popularized uniformly.
The lesson preparation system for intelligent robot teaching of this embodiment includes a robot expression management module, a robot action management module, a robot hearing management module, a robot speech management module, a robot display management module, a robot vision management module, a robot touch management module, a robot instruction management module, a teaching material management module, a time control module, a lesson preparation import and export module, and a lesson preparation storage module.
The robot expression management module is used to control the robot to display different facial expressions, including but not limited to happy, sad, crying, cute, bored, and dozing.
The robot action management module is used to control the robot's limb movements, including but not limited to raising a hand, shaking the head, nodding, moving forward, moving backward, turning in a circle, and corresponding combined movements.
The robot hearing management module is used to receive voice input from the external environment and can execute the content of different functional modules according to the input content.
The robot speech management module is used to control the robot to express input text and audio as speech.
The robot display management module is used to control the robot to display teaching content, including but not limited to text, pictures, videos, animations, and PPT, and to control the life cycle of the displayed content, including forward, backward, start, and end.
The robot vision management module is used to receive information about persons, objects, and the environment produced by the robot's video analysis, and to execute the content of different functional modules according to the input information.
The robot touch management module is used to receive the touch, force, and distance information produced by the robot's analysis of its touch sensors, and to execute the content of different functional modules according to the input information.
The robot instruction management module is used to control the robot to execute specified instructions and to link with other devices in the environment, including but not limited to controlling lights, air conditioners, computers, and projector equipment.
The teaching material management module contains the audio and video files, PPT, picture, and animation courseware resources used in lectures.
The time control module is used to manage the lecture schedule, accounting for the time required to deliver the whole course and giving optimization suggestions for courses that exceed the preset duration.
The functions of the robot expression, action, hearing, speech, display, vision, touch, and instruction management modules and of the teaching material management module are provided to the user in the form of graphical functional components. The graphical functional components are classified under five different execution logics: immediate execution, delayed execution, conditional execution, looped execution, and event-triggered execution.
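A minimal sketch of the five execution logics named above follows. The `run()` signature and the event-bus representation are assumptions for illustration only; the patent does not define an API.

```python
import time

def run(component, logic, *, delay=0.0, condition=None, times=1,
        event_bus=None, event=None):
    """Execute a component callable under one of the five logic types."""
    if logic == "immediate":
        return component()
    if logic == "delayed":
        time.sleep(delay)                 # e.g. show the PPT 1 second later
        return component()
    if logic == "conditional":
        return component() if condition() else None
    if logic == "loop":
        return [component() for _ in range(times)]
    if logic == "event":
        # register the component to fire when the named event occurs later
        event_bus.setdefault(event, []).append(component)
        return None
    raise ValueError(f"unknown logic: {logic}")
```

For example, `run(show_ppt, "delayed", delay=1.0)` would model the display-PPT component executed with a 1-second delay in the teaching flow below.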
The graphical functional components of the robot expression management module include: laughing, waiting, embarrassed, surprised, crying, sad, angry, sleeping, speaking, thinking, naughty, smiling, aggrieved, liking, serious, puzzled, and frightened.
The graphical functional components of the robot action management module include: raising the left hand, raising the right hand, lowering the left hand, lowering the right hand, moving forward, moving backward, turning left, turning right, moving to a given point, offering, shaking the head, nodding, hugging, turning the waist left, turning the waist right, stopping motion, resetting the hands, resetting the waist, and resetting the head. These functional components are characterized in that motion parameters, including angles and distances, can be specified.
The graphical functional components of the robot hearing management module include: sound heard and no sound heard. The sound-heard functional component is characterized in that the recognized speech text can be attached; the no-sound-heard functional component is characterized in that the duration for which no sound is present can be set.
The graphical functional components of the robot speech management module include: playing text and playing audio. These graphical functional components are characterized in that different text and audio data can be specified.
The graphical functional components of the robot display management module include: displaying pictures, displaying videos, displaying animations, displaying PPT, displaying documents, displaying HTML, PPT paging, pausing video, playing video, and closing the display.
The graphical functional components of the robot vision management module include: recognizing a face, recognizing a person, recognizing an animal, and recognizing an object. These graphical functional components are characterized in that recognition result parameters can be attached, including distance, number, and name.
The graphical functional components of the robot touch management module include: touch detected and obstacle detected. The touch-detected graphical functional component is characterized in that recognized result parameters can be attached, including touch location and touch force; the obstacle-detected graphical functional component is characterized in that recognition result parameters can be attached, including obstacle distance and obstacle position.
The graphical functional components of the robot instruction management module include: light control, air-conditioner control, projector control, and computer control. The light-control graphical functional component includes on/off and dimming control; the air-conditioner-control graphical functional component includes on/off, wind-speed increase/decrease, and temperature increase/decrease control; the projector-control graphical functional component includes on/off control; and the computer-control graphical functional component includes on/off control.
The graphical functional components of each functional module support dynamic expansion.
The lesson preparation import and export module exports a prepared lesson as JSON or a JavaScript script, and imports JSON or a JavaScript script into the system to form visual lesson preparation data.
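A hedged sketch of the JSON export path is shown below. The field names (`id`, `module`, `action`, `params`, `logic`, `delay_s`) are illustrative assumptions, since the text only specifies "JSON format or a JavaScript script"; the round trip shows that the exported data can be re-imported losslessly into the visual editor.

```python
import json

lesson = [
    {"id": "M1", "module": "speech", "action": "play_text",
     "params": {"text": "Class is about to start."}, "logic": "immediate"},
    {"id": "M7", "module": "display", "action": "show_ppt",
     "params": {"file": "lesson1.ppt"}, "logic": "delayed", "delay_s": 1},
]

exported = json.dumps(lesson, ensure_ascii=False, indent=2)  # export for the robot
restored = json.loads(exported)  # re-import into the visual editor
```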
The lesson preparation storage module stores the lesson data using a structured or unstructured database.
The lesson preparation storage module is connected to the robot via wireless network communication; the robot acquires the lesson preparation information from the storage module and runs the lesson preparation script to teach.
Fig. 2 is a flowchart of an intelligent robot teaching process according to an embodiment of the present invention, and as shown in fig. 2, the system specifically performs the following steps:
1. The time control module sets the course start point.
2. The robot speech management module uses a play-text graphical component (M1), which is placed at the course start point; the text content is set to "Please finish your break; class is about to start", and the logic function is set to immediate execution.
3. The robot expression management module uses a speaking graphical component (M2), which is placed at the course start point, with the logic function set to immediate execution.
4. The robot instruction management module uses the projector-control graphical component (M3), which is placed at the course start point; the control is set to on, and the logic function is set to immediate execution.
5. The robot action management module uses a hand-raising graphical component (M4), which is placed at the course start point; the raise angle is set to 90 degrees, and the logic function is set to immediate execution.
6. The robot action management module uses a hand-raising graphical component (M5), placed after the play-text graphical component (M1); the raise angle is set to 0 degrees, and the logic function is set to immediate execution. That is, the hand-raising graphical component executes as soon as the play-text graphical component finishes executing.
7. The robot expression management module uses a smile graphical component (M6), which is placed after the play-text graphical component (M1), with the logic function set to immediate execution. That is, the smile graphical component executes as soon as the play-text graphical component finishes executing.
8. The robot display management module uses the display-PPT graphical component (M7) and selects a PPT from the teaching material management module; the component is placed after the play-text graphical component (M1), with the logic function set to execute with a 1-second delay. That is, the PPT is displayed 1 second after the play-text graphical component finishes executing.
9. The robot speech management module uses a play-text graphical component (M8), placed after the display-PPT graphical component (M7); the text content is set to "Next we will learn the first lesson. Please tell the teacher how to say 'apple' in English", and the logic function is set to immediate execution.
10. The robot hearing management module uses a sound-heard graphical component (M9), which is placed after the play-text graphical component (M8), with the logic function set to conditional execution. When the sound-heard graphical component (M9) recognizes the result as "apple", a play-text graphical component (M10) of the robot speech management module sets the text content to "Correct, well answered", with the logic function set to immediate execution; when the sound-heard graphical component (M9) recognizes a result other than "apple", a play-text graphical component (M11) of the robot speech management module sets the text content to "The English for apple is 'apple'", with the logic function set to immediate execution.
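The conditional branch in step 10 amounts to selecting a speech output from the recognition result. The function name and the exact reply strings below are stand-ins for the M9/M10/M11 component behavior described above.

```python
def answer_feedback(recognized_text):
    """Select the speech component output based on the recognized answer."""
    if recognized_text.strip().lower() == "apple":
        return "Correct, well answered"        # the M10 path
    return "The English for apple is 'apple'"  # the M11 path
```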
11. The robot touch management module uses the obstacle-detected graphical component (M12), placed at the course start point, with the logic function set to looped execution; when an obstacle is detected at a distance of less than 20 cm, the stop-motion graphical component (M13) of the robot action management module is used.
12. The lesson preparation storage module stores the edited lesson preparation data and provides the robot with lesson preparation data access services.
13. The lesson preparation import and export module exports the lesson preparation data for offline use by the robot.
Example 3
Fig. 3 is a schematic diagram of a control apparatus of an intelligent robot according to Embodiment 3 of the present invention. As shown in Fig. 3, according to another aspect of the embodiments of the present invention, there is also provided a control apparatus of an intelligent robot, comprising: a determining module 32, a setting module 34, and a control module 36, which are described in detail below.
A determining module 32, configured to determine a plurality of graphical components associated with the target course in the intelligent robot, wherein the plurality of graphical components are used for guiding the teaching process of the target course; the setting module 34, connected to the determining module 32, is used for setting the execution logic and the execution content of each graphical component in the plurality of graphical components; and the control module 36 is connected with the setting module 34 and is used for controlling the intelligent robot to carry out course teaching according to the course preparation information of the target course based on the execution logic and the execution content.
With this apparatus, the determining module 32 determines a plurality of graphical components associated with the target course in the intelligent robot, where the graphical components are used to guide the teaching process of the target course; the setting module 34 sets the execution logic and execution content of each of the plurality of graphical components; and the control module 36 controls, based on the execution logic and execution content, the intelligent robot to teach the course according to the lesson preparation information of the target course. Controlling the intelligent robot through graphical components enables multiple interaction modes and thereby guides the teaching process of the target course. Combining graphical components in the teaching process achieves the purpose of multiple interaction modes for the intelligent robot, enriching the interaction modes of the robot's teaching process and optimizing the experience, and solving the technical problem that existing control modes for intelligent robot systems support only image-based interaction, resulting in a single interaction mode and a poor experience.
Optionally, the determining module is configured to determine a first type of graphical component associated with the target course in the intelligent robot, where the first type of graphical component is used to control the intelligent robot to simulate human body five-sense recognition; determining a second type of graphical components associated with the target course in the intelligent robot, wherein the second type of graphical components are used for controlling the intelligent robot to simulate the meaning representation of the human body; determining a third type of graphical components associated with the target course in the intelligent robot, wherein the third type of graphical components are used for controlling the intelligent robot to adjust the working state of the associated equipment; and determining the first type of graphical components, the second type of graphical components and the third type of graphical components as a plurality of graphical components.
Optionally, the setting module is configured to obtain the starting time at which teaching of the target course begins; set association relations and an execution sequence among the graphical components based on the starting time, where the association relations are used to determine the graphical components to be executed synchronously at the same time, and the execution sequence is used to determine the graphical components executed in order at different times; and set the execution logic according to the association relations and the execution sequence.
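The association relations and execution sequence can be sketched as grouping by offset from the course start time: components with the same offset form an association (run together), and distinct offsets define the sequence. All names below are hypothetical illustrations.

```python
from collections import defaultdict

def build_schedule(components):
    """components: list of (name, offset_seconds); returns time-ordered groups."""
    groups = defaultdict(list)
    for name, offset in components:
        groups[offset].append(name)             # same moment -> synchronous group
    return [groups[t] for t in sorted(groups)]  # sorted moments -> sequence

schedule = build_schedule([("M1", 0), ("M2", 0), ("M12", 0), ("M7", 5), ("M8", 5)])
# schedule == [["M1", "M2", "M12"], ["M7", "M8"]]
```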
Optionally, the setting module is configured to obtain a teaching progress of the target course and a functional attribute of each graphical component in the plurality of graphical components; and setting execution content according to the teaching progress and the functional attributes.
Optionally, the apparatus further comprises: the processing module is used for acquiring lesson preparation information from a preset storage area in a wireless network communication mode; acquiring preset teaching duration of a target course; and calculating the completion degree of the lesson preparation information in the lesson giving time, and carrying out optimization analysis on the incomplete parts in the lesson preparation information.
Example 4
According to another aspect of the embodiments of the present invention, there is also provided a non-volatile storage medium having a computer program stored therein, wherein the computer program is configured to perform the following steps when executed:
determining a plurality of graphical components associated with a target course within an intelligent robot, wherein the plurality of graphical components are used for guiding a teaching process of the target course; setting execution logic and execution content of each graphical component in the plurality of graphical components; and controlling the intelligent robot to carry out course teaching according to the course preparation information of the target course based on the execution logic and the execution content.
Example 5
According to another aspect of the embodiments of the present invention, there is also provided a processor for executing a program, wherein the program is configured to perform the following steps when executed:
determining a plurality of graphical components associated with a target course within an intelligent robot, wherein the plurality of graphical components are used for guiding a teaching process of the target course; setting execution logic and execution content of each graphical component in the plurality of graphical components; and controlling the intelligent robot to carry out course teaching according to the course preparation information of the target course based on the execution logic and the execution content.
Example 6
According to another aspect of the embodiments of the present invention, there is also provided an electronic apparatus, including a memory and a processor, the memory storing a computer program therein, the processor being configured to execute the computer program to perform the following steps:
determining a plurality of graphical components associated with a target course within an intelligent robot, wherein the plurality of graphical components are used for guiding a teaching process of the target course; setting execution logic and execution content of each graphical component in the plurality of graphical components; and controlling the intelligent robot to carry out course teaching according to the course preparation information of the target course based on the execution logic and the execution content.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and decorations can be made without departing from the principle of the present invention, and these modifications and decorations should also be regarded as the protection scope of the present invention.

Claims (13)

1. A control method of an intelligent robot is characterized by comprising the following steps:
determining a plurality of graphical components associated with a target course within an intelligent robot, wherein the plurality of graphical components are used for guiding a teaching process of the target course;
setting execution logic and execution content of each graphical component in the plurality of graphical components;
and controlling the intelligent robot to carry out course teaching according to the course preparation information of the target course based on the execution logic and the execution content.
2. The method as recited in claim 1, wherein determining within the intelligent robot the plurality of graphical components associated with the target course comprises:
determining a first type of graphical component associated with the target course in the intelligent robot, wherein the first type of graphical component is used for controlling the intelligent robot to simulate the five-sense recognition of a human body;
determining a second type of graphical components associated with the target lesson in the intelligent robot, wherein the second type of graphical components are used for controlling the intelligent robot to simulate the meaning representation of the human body;
determining a third type of graphical components associated with the target course in the intelligent robot, wherein the third type of graphical components are used for controlling the intelligent robot to adjust the working state of associated equipment;
determining the first type of graphical component, the second type of graphical component, and the third type of graphical component as the plurality of graphical components.
3. The method of claim 1, wherein setting execution logic for each of the plurality of graphical components comprises:
acquiring the starting moment of the target course for starting teaching;
setting an association relation and an execution sequence among the graphical components based on the starting time, wherein the association relation is used for determining the graphical components to be synchronously executed at the same time, and the execution sequence is used for determining the graphical components which are sequentially executed at different times;
and setting the execution logic according to the association relation and the execution sequence.
4. The method of claim 1 or 3, wherein setting the execution content of each of the plurality of graphical components comprises:
acquiring the teaching progress of the target course and the functional attribute of each graphical component in the plurality of graphical components;
and setting the execution content according to the teaching progress and the functional attribute.
5. The method of claim 1, further comprising:
acquiring the lesson preparation information from a preset storage area in a wireless network communication mode;
acquiring preset teaching time of the target course;
and calculating the completion degree of the lesson preparation information in the lesson giving duration, and carrying out optimization analysis on the incomplete part in the lesson preparation information.
6. A control device for an intelligent robot, comprising:
a determining module, configured to determine a plurality of graphical components associated with a target course within an intelligent robot, wherein the plurality of graphical components are used for guiding a teaching process of the target course;
the setting module is used for setting the execution logic and the execution content of each graphical component in the plurality of graphical components;
and the control module is used for controlling the intelligent robot to carry out course teaching according to the course preparation information of the target course based on the execution logic and the execution content.
7. The apparatus of claim 6, wherein the determining module is configured to determine a first type of graphical component associated with the target lesson within the intelligent robot, wherein the first type of graphical component is configured to control the intelligent robot to simulate human five-sense recognition; determining a second type of graphical components associated with the target lesson in the intelligent robot, wherein the second type of graphical components are used for controlling the intelligent robot to simulate the meaning representation of the human body; determining a third type of graphical components associated with the target course in the intelligent robot, wherein the third type of graphical components are used for controlling the intelligent robot to adjust the working state of associated equipment; determining the first type of graphical component, the second type of graphical component, and the third type of graphical component as the plurality of graphical components.
8. The apparatus as claimed in claim 6, wherein the setting module is configured to obtain a starting time for starting teaching of the target course; set an association relation and an execution sequence among the graphical components based on the starting time, wherein the association relation is used for determining the graphical components to be synchronously executed at the same time, and the execution sequence is used for determining the graphical components which are sequentially executed at different times; and set the execution logic according to the association relation and the execution sequence.
9. The apparatus according to claim 6 or 8, wherein the setting module is configured to obtain a teaching progress of the target course and a functional attribute of each of the plurality of graphical components; and setting the execution content according to the teaching progress and the functional attribute.
10. The apparatus of claim 6, further comprising:
the processing module is used for acquiring the lesson preparation information from a preset storage area in a wireless network communication mode; acquiring preset teaching time of the target course; and calculating the completion degree of the lesson preparation information in the lesson giving duration, and carrying out optimization analysis on the incomplete part in the lesson preparation information.
11. A non-volatile storage medium, characterized in that a computer program is stored in the storage medium, wherein the computer program is arranged to execute the control method of a robot as claimed in any one of the claims 1 to 5 when running.
12. A processor for running a program, wherein the program is arranged to perform the control method of the robot of any of claims 1 to 5 when run.
13. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to run the computer program to perform the method of controlling a robot as claimed in any one of the claims 1 to 5.
CN202010677265.0A 2020-07-14 2020-07-14 Control method and device of intelligent robot Pending CN111984161A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010677265.0A CN111984161A (en) 2020-07-14 2020-07-14 Control method and device of intelligent robot

Publications (1)

Publication Number Publication Date
CN111984161A true CN111984161A (en) 2020-11-24

Family

ID=73437843

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010677265.0A Pending CN111984161A (en) 2020-07-14 2020-07-14 Control method and device of intelligent robot

Country Status (1)

Country Link
CN (1) CN111984161A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117151157A (en) * 2022-12-23 2023-12-01 深圳市木愚科技有限公司 Teaching method and device based on AI robot teaching platform, computer equipment and storage medium
CN117251152A (en) * 2022-12-12 2023-12-19 北京小米机器人技术有限公司 Robot graphical programming method and device, mobile terminal and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107705643A (en) * 2017-11-16 2018-02-16 四川文理学院 Teaching method and its device are presided over by a kind of robot
CN108564828A (en) * 2018-02-09 2018-09-21 北京鼎致凌云科技有限公司 A kind of intelligent tutoring robot
CN109189535A (en) * 2018-08-30 2019-01-11 北京葡萄智学科技有限公司 Teaching method and device
CN109584648A (en) * 2018-11-08 2019-04-05 北京葡萄智学科技有限公司 Data creation method and device
KR20200079054A * 2018-12-24 2020-07-02 엘지전자 주식회사 Robot and method for controlling the same

Similar Documents

Publication Publication Date Title
US10957325B2 (en) Method and apparatus for speech interaction with children
TWI713000B (en) Online learning assistance method, system, equipment and computer readable recording medium
JP2607561B2 (en) Synchronized speech animation
CN108182830B (en) Robot, robot control device, method, system, and storage medium
CN108942919B (en) Interaction method and system based on virtual human
WO2008067413A2 (en) Training system using an interactive prompt character
CN105632251A (en) 3D virtual teacher system having voice function and method thereof
Gena et al. Design and development of a social, educational and affective robot
CN106057023A (en) Intelligent robot oriented teaching method and device for children
CN204650422U An intelligent mobile toy interactively controlled through language
Gnjatović et al. Inducing genuine emotions in simulated speech-based human-machine interaction: The nimitek corpus
CN111984161A (en) Control method and device of intelligent robot
CN116543082B (en) Digital person generation method and device and digital person generation system
CN110046290B (en) Personalized autonomous teaching course system
Jing et al. Optimization of computer-aided English teaching system realized by VB software
TW202008326A Dynamic-scenario-oriented digital language teaching method and system applied to a teaching-content supply end, an editing end, and a learning end
Griol et al. Developing multimodal conversational agents for an enhanced e-learning experience
Kose et al. iSign: an architecture for humanoid assisted sign language tutoring
Pan et al. Application of virtual reality in English teaching
Cui et al. Animation stimuli system for research on instructor gestures in education
WO2017028272A1 (en) Early education system
WO2019190817A1 (en) Method and apparatus for speech interaction with children
CN101840640B (en) Interactive voice response system and method
Tuo et al. Construction and Application of a Human‐Computer Collaborative Multimodal Practice Teaching Model for Preschool Education
CN113963306A (en) Courseware title making method and device based on artificial intelligence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2020-11-24