CN113534958B - Man-machine interaction control system and method based on multi-round dialogue - Google Patents

Man-machine interaction control system and method based on multi-round dialogue

Info

Publication number
CN113534958B
CN113534958B (application CN202110840461.XA)
Authority
CN
China
Prior art keywords
unit
maintenance
data
information
instruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110840461.XA
Other languages
Chinese (zh)
Other versions
CN113534958A (en)
Inventor
林志贤
田启东
郑炜楠
何蓝图
李腾飞
于兆一
林欣慰
李志�
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Power Supply Co ltd
Original Assignee
Shenzhen Power Supply Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Power Supply Co ltd
Priority to CN202110840461.XA
Publication of CN113534958A
Application granted
Publication of CN113534958B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Human Computer Interaction (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a man-machine interaction control system based on multi-round dialogue, which comprises a control component and a maintenance unit integrated in a man-machine interaction device, the control component being electrically connected with the maintenance unit. The control component comprises a sensing unit, an identity recognition unit, a data acquisition unit, a data analysis unit, a domain knowledge base, a control center unit and an execution unit, the other units being electrically connected with the control center unit. The invention also discloses a corresponding method. By implementing the invention, the sensing unit can sense and judge objects within its range, improving the degree of intelligence of the equipment; the identity recognition system can verify the user's identity; and the acquisition unit can receive instructions. The operation difficulty of the man-machine interaction equipment is thereby reduced, its self-maintenance capability is improved, and the equipment is suitable for promotion and use among people of all ages.

Description

Man-machine interaction control system and method based on multi-round dialogue
Technical Field
The invention relates to the field of man-machine interaction, in particular to a man-machine interaction control system and method based on multi-round dialogue.
Background
Human-computer interaction systems have developed alongside the computer itself. In modern and future society, as long as people use communication, computing and other information processing technologies in social, economic, environmental and resource-related activities, human-computer interaction will remain a perennial theme. Given its importance to technological development, research into how to achieve natural, convenient and ubiquitous human-computer interaction is a high-level goal of modern information technology and artificial intelligence research; it is a new meeting point of mathematics, information science, intelligent science, neuroscience, physiology and psychology, and it guides the direction of future information and computer research.
Man-machine interaction technology is the technology for realizing dialogue between a person and a computer in an effective way through the computer's input and output devices: the machine provides large amounts of relevant information, prompts and requests to the person through output or display devices, while the person inputs relevant information, answers questions and issues prompts and requests to the machine through input devices. Man-machine interaction technology is one of the important aspects of computer user-interface design.
Compared with a conventional user interface, the most important change brought by introducing video and audio into a multimedia user interface is that the interface is no longer static but becomes a time-varying media interface. Human speech and other time-varying media are used in a way completely different from other media: time-varying media are mainly presented sequentially, whereas the visual media we are usually familiar with are presented simultaneously. In terms of the information presented to the user, the user either selects from a series of options or interacts in a recognizable way, and because of limitations of media bandwidth and human attention, all options and files must be presented sequentially; the user must therefore control not only the content of the presented information but also when and how it is presented. However, the answer that a terminal feeds back to the user is usually only an objective answer, and the user needs not only to control the interactive device but also to maintain it. Human-computer interaction in the prior art cannot meet these requirements, which affects the user experience.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a man-machine interaction control system and method based on multi-round dialogue, so as to solve the problem, noted in the background art, that existing man-machine interaction provides poor control and maintenance of equipment.
In order to solve the above technical problem, the invention adopts the following technical solution. As one aspect of the present invention, a man-machine interaction control system based on multi-round dialogue is provided, which includes a control component and a maintenance unit integrated in a man-machine interaction device, wherein the control component is electrically connected with the maintenance unit, and the control component includes a sensing unit, an identity recognition unit, a data acquisition unit, a data analysis unit, a domain knowledge base and an execution unit, which are electrically connected with a control center unit, wherein:
the sensing unit consists of an infrared sensor and is used for detecting objects in a sensing range;
the identity recognition unit is used for carrying out identity recognition when the sensing unit senses that a human body approaches, and integrates the functions of face recognition, fingerprint recognition, IC card recognition, voice instruction recognition and password recognition;
the data acquisition unit is used for acquiring data after the identity recognition unit passes the verification, wherein the data comprises image data and audio data;
the data analysis unit is used for classifying the information acquired by the data acquisition unit, dividing it into audio data information and image data information, and then sending the classified information to the control center unit;
the domain knowledge base stores knowledge base audio data and knowledge base image data;
the control center unit is used for comparing and analyzing the received image data with the image data in the domain knowledge base to obtain matched knowledge base image data, so as to judge the meaning expressed by the received image data; and for comparing and analyzing the received audio data with the audio data in the domain knowledge base to obtain matched knowledge base audio data, so as to judge the meaning expressed by the received audio data;
the execution unit is used for executing the operation corresponding to the meaning of the data obtained by the control center unit;
the maintenance unit comprises an information receiving unit, an information judging unit, a maintenance unit and a maintenance executing unit, wherein the information receiving unit, the information judging unit and the maintenance unit are electrically connected with the maintenance executing unit.
Preferably, the control center unit further comprises a meaning judging unit, which is used for judging whether the meaning obtained by the control center unit is reasonable; if not, no operation is executed, and if so, the execution unit controls the interactive device to execute the operation; the data obtained after execution is then analyzed to determine whether to end the interaction flow.
Preferably, the data analysis unit is an intermediate frequency analysis module, and the intermediate frequency analysis module comprises a phase-locked main circuit and a ZYNQ chip which are sequentially connected, wherein the phase-locked main circuit controls the frequency and the phase of an oscillation signal in a loop by using an externally input reference signal so as to realize automatic tracking of the frequency of an output signal to the frequency of an input signal; the ZYNQ chip processes the data information to achieve data analysis.
Preferably, the phase-locked main circuit is connected to the signal input circuit through input ends of a first frequency conversion channel and a second frequency conversion channel, and output ends of the first frequency conversion channel and the second frequency conversion channel are connected to the signal output circuit.
Preferably, in the maintenance unit:
the information receiving unit is used for receiving an instruction sent by a user through the man-machine interaction equipment after the identification is completed;
the information judging unit is used for reading and analyzing the instructions received by the information receiving unit and classifying them into action instructions and voice instructions;
the maintenance unit is used for comparing the action instructions and voice instructions analyzed by the information judging unit with the maintenance instructions stored in the information judging unit, and selecting the maintenance instruction with the highest overlap ratio;
the maintenance execution unit performs the corresponding maintenance operation according to the maintenance instruction determined by the maintenance unit, and ends the maintenance operation after the maintenance is completed and the information that the system self-check has passed is received.
Preferably, the maintenance unit further comprises:
the maintenance abnormality processing unit is used for judging a maintenance abnormality when the system self-check fails, giving an abnormality prompt and prompting the user to resend the instruction; and when the number of abnormalities reaches a preset threshold, prompting manual inspection of the equipment.
Preferably, the system further comprises an audio output unit, wherein when the interface of the man-machine interaction device and the audio output unit output files, the file output progress is monitored, and when the output progress reaches a preset selection node, the interactive files corresponding to the current selection node are output through the interface of the man-machine interaction device and the audio output unit.
As another aspect of the present invention, there is also provided a man-machine interaction control method based on a multi-round dialogue, which is implemented in the foregoing control system, the method including:
step S10, the sensing unit detects objects in the range of the sensor, and identity recognition is performed when sensing that a person approaches;
step S11, after the identity recognition unit passes verification, data acquisition is carried out, wherein the data comprise image data and audio data;
step S12, the data analysis module classifies the information acquired by the data acquisition module, divides it into audio data information and image data information, and then transmits the classified information to the control center unit;
step S13, the control center unit compares and analyzes the received image data with the image data prestored in the domain knowledge base to obtain matched knowledge base image data so as to judge the meaning expressed by the received image data; comparing and analyzing the received audio data with audio data prestored in a domain knowledge base to obtain matched knowledge base audio data so as to judge the meaning expressed by the received audio data;
step S14, executing operation corresponding to the meaning of the data obtained by the control center unit by the executing unit.
Preferably, the method further comprises:
judging whether the meaning obtained by the control center unit is reasonable; if not, no operation is executed, and if so, the execution unit controls the interactive device to execute the operation; and analyzing the data obtained after execution to determine whether to end the interaction flow.
Preferably, the method further comprises:
after the identification is completed, receiving an instruction sent by a user through man-machine interaction equipment;
reading and analyzing the instruction received by the information receiving unit, and classifying it into an action instruction or a voice instruction;
comparing the analyzed action instruction and voice instruction with the prestored maintenance instructions, and selecting the maintenance instruction with the highest overlap ratio;
performing the corresponding maintenance operation according to the determined maintenance instruction, and ending the maintenance operation after the maintenance is completed and the information that the system self-check has passed is received; judging a maintenance abnormality when the system self-check fails, giving an abnormality prompt, and prompting the user to resend the instruction; and when the number of abnormalities reaches a preset threshold, prompting manual inspection of the equipment.
The implementation of the invention has the following beneficial effects:
the invention provides a man-machine interaction control system and a man-machine interaction control method based on multi-round dialogue, which can sense and judge objects in a range through a sensing unit, so that the intelligent degree of equipment is improved, an identity recognition system can carry out identity verification on a user, an information receiving unit can receive image instructions or voice instructions of the user, an information analysis unit can match the received instructions with a domain knowledge base, accurate instruction meanings are obtained, corresponding functions are executed, the operation difficulty of the system is reduced, and the system is suitable for being pushed to people of all ages. Meanwhile, the information receiving unit in the maintenance unit can refresh the instruction more rapidly and timely and send the instruction to each component of the equipment, so that the instruction can be shared, the storage capacity of the file is improved through the domain knowledge base, and a user can find a proper file conveniently, thereby being beneficial to popularization and use; the convenience degree of man-machine interaction is improved, and the convenience degree of maintenance of machine equipment is improved.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions of the prior art, the drawings required for the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below show only some embodiments of the invention, and that other drawings may be obtained from these drawings by a person skilled in the art without inventive effort.
FIG. 1 is a schematic diagram illustrating the structure of an embodiment of a man-machine interaction control system based on multi-round dialogue according to the present invention;
FIG. 2 is a schematic diagram of the control assembly of FIG. 1;
FIG. 3 is a schematic diagram of the control center unit of FIG. 2;
FIG. 4 is a schematic view of the maintenance unit of FIG. 1;
fig. 5 is a schematic flow chart of an embodiment of a man-machine interaction control method based on multi-round dialogue.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, rather than all, of the embodiments of the present invention.
In the description of the present invention, it should be noted that the positional or positional relationship indicated by the terms such as "upper", "lower", "inner", "outer", "top/bottom", etc. are based on the positional or positional relationship shown in the drawings, are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the apparatus or elements referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless explicitly specified and limited otherwise, the terms "mounted," "configured to," "engaged with," "connected to," and the like are to be construed broadly, and may be either fixedly connected, detachably connected, or integrally connected, for example; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the present invention will be understood in specific cases by those of ordinary skill in the art.
Referring to fig. 1, a schematic structural diagram of an embodiment of a man-machine interaction control system based on multi-round dialogue according to the present invention is shown. In this embodiment, the man-machine interaction control system 1 based on multi-round dialogue includes a control assembly 10 and a maintenance unit 11 integrated in a man-machine interaction device, the control assembly 10 and the maintenance unit 11 being electrically connected, and the control assembly 10 includes a sensing unit 100, an identity recognition unit 101, a data acquisition unit 102, a data analysis unit 103, a domain knowledge base 105 and an execution unit 106 electrically connected to a control center unit 104, wherein:
the sensing unit 100, which is composed of an infrared sensor, is used for detecting an object in a sensing range;
the identity recognition unit 101 is configured to perform identity recognition on a human body when the sensing unit senses that the human body approaches, and integrates functions of face recognition, fingerprint recognition, IC card recognition, voice instruction recognition and password recognition;
the data acquisition unit 102 is configured to perform data acquisition after the authentication of the identity recognition unit is passed, where the data includes image data and audio data;
the data analysis unit 103 is configured to classify the information collected by the data acquisition unit 102, dividing it into audio data information and image data information, and then send the classified information to the control center unit;
the domain knowledge base 105 stores knowledge base audio data and knowledge base image data therein; more specifically, the domain knowledge base is an expandable storage disc, and workers can transmit data into the storage disc in real time, thereby increasing the storage capacity of the domain knowledge base;
the control center unit 104 is configured to compare and analyze the received image data with the image data in the domain knowledge base to obtain matched knowledge base image data, so as to determine the meaning expressed by the received image data; and to compare and analyze the received audio data with the audio data in the domain knowledge base to obtain matched knowledge base audio data, so as to determine the meaning expressed by the received audio data (an illustrative sketch of this matching step follows this unit list);
the execution unit 106 is configured to execute the operation corresponding to the meaning of the data obtained by the control center unit;
the maintenance unit 11 includes an information receiving unit 110, an information determining unit 111, a maintenance unit 112, a maintenance executing unit 113, and a maintenance abnormality processing unit 114, where the information receiving unit, the information determining unit, and the maintenance unit are all electrically connected to the maintenance executing unit.
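For illustration only, the following Python sketch shows one way the control center unit's knowledge-base matching could be organized. It is a minimal sketch under assumptions of the editor, not the patented implementation: feature extraction is represented by pre-computed feature vectors, cosine similarity is an assumed comparison metric, and all names (KnowledgeBase, judge_meaning, the 0.8 threshold) are hypothetical.

    # Minimal sketch of the knowledge-base matching step (assumed design, not
    # the patented implementation). Received audio/image data is reduced to a
    # feature vector and compared against stored knowledge-base entries; the
    # meaning of the best match above a threshold is returned.
    import math

    class KnowledgeBase:
        def __init__(self):
            # each entry: (meaning label, feature vector)
            self.entries = {"audio": [], "image": []}

        def add(self, kind, meaning, features):
            self.entries[kind].append((meaning, features))

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    def judge_meaning(kb, kind, features, threshold=0.8):
        """Return the meaning of the best-matching knowledge-base entry of the
        given kind ("audio" or "image"), or None if nothing matches well."""
        best_meaning, best_score = None, 0.0
        for meaning, ref in kb.entries[kind]:
            score = cosine(features, ref)
            if score > best_score:
                best_meaning, best_score = meaning, score
        return best_meaning if best_score >= threshold else None

In this sketch an unmatched input yields None, which corresponds to the case where no execution is triggered for the received data.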
In a specific example, the system further includes an audio output unit 12, when the interface of the man-machine interaction device and the audio output unit output the file, the file output progress is monitored, and when the output progress reaches a preset selection node, the interactive file corresponding to the current selection node is output through the interface of the man-machine interaction device and the audio output unit.
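Purely as an illustration of the progress monitoring just described, the sketch below assumes that output progress is reported as a fraction between 0 and 1 and that each preset selection node is bound to an interactive file; the function and parameter names are hypothetical.

    # Sketch of selection-node monitoring during file output (assumed
    # interfaces). When the output progress reaches a preset node, the
    # interactive file bound to that node is pushed to the interface and
    # the audio output unit.
    def monitor_output(progress_stream, selection_nodes, play_interactive):
        """progress_stream yields output progress values in [0.0, 1.0];
        selection_nodes maps a progress point to an interactive file path."""
        pending = sorted(selection_nodes.items())   # [(progress_point, file), ...]
        for progress in progress_stream:
            while pending and progress >= pending[0][0]:
                _, interactive_file = pending.pop(0)
                play_interactive(interactive_file)

    # Hypothetical usage: at 25% and 80% of playback, output an interactive file.
    # monitor_output(progress_feed(), {0.25: "choice_a.wav", 0.80: "choice_b.wav"},
    #                audio_unit.play)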
Still further, the control center unit 104 further includes a meaning judging unit 107, configured to judge whether the meaning obtained by the control center unit is reasonable; if not, no operation is executed, and if so, the execution unit controls the interactive device to execute the operation; the data obtained after execution is then analyzed to determine whether to end the interaction flow.
In a specific example, the data analysis unit 103 is an intermediate frequency analysis module, where the intermediate frequency analysis module includes a phase-locked main circuit and a ZYNQ chip that are sequentially connected, where the phase-locked main circuit uses an externally input reference signal to control the frequency and phase of an oscillation signal inside the loop, so as to implement automatic tracking of the output signal frequency to the input signal frequency; the ZYNQ chip processes the data information to achieve data analysis.
The phase-locked main circuit is connected to the signal input circuit through the input ends of the first frequency conversion channel and the second frequency conversion channel, and the output ends of the first frequency conversion channel and the second frequency conversion channel are connected to the signal output circuit.
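The phase-locked main circuit and ZYNQ chip are hardware blocks; purely to illustrate the frequency-tracking behaviour described above, the following software sketch simulates a toy digital phase-locked loop whose numerically controlled oscillator is steered toward the frequency of an input tone. The loop gains and all names are assumptions for illustration and do not describe the actual circuit.

    # Toy digital PLL illustrating frequency/phase tracking (a software
    # analogy only; the patent describes a hardware loop feeding a ZYNQ chip).
    import math

    def pll_track(samples, fs, f_center, kp=0.5, ki=0.02):
        """Steer a local oscillator so its phase follows the input signal;
        returns the per-sample frequency estimates in Hz."""
        phase = 0.0          # local oscillator phase, in cycles
        freq_offset = 0.0    # integrator state, in Hz
        estimates = []
        for x in samples:
            # phase detector: mix input with the quadrature local oscillator
            error = x * -math.sin(2 * math.pi * phase)
            # loop filter: proportional + integral correction
            freq_offset += ki * error
            freq = f_center + freq_offset + kp * error
            # numerically controlled oscillator
            phase = (phase + freq / fs) % 1.0
            estimates.append(freq)
        return estimates

    # Example: feed a 1,010 Hz tone into a loop centred at 1,000 Hz; the
    # frequency estimate is pulled toward the input tone over time.
    fs = 48_000
    tone = [math.cos(2 * math.pi * 1_010 * n / fs) for n in range(fs)]
    print(pll_track(tone, fs, 1_000)[-1])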
In a specific example, in the maintenance unit 11:
the information receiving unit 110 is configured to receive an instruction sent by a user through a man-machine interaction device after the identification is completed;
the information determining unit 111 is configured to read and analyze the instruction received by the information receiving unit, and classify it into an action instruction or a voice instruction;
the maintenance unit 112 is configured to compare the action instruction and the voice instruction analyzed by the information determining unit with the maintenance instructions stored in the information determining unit, and to select the maintenance instruction with the highest overlap ratio;
the maintenance execution unit 113 performs the corresponding maintenance operation according to the maintenance instruction determined by the maintenance unit, and ends the maintenance operation after the maintenance is completed and the information that the system self-check has passed is received;
the maintenance abnormality processing unit 114 is configured to determine a maintenance abnormality when the system self-check fails, give an abnormality prompt, and prompt the user to resend the instruction; when the number of abnormalities reaches a preset threshold, manual inspection of the equipment is prompted (see the illustrative sketch below).
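As an illustration of the maintenance flow just described, the sketch below models the "overlap ratio" as simple token overlap between the received instruction and each stored maintenance instruction, and models execution, self-check and prompting as injected callables; all of these are assumptions made for the sketch rather than details taken from the patent.

    # Sketch of the maintenance flow: select the stored maintenance instruction
    # with the highest overlap ratio, execute it, and handle self-check failure
    # as an abnormality counted against a preset threshold. Assumed design.
    def overlap_ratio(received, stored):
        a, b = set(received.split()), set(stored.split())
        return len(a & b) / len(a | b) if a | b else 0.0

    def select_maintenance_instruction(received, stored_instructions):
        return max(stored_instructions, key=lambda s: overlap_ratio(received, s))

    def run_maintenance(received, stored_instructions, execute, self_check,
                        prompt, abnormal_count=0, abnormal_threshold=3):
        instruction = select_maintenance_instruction(received, stored_instructions)
        execute(instruction)
        if self_check():                 # self-check passed: maintenance ends
            return "done", 0
        abnormal_count += 1              # self-check failed: maintenance abnormality
        if abnormal_count >= abnormal_threshold:
            prompt("abnormality threshold reached; please inspect the device manually")
        else:
            prompt("maintenance abnormality; please resend the instruction")
        return "abnormal", abnormal_count

Returning the updated abnormality count lets the caller carry it across repeated attempts, mirroring the threshold behaviour described for the maintenance abnormality processing unit.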
As shown in fig. 5, a man-machine interaction control method based on multi-round dialogue provided by the present invention is implemented in the control system described in the foregoing fig. 1 to 4, and in this embodiment, the method includes:
step S10, the sensing unit detects objects in the range of the sensor, and identity recognition is performed when sensing that a person approaches; specifically, when the infrared sensor senses that a person approaches the interactive equipment, an identity recognition system of the man-machine interactive equipment is started, and at the moment, identity recognition is performed in various modes such as face recognition, fingerprint recognition, IC card recognition, voice instruction recognition and password recognition;
step S11, after the identity recognition unit passes verification, data acquisition is carried out, wherein the data comprise image data and audio data; specifically, after verification is passed, the user can use the man-machine interaction device, and the instruction data sent by the user can be received through the data acquisition unit;
step S12, the data analysis module classifies the information acquired by the data acquisition module, divides it into audio data information and image data information, and then transmits the classified information to the control center unit;
step S13, the control center unit compares and analyzes the received image data with the image data prestored in the domain knowledge base to obtain matched knowledge base image data so as to judge the meaning expressed by the received image data; comparing and analyzing the received audio data with audio data prestored in a domain knowledge base to obtain matched knowledge base audio data so as to judge the meaning expressed by the received audio data;
step S14, executing operation corresponding to the meaning of the data obtained by the control center unit by the executing unit.
More specifically, the method further comprises:
judging whether the meaning obtained by the control center unit is reasonable; if not, no operation is executed, and if so, the execution unit controls the interactive device to execute the operation; and analyzing the data obtained after execution to determine whether to end the interaction flow.
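The following sketch strings steps S10 to S14 and the reasonableness check into a single loop. All hardware access (sensing, identity verification, data acquisition, classification, knowledge-base matching, execution) is abstracted behind injected callables with hypothetical names; this is one plausible orchestration, not the patented control flow.

    # One possible orchestration of steps S10-S14 plus the reasonableness
    # check (hypothetical helper names; hardware is abstracted as callables).
    def interaction_loop(sense, identify, acquire, classify, judge_meaning,
                         is_reasonable, execute, flow_finished):
        if not sense():                        # S10: nothing in sensing range
            return
        if not identify():                     # S10: identity verification failed
            return
        while True:
            data = acquire()                   # S11: image and audio data
            kind, features = classify(data)    # S12: audio vs. image classification
            meaning = judge_meaning(kind, features)   # S13: knowledge-base matching
            if meaning is None or not is_reasonable(meaning):
                continue                       # unreasonable meaning: nothing executed
            result = execute(meaning)          # S14: execution unit drives the device
            if flow_finished(result):          # analyse the result to decide whether
                break                          # to end the interaction flow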
More specifically, the method further comprises a process of automatic maintenance, comprising the steps of:
after the identification is completed, receiving an instruction sent by a user through man-machine interaction equipment;
reading and analyzing the instruction received by the information receiving unit, and classifying it into an action instruction or a voice instruction;
comparing the analyzed action instruction and voice instruction with the prestored maintenance instructions, and selecting the maintenance instruction with the highest overlap ratio;
performing the corresponding maintenance operation according to the determined maintenance instruction, and ending the maintenance operation after the maintenance is completed and the information that the system self-check has passed is received; judging a maintenance abnormality when the system self-check fails, giving an abnormality prompt, and prompting the user to resend the instruction; and when the number of abnormalities reaches a preset threshold, prompting manual inspection of the equipment.
The implementation of the invention has the following beneficial effects:
the invention provides a man-machine interaction control system and a man-machine interaction control method based on multi-round dialogue, which can sense and judge objects in a range through a sensing unit, so that the intelligent degree of equipment is improved, an identity recognition system can carry out identity verification on a user, an information receiving unit can receive image instructions or voice instructions of the user, an information analysis unit can match the received instructions with a domain knowledge base, accurate instruction meanings are obtained, corresponding functions are executed, the operation difficulty of the system is reduced, and the system is suitable for being pushed to people of all ages. Meanwhile, the information receiving unit in the maintenance unit can refresh the instruction more rapidly and timely and send the instruction to each component of the equipment, so that the instruction can be shared, the storage capacity of the file is improved through the domain knowledge base, and a user can find a proper file conveniently, thereby being beneficial to popularization and use; the convenience degree of man-machine interaction is improved, and the convenience degree of maintenance of machine equipment is improved.
The above disclosure describes only preferred embodiments of the present invention, which of course cannot be used to limit the scope of the invention; equivalent changes made according to the claims of the present invention therefore still fall within the scope of the present invention.

Claims (7)

1. A man-machine interaction control system based on multi-round dialogue, characterized by comprising a control assembly and a maintenance unit which are integrated in a man-machine interaction device, wherein the control assembly is electrically connected with the maintenance unit, and the control assembly comprises a sensing unit, an identity recognition unit, a data acquisition unit, a data analysis unit, a domain knowledge base and an execution unit, which are electrically connected with a control center unit, wherein:
the sensing unit consists of an infrared sensor and is used for detecting objects in a sensing range;
the identity recognition unit is used for carrying out identity recognition when the sensing unit senses that a human body approaches, and integrates the functions of face recognition, fingerprint recognition, IC card recognition, voice instruction recognition and password recognition;
the data acquisition unit is used for acquiring data after the identity recognition unit passes the verification, wherein the data comprises image data and audio data;
the data analysis unit is used for classifying the information acquired by the data acquisition unit, dividing it into audio data information and image data information, and then sending the classified information to the control center unit;
the domain knowledge base stores knowledge base audio data and knowledge base image data;
the control center unit is used for comparing and analyzing the received image data with the image data in the domain knowledge base to obtain matched knowledge base image data, so as to judge the meaning expressed by the received image data; and for comparing and analyzing the received audio data with the audio data in the domain knowledge base to obtain matched knowledge base audio data, so as to judge the meaning expressed by the received audio data;
the execution unit is used for executing the operation corresponding to the meaning of the data obtained by the control center unit;
the maintenance unit comprises an information receiving unit, an information judging unit, a maintenance unit and a maintenance executing unit, wherein the information receiving unit, the information judging unit and the maintenance unit are electrically connected with the maintenance executing unit;
wherein the control center unit further comprises a meaning judging unit, which is used for judging whether the meaning obtained by the control center unit is reasonable; if not, no operation is executed, and if so, the execution unit controls the interactive device to execute the operation; the data obtained after execution is analyzed to determine whether to end the interaction flow;
the data analysis unit is an intermediate frequency analysis module, and the intermediate frequency analysis module comprises a phase-locked main circuit and a ZYNQ chip which are sequentially connected, wherein the phase-locked main circuit controls the frequency and the phase of an internal oscillation signal of a loop by utilizing an externally input reference signal so as to realize automatic tracking of the frequency of an output signal to the frequency of an input signal; the ZYNQ chip is used for processing the data information to realize data analysis;
the phase-locked main circuit is connected to the signal input circuit through the input ends of the first frequency conversion channel and the second frequency conversion channel, and the output ends of the first frequency conversion channel and the second frequency conversion channel are connected to the signal output circuit.
2. The system of claim 1, wherein in the maintenance unit:
the information receiving unit is used for receiving an instruction sent by a user through the man-machine interaction equipment after the identification is completed;
the information judging unit is used for reading and analyzing the instructions received by the information receiving unit and classifying them into action instructions and voice instructions;
the maintenance unit is used for comparing the action instructions and voice instructions analyzed by the information judging unit with the maintenance instructions stored in the information judging unit, and selecting the maintenance instruction with the highest overlap ratio;
and the maintenance execution unit is used for carrying out the corresponding maintenance operation according to the maintenance instruction determined by the maintenance unit, and ending the maintenance operation after the maintenance is completed and the information that the system self-check has passed is received.
3. The system of claim 2, wherein the maintenance unit further comprises:
the maintenance abnormality processing unit is used for judging a maintenance abnormality when the system self-check fails, giving an abnormality prompt and prompting the user to resend the instruction; and when the number of abnormalities reaches a preset threshold, prompting manual inspection of the equipment.
4. The system according to claim 1, wherein: the system further comprises an audio output unit, wherein when the interface of the man-machine interaction device and the audio output unit output files, the file output progress is monitored, and when the output progress reaches a preset selection node, the interactive files corresponding to the current selection node are output through the interface of the man-machine interaction device and the audio output unit.
5. A man-machine interaction control method based on multi-round dialogue, implemented in a control system according to any one of claims 1 to 4, characterized in that the method comprises:
step S10, the sensing unit detects objects in the range of the sensor, and identity recognition is performed when sensing that a person approaches;
step S11, after the identity recognition unit passes verification, data acquisition is carried out, wherein the data comprise image data and audio data;
step S12, the data analysis module classifies the information acquired by the data acquisition module, divides it into audio data information and image data information, and then transmits the classified information to the control center unit;
step S13, the control center unit compares and analyzes the received image data with the image data prestored in the domain knowledge base to obtain matched knowledge base image data so as to judge the meaning expressed by the received image data; comparing and analyzing the received audio data with audio data prestored in a domain knowledge base to obtain matched knowledge base audio data so as to judge the meaning expressed by the received audio data;
step S14, executing operation corresponding to the meaning of the data obtained by the control center unit by the executing unit.
6. The method as recited in claim 5, further comprising:
judging whether the meaning obtained by the control center unit is reasonable; if not, no operation is executed, and if so, the execution unit controls the interactive device to execute the operation; and analyzing the data obtained after execution to determine whether to end the interaction flow.
7. The method as recited in claim 5 or 6, further comprising:
after the identification is completed, receiving an instruction sent by a user through man-machine interaction equipment;
reading and analyzing the instruction received by the information receiving unit, and classifying it into an action instruction or a voice instruction;
comparing the analyzed action instruction and voice instruction with the prestored maintenance instructions, and selecting the maintenance instruction with the highest overlap ratio;
performing the corresponding maintenance operation according to the determined maintenance instruction, and ending the maintenance operation after the maintenance is completed and the information that the system self-check has passed is received; judging a maintenance abnormality when the system self-check fails, giving an abnormality prompt, and prompting the user to resend the instruction; and when the number of abnormalities reaches a preset threshold, prompting manual inspection of the equipment.
CN202110840461.XA 2021-07-24 2021-07-24 Man-machine interaction control system and method based on multi-round dialogue Active CN113534958B (en)

Priority Applications (1)

Application number: CN202110840461.XA (CN113534958B); priority date: 2021-07-24; filing date: 2021-07-24; title: Man-machine interaction control system and method based on multi-round dialogue

Applications Claiming Priority (1)

Application number: CN202110840461.XA (CN113534958B); priority date: 2021-07-24; filing date: 2021-07-24; title: Man-machine interaction control system and method based on multi-round dialogue

Publications (2)

Publication Number Publication Date
CN113534958A (en) 2021-10-22
CN113534958B (en) 2023-08-22

Family

ID=78088926

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110840461.XA (CN113534958B, Active); priority date: 2021-07-24; filing date: 2021-07-24; title: Man-machine interaction control system and method based on multi-round dialogue

Country Status (1)

Country Link
CN (1) CN113534958B (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107742260A (en) * 2017-11-24 2018-02-27 福建网能科技开发有限责任公司 A kind of artificial intelligence robot system for electrical power services field
CN108304155A (en) * 2018-01-26 2018-07-20 广州源创网络科技有限公司 A kind of man-machine interaction control method
CN109015647A (en) * 2018-08-22 2018-12-18 安徽爱依特科技有限公司 Mutual education robot system and its terminal

Also Published As

Publication number Publication date
CN113534958A (en) 2021-10-22

Similar Documents

Publication Publication Date Title
US10671342B2 (en) Non-contact gesture control method, and electronic terminal device
US20210109598A1 (en) Systems, methods and devices for gesture recognition
EP2972662A1 (en) Brain computer interface (bci) system based on gathered temporal and spatial patterns of biophysical signals
CN107608877A (en) A kind of automation application program interface method of testing and test system based on machine learning
CN103530659A (en) Face recognition method and attendance system combining original and symmetrical face facial images
CN106651566A (en) Card-free withdrawal method and apparatus
CN116259004B (en) Student learning state detection method and system applied to online education
CN113069125A (en) Head-mounted equipment control system, method and medium based on brain wave and eye movement tracking
CN113534958B (en) Man-machine interaction control system and method based on multi-round dialogue
CN110991277B (en) Multi-dimensional multi-task learning evaluation system based on deep learning
CN108388513B (en) Automatic testing method and device
CN110766057A (en) Gesture recognition device and method
JP6906820B1 (en) Concentration ratio determination program
CN105425942A (en) Method and system for pushing learning content to display interface
CN111724638A (en) AR interactive learning method and electronic equipment
CN105825195A (en) Intelligent cooking behavior identification device and method
CN115437500A (en) Service handling method and device based on gesture interaction and self-service equipment
JP2003058298A (en) Information classifying device, information classifying method, information classifying program and computer readable recording medium
CN105334242A (en) Method for identifying authenticity of liquid beverage based on characteristic value of liquid beverage
WO2022089196A1 (en) Image processing method and apparatus, and electronic device and storage medium
CN209028660U (en) A kind of motor vehicle Inspection and Supervision system based on PDA Smart Verify terminal
CN111611979A (en) Intelligent health monitoring system and method based on facial scanning
Liu et al. Smart Crib Control System Based on Sentiment Analysis
JP6849255B1 (en) Dementia symptom discrimination program
CN114397964B (en) Method and device for detecting effective fixation point, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant