CN115525750A - Robot dialogue detection visualization method and apparatus, electronic device, and storage medium - Google Patents

Robot dialogue detection visualization method and apparatus, electronic device, and storage medium

Info

Publication number
CN115525750A
Authority
CN
China
Prior art keywords
test
preset
user
flow chart
intention
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211254123.9A
Other languages
Chinese (zh)
Inventor
徐秋媛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
OneConnect Financial Technology Co Ltd Shanghai
Original Assignee
OneConnect Financial Technology Co Ltd Shanghai
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by OneConnect Financial Technology Co Ltd Shanghai
Priority to CN202211254123.9A
Publication of CN115525750A
Legal status: Pending (Current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/332 Query formulation
    • G06F16/3329 Natural language query formulation or dialogue systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/279 Recognition of textual entities
    • G06F40/289 Phrasal analysis, e.g. finite state techniques or chunking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/30 Semantic analysis
    • G06F40/35 Discourse or dialogue representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Machine Translation (AREA)

Abstract

The invention relates to artificial intelligence technology and discloses a robot dialogue detection visualization method, which comprises: taking, as a test reference sample, the historical human-machine dialogue records corresponding to users whose features are within a preset role-distance threshold of the simulated-user features; taking the user input information in the test reference sample as a test corpus; receiving a simulated reply given by a preset robot for the test corpus; constructing a dialogue test flowchart of the preset robot from the test corpus and the simulated reply; constructing a dialogue reference flowchart of the preset robot from the test reference sample; identifying difference nodes between the dialogue test flowchart and the dialogue reference flowchart; and highlighting the corresponding difference nodes according to a preset rule. The invention also provides a robot dialogue detection visualization apparatus, an electronic device, and a storage medium. The invention can solve the problem that robot dialogue detection is not intuitive in scenarios such as automated financial services and online medical self-service.

Description

Robot dialogue detection visualization method and apparatus, electronic device, and storage medium
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to a robot dialogue detection visualization method and apparatus, an electronic device, and a computer-readable storage medium.
Background
The human-machine dialogue function is now a very common application in the field of artificial intelligence. For example, in self-service financial applications and online medical self-service applications, a dialogue robot is used to quickly and accurately analyze user requirements from the user's dialogue content and, according to those requirements, return a pre-designed reply or guide the user so as to meet the user's needs.
With the development of science and technology, the scenarios in which dialogue robots are applied have become more complex, and the corresponding dialogue designs have become more elaborate and bulky. For example, as a business application scenario is upgraded, the robot's dialogue scripts also need to be upgraded; in a huge dialogue system, however, the robot's replies may become inconsistent before and after the upgrade. Pre-release detection of the robot's dialogue scripts is therefore increasingly important.
At present, such detection is typically performed with a deep-learning-based language model. In this mode, only the input and output are visible to testers; the detection process is a black box that cannot be inspected visually. Testers therefore cannot intuitively understand the detection logic, and cannot make well-targeted modifications to the robot's subsequent dialogue scripts.
Disclosure of Invention
The invention provides a robot dialogue detection visualization method and apparatus, an electronic device, and a computer-readable storage medium, with the main aim of solving the problem that robot dialogue detection is not intuitive.
In order to achieve the above object, the present invention provides a robot dialogue detection visualization method, comprising:
extracting simulated-user features of a preset simulated user, and screening, in a preset human-machine dialogue library, the historical human-machine dialogue records corresponding to users whose features are within a preset role-distance threshold of the simulated-user features, as test reference samples;
taking the user input information in the test reference sample as a test corpus, receiving a simulated reply given by a preset robot for the test corpus, and constructing a dialogue test flowchart of the preset robot from the test corpus and the simulated reply;
constructing a dialogue reference flowchart of the preset robot from the test reference sample;
and identifying difference nodes between the dialogue test flowchart and the dialogue reference flowchart, and highlighting the corresponding difference nodes according to a preset rule.
Optionally, the screening, in a preset human-machine dialogue library, of the historical human-machine dialogue records corresponding to users whose features are within a preset role-distance threshold of the simulated-user features, as test reference samples, includes:
acquiring a user portrait set from the preset human-machine dialogue library;
calculating a role distance between the simulated-user features and each user portrait in the user portrait set;
and selecting, as the test reference sample, the human-machine dialogue records corresponding to the user portraits whose role distance meets the preset role-distance threshold.
Optionally, the constructing of the dialogue test flowchart of the preset robot from the test corpus and the simulated reply includes:
performing user intention recognition on the test corpus, and generating corresponding user intention nodes from the recognized user intentions;
performing service intention recognition on the simulated reply, and generating corresponding service intention nodes from the recognized service intentions;
arranging all the user intention nodes and all the service intention nodes longitudinally, in the chronological order in which the test corpus and the simulated reply appear in the question-and-answer exchange with the preset robot, to obtain an intention node queue;
and calculating the intention distance between every two adjacent intention nodes in the intention node queue, generating connecting lines whose lengths are proportional to the intention distances at a preset ratio, and connecting the adjacent intention nodes in series with the connecting lines to obtain the dialogue test flowchart.
Optionally, the performing of user intention recognition on the test corpus includes:
generating a text vector matrix of the test corpus;
extracting text features of the test corpus from the text vector matrix;
calculating a probability value between the text features and each preset user intention label by using a pre-trained activation function;
and selecting the user intention label whose probability value is greater than or equal to a preset probability threshold as the user intention corresponding to the test corpus.
Optionally, the generating of the text vector matrix of the test corpus includes:
performing word segmentation on the test corpus to obtain a plurality of text participles;
selecting the text participles one by one as a target participle, and counting the number of co-occurrences of the target participle with each adjacent text participle within a preset neighborhood range of the target participle;
constructing a co-occurrence matrix from the co-occurrence counts of the text participles;
converting the text participles into word vectors, and splicing the word vectors into a vector matrix;
and multiplying the co-occurrence matrix by the vector matrix to obtain the text vector matrix.
Optionally, the identifying of the difference nodes between the dialogue test flowchart and the dialogue reference flowchart, and the highlighting of the corresponding difference nodes according to a preset rule, includes:
taking the user intention nodes in the dialogue test flowchart one by one as a detection node, and taking the length of the connecting line between the detection node and the service intention node directly connected to it as a detection distance;
taking the user intention node in the dialogue reference flowchart that is consistent with the detection node as a reference node, and taking the length of the connecting line between the reference node and the service intention node directly connected to it as a reference distance;
and calculating the distance difference between the detection distance and the reference distance, and, when the distance difference is greater than a preset distance threshold, taking the service intention node directly connected to the detection node and the service intention node directly connected to the reference node as the difference nodes.
Optionally, the calculating of the probability value between the text features and a preset user intention label by using a pre-trained activation function includes:
calculating the probability value between the text features and the preset user intention label by using the following activation function:

$$p(a \mid x)=\frac{\exp\left(w_a^{\mathrm{T}} x\right)}{\sum_{a'=1}^{A} \exp\left(w_{a'}^{\mathrm{T}} x\right)}$$

where p(a|x) is the probability value between the text feature x and the user intention label a, w_a is the weight vector of the user intention label a, T denotes the transposition operation, exp denotes the exponential operation, and A is the number of preset user intention labels.
In order to solve the above problems, the present invention also provides a robot dialogue detection visualization apparatus, the apparatus comprising:
a test sample acquisition module, configured to extract simulated-user features of a preset simulated user, and to screen, in a preset human-machine dialogue library, the historical human-machine dialogue records corresponding to users whose features are within a preset role-distance threshold of the simulated-user features, as test reference samples;
a test flowchart generation module, configured to take the user input information in the test reference sample as a test corpus, receive a simulated reply given by a preset robot for the test corpus, and construct a dialogue test flowchart of the preset robot from the test corpus and the simulated reply;
a reference flowchart generation module, configured to construct a dialogue reference flowchart of the preset robot from the test reference sample;
and a flowchart comparison module, configured to identify difference nodes between the dialogue test flowchart and the dialogue reference flowchart and to highlight the corresponding difference nodes according to a preset rule.
In order to solve the above problem, the present invention also provides an electronic device, including:
a memory storing at least one computer program; and
and a processor that executes the computer program stored in the memory to implement the above robot dialogue detection visualization method.
In order to solve the above problem, the present invention further provides a computer-readable storage medium in which at least one computer program is stored, the at least one computer program being executed by a processor in an electronic device to implement the above robot dialogue detection visualization method.
In the embodiment of the invention, a dialogue reference flowchart is constructed from the test reference sample, so that the historical performance of the preset robot before the test is reflected intuitively; the user input information in the test reference sample is taken as the test corpus, and a dialogue test flowchart is then constructed from the test corpus and the simulated replies of the preset robot, so that the current test performance of the preset robot is reflected vividly; finally, the dialogue test flowchart is compared with the dialogue reference flowchart to obtain the difference nodes between the two charts, and the corresponding difference nodes are highlighted, so that the test effect is expressed more intuitively and stereoscopically.
Drawings
Fig. 1 is a schematic flowchart of a robot dialogue detection visualization method according to an embodiment of the present invention;
fig. 2 is a detailed implementation flowchart of one step of the robot dialogue detection visualization method according to an embodiment of the present invention;
fig. 3 is a detailed implementation flowchart of another step of the robot dialogue detection visualization method according to an embodiment of the present invention;
fig. 4 is a detailed implementation flowchart of another step of the robot dialogue detection visualization method according to an embodiment of the present invention;
FIG. 5 is a functional block diagram of a robot dialogue detection visualization apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device for implementing the robot dialogue detection visualization method according to an embodiment of the present invention.
The implementation, functional features and advantages of the present invention will be further described with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiment of the application provides a robot dialogue detection visualization method. The execution subject of the method includes, but is not limited to, at least one of the electronic devices, such as a server or a terminal, that can be configured to execute the method provided by the embodiments of the present application. In other words, the method may be executed by software or hardware installed in a terminal device or a server device, and the software may be a blockchain platform. The server may be an independent server, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a Content Delivery Network (CDN), and big data and artificial intelligence platforms.
Referring to fig. 1, a flowchart of a robot dialogue detection visualization method according to an embodiment of the present invention is shown. In this embodiment, the robot dialogue detection visualization method includes:
S1, extracting simulated-user features of a preset simulated user, and screening, in a preset human-machine dialogue library, the historical human-machine dialogue records corresponding to users whose features are within a preset role-distance threshold of the simulated-user features, as test reference samples.
In the embodiment of the invention, the preset simulated user is a user role set for the robot dialogue detection scenario. In practical applications, the user roles can be set according to the scenarios that the robot's dialogue scripts actually cover. For example, in a fund consultation scenario, user roles such as ordinary company employee, individual business owner, and retiree may be set. Accordingly, the basic information of the simulated user is the user information relevant to the robot dialogue scenario; for example, in a medical outpatient registration self-service consultation scenario, the basic information of the simulated user includes the age, name, occupation, medical history, drug allergies, current symptoms, and the like.
In the embodiment of the invention, the basic information of the simulated user can be set manually through the interface input function provided by a simulated human-machine dialogue window; alternatively, preset basic information of the simulated user is stored in a configuration file, and the corresponding basic information is read at random from the configuration file before robot dialogue detection begins.
In another optional embodiment of the present invention, the user information in the historical human-machine dialogue records can be used to initialize the basic information of the preset simulated user.
In the embodiment of the invention, the preset human-machine dialogue library is a database storing historical human-machine dialogue records, and related dialogue records can be retrieved from it by conditions such as dialogue time, dialogue scenario, and dialogue user.
In detail, referring to fig. 2, the screening, in a preset human-machine dialogue library, of the historical human-machine dialogue records corresponding to users whose features are within a preset role-distance threshold of the simulated-user features, as test reference samples, includes:
S11, acquiring a user portrait set from the preset human-machine dialogue library;
S12, calculating a role distance between the simulated-user features and each user portrait in the user portrait set;
and S13, selecting, as the test reference sample, the human-machine dialogue records corresponding to the user portraits whose role distance meets the preset role-distance threshold.
In the embodiment of the present invention, the key user features may be occupation features, or combined features such as occupation plus age. In practical applications, the key user features can be determined according to the robot dialogue scenario to be detected.
In the embodiment of the invention, preset key-user-feature entries can be maximum-matched against the basic information of the simulated user to obtain the corresponding key user features.
It can be understood that, for each group of historical human-machine dialogue records stored in the preset human-machine dialogue library, corresponding user attributes can be extracted and a user portrait constructed from those attributes; in general, the human-machine dialogue records corresponding to the same user portrait share the same or similar features.
In the embodiment of the invention, the role distance between the key user features and each user portrait can be calculated with a distance formula such as the Euclidean distance or the Mahalanobis distance.
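Purely as an illustration (not part of the original disclosure), the role-distance screening of S11 to S13 can be sketched in Python as follows, using the Euclidean distance; the numeric encoding of the portraits, the record names, and the threshold value are all hypothetical:

```python
# Illustrative sketch of S11-S13, assuming user portraits and the
# simulated-user features are already encoded as numeric feature vectors.
import numpy as np

def screen_reference_samples(sim_features, portraits, dialog_records,
                             role_distance_threshold=1.5):
    """Return the dialogue records whose user portrait lies within the
    preset role-distance threshold of the simulated-user features."""
    sim = np.asarray(sim_features, dtype=float)
    selected = []
    for portrait, record in zip(portraits, dialog_records):
        # Euclidean role distance between the simulated user and this portrait
        distance = np.linalg.norm(sim - np.asarray(portrait, dtype=float))
        if distance <= role_distance_threshold:
            selected.append(record)
    return selected

# Hypothetical usage: two user portraits, only the first within the threshold
samples = screen_reference_samples(
    sim_features=[3, 1, 0],
    portraits=[[3, 1, 1], [9, 4, 2]],
    dialog_records=["dialogue_record_A", "dialogue_record_B"],
)
print(samples)  # -> ['dialogue_record_A']
```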
In the embodiment of the invention, the role distance threshold value can be configured according to actual conditions.
Further, by adding a time-based screening condition, relatively recent human-machine dialogue records can be selected as the test reference samples.
In the embodiment of the invention, historical human-machine dialogue records related to the basic information of the simulated user are matched in the preset human-machine dialogue library and used as test reference samples, so that the dialogue scenario of the robot to be detected can be reproduced; accordingly, changing the basic information of the simulated user changes the robot dialogue scenario to be detected, so that the detection can fully cover the robot's dialogue scenarios.
S2, taking the user input information in the test reference sample as a test corpus, receiving a simulated reply given by a preset robot for the test corpus, and constructing a dialogue test flowchart of the preset robot from the test corpus and the simulated reply.
In the embodiment of the invention, the test corpus can be entered automatically into the human-machine dialogue window through a simulated human-machine dialogue window at a preset speed; after the preset robot receives the test corpus, it generates the related replies according to its preset logic, namely the simulated replies.
In the embodiment of the invention, after the preset robot finishes replying to the test corpus, the dialogue test flowchart of the preset robot can be constructed from the test corpus and the actual simulated replies.
In the embodiment of the invention, the dialogue test flowchart connects, in series, the question-and-answer logic between the test corpus and the simulated replies produced during the dialogue test of the preset robot, and displays it in flowchart form, so that text information is converted into structural information.
In detail, referring to fig. 3, the constructing of the dialogue test flowchart of the preset robot from the test corpus and the simulated replies includes:
S21, performing user intention recognition on the test corpus, and generating corresponding user intention nodes from the recognized user intentions;
S22, performing service intention recognition on the simulated replies, and generating corresponding service intention nodes from the recognized service intentions;
S23, arranging all the user intention nodes and all the service intention nodes longitudinally, in the chronological order in which the test corpus and the simulated replies appear in the question-and-answer exchange with the preset robot, to obtain an intention node queue;
and S24, calculating the intention distance between every two adjacent intention nodes in the intention node queue, generating connecting lines whose lengths are proportional to the intention distances at a preset ratio, and connecting the adjacent intention nodes in series with the connecting lines to obtain the dialogue test flowchart.
In the embodiment of the invention, the user intentions and the service intentions can be recognized by using a preset deep-learning-based text intention recognition model.
In detail, the performing of user intention recognition on the test corpus includes:
generating a text vector matrix of the test corpus;
extracting text features of the test corpus from the text vector matrix;
calculating a probability value between the text features and each preset user intention label by using a pre-trained activation function;
and selecting the user intention label whose probability value is greater than or equal to a preset probability threshold as the user intention corresponding to the test corpus.
In the embodiment of the invention, because the test corpus consists of natural language, analyzing it directly would occupy a large amount of computing resources and would be inefficient; the test corpus is therefore converted into a text vector matrix, so that text content expressed in natural language is converted into numerical form.
In the embodiment of the invention, methods such as GloVe (Global Vectors for Word Representation) and an embedding layer can be used to convert the test corpus into the text vector matrix.
In an embodiment of the present invention, the generating of the text vector matrix of the test corpus includes:
performing word segmentation on the test corpus to obtain a plurality of text participles;
selecting the text participles one by one as a target participle, and counting the number of co-occurrences of the target participle with each adjacent text participle within a preset neighborhood range of the target participle;
constructing a co-occurrence matrix from the co-occurrence counts of the text participles;
converting the text participles into word vectors, and splicing the word vectors into a vector matrix;
and multiplying the co-occurrence matrix by the vector matrix to obtain the text vector matrix.
In detail, a preset standard dictionary, which contains a plurality of standard participles, can be used to segment the test corpus into text participles.
For example, substrings of the test corpus of different lengths are searched for in the standard dictionary; if a standard participle identical to a substring is found, the found standard participle is taken as a text participle of the test corpus.
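Purely as an illustration, this dictionary lookup can be sketched as a forward maximum-matching segmenter; the patent does not prescribe a specific matching order, and the dictionary entries below are hypothetical:

```python
# Illustrative forward maximum-matching segmentation, not part of the
# original disclosure; assumes a standard dictionary of participles.
def segment(text, standard_dict, max_len=4):
    """Greedily match the longest dictionary participle at each position;
    unmatched single characters fall through as their own participles."""
    participles, i = [], 0
    while i < len(text):
        for length in range(min(max_len, len(text) - i), 0, -1):
            candidate = text[i:i + length]
            if length == 1 or candidate in standard_dict:
                participles.append(candidate)
                i += length
                break
    return participles

print(segment("abcde", {"ab", "cde"}))  # -> ['ab', 'cde']
```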
For example, the co-occurrence matrix constructed from the co-occurrence counts of the text participles can take the following form:

$$X=\begin{bmatrix}X_{1,1} & \cdots & X_{1,n}\\ \vdots & \ddots & \vdots\\ X_{n,1} & \cdots & X_{n,n}\end{bmatrix}$$

where X_{i,j} is the number of co-occurrences of text participle i in the test corpus with its adjacent text participle j.
In the embodiment of the present invention, a model with a word-vector conversion function, such as the word2vec model commonly used in NLP (Natural Language Processing), may be used to convert the text participles into word vectors; the word vectors are then spliced into the vector matrix of the test corpus, and the vector matrix and the co-occurrence matrix are multiplied to obtain the text vector matrix.
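As an illustrative sketch only, the co-occurrence counting and the matrix product can be written as follows; the toy embedding table stands in for a trained word2vec or GloVe model, and the window size is an assumed value:

```python
# Illustrative sketch of the text-vector-matrix construction, not part of
# the original disclosure: count co-occurrences within a neighborhood
# window, stack the word vectors, and multiply the two matrices.
import numpy as np

def text_vector_matrix(participles, embedding, window=2):
    n = len(participles)
    cooccurrence = np.zeros((n, n))
    for i in range(n):
        # neighbors of participle i inside the preset neighborhood range
        for j in range(max(0, i - window), min(n, i + window + 1)):
            if i != j:
                cooccurrence[i, j] += 1
    # n x d vector matrix from per-participle word vectors
    vectors = np.stack([embedding[p] for p in participles])
    return cooccurrence @ vectors  # n x d text vector matrix

# Hypothetical 3-dimensional embeddings for three participles
embedding = {"ab": np.array([1.0, 0.0, 0.0]),
             "cde": np.array([0.0, 1.0, 0.0]),
             "f": np.array([0.0, 0.0, 1.0])}
print(text_vector_matrix(["ab", "cde", "f"], embedding))
```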
In the embodiment of the invention, the text features of the test corpus can be extracted from the text vector matrix by using the preset deep-learning-based text intention recognition model.
In the embodiment of the present invention, the activation function includes, but is not limited to, the softmax, sigmoid, and ReLU activation functions. The preset user intention labels are determined by the scenarios that the robot's dialogue scripts actually cover; taking insurance self-service consultation as an example, the preset user intention labels include, but are not limited to, policy surrender consultation, claim reporting consultation, policy income inquiry, and the like.
In one embodiment of the present invention, the probability value may be calculated by using the following activation function:

$$p(a \mid x)=\frac{\exp\left(w_a^{\mathrm{T}} x\right)}{\sum_{a'=1}^{A} \exp\left(w_{a'}^{\mathrm{T}} x\right)}$$

where p(a|x) is the probability value between the text feature x and the user intention label a, w_a is the weight vector of the user intention label a, T denotes the transposition operation, exp denotes the exponential operation, and A is the number of preset user intention labels.
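For illustration only, the activation-function step can be sketched as follows; the weight matrix is a random stand-in for the parameters of a pre-trained model:

```python
# Illustrative sketch of the softmax activation above, not part of the
# original disclosure: p(a|x) = exp(w_a^T x) / sum_a' exp(w_a'^T x).
import numpy as np

def intent_probabilities(text_feature, weight_vectors):
    logits = weight_vectors @ text_feature  # w_a^T x for every label a
    logits -= logits.max()                  # subtract max for numerical stability
    exp_logits = np.exp(logits)
    return exp_logits / exp_logits.sum()

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))   # 3 hypothetical intent labels, feature dim 8
x = rng.normal(size=8)        # a text feature extracted upstream
probs = intent_probabilities(x, W)
print(probs, probs.sum())     # the A probabilities sum to 1
```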
It should be noted that the method for performing service intention recognition on the simulated replies may be the same as the method for performing user intention recognition on the test corpus, and details are not repeated here.
In the embodiment of the invention, the intention distance between two adjacent intention nodes can be calculated with a distance formula such as the Euclidean distance, the Mahalanobis distance, or the Manhattan distance.
In detail, the generating of connecting lines whose lengths are proportional to the intention distances at a preset ratio includes:
normalizing all the intention distances;
and multiplying the normalized intention distances by the preset ratio to obtain connecting lines of proportional length.
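Again purely as an illustration, the normalization and scaling can be sketched as follows; the preset ratio is a hypothetical rendering length (for example, in pixels):

```python
# Illustrative sketch of the connecting-line lengths, not part of the
# original disclosure: min-max normalize the intention distances, then
# scale by the preset ratio.
def connecting_line_lengths(intention_distances, preset_ratio=100.0):
    lo, hi = min(intention_distances), max(intention_distances)
    span = (hi - lo) or 1.0  # guard against identical distances
    return [preset_ratio * (d - lo) / span for d in intention_distances]

print(connecting_line_lengths([0.2, 0.5, 1.4]))  # -> [0.0, 25.0, 100.0]
```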
In the embodiment of the invention, constructing the dialogue test flowchart of the preset robot makes the dialogue test process of the preset robot visually observable.
S3, constructing a dialogue reference flowchart of the preset robot from the test reference sample.
In the embodiment of the present invention, the dialogue reference flowchart is constructed from the user input information and the corresponding robot reply information in the test reference sample, so that it reflects the historical performance of the preset robot in the same service scenario.
It should be noted that the method for constructing the dialogue reference flowchart of the preset robot from the test reference sample is the same as the method for constructing the dialogue test flowchart from the test corpus and the simulated replies, and details are not repeated here.
S4, identifying difference nodes between the dialogue test flowchart and the dialogue reference flowchart, and highlighting the corresponding difference nodes according to a preset rule.
It will be appreciated that the test reference sample comes from the historical human-machine dialogue records in the preset human-machine dialogue library, which may belong to a historical version of the robot before the dialogue-script upgrade. By comparing the current dialogue test flowchart with the dialogue reference flowchart reflecting the historical version, the nodes that differ can be obtained; the difference nodes are nodes that were adjusted, or nodes at which the preset robot behaves inconsistently before and after the upgrade. Service personnel can focus on the difference nodes when analyzing the robot's dialogue scripts, so as to further improve them.
In detail, referring to fig. 4, the identifying of the difference nodes between the dialogue test flowchart and the dialogue reference flowchart includes:
S41, taking the user intention nodes in the dialogue test flowchart one by one as a detection node, and taking the length of the connecting line between the detection node and the service intention node directly connected to it as a detection distance;
S42, taking the user intention node in the dialogue reference flowchart that is consistent with the detection node as a reference node, and taking the length of the connecting line between the reference node and the service intention node directly connected to it as a reference distance;
and S43, calculating the distance difference between the detection distance and the reference distance, and, when the distance difference is greater than a preset distance threshold, taking the service intention node directly connected to the detection node and the service intention node directly connected to the reference node as the difference nodes.
In the embodiment of the invention, the distance threshold can be set according to the actual test conditions. When the distance difference is greater than the preset distance threshold, it indicates that, for the same user intention, the reply given by the preset robot in the current test differs greatly from its historical reply, and key analysis is needed.
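As a final illustration, S41 to S43 can be sketched as follows, where each flowchart is simplified to a mapping from a user intention node to the length of the connecting line of its directly connected service intention node; this flat representation and the threshold value are assumptions made for the example:

```python
# Illustrative sketch of the difference-node comparison, not part of the
# original disclosure. Each chart maps user-intention node -> length of
# the connecting line to its directly connected service-intention node.
def find_difference_nodes(test_chart, reference_chart, distance_threshold=10.0):
    difference_nodes = []
    for intent_node, detection_distance in test_chart.items():
        reference_distance = reference_chart.get(intent_node)
        if reference_distance is None:
            continue  # no consistent user-intention node in the reference chart
        if abs(detection_distance - reference_distance) > distance_threshold:
            # flag the service-intention nodes attached to this user intention
            difference_nodes.append(intent_node)
    return difference_nodes

# Hypothetical charts: the robot's reply to "cancel_policy" drifted
test = {"ask_premium": 40.0, "cancel_policy": 15.0}
reference = {"ask_premium": 42.0, "cancel_policy": 60.0}
print(find_difference_nodes(test, reference))  # -> ['cancel_policy']
```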
In the embodiment of the present invention, the preset rule may be to apply rendering operations such as highlighting and enlarging to the difference nodes, so that the detected contrast is more prominent.
In the embodiment of the invention, a dialogue reference flowchart is constructed from the test reference sample, so that the historical performance of the preset robot before the test is reflected intuitively; the user input information in the test reference sample is then taken as the test corpus, and a dialogue test flowchart is constructed from the test corpus and the simulated replies of the preset robot, so that the current test performance of the preset robot is reflected vividly; finally, the dialogue test flowchart is compared with the dialogue reference flowchart to obtain the difference nodes between the two charts, and the corresponding difference nodes are highlighted, so that the test effect is expressed more intuitively and stereoscopically.
Fig. 5 is a functional block diagram of a robot dialogue detection visualization apparatus according to an embodiment of the present invention.
The robot dialogue detection visualization apparatus 100 of the present invention can be installed in an electronic device. According to the implemented functions, the robot dialogue detection visualization apparatus 100 comprises: a test sample acquisition module 101, a test flowchart generation module 102, a reference flowchart generation module 103, and a flowchart comparison module 104. A module of the present invention, which may also be referred to as a unit, is a series of computer program segments that can be executed by a processor of the electronic device to perform a fixed function, and that are stored in a memory of the electronic device.
In the present embodiment, the functions of the respective modules/units are as follows:
The test sample acquisition module 101 is configured to extract simulated-user features of a preset simulated user, and to screen, in a preset human-machine dialogue library, the historical human-machine dialogue records corresponding to users whose features are within a preset role-distance threshold of the simulated-user features, as test reference samples.
The test flowchart generation module 102 is configured to take the user input information in the test reference sample as a test corpus, receive a simulated reply given by a preset robot for the test corpus, and construct a dialogue test flowchart of the preset robot from the test corpus and the simulated reply.
The reference flowchart generation module 103 is configured to construct a dialogue reference flowchart of the preset robot from the test reference sample.
The flowchart comparison module 104 is configured to identify difference nodes between the dialogue test flowchart and the dialogue reference flowchart and to highlight the corresponding difference nodes according to a preset rule.
In detail, when used, the modules in the robot dialogue detection visualization apparatus 100 of the embodiment of the present invention adopt the same technical means as the robot dialogue detection visualization method described with reference to figs. 1 to 4, and can produce the same technical effects, which are not repeated here.
Fig. 6 is a schematic structural diagram of an electronic device for implementing the robot dialogue detection visualization method according to an embodiment of the present invention.
The electronic device 1 may comprise a processor 10, a memory 11, and a bus, and may further comprise a computer program, such as a robot dialogue detection visualization program, stored in the memory 11 and executable on the processor 10.
The memory 11 includes at least one type of readable storage medium, including flash memory, mobile hard disk, multimedia card, card-type memory (e.g., SD or DX memory), magnetic memory, magnetic disk, optical disk, and the like. In some embodiments, the memory 11 may be an internal storage unit of the electronic device 1, such as a mobile hard disk of the electronic device 1. In other embodiments, the memory 11 may also be an external storage device of the electronic device 1, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the electronic device 1. Further, the memory 11 may include both an internal storage unit and an external storage device of the electronic device 1. The memory 11 may be used not only to store application software installed in the electronic device 1 and various types of data, such as the code of the robot dialogue detection visualization program, but also to temporarily store data that has been or will be output.
The processor 10 may in some embodiments be composed of integrated circuits, for example a single packaged integrated circuit, or of a plurality of integrated circuits packaged with the same or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital processing chips, graphics processors, combinations of various control chips, and the like. The processor 10 is the control unit of the electronic device; it connects the components of the whole electronic device through various interfaces and lines, and executes the various functions and processes the data of the electronic device 1 by running or executing the programs or modules stored in the memory 11 (e.g., the robot dialogue detection visualization program) and calling the data stored in the memory 11.
The bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. The bus is arranged to enable communication between the memory 11, the at least one processor 10, and the other components.
Fig. 6 shows only an electronic device with certain components; it will be understood by a person skilled in the art that the structure shown in fig. 6 does not constitute a limitation of the electronic device 1, which may comprise fewer or more components than shown, a combination of certain components, or a different arrangement of components.
For example, although not shown, the electronic device 1 may further include a power supply (such as a battery) for supplying power to each component; preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so as to implement functions such as charge management, discharge management, and power-consumption management. The power supply may also include one or more DC or AC power sources, recharging devices, power-failure detection circuits, power converters or inverters, power status indicators, and the like. The electronic device 1 may further include various sensors, a Bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
Further, the electronic device 1 may include a network interface; optionally, the network interface may include a wired interface and/or a wireless interface (such as a Wi-Fi interface or a Bluetooth interface), which is generally used to establish a communication connection between the electronic device 1 and other electronic devices.
Optionally, the electronic device 1 may further comprise a user interface, which may be a display and an input unit (such as a keyboard), and optionally a standard wired interface or a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is used for displaying information processed in the electronic device 1 and for displaying a visualized user interface.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
The robot dialogue detection visualization program stored in the memory 11 of the electronic device 1 is a combination of instructions that, when executed in the processor 10, can implement:
extracting simulated-user features of a preset simulated user, and screening, in a preset human-machine dialogue library, the historical human-machine dialogue records corresponding to users whose features are within a preset role-distance threshold of the simulated-user features, as test reference samples;
taking the user input information in the test reference sample as a test corpus, receiving a simulated reply given by a preset robot for the test corpus, and constructing a dialogue test flowchart of the preset robot from the test corpus and the simulated reply;
constructing a dialogue reference flowchart of the preset robot from the test reference sample;
and identifying difference nodes between the dialogue test flowchart and the dialogue reference flowchart, and highlighting the corresponding difference nodes according to a preset rule.
Further, the integrated modules/units of the electronic device 1 may be stored in a computer-readable storage medium if they are implemented in the form of software functional units and sold or used as separate products. The computer-readable storage medium may be volatile or non-volatile. For example, the computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, or a Read-Only Memory (ROM).
The present invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor of an electronic device, can implement:
extracting simulated-user features of a preset simulated user, and screening, in a preset human-machine dialogue library, the historical human-machine dialogue records corresponding to users whose features are within a preset role-distance threshold of the simulated-user features, as test reference samples;
taking the user input information in the test reference sample as a test corpus, receiving a simulated reply given by a preset robot for the test corpus, and constructing a dialogue test flowchart of the preset robot from the test corpus and the simulated reply;
constructing a dialogue reference flowchart of the preset robot from the test reference sample;
and identifying difference nodes between the dialogue test flowchart and the dialogue reference flowchart, and highlighting the corresponding difference nodes according to a preset rule.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a series of data blocks associated by cryptographic methods, each data block containing a batch of network transaction information used to verify the validity (anti-counterfeiting) of the information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
The embodiment of the application can acquire and process related data based on artificial intelligence technology. Artificial Intelligence (AI) is a theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain the best results.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means through software or hardware. Terms such as first and second are used to denote names and do not denote any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (10)

1. A robot dialogue detection visualization method, the method comprising:
extracting simulated-user features of a preset simulated user, and screening, in a preset human-machine dialogue library, the historical human-machine dialogue records corresponding to users whose features are within a preset role-distance threshold of the simulated-user features, as test reference samples;
taking the user input information in the test reference sample as a test corpus, receiving a simulated reply given by a preset robot for the test corpus, and constructing a dialogue test flowchart of the preset robot from the test corpus and the simulated reply;
constructing a dialogue reference flowchart of the preset robot from the test reference sample;
and identifying difference nodes between the dialogue test flowchart and the dialogue reference flowchart, and highlighting the corresponding difference nodes according to a preset rule.
2. The robot dialogue detection visualization method of claim 1, wherein the screening, in a preset human-machine dialogue library, of the historical human-machine dialogue records corresponding to users whose features are within a preset role-distance threshold of the simulated-user features, as test reference samples, comprises:
acquiring a user portrait set from the preset human-machine dialogue library;
calculating a role distance between the simulated-user features and each user portrait in the user portrait set;
and selecting, as the test reference sample, the human-machine dialogue records corresponding to the user portraits whose role distance meets the preset role-distance threshold.
3. The robot dialogue detection visualization method of claim 1, wherein the constructing of the dialogue test flowchart of the preset robot from the test corpus and the simulated reply comprises:
performing user intention recognition on the test corpus, and generating corresponding user intention nodes from the recognized user intentions;
performing service intention recognition on the simulated reply, and generating corresponding service intention nodes from the recognized service intentions;
arranging all the user intention nodes and all the service intention nodes longitudinally, in the chronological order in which the test corpus and the simulated reply appear in the question-and-answer exchange with the preset robot, to obtain an intention node queue;
and calculating the intention distance between every two adjacent intention nodes in the intention node queue, generating connecting lines whose lengths are proportional to the intention distances at a preset ratio, and connecting the adjacent intention nodes in series with the connecting lines to obtain the dialogue test flowchart.
4. The robot dialogue detection visualization method of claim 3, wherein the performing of user intention recognition on the test corpus comprises:
generating a text vector matrix of the test corpus;
extracting text features of the test corpus from the text vector matrix;
calculating a probability value between the text features and each preset user intention label by using a pre-trained activation function;
and selecting the user intention label whose probability value is greater than or equal to a preset probability threshold as the user intention corresponding to the test corpus.
5. The robot dialogue detection visualization method of claim 4, wherein the generating of the text vector matrix of the test corpus comprises:
performing word segmentation on the test corpus to obtain a plurality of text participles;
selecting the text participles one by one as a target participle, and counting the number of co-occurrences of the target participle with each adjacent text participle within a preset neighborhood range of the target participle;
constructing a co-occurrence matrix from the co-occurrence counts of the text participles;
converting the text participles into word vectors, and splicing the word vectors into a vector matrix;
and multiplying the co-occurrence matrix by the vector matrix to obtain the text vector matrix.
6. The robot dialogue detection visualization method of any one of claims 1 to 3, wherein the identifying of the difference nodes between the dialogue test flowchart and the dialogue reference flowchart, and the highlighting of the corresponding difference nodes according to a preset rule, comprises:
taking the user intention nodes in the dialogue test flowchart one by one as a detection node, and taking the length of the connecting line between the detection node and the service intention node directly connected to it as a detection distance;
taking the user intention node in the dialogue reference flowchart that is consistent with the detection node as a reference node, and taking the length of the connecting line between the reference node and the service intention node directly connected to it as a reference distance;
and calculating the distance difference between the detection distance and the reference distance, and, when the distance difference is greater than a preset distance threshold, taking the service intention node directly connected to the detection node and the service intention node directly connected to the reference node as the difference nodes.
7. The robot dialogue detection visualization method of claim 4, wherein the calculating of the probability value between the text features and a preset user intention label by using a pre-trained activation function comprises:
calculating the probability value between the text features and the preset user intention label by using the following activation function:

$$p(a \mid x)=\frac{\exp\left(w_a^{\mathrm{T}} x\right)}{\sum_{a'=1}^{A} \exp\left(w_{a'}^{\mathrm{T}} x\right)}$$

where p(a|x) is the probability value between the text feature x and the user intention label a, w_a is the weight vector of the user intention label a, T denotes the transposition operation, exp denotes the exponential operation, and A is the number of preset user intention labels.
8. A robot dialogue detection visualization apparatus, the apparatus comprising:
a test sample acquisition module, configured to extract simulated-user features of a preset simulated user, and to screen, in a preset human-machine dialogue library, the historical human-machine dialogue records corresponding to users whose features are within a preset role-distance threshold of the simulated-user features, as test reference samples;
a test flowchart generation module, configured to take the user input information in the test reference sample as a test corpus, receive a simulated reply given by a preset robot for the test corpus, and construct a dialogue test flowchart of the preset robot from the test corpus and the simulated reply;
a reference flowchart generation module, configured to construct a dialogue reference flowchart of the preset robot from the test reference sample;
and a flowchart comparison module, configured to identify difference nodes between the dialogue test flowchart and the dialogue reference flowchart and to highlight the corresponding difference nodes according to a preset rule.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the robot dialogue detection visualization method of any one of claims 1 to 7.
10. A computer-readable storage medium in which a computer program is stored, wherein the computer program, when executed by a processor, implements the robot dialogue detection visualization method of any one of claims 1 to 7.
CN202211254123.9A 2022-10-13 2022-10-13 Robot dialogue detection visualization method and apparatus, electronic device, and storage medium Pending CN115525750A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211254123.9A CN115525750A (en) 2022-10-13 2022-10-13 Robot dialogue detection visualization method and apparatus, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211254123.9A CN115525750A (en) 2022-10-13 2022-10-13 Robot dialogue detection visualization method and apparatus, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
CN115525750A 2022-12-27

Family

ID=84701473

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211254123.9A Pending CN115525750A (en) Robot dialogue detection visualization method and apparatus, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN115525750A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116384410A (en) * 2023-04-14 2023-07-04 天津睿锋智联科技有限公司 Visual processing method and system for digital factory
CN116384410B (en) * 2023-04-14 2024-06-07 西安信一诺航空科技有限公司 Visual processing method and system for digital factory
CN117956082A (en) * 2024-03-27 2024-04-30 福建博士通信息股份有限公司 Electric marketing method combined with AI voice
CN117956082B (en) * 2024-03-27 2024-06-07 福建博士通信息股份有限公司 Electric marketing method combined with AI voice

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination