WO2020199701A1 - Dialogue interaction method, graphical user interface, terminal device, and network device - Google Patents

Dialogue interaction method, graphical user interface, terminal device, and network device

Info

Publication number: WO2020199701A1
Application number: PCT/CN2020/070344 (CN2020070344W)
Authority: WO — WIPO (PCT)
Prior art keywords: dialogue, semantic, semantic entity, data, user interface
Other languages: English (en), French (fr)
Inventors: 陈晓, 钱莉
Original assignee: 华为技术有限公司
Priority date: the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.
Application filed by 华为技术有限公司
Priority to EP20782878.1A (EP3920043A4)
Publication of WO2020199701A1
Priority to US17/486,943 (US20220012432A1)


Classifications

    • G06F 16/90332 — Natural language query formulation or dialogue systems (information retrieval; query formulation)
    • G06F 40/35 — Discourse or dialogue representation (handling natural language data; semantic analysis)
    • G06F 16/3329 — Natural language query formulation or dialogue systems (querying of unstructured textual data)
    • G06F 16/367 — Ontology (creation of semantic tools, e.g. ontology or thesauri)
    • G06F 3/04817 — Interaction techniques based on graphical user interfaces [GUI] using icons
    • G06F 40/30 — Semantic analysis (handling natural language data)
    • G06N 5/02 — Knowledge representation; Symbolic representation (computing arrangements using knowledge-based models)
    • G06F 2203/04803 — Split screen, i.e. subdividing the display area or the window area into separate subareas

Definitions

  • This application relates to the field of artificial intelligence, in particular to dialogue interaction methods, graphical user interfaces, terminal devices and network devices.
  • A dialogue system, which may also be called a question answering system, a question answering robot, and so on, is a type of system that has developed in recent years with the emergence of artificial intelligence (AI) technology. It can answer users' questions in accurate and concise natural language, meeting users' needs for fast and accurate information acquisition.
  • The dialogue system can display the dialogue data between the user and the dialogue system through a graphical user interface (GUI); that is, the dialogue data can be presented as a dialogue view in the GUI corresponding to the dialogue system.
  • The dialogue view displayed in the GUI visually presents the dialogue data between the user and the dialogue system for the user to view.
  • To review historical dialogue data, however, the user has to scroll back (for example, upward) through the dialogue view and search for it. This makes it difficult for the user to quickly grasp the entire content of the conversation and to make decisions quickly based on that content.
  • This application provides a dialogue interaction method, a graphical user interface, a terminal device, and a network device, to address the problem that current dialogue systems make it difficult for users to quickly understand the entire content of a dialogue.
  • In a first aspect, a dialogue interaction method is provided, which can be applied to a terminal device in a dialogue system.
  • The method includes: the terminal device displays a dialogue view in a first area of a target dialogue user interface, and displays a concept view in a second area of the target dialogue user interface. The target dialogue user interface is the graphical user interface corresponding to the target dialogue, the dialogue view is used to display the dialogue data of the target dialogue, and the concept view is used to display the knowledge graph subgraph corresponding to the target dialogue.
  • The knowledge graph subgraph corresponding to the target dialogue includes multiple semantic entities and the semantic relationships between the multiple semantic entities. The multiple semantic entities include a first semantic entity, and the first semantic entity is a semantic entity that exists in the dialogue data of the target dialogue.
  • The target dialogue is a dialogue in the dialogue system between two or more dialogue parties that have an association relationship, and the target dialogue user interface is the graphical user interface used to display the dialogue data sent by these dialogue parties.
  • In this way, when the terminal device in the dialogue system displays the dialogue user interface, in addition to the dialogue data of the target dialogue it also displays the knowledge graph subgraph corresponding to the target dialogue, and this subgraph includes the semantic entities that exist in the dialogue data. These semantic entities amount to a summary of the dialogue data, helping users quickly grasp the gist of the historical dialogue content.
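  • To make the structure concrete, the sketch below models such a knowledge graph subgraph in Java as a set of semantic entities plus typed relations between them; the class and field names are illustrative assumptions and are not defined in this application.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/** Illustrative model of the knowledge graph subgraph shown in the concept view. */
public class KnowledgeGraphSubgraph {

    /** A node of the subgraph, e.g. "Beijing" or a task entity such as "book flight". */
    public static class SemanticEntity {
        public final String name;
        public final boolean isTaskEntity;   // task semantic entities expose dialogue-system functions
        public SemanticEntity(String name, boolean isTaskEntity) {
            this.name = name;
            this.isTaskEntity = isTaskEntity;
        }
    }

    /** A semantic relationship between two entities, e.g. ("Beijing", "capital of", "China"). */
    public static class SemanticRelation {
        public final SemanticEntity head;
        public final String relation;
        public final SemanticEntity tail;
        public SemanticRelation(SemanticEntity head, String relation, SemanticEntity tail) {
            this.head = head;
            this.relation = relation;
            this.tail = tail;
        }
    }

    private final Map<String, SemanticEntity> entities = new LinkedHashMap<>();
    private final List<SemanticRelation> relations = new ArrayList<>();

    public void addEntity(SemanticEntity e) { entities.putIfAbsent(e.name, e); }

    public void addRelation(SemanticEntity head, String relation, SemanticEntity tail) {
        addEntity(head);
        addEntity(tail);
        relations.add(new SemanticRelation(head, relation, tail));
    }

    public Collection<SemanticEntity> getEntities() { return entities.values(); }
    public List<SemanticRelation> getRelations() { return relations; }
}
```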
  • Optionally, the multiple semantic entities included in the knowledge graph subgraph corresponding to the target dialogue further include one or more second semantic entities associated with the first semantic entity.
  • The second semantic entity may include a semantic entity adjacent to the first semantic entity in the knowledge graph. Further, the second semantic entity may include only some of the semantic entities adjacent to the first semantic entity, for example those whose frequency of use in the dialogue process is higher than a first frequency threshold.
  • Here the dialogue process may refer to the dialogue process of the target dialogue, or to the dialogue process of the entire dialogue system (that is, a process covering multiple dialogues in the dialogue system).
  • Alternatively, the selected adjacent semantic entities may be determined based on the user profile (user portrait). The way of selecting some of the semantic entities adjacent to the first semantic entity is not limited to these two cases, and this application does not limit it.
  • The second semantic entity may also include a semantic entity whose path distance from the first semantic entity in the knowledge graph is less than a first distance threshold; that is, it is not limited to semantic entities directly adjacent to the first semantic entity. Again, only some of the semantic entities within this distance may be included, for example those whose frequency of use in the dialogue process is higher than a second frequency threshold, where the dialogue process may refer to the dialogue process of the target dialogue or of the entire dialogue system.
  • The selected semantic entities may also be determined based on the user profile; the selection is not limited to the above two cases, and this application does not limit it.
  • In this way, the knowledge graph subgraph corresponding to the target dialogue includes not only the first semantic entity, which summarizes the dialogue data, but also the second semantic entity associated with it; the second semantic entity helps guide the conversation topic, enhancing the user's dialogue experience.
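  • A possible way to pick the second semantic entities described above is sketched below: take the knowledge graph neighbours of the first semantic entity and keep those whose frequency of use in the dialogue process exceeds a threshold. The adjacency map, frequency map, and method names are assumptions made only for illustration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;

/** Illustrative selection of "second semantic entities" for the concept view. */
public class SecondEntitySelector {

    private final Map<String, Set<String>> adjacency;    // entity -> entities adjacent in the knowledge graph (assumed)
    private final Map<String, Integer> usageFrequency;   // entity -> times used in the dialogue process (assumed)

    public SecondEntitySelector(Map<String, Set<String>> adjacency,
                                Map<String, Integer> usageFrequency) {
        this.adjacency = adjacency;
        this.usageFrequency = usageFrequency;
    }

    /**
     * Returns neighbours of the first semantic entity whose usage frequency in the
     * dialogue process is above the given threshold (the "first frequency threshold").
     */
    public List<String> selectByFrequency(String firstEntity, int frequencyThreshold) {
        List<String> selected = new ArrayList<>();
        for (String neighbour : adjacency.getOrDefault(firstEntity, Set.of())) {
            if (usageFrequency.getOrDefault(neighbour, 0) > frequencyThreshold) {
                selected.add(neighbour);
            }
        }
        return selected;
    }
}
```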
  • Optionally, the above method further includes: when new dialogue data is acquired, the terminal device updates the concept view, and the updated concept view is used to display the knowledge graph subgraph updated according to the new dialogue data.
  • The updated knowledge graph subgraph includes the semantic entities existing in the new dialogue data, or those semantic entities together with the semantic entities associated with them.
  • In this way, the knowledge graph subgraph displayed in the concept view is updated as dialogue data is generated, keeping the dialogue data and the knowledge graph subgraph synchronized; because the updated subgraph also includes semantic entities associated with those in the new dialogue data, it helps guide the topic.
  • Optionally, the above method further includes: when the number of semantic entities in the knowledge graph subgraph is greater than a first number, the terminal device deletes one or more semantic entities from the knowledge graph subgraph.
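  • The update-and-prune behaviour can be pictured as follows: entities found in new dialogue data are added to the concept view's entity set, and when the set grows beyond the "first number", the oldest entities are dropped. The oldest-first eviction shown here is just one plausible policy; the application leaves the deletion strategy open.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

/** Illustrative update of the concept view's entity set with a size cap (the "first number"). */
public class ConceptViewState {

    private final int maxEntities;                              // the "first number" from the method
    private final Deque<String> entities = new ArrayDeque<>();  // insertion order roughly follows dialogue order

    public ConceptViewState(int maxEntities) {
        this.maxEntities = maxEntities;
    }

    /** Adds entities recognised in new dialogue data, evicting the oldest ones when over the cap. */
    public void onNewDialogueData(List<String> newEntities) {
        for (String entity : newEntities) {
            if (!entities.contains(entity)) {
                entities.addLast(entity);
            }
        }
        while (entities.size() > maxEntities) {
            entities.removeFirst();   // one possible deletion policy: drop the oldest entity
        }
    }

    public Deque<String> currentEntities() {
        return entities;
    }
}
```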
  • Optionally, the above method further includes: when a first operation acting on first dialogue data displayed in the dialogue view is detected, the terminal device, in response to the first operation, highlights a third semantic entity in the concept view. The third semantic entity includes a semantic entity existing in the first dialogue data and/or a semantic entity associated with a semantic entity existing in the first dialogue data.
  • Optionally, the third semantic entity may also include a semantic entity whose topic relevance to the first dialogue data is higher than a relevance threshold.
  • Optionally, the above method further includes: when a second operation acting on a fourth semantic entity displayed in the concept view is detected, the terminal device, in response to the second operation, displays second dialogue data in the dialogue view. The fourth semantic entity is a semantic entity existing in the second dialogue data, or a semantic entity associated with a semantic entity existing in the second dialogue data.
  • Optionally, the second dialogue data may also be historical dialogue data whose topic relevance to the fourth semantic entity is higher than a relevance threshold.
  • Optionally, the above method further includes: when a second operation acting on the fourth semantic entity displayed in the concept view is detected, the terminal device, in response to the second operation, displays summary information of the second dialogue data in the concept view. Here, too, the fourth semantic entity is a semantic entity existing in the second dialogue data, or a semantic entity associated with a semantic entity existing in the second dialogue data.
  • Optionally, the terminal device may display in the concept view the summary information of the second dialogue data with the latest generation time.
  • In this way, when dialogue data in the dialogue view is selected, the terminal device highlights the corresponding semantic entity in the concept view; when a semantic entity in the concept view is selected, the terminal device displays the corresponding dialogue data in the dialogue view. This collaborative interaction between the dialogue view and the concept view helps users locate semantic entities and historical dialogue content, and improves the user's dialogue experience.
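  • The two-way linkage between the views could be wired roughly as in the sketch below, where a controller reacts to a selection in either view; the listener interfaces, view methods, and index are hypothetical stand-ins rather than APIs defined in this application.

```java
import java.util.List;

/** Illustrative two-way linkage between the dialogue view and the concept view. */
public class DialogueConceptController {

    /** Minimal view abstractions; real view APIs would differ. */
    interface DialogueView {
        void scrollToMessage(String messageId);
        void setOnMessageSelected(java.util.function.Consumer<String> listener);
    }

    interface ConceptView {
        void highlightEntities(List<String> entityNames);
        void setOnEntitySelected(java.util.function.Consumer<String> listener);
    }

    /** Maps dialogue data to entities and back; assumed to exist on the terminal or network side. */
    interface DialogueIndex {
        List<String> entitiesForMessage(String messageId);
        String latestMessageForEntity(String entityName);
    }

    public DialogueConceptController(DialogueView dialogueView,
                                     ConceptView conceptView,
                                     DialogueIndex index) {
        // First operation: selecting dialogue data highlights the related semantic entities.
        dialogueView.setOnMessageSelected(messageId ->
                conceptView.highlightEntities(index.entitiesForMessage(messageId)));

        // Second operation: selecting an entity brings the related dialogue data into view.
        conceptView.setOnEntitySelected(entityName ->
                dialogueView.scrollToMessage(index.latestMessageForEntity(entityName)));
    }
}
```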
  • Optionally, the above method further includes: when a third operation acting on a task semantic entity displayed in the concept view is detected, the terminal device, in response to the third operation, displays the key information corresponding to the task semantic entity in the concept view.
  • Optionally, the method further includes: when a fourth operation acting on the key information is detected and the user's intention regarding the key information is acquired, the terminal device, in response to the fourth operation, triggers execution of a dialogue task that meets the user's intention.
  • Optionally, the method further includes: the terminal device updates the key information in the concept view according to the result obtained by executing the dialogue task that meets the user's intention.
  • In this way, the knowledge graph subgraph displayed in the concept view includes not only the semantic entities existing in the dialogue data of the dialogue view but also task semantic entities.
  • The task semantic entities serve to make the capability boundary of the dialogue system clear, allowing users to learn the functions of the dialogue system from these task semantic entities.
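  • One way to read the task-entity flow is sketched below: selecting a task semantic entity reveals its key information, and a confirming operation on that key information triggers the corresponding dialogue task, whose result can then be used to refresh the key information. The task registry, record fields, and intent check are illustrative assumptions.

```java
import java.util.Map;
import java.util.Optional;

/** Illustrative handling of task semantic entities and their key information. */
public class TaskEntityHandler {

    /** Key information attached to a task entity, e.g. flight number, date, price. */
    public record KeyInfo(String taskName, Map<String, String> fields) {}

    /** A dialogue task that can be executed once the user's intention is confirmed. */
    public interface DialogueTask {
        String execute(KeyInfo info);   // returns a result used to update the key information
    }

    private final Map<String, KeyInfo> keyInfoByTaskEntity;   // assumed lookup
    private final Map<String, DialogueTask> taskByName;       // assumed registry

    public TaskEntityHandler(Map<String, KeyInfo> keyInfoByTaskEntity,
                             Map<String, DialogueTask> taskByName) {
        this.keyInfoByTaskEntity = keyInfoByTaskEntity;
        this.taskByName = taskByName;
    }

    /** Third operation: show the key information for a task semantic entity. */
    public Optional<KeyInfo> onTaskEntitySelected(String taskEntityName) {
        return Optional.ofNullable(keyInfoByTaskEntity.get(taskEntityName));
    }

    /** Fourth operation: if the user's intention is confirmed, trigger the matching dialogue task. */
    public Optional<String> onKeyInfoConfirmed(KeyInfo info, boolean userIntentionConfirmed) {
        if (!userIntentionConfirmed) {
            return Optional.empty();
        }
        DialogueTask task = taskByName.get(info.taskName());
        return task == null ? Optional.empty() : Optional.of(task.execute(info));
    }
}
```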
  • Optionally, the above method further includes: when a new semantic entity that has a semantic relationship in the knowledge graph with a semantic entity in the historical dialogue data is recognized, and the new semantic entity does not exist in the historical dialogue data, the terminal device initiates a dialogue based on the semantic entity in the historical dialogue data and the new semantic entity.
  • In this way, the terminal device proactively initiates a dialogue according to the association relationships between the concepts in the historical dialogue data, which helps guide the topic and makes the dialogue content richer.
  • In a second aspect, another dialogue interaction method is provided, which can be applied to a network device in a dialogue system.
  • The method includes: the network device generates a knowledge graph subgraph corresponding to the target dialogue according to the dialogue data of the target dialogue. The knowledge graph subgraph includes multiple semantic entities and the semantic relationships between them; the multiple semantic entities include a first semantic entity, which is a semantic entity existing in the dialogue data.
  • The network device sends the knowledge graph subgraph corresponding to the target dialogue to the terminal device. The terminal device uses it to display a dialogue view in a first area of the target dialogue user interface and a concept view in a second area of the target dialogue user interface, where the dialogue view is used to display the dialogue data of the target dialogue, the concept view is used to display the knowledge graph subgraph corresponding to the target dialogue, and the target dialogue user interface is the graphical user interface corresponding to the target dialogue.
  • The target dialogue is a dialogue in the dialogue system between two or more dialogue parties that have an association relationship, and the target dialogue user interface is the graphical user interface used to display the dialogue data sent by these dialogue parties.
  • In this way, the network device generates the knowledge graph subgraph corresponding to the target dialogue from the dialogue data and sends it to the terminal device, so that when the terminal device displays the dialogue user interface it displays not only the dialogue data of the target dialogue but also the corresponding knowledge graph subgraph.
  • Because the knowledge graph subgraph includes the semantic entities existing in the dialogue data, which amount to a summary of the dialogue data of the target dialogue, it helps users quickly grasp the gist of the historical dialogue content and achieves the purpose of reviewing it.
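  • On the network side, generating the subgraph amounts to recognising semantic entities in the dialogue data and pulling their neighbourhood out of the full knowledge graph, roughly as below (building on the KnowledgeGraphSubgraph sketch shown earlier); the entity recogniser and graph store interfaces are placeholders, not components specified by this application.

```java
import java.util.List;
import java.util.Set;

/** Illustrative network-side generation of the knowledge graph subgraph for a target dialogue. */
public class SubgraphGenerator {

    /** Recognises semantic entities in dialogue text, e.g. via NER plus entity linking (assumed). */
    public interface EntityRecognizer {
        Set<String> recognize(String utterance);
    }

    /** Read access to the full knowledge graph (assumed). */
    public interface KnowledgeGraphStore {
        Set<String> neighbours(String entity);
        String relation(String head, String tail);
    }

    private final EntityRecognizer recognizer;
    private final KnowledgeGraphStore graph;

    public SubgraphGenerator(EntityRecognizer recognizer, KnowledgeGraphStore graph) {
        this.recognizer = recognizer;
        this.graph = graph;
    }

    /** Builds the subgraph for the given dialogue data: recognised entities plus their neighbours. */
    public KnowledgeGraphSubgraph generate(List<String> dialogueData) {
        KnowledgeGraphSubgraph subgraph = new KnowledgeGraphSubgraph();
        for (String utterance : dialogueData) {
            for (String entityName : recognizer.recognize(utterance)) {
                KnowledgeGraphSubgraph.SemanticEntity first =
                        new KnowledgeGraphSubgraph.SemanticEntity(entityName, false);
                subgraph.addEntity(first);
                for (String neighbourName : graph.neighbours(entityName)) {
                    KnowledgeGraphSubgraph.SemanticEntity second =
                            new KnowledgeGraphSubgraph.SemanticEntity(neighbourName, false);
                    subgraph.addRelation(first, graph.relation(entityName, neighbourName), second);
                }
            }
        }
        return subgraph;
    }
}
```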
  • Optionally, the above method further includes: the network device updates the knowledge graph subgraph corresponding to the target dialogue according to new dialogue data and sends the updated knowledge graph subgraph to the terminal device.
  • The updated knowledge graph subgraph is used by the terminal device to update the concept view.
  • The updated knowledge graph subgraph includes the semantic entities existing in the new dialogue data, or those semantic entities together with the semantic entities associated with them.
  • In a third aspect, a graphical user interface on a terminal device is provided. The terminal device has a display screen, a memory, and one or more processors, and the one or more processors are used to execute one or more computer programs stored in the memory.
  • The graphical user interface includes a dialogue view displayed in a first area of the target dialogue user interface and a concept view displayed in a second area of the target dialogue user interface.
  • The dialogue view is used to display the dialogue data of the target dialogue, and the concept view is used to display the knowledge graph subgraph corresponding to the target dialogue.
  • The knowledge graph subgraph corresponding to the target dialogue includes multiple semantic entities and the semantic relationships between the multiple semantic entities.
  • The multiple semantic entities include a first semantic entity, and the first semantic entity is a semantic entity existing in the dialogue data of the target dialogue.
  • The target dialogue is a dialogue in the dialogue system between two or more dialogue parties that have an association relationship, and the target dialogue user interface is the graphical user interface used to display the dialogue data sent by these dialogue parties.
  • Optionally, the multiple semantic entities included in the knowledge graph subgraph corresponding to the target dialogue further include one or more second semantic entities associated with the first semantic entity.
  • The second semantic entity may include a semantic entity adjacent to the first semantic entity in the knowledge graph. Further, the second semantic entity may include only some of the semantic entities adjacent to the first semantic entity, for example those whose frequency of use in the dialogue process is higher than a first frequency threshold.
  • Here the dialogue process may refer to the dialogue process of the target dialogue, or to the dialogue process of the entire dialogue system.
  • Alternatively, the selected adjacent semantic entities may be determined based on the user profile (user portrait). The way of selecting some of the semantic entities adjacent to the first semantic entity is not limited to these two cases, and this application does not limit it.
  • The second semantic entity may also include a semantic entity whose path distance from the first semantic entity in the knowledge graph is less than a first distance threshold; that is, it is not limited to semantic entities directly adjacent to the first semantic entity. Again, only some of the semantic entities within this distance may be included, for example those whose frequency of use in the dialogue process is higher than a second frequency threshold, where the dialogue process may refer to the dialogue process of the target dialogue or of the entire dialogue system.
  • The selected semantic entities may also be determined based on the user profile; the selection is not limited to the above two cases, and this application does not limit it.
  • Optionally, when new dialogue data is acquired, the concept view is updated, and the updated concept view is used to display the knowledge graph subgraph updated according to the new dialogue data.
  • The updated knowledge graph subgraph includes the semantic entities existing in the new dialogue data, or those semantic entities together with the semantic entities associated with them.
  • In this way, the knowledge graph subgraph displayed in the concept view is updated as dialogue data is generated, keeping the dialogue data and the knowledge graph subgraph synchronized; because the updated subgraph also includes semantic entities associated with those in the new dialogue data, it helps guide the topic.
  • Optionally, when a first operation acting on first dialogue data displayed in the dialogue view is detected, a third semantic entity is highlighted in the concept view. The third semantic entity includes a semantic entity existing in the first dialogue data and/or a semantic entity associated with a semantic entity existing in the first dialogue data.
  • The third semantic entity may also include a semantic entity whose topic relevance to the first dialogue data is higher than a relevance threshold.
  • Optionally, when a second operation acting on a fourth semantic entity displayed in the concept view is detected, second dialogue data is displayed in the dialogue view. The fourth semantic entity is a semantic entity existing in the second dialogue data, or a semantic entity associated with a semantic entity existing in the second dialogue data.
  • The second dialogue data may also be historical dialogue data whose topic relevance to the fourth semantic entity is higher than a relevance threshold.
  • Optionally, when a second operation acting on the fourth semantic entity displayed in the concept view is detected, summary information of the second dialogue data is displayed in the concept view. The fourth semantic entity is a semantic entity existing in the second dialogue data, or a semantic entity associated with a semantic entity existing in the second dialogue data. Further, the summary information of the second dialogue data with the latest generation time may be displayed in the concept view.
  • Optionally, when a third operation acting on a task semantic entity displayed in the concept view is detected, the key information corresponding to the task semantic entity is displayed in the concept view.
  • Optionally, when a fourth operation acting on the key information is detected and the user's intention regarding the key information is acquired, in response to the fourth operation, execution of a dialogue task that meets the user's intention is triggered.
  • Optionally, the key information in the concept view is updated according to the result of executing the dialogue task that meets the user's intention.
  • Optionally, when a new semantic entity that has a semantic relationship in the knowledge graph with a semantic entity in the historical dialogue data is recognized and the new semantic entity does not exist in the historical dialogue data, a dialogue is initiated based on the semantic entity in the historical dialogue data and the new semantic entity.
  • In further aspects, a terminal device is provided, which may include a display screen, a memory, and one or more processors.
  • The one or more processors are used to execute one or more computer programs stored in the memory, so that the terminal device implements the method of the first aspect or any one of the implementation manners of the first aspect.
  • The terminal device may alternatively include an apparatus that can implement the method of the first aspect or any one of the implementation manners of the first aspect.
  • A network device is also provided, which may include a memory and one or more processors.
  • The one or more processors are used to execute one or more computer programs stored in the memory, so that the network device implements the method of the second aspect or any one of the implementation manners of the second aspect.
  • The network device may alternatively include an apparatus that can implement the method of the second aspect or any one of the implementation manners of the second aspect.
  • A computer program product containing instructions is provided; when the computer program product is run on a terminal device, the terminal device is caused to execute any one of the methods in the first aspect or the implementation manners of the first aspect.
  • A computer program product containing instructions is provided; when the computer program product is run on a network device, the network device is caused to execute any one of the methods in the second aspect or the implementation manners of the second aspect.
  • A computer-readable storage medium including instructions is provided; when the instructions are executed on a terminal device, the terminal device is caused to execute any one of the methods in the first aspect or the implementation manners of the first aspect.
  • A computer-readable storage medium including instructions is provided; when the instructions are executed on a network device, the network device is caused to execute any one of the methods in the second aspect or the implementation manners of the second aspect.
  • A communication system is provided, which may include a terminal device and may further include a network device.
  • the terminal device may be the terminal device of the fourth aspect or the fifth aspect, and the network device may be the network device of the sixth aspect or the seventh aspect.
  • FIG. 1 is a schematic diagram of the system architecture of a dialogue system provided by an embodiment of the present application;
  • FIG. 2 is a schematic structural diagram of a terminal device provided by an embodiment of the present application;
  • FIGS. 3A-3F are some graphical user interfaces implemented on the terminal device during the process of entering the target dialogue user interface, provided by embodiments of the present application;
  • FIGS. 4A-4H are some graphical user interfaces implemented on the terminal device after entering the target dialogue user interface, provided by an embodiment of the present application;
  • FIGS. 5A-5B are schematic flowcharts of a dialogue interaction method provided by an embodiment of the present application;
  • FIGS. 6A-6E are schematic diagrams of another flow of a dialogue interaction method provided by an embodiment of the present application;
  • FIGS. 7A-7B are schematic diagrams of another flow of a dialogue interaction method provided by an embodiment of the present application;
  • FIGS. 8A-8B are schematic diagrams of yet another flow of the dialogue interaction method provided by an embodiment of the present application;
  • FIGS. 9A-9B are schematic diagrams of yet another flow of the dialogue interaction method provided by an embodiment of the present application;
  • FIGS. 10A-10B are schematic diagrams of yet another flow of a dialogue interaction method provided by an embodiment of the present application;
  • FIGS. 11A-11B are schematic diagrams of another flow of a dialogue interaction method provided by an embodiment of the present application;
  • FIGS. 12A-12C are schematic diagrams of another flow of a dialogue interaction method provided by an embodiment of the present application;
  • FIGS. 13A-13B are schematic diagrams of yet another flow of a dialogue interaction method provided by an embodiment of the present application;
  • FIG. 14 is a structural block diagram of a network device provided by an embodiment of the present application.
  • the technical solution of the present application can be applied to a dialogue system that uses a user interface to display dialogue data.
  • Dialogue data is the voice data or text data sent by the two or more parties to a dialogue in the dialogue system, in the dialogue scene or dialogue environment they are in, to express their opinions, thoughts, or reasoning.
  • Dialogue data can also be called session data, chat data, question-and-answer data, and so on; this application does not limit the term.
  • A user interface is a medium interface for interaction and information exchange between an application or operating system and the user; it converts between the internal form of information and a form acceptable to the user.
  • The user interface of an application is defined by source code written in a specific computer language, such as Java or extensible markup language (XML). The interface source code is parsed and rendered on the terminal device and finally presented as content that the user can recognize.
  • A control, also called a widget, is the basic element of a user interface. Typical controls include toolbars, menu bars, text boxes, buttons, scroll bars, pictures, and text.
  • The attributes and content of the controls in the interface are defined by tags or nodes. For example, XML specifies the controls contained in the interface through nodes such as <Textview>, <ImgView>, and <VideoView>. A node corresponds to a control or an attribute in the interface, and the node is parsed and rendered as user-visible content.
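  • As a small, hedged illustration of the node-to-control relationship, the Android-style Java snippet below builds a text control programmatically instead of declaring it in XML; it assumes an Android environment and is only meant to show that a node such as <Textview> ultimately becomes a view object in the rendered interface.

```java
import android.content.Context;
import android.widget.LinearLayout;
import android.widget.TextView;

/** Roughly what a parsed <Textview> node becomes at runtime (Android-style, illustrative). */
public class ControlExample {
    public static LinearLayout buildSimpleInterface(Context context) {
        LinearLayout root = new LinearLayout(context);
        root.setOrientation(LinearLayout.VERTICAL);

        TextView title = new TextView(context);     // the control a <Textview> node describes
        title.setText("Dialogue");                  // a node attribute becomes a view property
        root.addView(title);                        // the node's place in the tree becomes the view hierarchy

        return root;
    }
}
```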
  • Some applications, such as hybrid applications, usually include web pages in their interfaces. A web page, also called a page, can be understood as a special control embedded in the application interface.
  • A web page is source code written in a specific computer language, such as hypertext markup language (HTML), cascading style sheets (CSS), or JavaScript (JS). Web page source code can be loaded and displayed as user-recognizable content by a browser or by a web page display component with similar functions.
  • The specific content contained in a web page is also defined by tags or nodes in the source code of the web page. For example, HTML defines the elements and attributes of the web page through <p>, <img>, <video>, and <canvas>.
  • A graphical user interface (GUI) is a user interface, related to computer operations, that is displayed graphically. It can include icons, windows, controls, and other interface elements displayed on the display screen of the terminal device, where controls can include visual interface elements such as icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, navigation bars, and widgets.
  • The dialogue system that uses a user interface to display dialogue data may be a dialogue system based on human-computer interaction. The parties to a dialogue in such a system may be a human and a machine, that is, a user and a device.
  • A human-computer-interaction-based dialogue system may be a dialogue system oriented to individual users and used to provide services for individual users. It may be one of the various assistant applications (apps) installed on terminal devices, such as Siri, Cortana, Alexa, Google Now, or other assistant apps used to provide assistant services for independent individual users.
  • A human-computer-interaction-based dialogue system can also be a dialogue system that provides some kind of service to all users, such as the customer service assistants, work assistants, intelligent robots, and the like designed by enterprises or companies to solve the problems of employees or users, for example those of Ali or Huawei.
  • The dialogue system that uses a user interface to display dialogue data may also be a dialogue system based on instant messaging, and the dialogue parties involved may be two or more users.
  • An instant-messaging-based dialogue system is a communication system used to establish instant communication between two or more users; specifically, it can be a communication tool, such as QQ, WeChat, DingTalk, or Fetion, that uses the network to transmit dialogue data in real time.
  • The system architecture of the dialogue system can be as shown in FIG. 1. The dialogue system 10 may be composed of a terminal device 101 and a network device 102.
  • The terminal device 101 faces the user and can interact with the user. The terminal device 101 can obtain various operations initiated by the user through input peripherals (such as a display screen or a microphone), initiate a request to the network device based on an operation initiated by the user, obtain the response generated by the network device according to that operation, and output the response to the user through output peripherals (such as a display screen or a speaker).
  • For example, the terminal device can obtain dialogue data input by the user, send the dialogue data to the network device, then receive the reply data generated by the network device according to the dialogue data, and display the reply data to the user through the display screen.
  • The terminal device may be a device with a display function, such as a mobile phone, a computer, an iPad, or an e-reader.
  • The network device 102 is used to provide dialogue-related background support for the dialogue system. The network device 102 can receive a request initiated by the terminal device based on an operation initiated by the user, execute the corresponding operation according to the request, generate a response, and return the response to the terminal device, thereby completing the interaction between the dialogue system and the user.
  • For example, when the dialogue system is an instant-messaging-based dialogue system, the network device may receive dialogue data A sent by a first user terminal and send it to a second user terminal, which is the destination of dialogue data A; then, when it receives dialogue data B sent by the second user terminal to the first user terminal, it sends dialogue data B to the first user terminal, thereby completing the dialogue interaction between the first user terminal and the second user terminal.
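  • The relay behaviour described above can be pictured as a tiny in-memory forwarder: dialogue data arriving from one terminal is routed to the terminal registered as its destination. The interfaces below are placeholders and deliberately ignore persistence, ordering, and offline delivery.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Illustrative relay of dialogue data between user terminals via the network device. */
public class DialogueRelay {

    /** Connection to a user terminal; a real system would use sockets, push channels, etc. */
    public interface TerminalConnection {
        void deliver(String fromUser, String dialogueData);
    }

    private final Map<String, TerminalConnection> onlineTerminals = new ConcurrentHashMap<>();

    public void register(String userId, TerminalConnection connection) {
        onlineTerminals.put(userId, connection);
    }

    /** Forwards dialogue data from the sender to the destination terminal, if it is online. */
    public void forward(String fromUser, String toUser, String dialogueData) {
        TerminalConnection destination = onlineTerminals.get(toUser);
        if (destination != null) {
            destination.deliver(fromUser, dialogueData);
        }
        // A real implementation would also persist the data and handle offline delivery.
    }
}
```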
  • the network device 102 may include a real-time communication server, a database server, etc.
  • the real-time communication server may be used to interact with the terminal device 101, and the database server may be used to store various data used to implement the functions implemented by the dialogue system.
  • For example, when the dialogue system is a human-computer-interaction-based dialogue system that uses a knowledge graph to generate reply data, the database server can be used to store the dialogue data and the knowledge graph used to generate the reply data.
  • When the dialogue system is an instant-messaging-based dialogue system, the database server can be used to store each instant messaging account in the instant messaging system and the instant messaging relationships (such as friend relationships) between the accounts.
  • In some embodiments, the dialogue system may also consist of a terminal device alone as an independent device. In this case, the terminal device may also perform all or part of the operations performed by the network device 102 in the system architecture shown in FIG. 1.
  • FIG. 2 exemplarily shows a schematic structural diagram of a terminal device 200.
  • The terminal device 200 may include a processor 210, a memory 220, a display screen 230, an audio module 240, a speaker 240A, a receiver 240B, a microphone 240C, a sensor module 250, a communication component 260, and the like, where the sensor module 250 may include a pressure sensor 250A, a fingerprint sensor 250B, a touch sensor 250C, and so on. It can be understood that the structure illustrated in this embodiment of the present application does not constitute a specific limitation on the terminal device 200.
  • The processor 210 may include one or more processing units.
  • For example, the processor 210 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), among others.
  • the different processing units may be independent devices or integrated in one or more processors.
  • In some embodiments, the terminal device 200 may also include one or more processors 210.
  • the processor 210 may also be provided with a memory for storing instructions and data.
  • the memory in the processor 210 is a cache memory.
  • the memory can store instructions or data that have just been used or recycled by the processor 210. If the processor 210 needs to use the instruction or data again, it can be directly called from the memory. Repeated access is avoided, the waiting time of the processor 210 is reduced, and the efficiency of the terminal device 200 is improved.
  • In some embodiments, the processor 210 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, and so on.
  • the memory 220 may be used to store one or more computer programs, and the one or more computer programs include instructions.
  • the processor 210 can run the above-mentioned instructions stored in the memory 220 to enable the terminal device 200 to execute the dialog interaction methods provided in some embodiments of the present application, as well as various functional applications and data processing.
  • the memory 220 may include a storage program area and a storage data area. Among them, the storage program area can store the operating system; the storage program area can also store one or more application programs (such as a gallery, contacts, etc.) and so on.
  • the data storage area can store data (such as photos, contacts, etc.) created during the use of the terminal device 200.
  • the memory 220 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
  • the terminal device 200 can implement a display function through a GPU, a display screen 230, and an application processor.
  • the GPU is a microprocessor for image processing, connected to the display 230 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • the processor 210 may include one or more GPUs, which execute instructions to generate or change display information.
  • the display screen 230 is used to display images, videos, etc.
  • the display 230 includes a display panel.
  • The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like.
  • the terminal device 200 may include 2 or N display screens 230, and N is a positive integer greater than 2.
  • the terminal device 200 can implement audio functions through an audio module 240, a speaker 240A, a receiver 240B, a microphone 240C, and an application processor. For example, music playback, recording, etc.
  • the audio module 240 is used for converting digital audio information into an analog audio signal for output, and also for converting an analog audio input into a digital audio signal.
  • the audio module 240 can also be used to encode and decode audio signals.
  • the audio module 240 may be provided in the processor 210, or part of the functional modules of the audio module 240 may be provided in the processor 210.
  • The speaker 240A, also called a "loudspeaker", is used to convert audio electrical signals into sound signals. The terminal device 200 can play music or take a hands-free call through the speaker 240A.
  • The receiver 240B, also called an "earpiece", is used to convert audio electrical signals into sound signals. When the terminal device 200 answers a call or plays a voice message, the user can hear the voice by bringing the receiver 240B close to the ear.
  • The microphone 240C, also called a "mic", is used to convert sound signals into electrical signals. The user can speak close to the microphone 240C to input a sound signal into it.
  • the terminal device 200 may be provided with at least one microphone 240C. In other embodiments, the terminal device 200 may be provided with two microphones 240C, which can implement noise reduction functions in addition to collecting sound signals. In other embodiments, the terminal device 200 may also be provided with three, four or more microphones 240C to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions.
  • the pressure sensor 250A is used to sense the pressure signal and can convert the pressure signal into an electrical signal.
  • the pressure sensor 250A may be provided on the display screen 230.
  • the capacitive pressure sensor may include at least two parallel plates with conductive material. When a force is applied to the pressure sensor 250A, the capacitance between the electrodes changes.
  • the terminal device 200 determines the strength of the pressure according to the change in capacitance.
  • the terminal device 200 detects the intensity of the touch operation according to the pressure sensor 250A.
  • the terminal device 200 may also calculate the touched position based on the detection signal of the pressure sensor 250A.
  • Touch operations that act on the same touch position but have different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than the first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
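  • The pressure-dependent behaviour in this example could be dispatched along the lines of the sketch below; the threshold and the two actions come from the text above, while the class and method names are illustrative.

```java
/** Illustrative dispatch of a touch on the short-message icon by touch-operation intensity. */
public class PressureDispatcher {

    private final float firstPressureThreshold;

    public PressureDispatcher(float firstPressureThreshold) {
        this.firstPressureThreshold = firstPressureThreshold;
    }

    /** Returns the instruction to execute for a touch of the given intensity on the SMS icon. */
    public String onShortMessageIconTouched(float touchIntensity) {
        if (touchIntensity < firstPressureThreshold) {
            return "VIEW_SHORT_MESSAGE";        // lighter press: open and view the short message
        } else {
            return "CREATE_NEW_SHORT_MESSAGE";  // firmer press: create a new short message
        }
    }
}
```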
  • the fingerprint sensor 250B is used to collect fingerprints.
  • the terminal device 200 can use the collected fingerprint characteristics to implement fingerprint unlocking, access application locks, fingerprint photographs, fingerprint answering calls, and so on.
  • the touch sensor 250C can also be called a touch panel or a touch-sensitive surface.
  • the touch sensor 250C may be disposed on the display screen 230, and the touch sensor 250C and the display screen 230 form a touch screen, which is also called a “touch screen”.
  • the touch sensor 250C is used to detect touch operations acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • the visual output related to the touch operation can be provided through the display 230.
  • the touch sensor 250C may also be disposed on the surface of the terminal device 200, which is different from the position of the display screen 230.
  • the communication component 260 may be used for the terminal device 200 to communicate with other communication devices, and the other communication devices may be, for example, network devices (such as servers).
  • the communication component 260 may include a wired communication interface, such as an Ethernet port, an optical fiber interface, and the like.
  • the communication component 260 may also include a wireless communication interface.
  • the communication component 260 may include a radio frequency interface and a radio frequency circuit to implement the functions implemented by the wireless communication interface.
  • the radio frequency circuit may include a transceiver and components (such as conductors, wires, etc.) for transmitting and receiving electromagnetic waves in free space during wireless communication.
  • In other embodiments, the terminal device 200 may include more or fewer components than shown, or combine certain components, or split certain components, or have a different arrangement of components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the terminal device 200 exemplarily shown in FIG. 2 can display various user interfaces described in the following embodiments through a display screen 230.
  • The terminal device 200 can also detect touch operations in each user interface through the touch sensor 250C, such as a click operation in each user interface (for example, a touch operation on an icon or a double-click operation), an upward or downward swipe in each user interface, a circle-drawing gesture, and so on.
  • the terminal device may also detect the user's operation in the user interface through other input peripherals except the touch sensor. For example, the terminal device may detect the user's voice operation in the user interface through the microphone 240C.
  • the terminal device can also detect a non-touch gesture operation or action operation of the user in the user interface through a camera not shown in FIG. 2.
  • the terminal device can also detect gesture operations, such as moving the mouse, clicking the mouse, etc., through input peripherals such as a mouse and a touchpad not shown in FIG. 2, which is not limited to the description here.
  • In the dialogue system, both parties to the dialogue, or one of the dialogue parties, can enter the target dialogue user interface to perform operations related to dialogue data, such as sending dialogue data, viewing dialogue data, and deleting dialogue data.
  • the dialog user interface refers to a graphical user interface on the terminal device 200 that is used to display the dialog data sent by the two parties or the parties to the dialog.
  • For a human-computer-interaction-based dialogue system, the dialogue user interface may be a user interface on the terminal device 200 for displaying the dialogue data sent by the dialogue system and the user.
  • For an instant-messaging-based dialogue system, the dialogue user interface may be a user interface on the terminal device 200 for displaying the dialogue data sent by two or more users.
  • The target dialogue user interface is the dialogue user interface of the target dialogue, and the target dialogue refers to a dialogue between two or more dialogue parties that have an association relationship.
  • For a human-computer-interaction-based dialogue system, the target dialogue refers to the dialogue between the user who holds or uses the terminal device and the dialogue system, that is, the dialogue between that user and the terminal device.
  • For an instant-messaging-based dialogue system, the target dialogue refers to a dialogue between two or more instant messaging users who have an instant messaging relationship.
  • For example, suppose instant messaging user 1 has a friend relationship with instant messaging user 2, instant messaging user 3, and instant messaging user 4, and that instant messaging user 1, instant messaging user 2, and instant messaging user 3 form an instant messaging group. The target dialogue can then be a separate dialogue between instant messaging user 1 and instant messaging user 2, a separate dialogue between instant messaging user 1 and instant messaging user 3, a separate dialogue between instant messaging user 1 and instant messaging user 4, or the instant messaging group dialogue among instant messaging user 1, instant messaging user 2, and instant messaging user 3.
  • the user can enter the target dialog user interface from the user interface for the application menu.
  • the following describes some graphical user interfaces on the terminal device when the user enters the target dialog user interface from the user interface used for the application menu.
  • FIG. 3A exemplarily shows an exemplary graphical user interface 31 for an application menu on a terminal device.
  • The graphical user interface 31 may include a status bar 301, a tray 302 with icons of commonly used applications, and other application icons 303. Among them:
  • the status bar 301 may include: one or more signal strength indicators 304 for mobile communication signals (also called cellular signals), and one or more signal strength indicators 305 for wireless fidelity (Wi-Fi) signals , Battery status indicator 306, time indicator 307, etc.
  • The tray 302 with icons of commonly used applications can be used to display application icons that are frequently used on the terminal device 200, set by the user, or set by default by the system, such as the phone icon 308, the contacts icon 309, the SMS icon 310, and the camera icon 311 shown in FIG. 3A.
  • The tray 302 can also be used to display icons of applications corresponding to the dialogue system (hereinafter, the applications corresponding to the dialogue system are referred to as target applications), for example, icons of instant-messaging chat tools (such as DingTalk and Fetion).
  • The other application icons 303 are the icons of applications installed on the terminal device 200 other than the commonly used applications, such as the WeChat icon 312, the QQ icon 313, the Twitter icon 314, the Facebook icon 315, the mailbox icon 316, the cloud service icon 317, the memo icon 318, the Alipay icon 319, the gallery icon 320, and the settings icon 321 shown in FIG. 3A.
  • the other application icon 303 may include the icon of the target application, and the icon of the target application may be, for example, the WeChat icon 312, the QQ icon 313, etc. shown in FIG. 3A.
  • the other application icons 303 may be distributed on multiple pages.
  • the graphical user interface 31 may also include a page indicator 322.
  • The page indicator 322 may be used to indicate which page of applications the user is currently browsing. The user can slide left and right in the area of the other application icons to browse application icons on other pages.
  • the graphical user interface 31 exemplarily shown in FIG. 3A may be a home screen.
  • the terminal device 200 may further include a home screen key 323.
  • the main screen key 323 may be a physical key or a virtual key.
  • the home screen key can be used to receive instructions from the user and return the currently displayed user interface to the home interface, so that the user can view the home screen at any time.
  • The user's instruction can be an operation instruction of the user pressing the home screen key once, an operation instruction of the user pressing the home screen key twice in a short period of time, or an operation instruction of the user long-pressing the home screen key within a predetermined time.
  • the home screen key can also be integrated with a fingerprint recognizer, so that when the home screen key is pressed, fingerprints are collected and recognized.
  • FIGS. 3B to 3C exemplarily show the graphical user interface implemented by the terminal device when the user enters the target dialog user interface on the terminal device 200 from the user interface for the application menu.
  • the terminal device 200 displays the graphical user interface 41 of the target application.
  • The graphical user interface 41 may include a status bar 401, a title bar 402, an option navigation bar 403, and a page content display area 404. Among them:
  • status bar 401 refers to the status bar 301 in the user interface 31 shown in FIG. 3A, which will not be repeated here.
  • the title bar 402 may include a return key 416 and a current page indicator 417.
  • the return key 416 may be used to return to the upper level of the menu.
  • The current page indicator 417 may be used to indicate the current page, for example, with the text information "WeChat". It is not limited to text information; the current page indicator may also be an icon.
  • the option navigation bar 403 is used to display multiple application options of the target application.
  • For example, the option navigation bar 403 includes application option 405 ("WeChat"), application option 406 ("Contacts"), application option 407 ("Discover"), and application option 408 ("Me").
  • the page content display area 404 is used to display the lower-level menu or content of the application option selected by the user.
  • The content displayed in the page content display area 404 may vary with the application option selected by the user.
  • When the terminal device 200 detects a click operation on an application option in the option navigation bar 403, in response to the click operation, the terminal device 200 can display the next-level menu or content of that application option in the page content display area 404 and display the title of the application option in the title bar.
  • For example, the content displayed in the page content display area 404 is the content corresponding to application option 405 ("WeChat"), including option 409 ("QQ mailbox reminder"), option 410 ("subscription number"), option 411 ("XXX"), option 412 ("YYY"), option 413, option 414 ("Xiao Li"), and option 415 ("Xiao Zhao").
  • Among them, option 411, option 412, option 413, and option 414 are dialogue options.
  • the application option 405 (“WeChat”) can be referred to as a dialog application option
  • the page content display area corresponding to the dialog application option can be used to display one or more dialog options.
  • one dialogue option corresponds to one instant messaging session.
• the dialog application option can also be called "friends" (for example, when the target application is Alipay), "message" (for example, when the target application is QQ, Taobao, etc.), "chat", and so on, and is not limited to the descriptions here.
• the target dialogue user interface 51 may include: a status bar 501, a title bar 502, a dialogue area 503, and a dialogue input area 504, where:
  • the status bar 501 may refer to the status bar 301 in the graphical user interface 31 shown in FIG. 3A, and the title bar 502 may refer to the title bar 402 in the graphical user interface 41 shown in FIG. 3B, which will not be repeated here.
  • the conversation area 503 may include a conversation view 506 and a concept view 505.
  • the area occupied by the dialogue view 506 in the target dialogue user interface 51 may be referred to as the first area
  • the area occupied by the conceptual view 505 in the target dialogue user interface 51 may be referred to as the second area.
  • the dialog view 506 is used to display the dialog data of the target dialog.
  • the concept view 505 is used to display the subgraph of the knowledge graph corresponding to the target dialogue.
• the knowledge graph subgraph corresponding to the target dialogue may include multiple semantic entities and the semantic relationships between these semantic entities, and the multiple semantic entities may include semantic entities existing in the dialogue data of the target dialogue.
• for the knowledge graph subgraph corresponding to the target dialogue, please refer to the subsequent description.
  • the dialog input area 504 is an area for a user who holds or uses the terminal device 200 to input dialog data, and the user who holds or uses the terminal device 200 can input the dialog data through text and/or voice.
• the user selects the graphical user interfaces to be displayed in sequence through multiple click operations to enter the target dialogue user interface from the graphical user interface for the application menu. The way of entering is not limited to sequentially selecting the graphical user interfaces to be displayed through multiple clicks; in an alternative embodiment, the user can also select the graphical user interfaces to be displayed in other ways in order to enter the target dialogue user interface from the graphical user interface for the application menu. For example, the graphical user interfaces to be displayed can be selected in sequence by double-clicking, drawing a circle, voice, and so on, which is not limited in this application.
• the specific number of selections, that is, how many selections are needed to enter the target dialogue user interface, is related to the user interface design of the target application and is not limited in this application.
  • FIG. 3D exemplarily shows the graphical user interface implemented on the terminal device when the user enters the target dialog user interface on the terminal device 200 from the user interface for the application menu.
  • the terminal device 200 displays the target dialogue user interface 51 in response to the pressing operation.
• for the target dialogue user interface 51, please refer to the introduction corresponding to FIG. 3C, which will not be repeated here.
• the terminal device can enter the target dialogue user interface from the user interface for the application menu by directly evoking the target dialogue user interface through a long press of the home screen key.
• the way of evoking is not limited to a long press. The user can also directly evoke the target dialogue user interface in other ways to enter it from the user interface for the application menu, for example, by drawing a circle on the user interface for the application menu, double-clicking or triple-clicking the home screen key, voice wake-up, and so on, which is not limited in this application.
  • the user can also enter the target dialog user interface through another user interface displayed on the terminal device 200, which is not limited in this application, and the other user interface may be other user interfaces on the terminal device that are not target applications.
  • the other user interface may be a user interface of a memo on the terminal device.
• it is possible that dialogue data of the target dialogue has already been generated in the dialogue system, and that the dialogue system has the function of displaying the dialogue data of one or more dialogues generated before the current dialogue.
• the start and end of a dialogue can be measured by entering and exiting the target dialogue user interface, or by the opening and closing of the target application. That is, entering the target dialogue user interface represents the beginning of a dialogue and exiting the target dialogue user interface represents the end of a dialogue; or, opening the target application represents the beginning of a dialogue and closing the target application represents the end of a dialogue.
  • the target dialogue user interface displayed on the terminal device 200 may also refer to FIG. 3E, which exemplarily shows an exemplary target dialogue user interface implemented by the terminal device 200.
  • the target dialogue user interface 51 may include a status bar 501, a title bar 502, a dialogue area 503, and a dialogue input area 504.
• for the status bar 501, the title bar 502, and the dialogue input area 504, please refer to the description corresponding to FIG. 3C.
• the difference from FIG. 3C or FIG. 3D is that the dialogue view 506 in the dialogue area 503 displays historical dialogue data 507 ("How is the weather in Shenzhen today?", "Shenzhen turns cloudy today, the temperature is 16-28 degrees Celsius"); the historical dialogue data 507 is the dialogue data generated before entering the target dialogue user interface. The conceptual view 505 in the dialogue area 503 displays a knowledge graph subgraph 508, and the knowledge graph subgraph 508 includes the semantic entities ("Shenzhen", "weather", "temperature") existing in the historical dialogue data 507.
• in another case, the dialogue system does not have the function of displaying the dialogue data of one or more dialogues generated before this dialogue, or, before entering the target dialogue user interface, no dialogue data of the target dialogue exists in the dialogue system.
  • the target dialogue user interface displayed on the terminal device 200 can also be seen in FIG. 3F, and FIG. 3F exemplarily shows an exemplary target dialogue user interface implemented on the terminal device 200.
  • the target dialogue user interface 51 may include a status bar 501, a title bar 502, a dialogue area 503, and a dialogue input area 504.
• for the status bar 501, the title bar 502, and the dialogue input area 504, please refer to the description corresponding to FIG. 3C.
• the difference between the target dialogue user interface 51 shown in FIG. 3F and the target dialogue user interface 51 shown in FIG. 3E is that the dialogue view 506 in the dialogue area 503 does not display dialogue data, and the conceptual view 505 in the dialogue area 503 displays a knowledge graph subgraph 509.
  • the knowledge graph subgraph 509 can be called the initial knowledge graph subgraph.
  • the initial knowledge graph subgraph may include multiple initial semantic entities ("Shenzhen”, “weather”, “temperature”, “Shenzhen University”, “Ma Huateng”, “Tencent”, “Huawei”, “5G”).
  • the initial semantic entity can be one or more of the following:
  • the initial semantic entity is the semantic entity that exists in the dialogue data of one or more dialogues before this dialogue;
  • the initial semantic entity is the most popular semantic entity in the dialogue system
  • the initial semantic entity is the semantic entity related to the to-do items in the user's schedule
  • the initial semantic entity is a semantic entity determined based on the user profile of the user.
  • the initial semantic entity may also have other situations, which are not limited in this application.
• for the specific introduction of the several kinds of initial semantic entities described above, please refer to the description of the subsequent method embodiments; an illustrative selection sketch is also given below.
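• the following is a minimal sketch of how the several kinds of initial semantic entities listed above could be gathered into one candidate set; all function and parameter names are assumptions made for this example, not part of the described system.

```python
# Illustrative sketch only: merge candidate initial semantic entities from the
# kinds of sources listed above (historical dialogues, popular entities,
# schedule to-do items, user portrait). Names and data are assumptions.

def select_initial_entities(history_entities, popular_entities,
                            schedule_entities, profile_entities, limit=8):
    """Merge candidate sources in priority order and deduplicate."""
    candidates = []
    for source in (history_entities, popular_entities,
                   schedule_entities, profile_entities):
        for entity in source:
            if entity not in candidates:
                candidates.append(entity)
    return candidates[:limit]

initial = select_initial_entities(
    history_entities=["Shenzhen", "weather", "temperature"],
    popular_entities=["5G", "Huawei"],
    schedule_entities=["Shenzhen University"],
    profile_entities=["Ma Huateng", "Tencent"],
)
print(initial)
```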
  • Figures 3C-3F exemplarily show some possible situations of the target dialog user interface.
  • the target dialog user interface may also include a view switch button.
• the view switch button can switch the type of view displayed in the dialogue area. That is, through the view switch button, only the dialogue view is displayed in the dialogue area, or only the conceptual view is displayed, or both the dialogue view and the conceptual view are displayed.
• the function of the view switch button can also be to turn the conceptual view on or off. That is, through the view switch button, the conceptual view can be closed so that only the dialogue view is displayed in the dialogue area, or the conceptual view can be opened so that both the dialogue view and the conceptual view are displayed in the dialogue area.
  • the view switching button may also be an interface element such as an icon, an option bar, and a floating window.
• the target dialogue user interface 51 may also not contain the title bar 502 shown in FIGS. 3C to 3F. This application does not limit the specific presentation of the target dialogue user interface when entering it.
• FIGS. 3A to 3F are only a few examples cited in this application to explain some graphical user interfaces implemented on the terminal device during the process of entering the target dialogue user interface, and do not constitute a limitation on this application.
• after entering the target dialogue user interface, the user can perform operations related to the dialogue data, and the content displayed on the target dialogue user interface is related to the user's operations.
  • the following introduces some graphical user interfaces implemented on the terminal device after entering the target dialog user interface.
  • FIG. 4A exemplarily shows a graphical user interface implemented on the terminal device 200 when new dialog data is generated.
  • A1 in FIG. 4A is the target dialogue user interface implemented on the terminal device 200 when entering the target dialogue user interface.
• when new dialogue data 511 ("Who is the MVP of the NBA in the 97-98 season?", "It is Michael Jordan") is generated, the terminal device 200 acquires the new dialogue data.
• in response to the new dialogue data 511, the terminal device 200 updates the dialogue view 506 and the conceptual view 505: the updated dialogue view 506 displays the new dialogue data 511, and the updated conceptual view 505 displays the knowledge graph subgraph 510, which includes the semantic entities ("NBA", "MVP", "Michael Jordan") existing in the new dialogue data 511.
• in general, when new dialogue data is acquired, the terminal device updates the dialogue view and the conceptual view: the updated dialogue view displays the new dialogue data, and the updated conceptual view displays the knowledge graph subgraph updated according to the new dialogue data, where the updated knowledge graph subgraph includes the semantic entities existing in the new dialogue data.
  • the knowledge graph subgraph displayed in the conceptual view may include semantic entities associated with the semantic entities existing in the dialog data in addition to the semantic entities existing in the dialog data.
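• the following is a minimal sketch of the update behaviour described above, under the assumption that the knowledge graph is available as a mapping from each entity to its neighbours and that entity recognition of the new dialogue data is done elsewhere; all names are illustrative.

```python
# Sketch only: when new dialogue data arrives, add the semantic entities found
# in it to the displayed subgraph, together with their associated neighbours
# from the knowledge graph. Data structures and names are assumptions.

def update_subgraph(subgraph_entities, new_entities, knowledge_graph):
    """Return the updated set of entities shown in the conceptual view."""
    updated = set(subgraph_entities)
    for entity in new_entities:
        if entity in knowledge_graph:
            updated.add(entity)
            updated.update(knowledge_graph[entity])  # associated semantic entities
    return updated

kg = {"NBA": {"MVP", "Basketball"}, "MVP": {"Michael Jordan", "James Harden"}}
view = update_subgraph({"Shenzhen", "weather"}, ["NBA", "MVP"], kg)
print(sorted(view))
```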
  • FIG. 4B exemplarily shows a graphical user interface implemented on a terminal device that displays a semantic entity associated with a semantic entity existing in dialog data.
  • the dialogue view 506 displays dialogue data 512 ("Who is the MVP of the NBA in the 97-98 season?", "It is Michael Jordan")
• the concept view 505 displays the knowledge graph subgraph 513.
• the knowledge graph subgraph 513 includes first semantic entities ("NBA", "MVP", "Michael Jordan"), which are the semantic entities existing in the dialogue data 512, as well as second semantic entities associated with the first semantic entities ("Sports", "Basketball", "Football", "La Liga", "Messi", "James Harden").
• that is, the knowledge graph subgraph displayed in the conceptual view may also include second semantic entities, which are semantic entities associated with the first semantic entities; for the definition of associated semantic entities, please refer to the description of the subsequent method embodiments.
  • the semantic entities and the number of semantic entities included in the knowledge graph sub-graph displayed in the conceptual view may change with changes in the dialogue data.
• the shape of the knowledge graph subgraph and the way the semantic entities in it are displayed can also change as the number of semantic entities in the knowledge graph subgraph changes.
  • FIG. 4C exemplarily shows some graphical user interfaces implemented on the terminal device 200 when the knowledge graph sub-graph displayed in the conceptual view changes with changes in dialogue data.
• in C1, the dialogue view 506 shows dialogue data 513 ("Who is the MVP of the NBA in the 97-98 season", "It is Michael Jordan"). The amount of dialogue data 513 is small, the number of semantic entities in the knowledge graph subgraph 514 shown in the conceptual view 505 is also small, and the semantic entities in the knowledge graph subgraph 514 are presented in the conceptual view 505 in a relatively sparse, stretched-out manner.
• in C2, the dialogue view 506 displays dialogue data 515, to which new dialogue data 516 has been added ("Harden also had an MVP?", "Yes, Harden is the MVP of the season", "Harden and Jordan both play for the Chicago Bulls").
• as the dialogue data increases, the concept view 505 shows the knowledge graph subgraph 517. Compared with the knowledge graph subgraph 514 in C1, the number of semantic entities in the knowledge graph subgraph 517 has increased: the two semantic entities "Chicago" and "Chicago Bulls" have been added, and the semantic entities in the knowledge graph subgraph 517 are presented in the conceptual view 505 in a relatively denser manner.
• in C3, the dialogue view 506 has displayed multiple rounds of dialogue data 518 ("I really want to watch the Barcelona game", "Okay, it just happens to be the tourist season in Spain. There will be a Barcelona game on November 3rd. Do you need me to book a ticket for you?", "OK", "I have already booked a ticket for the Barcelona game on November 3rd; the front row is fully booked, and I have selected as good a seat for you as possible", "Then book the hotel and air ticket for me", "Okay, I have booked the air ticket for November 2 and a hotel for three days").
• as the dialogue data increases, the knowledge graph subgraph 519 is displayed in the conceptual view 505. Compared with the knowledge graph subgraph 517 in C2, the number of semantic entities in the knowledge graph subgraph 519 is further increased, and the semantic entities in the knowledge graph subgraph 519 are presented in the conceptual view 505 in a tiled, side-by-side manner.
• in C4, the dialogue view 506 shows dialogue data 520, to which new dialogue data 521 has been added ("How is the climate in Barcelona recently?", "Barcelona has a good climate recently; the temperature and humidity are appropriate, and the temperature stays at 8-17 degrees Celsius").
• as the dialogue data further increases, the knowledge graph subgraph 522 is displayed in the concept view 505: some semantic entities ("Basketball") are deleted, the semantic entities ("climate", "temperature", "humidity") existing in the new dialogue data 521 are added, and the semantic entities in the knowledge graph subgraph 522 are presented in the conceptual view 505 in a tiled manner.
• FIG. 4C is only an example for explaining that the semantic entities contained in the knowledge graph subgraph displayed in the conceptual view, and their number, can change with changes in the dialogue data, and that the semantic entities can be presented in a denser, more compact manner when their number is large; it does not limit this application.
  • the conceptual view and the dialog view can interact collaboratively.
  • FIG. 4D-FIG. 4F exemplarily shows a graphical user interface implemented on the terminal device 200 when the conceptual view and the dialog view interact cooperatively.
• the terminal device 200 highlights the third semantic entity 524 ("Barcelona", "tourism", "hotel", "ticket") in the conceptual view 505; the third semantic entity 524 is a semantic entity related to the first dialogue data 523. For the specific definition of the third semantic entity, please refer to the description of the subsequent method embodiments.
• the terminal device 200 displays the second dialogue data 526 in the dialogue view 506 ("Okay, it just happens to be the tourist season in Spain. There will be a Barcelona game on November 3rd. Do I need to book a ticket for you?", "I have already booked the ticket for the Barcelona game on November 3rd for you; the front row is fully booked, and I have chosen as good a seat for you as possible.", "Okay, I have booked your air ticket for November 2nd and a three-day hotel near the stadium."). The second dialogue data 526 is the dialogue data related to the fourth semantic entity; for the specific definition of the second dialogue data, refer to the description of the subsequent method embodiments.
• the terminal device 200 displays, in the conceptual view 505, summary information 526 (hotel name: Barcelona X Hotel, contact information: 1234567) of the second dialogue data ("Okay, I booked your flight ticket for November 2nd and a three-day hotel near the stadium; the name of the hotel is Barcelona X Hotel, and the contact information is 1234567"). The second dialogue data is the dialogue data related to the fourth semantic entity 525; for the specific definition of the second dialogue data, refer to the description of the subsequent method embodiments.
  • the dialogue data displayed in the dialogue view and the semantic entity displayed in the conceptual view can have a linkage relationship.
• when the terminal device detects a user operation acting on dialogue data corresponding to a semantic entity, or on a semantic entity corresponding to dialogue data, the terminal device displays the corresponding semantic entity or dialogue data in linkage.
• for the manner of determining the semantic entity corresponding to dialogue data or the dialogue data corresponding to a semantic entity, please refer to the subsequent description.
  • FIGS. 4D-4F are only a few examples cited in this application for explaining the collaborative interaction between the conceptual view and the dialog view, and do not limit the application.
  • the above-mentioned click operations involved in FIGS. 4D to 4F may also be a double-click operation, a long-press operation, a voice instruction operation, and other user operations for selecting a certain view element.
• the highlighting involved in FIG. 4D above may also be achieved through a pop-up window, a floating window, a separate display (that is, only the semantic entities related to the dialogue data selected by the user are displayed in the conceptual view), or other methods that highlight the semantic entities related to the dialogue data selected by the user.
• the knowledge graph subgraph displayed in the conceptual view can also be switched as the dialogue data in the dialogue view is switched, and the knowledge graph subgraph obtained by switching corresponds to the dialogue data displayed in the dialogue view; the dialogue data in the dialogue view can also be switched as the knowledge graph subgraph in the conceptual view is switched, and the dialogue data obtained by switching corresponds to the knowledge graph subgraph displayed in the conceptual view.
  • the specific method of collaborative interaction between the conceptual view and the dialog view is not limited in this application.
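• one possible way (an assumption for illustration, not the prescribed method) to support the linkage and switching described above is a two-way index between dialogue turns and semantic entities, as in the following sketch; all names are hypothetical.

```python
# Sketch of a two-way index: selecting a semantic entity can highlight the
# dialogue turns that mention it, and selecting a dialogue turn can highlight
# its semantic entities. Function and variable names are illustrative.

from collections import defaultdict

def build_linkage(turns, entities_per_turn):
    """turns: list of dialogue strings; entities_per_turn: list of entity lists."""
    entity_to_turns = defaultdict(list)
    turn_to_entities = {}
    for idx, (turn, entities) in enumerate(zip(turns, entities_per_turn)):
        turn_to_entities[idx] = list(entities)
        for entity in entities:
            entity_to_turns[entity].append(idx)
    return entity_to_turns, turn_to_entities

e2t, t2e = build_linkage(
    ["Then book the hotel and air ticket for me", "Okay, hotel booked"],
    [["hotel", "ticket"], ["hotel"]],
)
# Selecting the entity "hotel" highlights turns e2t["hotel"]; selecting turn 0
# highlights entities t2e[0], mirroring the collaborative interaction above.
```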
  • the conceptual view may also display task semantic entities for triggering dialogue tasks, and each task semantic entity may correspond to one or more dialogue tasks.
  • FIG. 4G exemplarily shows a graphical user interface implemented on the terminal device 200 when a dialog task is triggered.
• the terminal device 200 triggers the execution of a dialogue task that conforms to the user's intention expressed in the dialogue data 529, namely reserving an Air China flight ticket. After triggering the execution of this dialogue task, the terminal device 200 updates the key information 528 in the conceptual view 505 ("Beijing-Barcelona flight number: Air China xxx; departure time: h2:m2; seating: to be selected").
  • the conceptual view can also be used to display the task semantic entity used to trigger the dialogue task in the knowledge graph sub-graph.
  • the terminal device responds to the user's operation to trigger the execution of a dialogue task that meets the user's intention.
• the graphical user interface shown in FIG. 4G is only an example cited in this application for explaining the task semantic entity and the dialogue task triggered through the task semantic entity, and does not limit this application. In an alternative implementation, there may also be other ways to trigger the execution of a dialogue task that meets the user's intention.
• the above-mentioned key information can also exist in view elements such as icons, buttons, floating windows, and pop-up frames. Clicking the view element corresponding to the key information triggers the display of the next-level menu or detailed content of the key information, and a dialogue task that meets the user's intention is then triggered by clicking to select.
  • This application does not limit the specific method of triggering the dialogue task through the task semantic entity.
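• as a hedged illustration only, one way a task semantic entity could be mapped to the one or more dialogue tasks it triggers is a dispatch table, as sketched below; the handler names and the booking call are hypothetical and not taken from this application.

```python
# Hypothetical dispatch from task semantic entities to dialogue task handlers.
# A task semantic entity may correspond to one or more dialogue tasks.

def book_flight(key_info):
    # Placeholder for a real booking flow.
    return f"Booking flight {key_info.get('flight', 'to be selected')}"

TASK_HANDLERS = {
    "ticket": [book_flight],   # one entity -> possibly several tasks
}

def trigger_tasks(task_entity, key_info):
    """Run every dialogue task registered for the selected task semantic entity."""
    return [handler(key_info) for handler in TASK_HANDLERS.get(task_entity, [])]

print(trigger_tasks("ticket", {"flight": "Air China xxx"}))
```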
  • FIG. 4H exemplarily shows some graphical user interfaces implemented on the terminal device 200 when the dialogue system initiates a dialogue.
• the semantic entities 531 ("Harden" and "Jordan") existing in the historical dialogue data 530 ("Who is the MVP of the NBA in the 97-98 season", "It is Michael Jordan", "Harden also won the MVP?") are identified, and a new semantic entity 532 ("Chicago Bulls") that has a semantic relationship with them in the knowledge graph subgraph is found; the new semantic entity 532 does not exist in the historical dialogue data 530.
• based on the semantic entities 531 existing in the historical dialogue data 530 and the new semantic entity 532, the dialogue system initiates a dialogue, and the initiated third dialogue data 533 ("Harden and Jordan both play for the Chicago Bulls") is displayed in the dialogue view 506.
• the graphical user interface corresponding to the dialogue system may have other presentation modes. For example, the conceptual view and the dialogue view in the target dialogue user interface can also be presented in a left-right arrangement.
  • the target dialog user interface may not contain view elements such as status bar and title bar.
  • the specific presentation method of the graphical user interface corresponding to the dialogue system is not limited in this application.
• in the solution of the present application, based on the knowledge graph, a conceptual view for displaying the knowledge graph subgraph corresponding to the target dialogue is added to the target dialogue user interface. The collaborative interaction between the dialogue view and the conceptual view plays a role in reviewing historical dialogue data, guiding topic trends, and prompting users about the functional boundaries of the dialogue system, which improves the user's dialogue interaction experience.
  • the following introduces a technical solution for implementing the foregoing embodiment of the graphical user interface.
  • the knowledge graph can also be called a scientific knowledge graph, which is a knowledge base that stores various entities in the real world and the relationships between these entities in a graph structure.
  • the knowledge graph is composed of nodes and edges, where nodes represent entities in the real world, and edges represent the relationship between entities and entities.
• the knowledge graph can be a general-domain knowledge graph, which can also be called an open-domain knowledge graph; it covers entities and relationships in a variety of fields, emphasizes integrating more entities, focuses on the breadth of knowledge, and can be applied to fields such as intelligent search.
  • the knowledge graph can also be a vertical domain knowledge graph.
  • the vertical domain knowledge graph can also be called an industry knowledge graph, which refers to a knowledge graph constructed by relying on data from a specific industry.
• the industry knowledge graph focuses on the depth of knowledge and can be understood as an industry knowledge base based on semantic technology.
  • the knowledge graph subgraph is a subgraph of the knowledge graph, that is, the knowledge graph subgraph is a part of the knowledge graph.
• the nodes and relationships contained in the knowledge graph subgraph are all derived from the knowledge graph; based on a certain selection rule, one or more nodes and one or more association relationships selected from the knowledge graph can be combined to form a knowledge graph subgraph.
• the knowledge graph subgraph corresponding to the target dialogue is a knowledge graph subgraph determined based on the pre-established knowledge graph and the dialogue data of the target dialogue; for how the knowledge graph subgraph corresponding to the target dialogue is determined based on the pre-established knowledge graph and the dialogue data of the target dialogue, please refer to the subsequent description.
• a semantic entity can refer to something that is distinguishable and exists independently; specifically, a semantic entity can refer to a certain person (such as Yao Ming), a certain city (such as Shenzhen), a certain book (such as a celebrity biography), a certain kind of plant (such as a spider plant), and so on, and is not limited to the descriptions here. It can also refer to a collection of entities with the same characteristics, that is, a collective term for a collection, category, or type, such as country, nation, person, or geography. It can also refer to the description or interpretation of something that is distinguishable and independent, or of a collection of entities with the same characteristics.
  • a semantic entity can exist in the form of a node in the knowledge graph or a subgraph of the knowledge graph.
• a semantic relationship is used to connect two semantic entities and to describe the association or intrinsic characteristics between the two entities; it represents the relationship between the two semantic entities in the real world.
  • a semantic relationship can exist in the form of an edge in the knowledge graph or knowledge graph subgraph.
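• as a minimal sketch of the node-and-edge structure described above (not the specific storage format used by the dialogue system), a knowledge graph can be represented with semantic entities as nodes and labelled semantic relationships as edges; class and relation names below are illustrative.

```python
# Minimal sketch of a knowledge graph: nodes are semantic entities, edges are
# labelled semantic relationships. The structure and sample data are illustrative.

from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        self.edges = defaultdict(list)   # entity -> list of (relation, entity)

    def add_relation(self, head, relation, tail):
        self.edges[head].append((relation, tail))
        self.edges[tail].append((relation, head))   # relations treated as undirected here

    def neighbors(self, entity):
        """Return the semantic entities that have a semantic relationship with `entity`."""
        return [tail for _, tail in self.edges.get(entity, [])]

kg = KnowledgeGraph()
kg.add_relation("Michael Jordan", "won", "MVP")
kg.add_relation("MVP", "awarded in", "NBA")
print(kg.neighbors("MVP"))   # ['Michael Jordan', 'NBA']
```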
  • the dialog interaction method can be implemented on the aforementioned dialog system.
• the overall flow of the dialogue interaction method on the terminal device side can be as follows: the terminal device displays the dialogue view in the first area of the target dialogue user interface and displays the conceptual view in the second area of the target dialogue user interface.
• for the target dialogue user interface, the first area, the dialogue view, the second area, and the conceptual view, please refer to the relevant description of the graphical user interface embodiment shown in FIG. 3C, which will not be repeated here.
• the target dialogue user interface may be, for example, the target dialogue user interface 51 shown in the aforementioned embodiments of FIGS. 3C to 3F or FIGS. 4A to 4H.
• for some processes of the dialogue interaction method used to realize the terminal device displaying the knowledge graph subgraph corresponding to the target dialogue in the conceptual view of the target dialogue user interface, please refer to the embodiments of FIGS. 3E-3F, FIGS. 4A-4C, and FIG. 4H.
• the following first describes the implementation process of the dialogue interaction method corresponding to the embodiment in FIG. 3F. The implementation process can be applied to the scenario where the terminal device displays the knowledge graph subgraph corresponding to the target dialogue in the conceptual view of the target dialogue user interface when entering the target dialogue user interface.
  • a schematic flow chart of a dialogue interaction method corresponding to the embodiment shown in FIG. 3F may be as shown in FIG. 5A.
  • the flow may be applicable to a dialogue system composed of network equipment and terminal equipment, and specifically includes the following steps:
  • the network device generates a knowledge graph subgraph corresponding to the target dialogue.
  • the knowledge graph subgraph corresponding to the target dialogue may be the knowledge graph subgraph 509 shown in FIG. 3F.
  • the knowledge graph subgraph corresponding to the target dialogue is the initial knowledge graph subgraph, and the initial knowledge graph subgraph includes one or more initial semantic entities.
• the initial semantic entity may be any one or more of the initial semantic entities described in the aforementioned embodiment of FIG. 3F.
  • the semantic entities that exist in the dialogue data of one or more dialogues before this dialogue may specifically be semantic entities that are frequently mentioned in the user's dialogue history, that is, semantic entities that frequently appear in historical dialogue records.
• the historical dialogue record here refers to the dialogue record generated before the current dialogue corresponding to the target dialogue. For example, if the target dialogue is a dialogue between instant messaging user A and instant messaging user B, then the semantic entities existing in the dialogue data of one or more dialogues before this dialogue are the semantic entities that frequently appear in the historical conversation records of instant messaging user A and instant messaging user B.
• if the target dialogue is a dialogue between the user and the dialogue system, the semantic entities existing in the dialogue data of one or more dialogues before this dialogue are the semantic entities that frequently appear in the historical dialogue records of the user and the dialogue system.
  • the meaning of “frequently” may mean that the frequency of occurrence or being mentioned exceeds a preset frequency threshold.
  • the value of the frequency threshold is not limited in this application.
  • the semantic entity with high popularity in the dialogue system may be the semantic entity frequently mentioned by most users who use the dialogue system in the dialogue system, that is, the semantic entity that often appears in the historical dialogue records of most users.
  • the historical dialogue record here refers to the dialogue record of most users generated before the current dialogue corresponding to the target dialogue.
• if the dialogue system is an instant messaging based dialogue system, the semantic entities with high popularity in the dialogue system are the semantic entities that frequently appear in the historical dialogue records of most instant messaging users who use the dialogue system.
• if the dialogue system is a dialogue system based on human-computer interaction, the semantic entities with higher popularity in the dialogue system are the semantic entities that frequently appear in the historical dialogue records of all users who use the dialogue system.
  • the meaning of “most users” may refer to users whose ratio to all users using the dialogue system exceeds the first ratio, where the first ratio is a ratio value greater than one-half.
  • the meaning of "frequently” can mean that the frequency that appears or is mentioned exceeds the preset frequency threshold.
  • the value of the frequency threshold is not limited in this application.
  • the dialogue system has the function of saving the dialogue data of one or more dialogues generated before this dialogue.
• the network device can determine the initial semantic entities based on the dialogue data, stored in the dialogue system, that was generated in one or more dialogues before this dialogue, and then generate the knowledge graph subgraph corresponding to the target dialogue based on the knowledge graph stored in the dialogue system and the initial semantic entities.
• the semantic entities related to the to-do items in the user's schedule may specifically be semantic entities existing in the plans or arrangements recorded on the terminal device in applications that record the user's schedule or plans, such as memos, notes, to-do lists, and notepads. For example, if a to-do item on the terminal device records the user's schedule for the next few days, the semantic entities related to the to-do items in the user's schedule may be semantic entities existing in that schedule, such as meeting times, conference rooms, and contacts.
• the semantic entity determined based on the user portrait of the user may specifically be a semantic entity, determined based on data related to the user's daily behavior (such as shopping behavior, search behavior, outing records, exercise records, etc.), that matches a certain aspect of the user's characteristics. For example, if it is determined from the data related to the user's daily behavior that the user often goes to the gym, the semantic entities determined based on the user portrait may be semantic entities related to fitness, such as treadmill and aerobics.
• the terminal device can collect the schedule or plan recorded by the user, or data related to the user's daily behavior, and send the collected schedule or plan, or the data related to the user's daily behavior, to the network device.
  • the network device can determine the initial semantic entity according to the schedule or plan or data related to the user's daily behavior, and then generate the knowledge graph subgraph corresponding to the target dialogue based on the knowledge graph saved in the dialogue system and the initial semantic entity .
• the manner of generating the knowledge graph subgraph corresponding to the target dialogue may be: querying the knowledge graph according to the initial semantic entities to determine the semantic relationships between the initial semantic entities, and then generating the knowledge graph subgraph corresponding to the target dialogue based on the initial semantic entities and the semantic relationships between the initial semantic entities.
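• the following is a minimal sketch of the generation manner just described: keep the semantic relationships of the knowledge graph whose two endpoints are both initial semantic entities and assemble them into a subgraph; the triple representation and sample data are assumptions made for this example.

```python
# Sketch only: build the subgraph from the knowledge graph and the initial
# semantic entities by keeping relations whose endpoints are both initial entities.

def build_subgraph(knowledge_graph_edges, initial_entities):
    """knowledge_graph_edges: iterable of (head, relation, tail) triples."""
    selected = set(initial_entities)
    sub_edges = [(h, r, t) for h, r, t in knowledge_graph_edges
                 if h in selected and t in selected]
    return selected, sub_edges

edges = [
    ("Shenzhen", "has", "weather"),
    ("weather", "includes", "temperature"),
    ("Huawei", "develops", "5G"),
    ("Huawei", "headquartered in", "Shenzhen"),
    ("Tencent", "founded by", "Ma Huateng"),
]
entities, sub = build_subgraph(edges, ["Shenzhen", "weather", "temperature", "Huawei", "5G"])
print(sub)
```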
  • the network device sends the knowledge graph subgraph corresponding to the target dialogue to the terminal device.
  • the network device may directly send the generated knowledge graph sub-graph to the terminal device, or send it to the terminal device through other network devices, or store the knowledge graph sub-graph in a memory or other device to be read by the terminal device.
  • the terminal device displays the knowledge graph sub-graph corresponding to the target dialogue in the conceptual view of the target dialogue user interface.
  • the terminal device may display the knowledge graph subgraph corresponding to the target dialogue in the conceptual view of the target dialogue user interface as shown in FIG. 3F.
  • the schematic flow chart of another dialogue interaction method corresponding to the embodiment in FIG. 3F may be as shown in FIG. 5B.
  • the flow may be applicable to a dialogue system consisting of only terminal devices, and specifically includes the following steps:
  • the terminal device generates a knowledge graph subgraph corresponding to the target dialogue.
  • An example of the knowledge graph subgraph corresponding to the target dialogue may be the knowledge graph subgraph 509 shown in FIG. 3F.
  • the knowledge graph sub-graph corresponding to the target dialogue may refer to the knowledge graph sub-graph corresponding to the target dialogue described in step S511, which will not be repeated here.
  • the terminal device displays the knowledge graph subgraph corresponding to the target dialogue in the conceptual view of the target dialogue user interface.
  • the terminal device may display the knowledge graph subgraph corresponding to the target dialogue in the conceptual view of the target dialogue user interface as shown in FIG. 3F.
• in a case where the dialogue system does not have the function of displaying the dialogue data of one or more dialogues before this dialogue, or where, before entering the target dialogue user interface, the dialogue data of the target dialogue has not been generated in the dialogue system, the terminal device displays the knowledge graph subgraph corresponding to the target dialogue in the target dialogue user interface, and the knowledge graph subgraph corresponding to the target dialogue is the initial knowledge graph subgraph. The semantic entities in the initial knowledge graph subgraph can play a role in guiding conversation topics and enhance the user experience.
  • a schematic flow chart of a dialogue interaction method corresponding to the embodiments of Figures 3E, Figure 4A and Figure 4C may be as shown in Figure 6A.
• this flow can be applied to a human-computer interaction based dialogue system composed of network devices and terminal devices, and specifically includes the following steps:
  • S611 The terminal device obtains input dialog data input by the user.
  • the input dialog data input by the user may be voice data or text data.
  • the terminal device may collect sound signals through a microphone to obtain input dialogue data input by the user.
  • the terminal device may also obtain the user's operation of inputting characters through a touch screen or keyboard, etc., to obtain input dialog data input by the user.
  • the input dialog data input by the user may be the dialog data "How is the weather in Shenzhen?" shown in FIG. 3E.
  • S612 The terminal device sends the input dialog data to the network device, and the network device receives the input dialog data.
  • S613 The network device generates reply dialogue data according to the input dialogue data.
• the network device can identify the semantic entities existing in the input dialogue data, query the knowledge graph stored in the dialogue system according to the identified semantic entities to determine the semantic relationships between the identified semantic entities, then input the identified semantic entities and the semantic relationships obtained by the query into a pre-trained encoder-decoder model, and determine the dialogue data output by the encoder-decoder model as the reply dialogue data.
  • the network device can identify the semantic entity existing in the input dialog data through entity extraction.
  • Entity extraction can also be called named entity learning or named entity recognition.
  • the entity extraction method may be any one of a rule and dictionary-based method, a statistical machine learning-based method, or an open domain-oriented method, which is not limited in the embodiment of the present application.
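• the following is a minimal rule-and-dictionary-based entity extraction sketch (one of the approaches named above, not the method prescribed by this application); the dictionary contents and function name are illustrative assumptions.

```python
# Sketch of dictionary-based entity extraction: report which known semantic
# entities appear in an utterance, matching longer dictionary entries first.

ENTITY_DICTIONARY = {"NBA", "MVP", "Michael Jordan", "Shenzhen", "weather"}

def extract_entities(utterance, dictionary=ENTITY_DICTIONARY):
    """Return dictionary entries found in the utterance (longest match first)."""
    found = []
    for entry in sorted(dictionary, key=len, reverse=True):
        if entry.lower() in utterance.lower() and entry not in found:
            found.append(entry)
    return found

print(extract_entities("Who is the MVP of the NBA in the 97-98 season?"))
# e.g. ['NBA', 'MVP'] (order may vary for entries of equal length)
```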
• the reply dialogue data generated by the network device may be the dialogue data "Shenzhen turns cloudy today, and the temperature is 16-28 degrees Celsius" shown in FIG. 3E, or the dialogue data "It is Michael Jordan" shown in FIGS. 4A-4C.
  • the network device generates a knowledge graph subgraph corresponding to the target dialogue according to the input dialogue data and the reply dialogue data.
  • the knowledge graph subgraph corresponding to the target dialogue includes semantic entities existing in the input dialogue data and the reply dialogue data.
  • the network device can identify the semantic entities existing in the input dialogue data and the reply dialogue data, and then generate the knowledge graph subgraph corresponding to the target dialogue according to the identified semantic entities.
  • An example of the knowledge graph subgraph corresponding to the target dialogue generated according to the identified semantic entity may be the knowledge graph subgraph 508 shown in FIG. 3E.
• for the manner in which the network device identifies the semantic entities existing in the input dialogue data and the reply dialogue data, refer to the way in step S613 in which the network device identifies the semantic entities existing in the input dialogue data through entity extraction; for the specific implementation in which the network device generates the knowledge graph subgraph corresponding to the target dialogue according to the identified semantic entities, refer to the specific implementation in step S511 in which the network device generates the knowledge graph subgraph corresponding to the target dialogue according to the knowledge graph stored in the dialogue system and the initial semantic entities, which will not be repeated here.
  • the network device sends the reply dialogue data and the knowledge graph subgraph corresponding to the target dialogue to the terminal device, and the terminal device receives the reply dialogue data and the knowledge graph subgraph corresponding to the target dialogue.
  • the terminal device displays the reply dialogue data in the dialogue view of the target dialogue user interface, and displays the knowledge graph subgraph corresponding to the target dialogue in the conceptual view of the target dialogue user interface.
• the terminal device displays the reply dialogue data in the dialogue view of the target dialogue user interface and displays the knowledge graph subgraph corresponding to the target dialogue in the conceptual view of the target dialogue user interface, as shown in FIG. 3E and FIGS. 4A-4C.
• the schematic flow diagram of another dialogue interaction method corresponding to the embodiments in FIG. 3E, FIG. 4A, and FIG. 4C may be as shown in FIG. 6B.
• this flow can be applied to a human-computer interaction based dialogue system consisting of only terminal devices, and specifically includes the following steps:
  • S621 The terminal device obtains input dialog data input by the user.
• for step S621, refer to the description of step S611, which will not be repeated here.
  • S622 The terminal device generates reply dialogue data according to the input dialogue data.
  • the terminal device generates a knowledge graph subgraph corresponding to the target dialogue according to the input dialogue data and the reply dialogue data.
  • the terminal device displays the reply dialogue data in the dialogue view of the target dialogue user interface, and displays the knowledge graph subgraph corresponding to the target dialogue in the conceptual view of the target dialogue user interface.
• the terminal device displays the reply dialogue data in the dialogue view of the target dialogue user interface and displays the knowledge graph subgraph corresponding to the target dialogue in the conceptual view of the target dialogue user interface, which may be as shown in FIG. 3E, FIG. 4A, or FIG. 4C.
  • a schematic flow chart of another dialogue interaction method corresponding to the embodiments of Figures 3E, Figure 4A, and Figure 4C may be as shown in Figure 6C.
• this flow can be applied to an instant messaging based dialogue system composed of terminal devices and network devices, and specifically includes the following steps:
  • S631 The terminal device obtains input dialog data input by the user.
• for step S631, refer to the description of step S611, which will not be repeated here.
  • S632 The terminal device sends the input dialogue data to the network device, and the network device receives the input dialogue data.
  • the network device generates a knowledge graph subgraph corresponding to the target dialogue according to the input dialogue data.
• for the manner in which the network device generates the knowledge graph subgraph corresponding to the target dialogue according to the input dialogue data, please refer to the description in step S614 of the network device generating the knowledge graph subgraph corresponding to the target dialogue according to the input dialogue data and the reply dialogue data, which will not be repeated here.
  • the network device sends the knowledge graph subgraph corresponding to the target dialogue to the terminal device, and the terminal device receives the knowledge graph subgraph corresponding to the target dialogue.
  • the terminal device displays the knowledge graph subgraph corresponding to the target dialogue in the conceptual view of the target dialogue user interface.
• the terminal device displays the dialogue data in the dialogue view of the target dialogue user interface and displays the knowledge graph subgraph corresponding to the target dialogue in the conceptual view of the target dialogue user interface, which may be as shown in FIG. 3E or FIGS. 4A-4C.
• when the dialogue system is a human-computer interaction based dialogue system, in addition to generating reply dialogue data according to the input dialogue data input by the user, the dialogue system can also actively generate and initiate a dialogue.
  • a graphical user interface in which a dialogue system actively initiates a dialogue can refer to FIG. 4H.
  • a schematic flow chart of a dialogue interaction method corresponding to the embodiment in FIG. 4H may be as shown in FIG. 6D. The flow may be applicable to a dialogue system based on human-computer interaction composed of network devices and terminal devices, and specifically includes the following steps:
  • the third dialog data is the dialog data actively initiated by the network device, that is, the dialog data actively initiated by the dialog system.
  • the specific implementation manner for generating the third dialog data by the network device will be described in detail in the subsequent method embodiments, and will not be described here too much.
  • the network device generates a corresponding knowledge graph subgraph of the target dialogue according to the third dialogue data.
  • the specific implementation of the network device generating the knowledge graph subgraph corresponding to the target dialogue according to the third dialogue data may refer to the description of the network device generating the knowledge graph subgraph corresponding to the target dialogue according to the input dialogue data and the reply dialogue data in step S614. I won't repeat them here.
  • the network device sends the knowledge graph subgraph corresponding to the target dialogue and the third dialogue data to the terminal device, and the terminal device receives the knowledge graph subgraph corresponding to the target dialogue and the third dialogue data.
  • the terminal device displays the third dialogue data in the dialogue view of the target dialogue user interface, and displays the knowledge graph subgraph corresponding to the target dialogue in the conceptual view of the target dialogue user interface.
  • the schematic flow chart of another dialogue interaction method corresponding to the embodiment in FIG. 4H may be as shown in FIG. 6E.
  • the flow may be applicable to a dialogue system based on human-computer interaction composed of only terminal devices, and specifically includes the following steps:
  • S651 The terminal device generates third dialog data.
  • S652 The terminal device generates a corresponding knowledge graph subgraph of the target dialogue according to the third dialogue data.
  • the specific implementation of the terminal device generating the knowledge graph subgraph corresponding to the target dialogue according to the third dialogue data may refer to the description of the network device generating the knowledge graph subgraph corresponding to the target dialogue according to the input dialogue data and the reply dialogue data in step S614. I won't repeat them here.
  • the terminal device displays the third dialogue data in the dialogue view of the target dialogue user interface, and displays the knowledge graph subgraph corresponding to the target dialogue in the conceptual view of the target dialogue user interface.
  • the input dialogue data and the reply dialogue data involved in the above-mentioned embodiment in FIGS. 6A-6B, the dialogue data involved in the above-mentioned embodiment in FIG. 6C, and the third dialogue data involved in the above-mentioned embodiment in FIGS. 6D-6E can be collectively referred to as dialogue data.
• the knowledge graph subgraph corresponding to the target dialogue includes the first semantic entity, and the first semantic entity is a semantic entity existing in the dialogue data. The first semantic entity amounts to a summary and generalization of the dialogue data of the target dialogue, which helps the user quickly read and understand a summary of the historical dialogue content, so as to achieve the purpose of reviewing the historical dialogue content.
• the input dialogue data and reply dialogue data involved in the embodiments of FIGS. 6A-6B, the dialogue data involved in FIG. 6C, and the third dialogue data involved in FIGS. 6D-6E are all new dialogue data. With reference to FIG. 4C and any of the method embodiments corresponding to FIGS. 6A-6E, it can be seen that when new dialogue data is acquired, the terminal device updates the conceptual view, the updated conceptual view is used to display the knowledge graph subgraph updated according to the new dialogue data, and the updated knowledge graph subgraph includes the semantic entities existing in the new dialogue data.
  • the knowledge graph subgraph corresponding to the target dialogue may further include one or more second semantic entities associated with the first semantic entity.
• an example of the knowledge graph subgraph corresponding to the target dialogue that includes the second semantic entity may be the knowledge graph subgraph 513 shown in FIG. 4B.
  • the second semantic entity associated with the first semantic entity may have the following situations:
  • the second semantic entity may include a semantic entity adjacent to the first semantic entity in the knowledge graph, that is, a semantic entity that has a semantic relationship with the first semantic entity in the knowledge graph.
• the semantic entities adjacent to the first semantic entity in the knowledge graph may be, for example, the semantic entities "James Harden", "NBA", "La Liga" and "Messi" shown in FIG. 4B, where "James Harden", "La Liga", "NBA" and "Messi" are semantic entities that have a semantic relationship with the semantic entity "MVP".
  • the second semantic entity may include a partial semantic entity adjacent to the first semantic entity in the knowledge graph.
• in a feasible implementation, the part of the semantic entities adjacent to the first semantic entity in the knowledge graph may be the semantic entities whose frequency of use in the dialogue process is higher than a first frequency threshold and that are adjacent to the first semantic entity in the knowledge graph.
• the frequency of use may refer to the frequency of use in the target dialogue; in this case, a semantic entity whose frequency of use in the dialogue process is higher than the first frequency threshold and that is adjacent to the first semantic entity in the knowledge graph refers to a semantic entity that frequently appears in the historical dialogue records corresponding to the target dialogue and is adjacent to the first semantic entity in the knowledge graph.
• the frequency of use may also refer to the frequency of use in all dialogues in the dialogue system; in this case, a semantic entity whose frequency of use in the dialogue process is higher than the first frequency threshold and that is adjacent to the first semantic entity in the knowledge graph refers to a semantic entity that frequently appears in the historical dialogue records corresponding to all dialogues in the dialogue system and is adjacent to the first semantic entity in the knowledge graph.
• the historical dialogue records here can be the historical dialogue records of the current dialogue corresponding to the target dialogue, or all the historical dialogue records corresponding to the target dialogue (that is, the historical dialogue records of this dialogue and the historical dialogue records generated before this dialogue).
  • the semantic entities adjacent to the first semantic entity in the knowledge graph are "Ren Zhengfei”, “mobile phone”, “5G”, “network equipment”, “Glory”, and “Hisilicon” respectively.
  • the frequency of use is the frequency of use in the target conversation, and the first frequency threshold is 20 times/week.
  • the frequency of "Ren Zhengfei” in the historical dialogue record of the target dialogue is 1 time/week
  • the frequency of "mobile phone” in the historical dialogue record of the target dialogue is 25 times/week
• the frequency of "5G" in the historical dialogue record of the target dialogue is 18 times/week
  • the frequency of "Glory” in the historical dialogue record of the target dialogue is 10 times/week
  • the frequency of "Hisilicon” in the historical dialogue record of the target dialogue is 3 times/week
  • the semantic entity "mobile phone” is determined to be a semantic entity whose use frequency is higher than the first frequency threshold in the dialogue process and is adjacent to the first semantic entity in the knowledge graph.
• in another example, the frequency of use is the frequency of use in all dialogues in the dialogue system, and the first frequency threshold is 200 times/day.
• the frequency of "Ren Zhengfei" in the historical dialogue records of all dialogues in the dialogue system is 10 times/day
• the frequency of "mobile phone" in the historical dialogue records of all dialogues in the dialogue system is 250 times/day
• the frequency of "5G" in the historical dialogue records of all dialogues in the dialogue system is 300 times/day
• the frequency of "Glory" in the historical dialogue records of all dialogues in the dialogue system is 220 times/day
• the frequency of "Hisilicon" in the historical dialogue records of all dialogues in the dialogue system is 30 times/day
• the semantic entities "mobile phone", "5G" and "Glory" are determined to be semantic entities whose frequency of use in the dialogue process is higher than the first frequency threshold and that are adjacent to the first semantic entity in the knowledge graph.
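• the filtering illustrated in the example above can be sketched as follows; the frequency table is taken from the example, and the function name is an illustrative assumption.

```python
# Sketch only: keep the semantic entities adjacent to the first semantic entity
# whose usage frequency exceeds the first frequency threshold.

def filter_adjacent_by_frequency(adjacent_entities, frequency, threshold):
    """frequency: entity -> occurrences per day (or per week) in the dialogue records."""
    return [e for e in adjacent_entities if frequency.get(e, 0) > threshold]

adjacent = ["Ren Zhengfei", "mobile phone", "5G", "Glory", "Hisilicon"]
per_day = {"Ren Zhengfei": 10, "mobile phone": 250, "5G": 300, "Glory": 220, "Hisilicon": 30}
print(filter_adjacent_by_frequency(adjacent, per_day, threshold=200))
# ['mobile phone', '5G', 'Glory']
```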
  • the partial semantic entity adjacent to the first semantic entity in the knowledge graph may be a semantic entity determined based on the user profile and adjacent to the first semantic entity in the knowledge graph.
  • the definition of the semantic entity determined based on the user portrait reference may be made to the description of the foregoing step S511, which is not repeated here.
• the part of the semantic entities adjacent to the first semantic entity in the knowledge graph is not limited to the above two feasible implementations; which part of the semantic entities adjacent to the first semantic entity in the knowledge graph is used as the second semantic entity is not limited in the embodiments of this application.
  • the second semantic entity may also include a semantic entity whose semantic relationship path distance with the first semantic entity in the knowledge graph is less than the first distance threshold.
• the distance of the semantic relationship path between two semantic entities can be measured by the number of semantic entities included in the semantic relationship path between the two semantic entities in the knowledge graph. Specifically, the semantic relationship path distance can be equal to the number of semantic entities contained in the shortest semantic relationship path between the two semantic entities in the knowledge graph minus one.
  • the second semantic entity may include a partial semantic entity whose semantic relationship path distance with the first semantic entity in the knowledge graph is less than a first distance threshold.
  • the part of the semantic entities whose semantic relationship path distance with the first semantic entity in the knowledge graph is less than the first distance threshold may be those whose use frequency is higher than the second frequency threshold during the dialogue process
  • the part of the semantic entities whose semantic relationship path distance with the first semantic entity in the knowledge graph is less than the first distance threshold may also be semantic entities that are determined based on the user profile and whose semantic relationship path distance with the first semantic entity in the knowledge graph is less than the first distance threshold.
  • for the description of the semantic relationship path distance, please refer to the foregoing description, which will not be repeated here.
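  • The path-distance criterion above can be illustrated with a short breadth-first search. The sketch below is only an assumption-laden illustration: the adjacency list is hypothetical, and the distance is computed as the number of hops on the shortest path, which matches the "number of semantic entities on the path minus one" definition given earlier:

```python
from collections import deque

# Minimal sketch of the semantic relationship path distance described above.
# The undirected adjacency-list knowledge graph below is illustrative only.

def path_distance(graph, source, target):
    """Hop count of the shortest semantic relationship path between two entities."""
    if source == target:
        return 0
    visited, queue = {source}, deque([(source, 0)])
    while queue:
        entity, dist = queue.popleft()
        for neighbor in graph.get(entity, ()):
            if neighbor == target:
                return dist + 1
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append((neighbor, dist + 1))
    return float("inf")   # no semantic relationship path exists

def nearby_entities(graph, first_entity, first_distance_threshold):
    """Candidate second semantic entities: closer than the first distance threshold."""
    return [e for e in graph
            if e != first_entity
            and path_distance(graph, first_entity, e) < first_distance_threshold]

demo_graph = {
    "NBA": ["Basketball", "MVP", "Michael Jordan"],
    "Basketball": ["NBA"],
    "MVP": ["NBA", "Michael Jordan"],
    "Michael Jordan": ["NBA", "MVP", "Chicago Bulls"],
    "Chicago Bulls": ["Michael Jordan"],
}
print(nearby_entities(demo_graph, "NBA", 2))   # entities within distance 1 of "NBA"
```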
  • the second semantic entity associated with the first semantic entity is not limited to the above situation. Specifically, which semantic entity in the knowledge graph is determined as the semantic entity associated with the first semantic entity is not limited in the embodiment of the present application.
  • the knowledge graph subgraph corresponding to the target dialogue includes not only the first semantic entity but also the second semantic entity associated with the first semantic entity; the second semantic entity plays a role in guiding conversation topics and can enhance the user's conversation experience.
  • Some processes of the dialogue interaction method used to realize the collaborative interaction of the dialogue view and the conceptual view displayed on the terminal device can be applied to the scene where the dialogue data and the knowledge graph subgraphs are already displayed in the target dialogue user interface, namely a scenario where one or more rounds of dialogue have been conducted.
  • for a graphical user interface for collaborative interaction between the dialogue view and the conceptual view displayed on the terminal device, please refer to the embodiments of FIG. 4D-4G.
  • a schematic flow diagram of a dialogue interaction method corresponding to the embodiment in FIG. 4D may be as shown in FIG. 7A.
  • the flow may be applied to a dialogue system composed of only terminal devices, and specifically includes the following steps:
  • S711 The terminal device detects a first operation acting on the first dialogue data.
  • the first dialogue data is any dialogue data displayed in the dialogue view in the target dialogue user interface.
  • the first dialogue data may be, for example, the dialogue data shown in FIG. 4D "then help me book the hotel and air ticket together".
  • the first operation acting on the first dialogue data specifically refers to the operation of selecting the first dialogue data, and the first operation may have various forms.
  • the first operation may be an operation of clicking the first dialogue data in the dialogue view
  • the first operation may also be an operation of double-clicking the first dialogue data in the dialogue view
  • the first operation may also be an operation of dragging the first dialogue data in the dialogue view, etc.; the first operation is not limited to the description here.
  • the specific form of the first operation is not limited in the embodiment of the present application.
  • S712 The terminal device determines a third semantic entity according to the first dialogue data.
  • the third semantic entity is a semantic entity displayed in the conceptual view of the target dialogue user interface and related to or corresponding to the first dialogue data.
  • the semantic entity related to or corresponding to the first dialog data may include a semantic entity existing in the first dialog data.
  • the terminal device can identify the semantic entity existing in the first dialog data and determine it as the third semantic entity.
  • for the manner in which the terminal device recognizes the semantic entity existing in the first dialogue data, reference may be made to the manner in which the network device recognizes the semantic entity existing in the input dialogue data in step S613, which will not be repeated here.
  • the semantic entity related to or corresponding to the first dialog data may also include a semantic entity associated with a semantic entity existing in the first dialog data.
  • the terminal device may identify the semantic entity existing in the first dialog data, and then determine the semantic entity associated with the semantic entity existing in the first dialog data in the knowledge graph sub-graph displayed in the conceptual view as the third semantic entity.
  • the semantic entity related to or corresponding to the first dialog data may include a semantic entity existing in the first dialog data and a semantic entity associated with the semantic entity existing in the first dialog data.
  • the semantic entity related to or corresponding to the first dialogue data may also include a semantic entity whose topic tag has a similarity with the topic tag corresponding to the first dialogue data that is higher than the relevance threshold; that is, the similarity between the topic tag corresponding to the third semantic entity and the topic tag corresponding to the first dialogue data is higher than the relevance threshold.
  • the terminal device can determine the topic tag corresponding to the first dialogue data and the topic tag corresponding to each semantic entity in the knowledge graph subgraph displayed in the conceptual view, then perform similarity matching between the topic tag of each semantic entity and the topic tag corresponding to the first dialogue data, and determine the semantic entities whose topic tag similarity with the topic tag corresponding to the first dialogue data is higher than the relevance threshold as the third semantic entity.
  • the terminal device may determine the topic tag corresponding to the first dialogue data and the topic tags corresponding to each semantic entity in the knowledge graph subgraph displayed in the conceptual view through a topic recognizer obtained through pre-training.
  • the semantic entity related to or corresponding to the first dialogue data is not limited to the above description.
  • which semantic entity displayed in the conceptual view is used as the semantic entity related to or corresponding to the first dialogue data depends on the specific design of the correspondence between the semantic entities in the conceptual view and the dialogue data in the dialogue view in the dialogue system, which is not limited in the embodiment of the present application.
  • S713 The terminal device highlights the third semantic entity in the conceptual view of the target dialogue user interface.
  • the terminal device highlighting the third semantic entity in the conceptual view of the target dialogue user interface may refer to displaying the third semantic entity in a manner different from the other semantic entities displayed in the conceptual view of the target dialogue user interface.
  • the other semantic entities refer to the semantic entities other than the third semantic entity displayed in the conceptual view of the target dialogue user interface.
  • the target dialogue user interface when the terminal device highlights the third semantic entity in the conceptual view of the target dialogue user interface may be as shown in D2 in FIG. 4D.
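  • A minimal sketch of step S712 follows. The substring-based entity recognition and the bag-of-words topic similarity below are simplistic stand-ins for whatever recognizer and pre-trained topic recognizer an actual dialogue system would use, and the entity names, tags, and relevance threshold are illustrative:

```python
# Minimal sketch: determine the third semantic entity for a selected piece of
# dialogue data, by (a) entities of the concept view that literally occur in
# the data and (b) entities whose topic tags are sufficiently similar to the
# dialogue's topic tags. Recognition and tagging are naive placeholders.

def entities_in_text(text, displayed_entities):
    """(a) Entities that literally occur in the selected dialogue data."""
    return {e for e in displayed_entities if e.lower() in text.lower()}

def topic_similarity(tags_a, tags_b):
    """Jaccard overlap of two topic-tag sets; stands in for a trained topic recognizer."""
    if not tags_a or not tags_b:
        return 0.0
    return len(tags_a & tags_b) / len(tags_a | tags_b)

def third_semantic_entities(dialogue_text, dialogue_tags, entity_tags, relevance_threshold=0.5):
    """Combine literal matches with topic-tag matches above the relevance threshold."""
    matched = entities_in_text(dialogue_text, set(entity_tags))
    matched |= {e for e, tags in entity_tags.items()
                if topic_similarity(dialogue_tags, tags) > relevance_threshold}
    return matched

# Illustrative usage (entity names and tags are hypothetical).
entity_tags = {"hotel": {"travel", "lodging"}, "air ticket": {"travel", "flight"},
               "Barcelona": {"travel", "city"}}
first_dialogue_data = "then help me book the hotel and air ticket together"
print(third_semantic_entities(first_dialogue_data, {"travel", "lodging", "flight"}, entity_tags))
```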
  • the flow diagram of another dialog interaction method corresponding to the embodiment in FIG. 4D may be as shown in FIG. 7B.
  • the flow may be applicable to a dialog system composed of terminal devices and network devices, and specifically includes the following steps:
  • S721 The terminal device detects a first operation acting on the first dialog data.
  • step S721 can refer to step S711, which will not be repeated here.
  • S722 The terminal device sends a semantic entity confirmation request to the network device.
  • the semantic entity confirmation request is used to request to obtain the semantic entity to be highlighted.
  • the semantic entity confirmation request includes the first dialog data, and the network device receives the semantic entity confirmation request.
  • S723 The network device determines a third semantic entity according to the first dialogue data.
  • for the third semantic entity and the specific implementation manner in which the network device determines the third semantic entity according to the first dialogue data, please refer to the description of step S712, which will not be repeated here.
  • S724 The network device sends the third semantic entity to the terminal device, and the terminal device receives the third semantic entity.
  • S725 The terminal device highlights the third semantic entity in the conceptual view of the target dialogue user interface.
  • for the related description of step S725, please refer to step S713, which will not be repeated here.
  • it can be seen that, in the scenario where the target dialogue user interface has already displayed dialogue data and knowledge graph subgraphs, when dialogue data in the dialogue view is selected, the terminal device highlights the semantic entities related to that dialogue data in the conceptual view of the target dialogue user interface, which realizes the collaborative interaction between the dialogue view and the conceptual view, helps the user locate specific semantic entities, and improves the user experience.
  • a schematic flow chart of a dialogue interaction method corresponding to the embodiment in FIG. 4E may be as shown in FIG. 8A.
  • the flow may be applicable to a dialogue system consisting of only terminal devices, and specifically includes the following steps:
  • S811 The terminal device detects the second operation acting on the fourth semantic entity.
  • the fourth semantic entity is the semantic entity displayed in the conceptual view in the target dialog user interface.
  • the fourth semantic entity may be the semantic entity "Barcelona" shown in FIG. 4E, for example.
  • the second operation acting on the fourth semantic entity specifically refers to the operation of selecting the fourth semantic entity, and the second operation may have various forms.
  • the second operation may be the operation of clicking the fourth semantic entity in the conceptual view
  • the second operation may also be the operation of double-clicking the fourth semantic entity in the conceptual view
  • the second operation may also be the operation of drawing a circle around the fourth semantic entity in the conceptual view
  • the second operation can also be the operation of dragging the fourth semantic entity in the conceptual view
  • the second operation can also be the operation of voice-controlling the fourth semantic entity (that is, the user speaks a voice instruction for selecting the fourth semantic entity), etc.; the second operation is not limited to the description here.
  • the specific form of the second operation is not limited in the embodiment of the present application.
  • S812 The terminal device determines second dialogue data according to the fourth semantic entity.
  • the second dialogue data is historical dialogue data related to or corresponding to the fourth semantic entity.
  • the historical dialogue data refers to the dialogue data of the target dialogue that has been generated in the dialogue system.
  • the historical dialogue data here is the historical dialogue data of the current dialogue corresponding to the target dialogue, which specifically refers to the dialogue data of one or more rounds of dialogue that has been conducted.
  • the historical dialogue data related to or corresponding to the fourth semantic entity may be historical dialogue data in which the fourth semantic entity exists, that is, the fourth semantic entity exists in the second dialogue data.
  • the terminal device may search, in the dialogue data of one or more rounds of dialogue that has been conducted, for the dialogue data in which the fourth semantic entity exists, and determine it as the second dialogue data.
  • the terminal device may compare the text data corresponding to the dialogue data of one or more rounds of dialogue that has been conducted with the fourth semantic entity, so as to determine the historical dialogue data in which the fourth semantic entity exists.
  • the historical dialogue data related to or corresponding to the fourth semantic entity may also be historical dialogue data in which there is a semantic entity associated with the fourth semantic entity, that is, a semantic entity associated with the fourth semantic entity exists in the second dialogue data.
  • the terminal device may search, in the dialogue data of one or more rounds of dialogue that has been conducted, for dialogue data in which there is a semantic entity associated with the fourth semantic entity, and determine it as the second dialogue data.
  • the historical dialogue data related to or corresponding to the fourth semantic entity may also be historical dialogue data whose topic tag has a similarity with the topic tag corresponding to the fourth semantic entity that is higher than the relevance threshold; that is, the similarity between the topic tag corresponding to the second dialogue data and the topic tag corresponding to the fourth semantic entity is higher than the relevance threshold.
  • the terminal device can determine the topic tag of the fourth semantic entity and the topic tag corresponding to each piece of historical dialogue data, then match the topic tag of each piece of historical dialogue data with the topic tag of the fourth semantic entity, and determine the historical dialogue data whose topic tag similarity with the topic tag corresponding to the fourth semantic entity is higher than the relevance threshold as the second dialogue data.
  • for the manner of determining the topic tag of the fourth semantic entity and the topic tag corresponding to each piece of historical dialogue data, reference may be made to the manner of determining, in the aforementioned step S712, the topic tag corresponding to the first dialogue data and the topic tags corresponding to the semantic entities in the knowledge graph subgraph displayed in the conceptual view, which will not be repeated here.
  • the historical dialogue data related or corresponding to the fourth semantic entity is not limited to the above description.
  • which historical dialogue data is used as the dialogue data related to or corresponding to the fourth semantic entity depends on the specific design of the correspondence between the semantic entities in the conceptual view and the dialogue data in the dialogue view in the dialogue system, which is not limited in the embodiment of the present application.
  • the terminal device displays the second dialogue data in the dialogue view of the target dialogue user interface.
  • the target dialogue user interface when the terminal device displays the second dialogue data in the dialogue view of the target dialogue user interface may be as shown in E2 in FIG. 4E.
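  • A minimal sketch of step S812 follows, under the assumption that the terminal device simply scans the historical dialogue data for the selected fourth semantic entity or for entities associated with it in the displayed subgraph; the history and graph below are illustrative placeholders:

```python
# Minimal sketch: given a selected fourth semantic entity, find the historical
# dialogue data that contains it, or that contains an entity associated with
# it in the displayed knowledge graph subgraph.

def second_dialogue_data(fourth_entity, history, graph):
    """Return historical dialogue data related to the selected semantic entity."""
    related = {fourth_entity} | set(graph.get(fourth_entity, ()))
    return [utterance for utterance in history
            if any(entity.lower() in utterance.lower() for entity in related)]

history = [
    "I want to travel to Barcelona next month",
    "Please recommend a hotel near the stadium",
    "What is the weather like today?",
]
graph = {"Barcelona": ["hotel", "air ticket"]}
print(second_dialogue_data("Barcelona", history, graph))
```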
  • a schematic flow diagram of another dialogue interaction method corresponding to the embodiment in FIG. 4E is shown in FIG. 8B.
  • the flow can be applied to a dialog system composed of a terminal device and a network device, and specifically includes the following steps:
  • S821 The terminal device detects the second operation acting on the fourth semantic entity.
  • step S821 can refer to step S811, which will not be repeated here.
  • S822 The terminal device sends the fourth semantic entity to the network device, and the network device receives the fourth semantic entity.
  • S823 The network device determines second dialogue data according to the fourth semantic entity.
  • for the second dialogue data and the specific implementation manner in which the network device determines the second dialogue data according to the fourth semantic entity, please refer to the description of step S812, which will not be repeated here.
  • S824 The network device sends the second dialogue data to the terminal device, and the terminal device receives the second dialogue data.
  • S825 The terminal device displays the second dialogue data in the dialogue view of the target dialogue user interface.
  • the target dialogue user interface when the terminal device displays the second dialogue data in the dialogue view of the target dialogue user interface may be as shown in E2 in FIG. 4E.
  • it can be seen that, in the scenario where the target dialogue user interface has already displayed dialogue data and knowledge graph subgraphs, when a semantic entity in the conceptual view is selected, the terminal device displays the dialogue data related to that semantic entity, which realizes the collaborative interaction between the dialogue view and the conceptual view, helps users locate historical dialogue content, and improves the user experience.
  • a schematic flow chart of a dialogue interaction method corresponding to the embodiment in FIG. 4F may be as shown in FIG. 9A.
  • the flow may be applicable to a dialogue system consisting of only terminal devices, and specifically includes the following steps:
  • S911 The terminal device detects a second operation acting on the fourth semantic entity.
  • step S911 can refer to step S811, which will not be repeated here.
  • S912 The terminal device determines second dialogue data according to the fourth semantic entity.
  • the second dialogue data with the latest generation time refers to the latest historical dialogue data among the historical dialogue data whose topic relevance with the fourth semantic entity is higher than the relevance threshold.
  • the second dialogue data with the latest generation time may be one or multiple.
  • the terminal device may determine the second dialog data according to the method described in step S812.
  • S913 The terminal device displays the summary information of the second dialog data in the conceptual view of the target dialog user interface.
  • the summary information of the second dialogue data is a content summary or content summary of the second dialogue data, which is used to describe the second dialogue data concisely and reflect the main content of the second dialogue data.
  • the terminal device can identify the main content of the second dialog data to determine the summary information of the second dialog data.
  • the method of identifying the main content of the second dialog data is not limited in this application.
  • the main content of the second dialogue data can be identified through a pre-trained abstract information extraction model.
  • the target dialogue user interface when the terminal device displays the summary information of the second dialogue data in the conceptual view of the target dialogue user interface may be as shown in F2 in FIG. 4F.
  • the terminal device may display the summary information of the second dialogue data with the latest generation time in the conceptual view of the target dialogue user interface.
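  • The patent assumes a pre-trained summary-extraction model for step S913; purely for illustration, the sketch below substitutes a naive extractive heuristic that keeps the sentence sharing the most words with the selected entity's topic:

```python
# Minimal sketch: produce summary information for the second dialogue data.
# A naive extractive heuristic stands in for the pre-trained summary model;
# the dialogue data and topic words are illustrative.

def summarize(dialogue_data, topic_words, max_len=40):
    sentences = [s.strip() for s in dialogue_data.replace("?", ".").split(".") if s.strip()]
    if not sentences:
        return ""
    # Keep the sentence that overlaps most with the selected entity's topic words.
    best = max(sentences, key=lambda s: sum(w.lower() in s.lower() for w in topic_words))
    return best if len(best) <= max_len else best[:max_len] + "..."

print(summarize("I want to travel to Barcelona next month. Please book a hotel.",
                {"Barcelona", "travel"}))
```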
  • a schematic flow diagram of another dialogue interaction method corresponding to the embodiment in FIG. 4F is shown in FIG. 9B.
  • the flow is applicable to a dialog system composed of terminal devices and network devices, and specifically includes the following steps:
  • S921 The terminal device detects the second operation acting on the fourth semantic entity.
  • step S921 can refer to step S811, which will not be repeated here.
  • S922 The terminal device sends the fourth semantic entity to the network device, and the network device receives the fourth semantic entity.
  • S923 The network device determines second dialogue data according to the fourth semantic entity.
  • for the second dialogue data and the specific implementation manner in which the network device determines the second dialogue data according to the fourth semantic entity, please refer to the description of step S912, which will not be repeated here.
  • S924 The network device sends the second dialogue data to the terminal device, and the terminal device receives the second dialogue data.
  • S925 The terminal device displays the summary information of the second dialog data in the conceptual view of the target dialog user interface.
  • step S925 can refer to step S913, which will not be repeated here.
  • it can be seen that, in the scenario where the target dialogue user interface has already displayed dialogue data and knowledge graph subgraphs, when a semantic entity in the conceptual view is selected, the terminal device displays the summary information of the dialogue data related to that semantic entity, which helps users quickly understand the main content of the dialogue data related to the semantic entity.
  • a schematic flow chart of a dialogue interaction method corresponding to the embodiment of FIG. 4G may be as shown in FIG. 10A.
  • the flow may be applicable to a dialogue system composed of terminal equipment and network equipment, and specifically includes the following steps:
  • the step of triggering the display of the key information corresponding to the task semantic entity includes steps S1011 to S1012.
  • S1011 The terminal device detects a third operation acting on the task semantic entity.
  • the task semantic entity is the semantic entity displayed in the conceptual view in the target dialogue user interface.
  • One task semantic entity can be used to trigger one or more dialogue tasks, and the task semantic entity is used to indicate the functional boundary of the dialogue system.
  • the task semantic entity can be a semantic entity used to describe various travel tools, such as airplanes, trains, cars, etc., or it can be a semantic entity related to various travel tools, such as air tickets, train tickets, and ferry tickets.
  • Semantic entities describing various travel tools or semantic entities related to various travel tools can be used to indicate the travel-related dialogue tasks possessed by the dialogue system, such as booking air tickets/train tickets/ferry tickets, canceling air tickets/train tickets/ferry tickets, and so on.
  • the task semantic entity can also be a semantic entity used to describe a certain expected transaction, such as travel, meeting, catering, etc., or a semantic entity related to the expected transaction, such as hotels, conference rooms, and the names of various tourist attractions or restaurants. Semantic entities used to describe a certain expected transaction, or semantic entities related to the expected transaction, can be used to indicate the "plan" type of dialogue tasks that the dialogue system has, such as booking a hotel, booking a conference room, booking tickets, navigating, booking hotel rooms, etc.
  • the task semantic entity is not limited to the description here; which semantic entities can specifically be used as task semantic entities associated with one or more dialogue tasks in the dialogue system is not limited in this application.
  • the third operation acting on the task semantic entity specifically refers to the operation of selecting the task semantic entity.
  • the third operation can take many forms.
  • the specific form of the third operation can refer to the form of the second operation acting on the fourth semantic entity, which will not be repeated here.
  • the specific form of the third operation is not limited in the embodiment of the present application.
  • S1012 The terminal device displays the key information corresponding to the task semantic entity in the conceptual view of the target dialogue user interface.
  • the key information corresponding to the task semantic entity refers to each slot of the dialogue task corresponding to the task semantic entity and the value in each slot.
  • the slot refers to various core information (such as time and geographic location) corresponding to the dialogue task, and the value on the slot is the specific content of the core information.
  • the dialogue task corresponding to the task semantic entity is booking a ticket
  • the slot of the dialogue task of booking a ticket can include core information such as "airline”, “departure time”, "seat number”, and "boarding gate”.
  • the values in the slots may include the specific content of the airline, the specific departure time, the specific seat number, the specific boarding gate, and so on.
  • the target dialogue user interface when the terminal device displays the key information corresponding to the task semantic entity in the conceptual view of the target dialogue user interface may be as shown in G2 in FIG. 4G.
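  • The slot-and-value structure of the key information can be sketched as below; the entity name, task name, and slot names are illustrative and merely mirror the flight-booking example above:

```python
# Minimal sketch: a task semantic entity carries a dialogue task whose key
# information is a set of slots (core information) with a value per slot.

from dataclasses import dataclass, field
from typing import Dict

@dataclass
class TaskSemanticEntity:
    name: str                       # e.g. "air ticket"
    dialogue_task: str              # e.g. "book a ticket"
    slots: Dict[str, str] = field(default_factory=dict)

    def key_information(self) -> str:
        """Render slot/value pairs the way the concept view could display them."""
        return "\n".join(f"{slot}: {value or '(not filled)'}"
                         for slot, value in self.slots.items())

ticket = TaskSemanticEntity(
    name="air ticket",
    dialogue_task="book a ticket",
    slots={"airline": "", "departure time": "", "seat number": "", "boarding gate": ""},
)
print(ticket.key_information())
```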
  • the dialogue task corresponding to the task semantic entity may also be triggered.
  • the step of triggering the dialogue task corresponding to the task semantic entity includes steps S1013 to S1015.
  • S1013 The terminal device detects the fourth operation acting on the key information corresponding to the task semantic entity, and acquires the user's intention for the key information of the task semantic entity.
  • the fourth operation acting on the key information refers to the operation of selecting the key information corresponding to the task semantic entity.
  • the fourth operation can take many forms.
  • the specific form of the fourth operation can refer to the form of the second operation acting on the fourth semantic entity, which will not be repeated here.
  • the specific form of the fourth operation is not limited in the embodiment of the present application.
  • the user's intention for the key information can be obtained by obtaining the dialogue data input by the user after the fourth operation is detected, where the dialogue data input by the user can be voice data input by the user or text data entered by the user.
  • the click operation shown in FIG. 4G is the fourth operation.
  • the user's intention for the key information can also be obtained according to the fourth operation.
  • if the fourth operation is a voice control operation (that is, the user speaks a voice command related to the key information), the voice content corresponding to the voice control operation can be obtained to obtain the user's intention for the key information.
  • S1014 The terminal device sends a dialogue task execution request to the network device.
  • the dialog task execution request is used to request the network device to perform a dialog task that meets the user's intention.
  • the terminal device may send the dialogue data corresponding to the user's intention of the key information of the task semantic entity to the network device.
  • the dialog task that meets the user's intention determined according to the user's intention is “modify meeting time”
  • the specific content of the dialogue task is "modify the meeting time to ten o'clock in the morning".
  • S1015 The network device executes a dialogue task that meets the user's intention.
  • the network device executes a dialogue task that conforms to the user's intention according to the dialogue data corresponding to the user's intention of the key information of the task semantic entity.
  • the terminal device may also update key information corresponding to the task semantic entity.
  • the step of updating the key information corresponding to the task semantic entity includes S1016 to S1017.
  • S1016 The network device sends the result of executing the dialogue task that meets the user's intention to the terminal device, and the terminal device receives the result of executing the dialogue task that meets the user's intention.
  • S1017 The terminal device updates the key information corresponding to the task semantic entity in the conceptual view of the target dialogue user interface according to the result of executing the dialogue task that meets the user's intention.
  • the terminal device updating the key information corresponding to the task semantic entity in the conceptual view of the target dialogue user interface means that, according to the result of executing the dialogue task that meets the user's intention, the terminal device adds that result to the key information corresponding to the task semantic entity, or uses that result to replace the original result it corresponds to.
  • the original result is "flight number: xx1 departure time: h1: m1 minutes”.
  • the result is "flight No.: Air China xx2 Departure time: h2: m2 minutes”
  • using the result of performing the dialogue task that meets the user's intention to replace the original result it corresponds to means, for example, using "flight number: Air China xx2, departure time: h2:m2" to replace "flight number: xx1, departure time: h1:m1".
  • the target dialogue user interface after replacement is shown as G4 in Figure 4G.
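  • A minimal sketch of the key-information update in steps S1016 to S1017: the result of the executed dialogue task replaces (or is added to) the corresponding original slots. The slot names and values mirror the illustrative flight example above:

```python
# Minimal sketch: merge the result of the executed dialogue task into the key
# information shown in the concept view, replacing the original slot values.

def update_key_information(key_info, execution_result):
    """Replace (or add) the slots touched by the executed dialogue task."""
    updated = dict(key_info)
    updated.update(execution_result)
    return updated

original = {"flight number": "xx1", "departure time": "h1:m1"}
result = {"flight number": "Air China xx2", "departure time": "h2:m2"}
print(update_key_information(original, result))
```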
  • a schematic flow diagram of another dialogue interaction method corresponding to the embodiment of FIG. 4G is shown in FIG. 10B.
  • the flow can be applied to a dialogue system consisting of only terminal devices, and specifically includes the following steps:
  • the terminal device detects a third operation acting on the task semantic entity.
  • the terminal device displays the key information corresponding to the task semantic entity in the conceptual view of the target dialog user interface.
  • the terminal device detects the fourth operation acting on the key information corresponding to the task semantic entity, and acquires the user's intention for the key information of the task semantic entity.
  • steps S1021 to S1023 can refer to steps S1011 to S1013, which will not be repeated here.
  • S1024 The terminal device executes a dialogue task that meets the user's intention.
  • the terminal device updates key information corresponding to the task semantic entity in the conceptual view of the target dialog user interface according to the result of executing the dialog task that meets the user's intention.
  • for the manner in which the terminal device updates the key information corresponding to the task semantic entity in the conceptual view of the target dialogue user interface according to the result of executing the dialogue task that meets the user's intention, refer to the description of step S1017, which will not be repeated here.
  • the knowledge graph subgraph displayed in the conceptual view includes the semantic entities that exist in the dialogue data in the dialogue view, and also includes task semantic entities that can trigger dialogue tasks. These task semantic entities play a role in indicating the functional boundaries of the dialogue system, so that the user can learn the functions of the dialogue system from these task semantic entities.
  • a schematic diagram of a dialog interaction method corresponding to the embodiment in FIG. 4H may be as shown in FIG. 11A.
  • the process can be applied to a dialog system based on human-computer interaction composed of network devices and terminal devices, and specifically includes the following steps:
  • S1111 The network device detects that a fifth semantic entity and a sixth semantic entity having a semantic relationship exist in the knowledge graph, where the fifth semantic entity exists in the historical dialogue data and the sixth semantic entity does not exist in the historical dialogue data.
  • the historical dialogue data refers to the dialogue data of the target dialogue that has been generated in the dialogue system.
  • the historical dialogue data here may refer to the historical dialogue data of the current dialogue corresponding to the target dialogue.
  • the historical dialogue data here can also be all historical dialogue data corresponding to the target dialogue (ie, historical dialogue data of this dialogue and historical dialogue data generated before this dialogue).
  • the fifth semantic entity and the sixth semantic entity that have a semantic relationship in the knowledge graph can have the following situations:
  • Situation 1: there is a semantic entity in the historical dialogue data that has a semantic relationship with another semantic entity that does not exist in the historical dialogue data. In that case, the former semantic entity can be called the fifth semantic entity, and the semantic entity that does not exist in the historical dialogue data can be called the sixth semantic entity.
  • Situation 2: there are at least two semantic entities in the historical dialogue data, and the at least two semantic entities all have a semantic relationship with one and the same semantic entity that does not exist in the historical dialogue data. In that case, the at least two semantic entities may be referred to as fifth semantic entities, and the other semantic entity that does not exist in the historical dialogue data may be referred to as the sixth semantic entity.
  • for example, the historical dialogue data is "Who is the MVP of the NBA in the 97-98 season?" and "It is Michael Jordan".
  • the semantic entities in the historical dialogue data are "NBA”, “MVP”, “Michael Jordan”.
  • the semantic entity "NBA” has a semantic relationship with the semantic entities "Basketball” and "Michael Jordan”.
  • the semantic entity "Basketball" does not exist in the historical dialogue data, so the semantic entity "NBA" is a fifth semantic entity and the semantic entity "Basketball" is a sixth semantic entity that has a semantic relationship with the semantic entity "NBA"; the semantic entity "MVP" has a semantic relationship with the semantic entities "James Harden", "Michael Jordan", "Messi", and "La Liga", and the semantic entities "James Harden", "Messi", and "La Liga" do not exist in the historical dialogue data, so the semantic entity "MVP" is a fifth semantic entity and the semantic entities "James Harden", "Messi", and "La Liga" are sixth semantic entities that have a semantic relationship with the semantic entity "MVP".
  • in addition, the semantic entities "NBA" and "Michael Jordan" both have a semantic relationship with "MVP". Assuming that, in the knowledge graph, the semantic entities "NBA" and "Michael Jordan" also have a semantic relationship with the semantic entity "Bill Cartwright", then the semantic entities "NBA" and "Michael Jordan" are fifth semantic entities, and the semantic entity "Bill Cartwright" is a sixth semantic entity that has a semantic relationship with the semantic entities "NBA" and "Michael Jordan".
  • the fifth semantic entity and the sixth semantic entity that have a semantic relationship in the knowledge graph may also be other situations, which are not limited in the embodiment of the present application.
  • S1112 The network device generates third dialogue data according to the fifth semantic entity, the sixth semantic entity, and the semantic relationship between the fifth semantic entity and the sixth semantic entity.
  • the network device may input the fifth semantic entity, the sixth semantic entity, and the semantic relationship between the fifth semantic entity and the sixth semantic entity into an Encoder-Decoder model obtained by pre-training, and determine the data output by the Encoder-Decoder model as the third dialogue data.
  • the third dialogue data is dialogue data actively initiated by the dialogue system.
  • S1113 The network device sends the third dialogue data to the terminal device, and the terminal device receives the third dialogue data.
  • S1114 The terminal device displays the third dialogue data in the dialogue view of the target dialogue user interface.
  • the terminal device to display the third dialogue data in the dialogue view of the target dialogue user interface, refer to C2 in FIG. 4C, where the dialogue data "Harden and Jordan both play for the Chicago Bulls" is the third dialogue data.
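  • A minimal sketch of steps S1111 and S1112 follows. It detects fifth/sixth semantic entity pairs as described above; a plain text template stands in for the pre-trained Encoder-Decoder model the patent relies on, and the graph, relation labels, and history are illustrative:

```python
# Minimal sketch: find a fifth semantic entity (present in the historical
# dialogue) and a sixth semantic entity (absent from it) that share a semantic
# relationship in the knowledge graph, then build a proactive utterance.
# A template replaces the pre-trained Encoder-Decoder model.

def find_fifth_sixth_pairs(graph, history_entities):
    """Yield (fifth, relation, sixth) triples bridging dialogue history and graph."""
    for fifth, edges in graph.items():
        if fifth not in history_entities:
            continue
        for relation, sixth in edges:
            if sixth not in history_entities:
                yield fifth, relation, sixth

def generate_third_dialogue_data(triple):
    fifth, relation, sixth = triple
    return f"By the way, {fifth} is related to {sixth} ({relation}). Would you like to hear more?"

graph = {
    "NBA": [("sport played", "Basketball"), ("award", "MVP")],
    "MVP": [("winner", "Michael Jordan"), ("winner", "James Harden")],
}
history_entities = {"NBA", "MVP", "Michael Jordan"}
for triple in find_fifth_sixth_pairs(graph, history_entities):
    print(generate_third_dialogue_data(triple))
```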
  • a schematic diagram of another dialogue interaction method corresponding to the embodiment in FIG. 4H may be as shown in FIG. 11B.
  • This process is applicable to a dialog system based on human-computer interaction consisting of only terminal devices, and specifically includes the following steps:
  • the terminal device detects a fifth semantic entity and a sixth semantic entity that have a semantic relationship in the knowledge graph.
  • the fifth semantic entity exists in the historical dialogue data, and the sixth semantic entity does not exist in the historical dialogue data.
  • for the definition and description of the fifth semantic entity, the sixth semantic entity, and the historical dialogue data, please refer to the relevant description of step S1111, which will not be repeated here.
  • S1122 The terminal device generates third dialogue data according to the fifth semantic entity, the sixth semantic entity, and the semantic relationship between the fifth semantic entity and the sixth semantic entity.
  • for the specific implementation manner of step S1122, reference may be made to step S1112, which will not be repeated here.
  • the terminal device displays the third dialogue data in the dialogue view of the target dialogue user interface.
  • it can be seen that the dialogue system based on human-computer interaction can also actively initiate dialogue data based on the association relationships between the various concepts in the historical dialogue data, and the third dialogue data actively initiated by the dialogue system plays a role in guiding the topic, making the dialogue content richer.
  • the flow of a dialogue interaction method for deleting semantic entities may be as shown in FIG. 12A.
  • the flow may be applied to a dialogue system composed of network equipment and terminal equipment, and specifically includes the following steps:
  • S1211 The network device generates a knowledge graph subgraph corresponding to the target dialogue.
  • S1212 The network device sends the knowledge graph subgraph corresponding to the target dialogue to the terminal device, and the terminal device receives the knowledge graph subgraph corresponding to the target dialogue.
  • steps S1211 to S1212 can refer to the description of steps S611 to S615 or steps S631 to S634 or steps S641 to S643, which will not be repeated here.
  • the first number may be the maximum number of semantic entities that can be displayed in the conceptual view displayed on the terminal device, and the value of the first number is related to the size of the conceptual view displayed on the terminal device, where the conceptual view displayed on the terminal device The larger the size, the larger the value of the first number.
  • the terminal device may delete one or more of the following semantic entities in the knowledge graph subgraph corresponding to the target dialogue:
  • for a semantic entity that does not appear in the historical dialogue data, the historical dialogue data here may refer to the historical dialogue data of this dialogue, or to all historical dialogue data corresponding to the target dialogue that has been generated in the dialogue system.
  • the semantic entities that do not appear in the historical dialogue data refer to those semantic entities that do not exist in the historical dialogue data, that is, the semantic entities that are not involved in the historical dialogue data.
  • a semantic entity whose semantic relationship path distance with the seventh semantic entity in the knowledge graph subgraph corresponding to the target dialogue is greater than the second distance threshold, and the seventh semantic entity is the semantic entity existing in the latest dialogue data displayed in the dialogue view
  • the latest dialogue data displayed in the dialogue view is the one or more dialogue data corresponding to the target dialogue that has the latest generation time in the dialogue system.
  • the embodiment of the present application does not limit which semantic entities in the knowledge graph subgraph corresponding to the target dialogue are specifically deleted by the terminal device in the process of deleting the semantic entity in the knowledge graph subgraph corresponding to the target dialogue.
  • S1214 The terminal device displays in the conceptual view of the target dialog user interface the knowledge graph subgraph corresponding to the target dialog after the semantic entity is deleted.
  • the target dialog user interface when the terminal device displays in the conceptual view of the target dialog user interface the knowledge graph subgraph corresponding to the target dialog after the semantic entity is deleted may be as shown in C4 in FIG. 4C.
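  • A minimal sketch of the deletion logic of step S1213: when the subgraph exceeds the first number, keep only entities that appear in the historical dialogue data and that are close enough (in semantic relationship path distance, precomputed here for brevity, for example with the BFS sketch shown earlier) to the seventh semantic entity; all thresholds and data are illustrative:

```python
# Minimal sketch: prune the knowledge graph subgraph displayed in the concept
# view when it holds more semantic entities than the first number.

def prune_subgraph(entities, history_entities, distance_to_seventh,
                   first_number, second_distance_threshold):
    """Drop entities absent from history or too far from the seventh entity."""
    if len(entities) <= first_number:
        return list(entities)
    return [e for e in entities
            if e in history_entities
            and distance_to_seventh.get(e, float("inf")) <= second_distance_threshold]

entities = ["Barcelona", "hotel", "air ticket", "Messi", "La Liga"]
history_entities = {"Barcelona", "hotel", "air ticket"}
distance_to_seventh = {"Barcelona": 0, "hotel": 1, "air ticket": 1, "Messi": 2, "La Liga": 3}
print(prune_subgraph(entities, history_entities, distance_to_seventh,
                     first_number=3, second_distance_threshold=1))
```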
  • the flow of another dialogue interaction method for deleting the semantic entity displayed in the conceptual view may be as shown in FIG. 12B.
  • This method is applicable to a dialogue system composed of network devices and terminal devices, and specifically includes the following steps:
  • S1221 The network device generates a knowledge graph subgraph corresponding to the target dialogue.
  • step S1221 can refer to the description of steps S611 to S614 or steps S631 to S634 or steps S641 to S642, which will not be repeated here.
  • for the semantic entities that the network device can delete from the knowledge graph subgraph corresponding to the target dialogue, reference may be made to the description of step S1213, which will not be repeated here.
  • the network device sends the knowledge graph subgraph corresponding to the target dialogue after the semantic entity is deleted to the terminal device, and the terminal device receives the knowledge graph subgraph corresponding to the target dialogue after the semantic entity is deleted.
  • the terminal device displays, in the conceptual view of the target dialog user interface, the knowledge graph sub-graph corresponding to the target dialog after the semantic entity is deleted.
  • the target dialog user interface when the terminal device displays in the conceptual view of the target dialog user interface the knowledge graph subgraph corresponding to the target dialog after the semantic entity is deleted may be as shown in C4 in FIG. 4C.
  • the flow of another dialogue interaction method for deleting the semantic entity displayed in the conceptual view may be as shown in FIG. 12C.
  • the flow may be applied to a dialogue system consisting of only terminal devices, and specifically includes the following steps:
  • S1231 The terminal device generates a knowledge graph subgraph corresponding to the target dialogue.
  • step S1231 can refer to the description of steps S621 to S623 or steps S651 to S652, which will not be repeated here.
  • the terminal device displays, in the conceptual view of the target dialog user interface, the knowledge graph sub-graph corresponding to the target dialog after the semantic entity is deleted.
  • steps S1232 to S1233 can refer to the description of the foregoing steps S1213 to S1214, which will not be repeated here.
  • the flow of a dialogue interaction method for adjusting the presentation of semantic entities in the conceptual view may be as shown in FIG. 13A.
  • the flow may be applicable to a dialogue system composed of network devices and terminal devices, and specifically includes the following steps:
  • S1311 The network device generates a knowledge graph subgraph corresponding to the target dialogue.
  • S1312 The network device sends the knowledge graph subgraph corresponding to the target dialogue to the terminal device, and the terminal device receives the knowledge graph subgraph corresponding to the target dialogue.
  • steps S1311 to S1312 can refer to the description of steps S611 to S615 or steps S631 to S634 or steps S641 to S643, which will not be repeated here.
  • the second number is smaller than the first number in step S1213.
  • Displaying the semantic entities in the knowledge graph subgraph corresponding to the target dialogue in a dense and compact manner in the conceptual view of the target dialogue user interface specifically refers to changing one or more of the following: the size of the area occupied by a semantic entity in the conceptual view, the location of the area occupied by a semantic entity in the conceptual view, and the distance between two semantic entities in the conceptual view, so that more semantic entities can be fully displayed in the conceptual view of the target dialogue user interface.
  • displaying the semantic entities in the knowledge graph subgraph corresponding to the target dialogue in a dense and compact manner in the conceptual view of the target dialogue user interface may mean laying them out side by side in a tiled manner, as shown in C3 or C4 in FIG. 4C.
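  • How the first number and second number could jointly drive the presentation of the conceptual view is sketched below; combining the two thresholds into one decision function is an assumption made only for illustration, and concrete threshold values would in practice be derived from the size of the conceptual view on the terminal device:

```python
# Minimal sketch: choose how to present the concept view based on the first
# number (above which semantic entities are deleted) and the second number
# (above which the dense/compact, tiled layout is used). Thresholds are
# illustrative placeholders.

def choose_presentation(entity_count, first_number=30, second_number=15):
    if entity_count > first_number:
        return "delete some semantic entities, then display"
    if entity_count > second_number:
        return "display in a dense and compact (tiled) manner"
    return "display as a normal knowledge graph layout"

for count in (10, 20, 40):
    print(count, "->", choose_presentation(count))
```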
  • the flow of another dialogue interaction method for adjusting the presentation of semantic entities in the conceptual view may be as shown in FIG. 13B.
  • the flow may be applicable to a dialogue system consisting of only terminal devices, and specifically includes the following steps:
  • the terminal device generates a knowledge graph subgraph corresponding to the target dialogue.
  • step S1331 can refer to the description of steps S621 to S623 or steps S651 to S652, which will not be repeated here.
  • the terminal device displays the semantic entities in the knowledge graph subgraph corresponding to the target dialogue in a dense and compact manner in the conceptual view of the target dialogue user interface.
  • for step S1322, please refer to the description of step S1313, which will not be repeated here.
  • Figure 14 is a structural block diagram of a network device provided by an embodiment of the present application.
  • the network device 1400 may include a processor 1401, a memory 1402, a communication interface 1403 and any other similar or suitable components. These components can communicate on one or more communication buses, which can be a memory bus, a peripheral bus, and so on.
  • the processor 1401 may be a general-purpose processor, such as a central processing unit (CPU), and the processor 1401 may also include a hardware chip.
  • the hardware chip may be one or a combination of the following: an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a complex programmable logic device (CPLD).
  • the processor 1401 may process data received by the communication interface 1403, and the processor 1401 may also process data to be sent to the communication interface 1403 for transmission through a wired transmission medium.
  • the processor 1401 may be used to read and execute computer-readable instructions. Specifically, the processor 1401 may be used to call a program stored in the memory 1402, such as a program for implementing the dialogue interaction method provided by one or more embodiments of the present application on the network device side, and execute the instructions contained in the program.
  • the memory 1402 is coupled with the processor 1401, and is used to store various software programs and/or multiple sets of instructions.
  • the memory 1402 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
  • An operating system is built in the memory 1402, for example, operating systems such as Linux and Windows.
  • the memory 1402 may also have a built-in network communication program, and the network communication program may be used to communicate with other devices.
  • the memory 1402 may be used to store the implementation program on the network device side of the dialog interaction method provided by one or more embodiments of the present application.
  • for the implementation of the dialogue interaction method provided by this application, please refer to the foregoing method embodiments.
  • the communication interface 1403 can be used for the network device 1400 to communicate with other devices, such as terminal devices.
  • the communication interface 1403 may include a wired communication interface.
  • it can be an Ethernet interface, an optical fiber interface, and so on.
  • the communication interface 1403 may also include a wireless communication interface.
  • the computer program product includes one or more computer instructions.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted through the computer-readable storage medium.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, integrated with one or more available media.
  • the usable medium may be a semiconductor medium (for example, SSD) or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A dialogue interaction method, a graphical user interface (31, 41), a terminal device (101, 200), and a network device (102, 300). The method includes: the terminal device (101, 200) displays a dialogue view (506) in a first area of a target dialogue user interface (51) and displays a concept view (505) in a second area of the target dialogue user interface (51), where the target dialogue user interface (51) is the graphical user interface (31, 41) corresponding to a target dialogue, the dialogue view (506) is used to display dialogue data (507, 511, 512, 513, 515, 516, 518, 520, 521, 529) of the target dialogue, the concept view (505) is used to display a knowledge graph subgraph (508, 509, 510, 513, 514, 517, 519, 522) corresponding to the target dialogue, the knowledge graph subgraph (508, 509, 510, 513, 514, 517, 519, 522) includes multiple semantic entities (531) and the semantic relationships among the multiple semantic entities (531), the multiple semantic entities (531) include a first semantic entity, and the first semantic entity is a semantic entity (531) that exists in the dialogue data (507, 511, 512, 513, 515, 516, 518, 520, 521, 529). This technical solution can improve the user's dialogue interaction experience.

Description

Dialogue interaction method, graphical user interface, terminal device, and network device. Technical Field
This application relates to the field of artificial intelligence, and in particular, to a dialogue interaction method, a graphical user interface, a terminal device, and a network device.
Background
A dialogue system, which may also be called a question answering system, a question answering robot, and so on, is a system developed in recent years along with the emergence of artificial intelligence (AI) technology. It can use accurate and concise natural language to answer questions raised by a user in natural language, and can meet the user's need to obtain information quickly and accurately.
A dialogue system can display the dialogue data between the user and the dialogue system through a graphical user interface (GUI); that is, the dialogue data between the user and the dialogue system can be presented in the GUI corresponding to the dialogue system in the form of a dialogue view. The dialogue view displayed in the GUI presents the dialogue data between the user and the dialogue system intuitively so that the user can view it. As the dialogue data between the user and the dialogue system grows, the dialogue view cannot display all of the dialogue data because the display space is limited, and the user has to page back (for example, scroll up) or search in order to review historical dialogue data. This makes it difficult for the user to quickly grasp the full content of the dialogue, and also makes it difficult for the user to make decisions quickly based on the dialogue content.
Summary
This application provides a dialogue interaction method, a graphical user interface, a terminal device, and a network device, to solve the problem that current dialogue systems do not help the user quickly grasp the full content of a dialogue.
According to a first aspect, a dialogue interaction method is provided. The method can be applied to a terminal device in a dialogue system, and includes: the terminal device displays a dialogue view in a first area of a target dialogue user interface and displays a concept view in a second area of the target dialogue user interface, where the target dialogue user interface is the graphical user interface corresponding to a target dialogue, the dialogue view is used to display dialogue data of the target dialogue, the concept view is used to display a knowledge graph subgraph corresponding to the target dialogue, the knowledge graph subgraph corresponding to the target dialogue includes multiple semantic entities and the semantic relationships among the multiple semantic entities, the multiple semantic entities include a first semantic entity, and the first semantic entity is a semantic entity that exists in the dialogue data of the target dialogue.
The target dialogue is a dialogue between two or more associated dialogue parties in the dialogue system, and the target dialogue user interface is the graphical user interface used to present the dialogue data issued by each of the two or more dialogue parties.
In this technical solution, when displaying the dialogue user interface, the terminal device in the dialogue system displays, in addition to the dialogue data of the target dialogue, the knowledge graph subgraph corresponding to the target dialogue. The knowledge graph subgraph corresponding to the target dialogue includes the semantic entities that exist in the dialogue data. These semantic entities amount to an abstract and summary of the dialogue data of the target dialogue and help the user quickly grasp an overview of the historical dialogue content, thereby achieving the purpose of reviewing the historical dialogue content.
With reference to the first aspect, in a possible implementation, the multiple semantic entities included in the knowledge graph subgraph corresponding to the target dialogue further include one or more second semantic entities associated with the first semantic entity.
In a feasible implementation, the second semantic entity may include semantic entities adjacent to the first semantic entity in the knowledge graph. Further, the second semantic entity may include some of the semantic entities adjacent to the first semantic entity in the knowledge graph. These partial semantic entities may be semantic entities that are adjacent to the first semantic entity in the knowledge graph and whose frequency of use during the dialogue process is higher than a first frequency threshold, where the dialogue process may refer to the dialogue process of the target dialogue or to the dialogue process of the entire dialogue system (that is, the dialogue processes of multiple dialogues in the dialogue system). These partial semantic entities may also be semantic entities that are adjacent to the first semantic entity in the knowledge graph and that are determined based on the user profile. The partial semantic entities among the semantic entities adjacent to the first semantic entity in the knowledge graph are not limited to the above two cases, which is not limited in this application.
In another feasible implementation, the second semantic entity may also include semantic entities whose path distance to the first semantic entity in the knowledge graph subgraph is less than a first distance threshold, that is, semantic entities near the first semantic entity in the knowledge graph. Further, the second semantic entity may include some of the semantic entities near the first semantic entity. These partial semantic entities may be semantic entities that are near the first semantic entity and whose frequency of use during the dialogue process is higher than a second frequency threshold, where the dialogue process may refer to the dialogue process of the target dialogue or to the dialogue process of the entire dialogue system. These partial semantic entities may also be semantic entities that are near the first semantic entity in the knowledge graph and that are determined based on the user profile. The partial semantic entities among the semantic entities near the first semantic entity are not limited to the above two cases, which is not limited in this application.
In the above possible implementations, in addition to the first semantic entity that summarizes the gist of the dialogue data, the knowledge graph subgraph corresponding to the target dialogue also includes the second semantic entity. The second semantic entity is associated with the first semantic entity and plays a role in guiding the dialogue topic, which enhances the user's dialogue experience.
With reference to the first aspect, in a possible implementation, the method further includes: when new dialogue data is obtained, the terminal device updates the concept view, where the updated concept view is used to display the knowledge graph subgraph updated according to the new dialogue data, and the updated knowledge graph subgraph includes the semantic entities that exist in the new dialogue data, or the semantic entities that exist in the new dialogue data together with the semantic entities associated with them. The knowledge graph subgraph displayed in the concept view is updated as dialogue data is produced, which keeps the dialogue data and the knowledge graph subgraph synchronized; the updated knowledge graph subgraph also includes the semantic entities associated with the semantic entities existing in the new dialogue data, which plays a role in guiding the topic.
With reference to the first aspect, in a possible implementation, the method further includes: when the number of semantic entities in the knowledge graph subgraph is greater than a first number, the terminal device deletes one or more semantic entities from the knowledge graph subgraph. Deleting semantic entities from the knowledge graph subgraph achieves dynamic deletion of semantic entities and keeps the concept view concise.
With reference to the first aspect, in a possible implementation, the method further includes: when a first operation acting on first dialogue data displayed in the dialogue view is detected, the terminal device, in response to the first operation, highlights a third semantic entity in the concept view, where the third semantic entity includes a semantic entity that exists in the first dialogue data and/or a semantic entity associated with a semantic entity that exists in the first dialogue data. Optionally, the third semantic entity may also include a semantic entity whose topic relevance to the first dialogue data is higher than a relevance threshold.
With reference to the first aspect, in a possible implementation, the method further includes: when a second operation acting on a fourth semantic entity displayed in the concept view is detected, the terminal device, in response to the second operation, displays second dialogue data in the dialogue view, where the fourth semantic entity is a semantic entity that exists in the second dialogue data, or a semantic entity associated with a semantic entity that exists in the second dialogue data. Optionally, the second dialogue data may also be historical dialogue data whose topic relevance to the fourth semantic entity is higher than a relevance threshold.
With reference to the first aspect, in a possible implementation, the method further includes: when a second operation acting on a fourth semantic entity displayed in the concept view is detected, the terminal device, in response to the second operation, displays summary information of second dialogue data in the concept view, where the fourth semantic entity is a semantic entity that exists in the second dialogue data, or a semantic entity associated with a semantic entity that exists in the second dialogue data. Further, the terminal device may display, in the concept view, the summary information of the second dialogue data with the latest generation time.
In the above possible implementations, when dialogue data in the dialogue view is selected, the terminal device highlights in the concept view the semantic entity corresponding to that dialogue data; when a semantic entity in the concept view is selected, the terminal device displays in the dialogue view the dialogue data corresponding to that semantic entity. This realizes collaborative interaction between the dialogue view and the concept view, helps the user locate semantic entities and historical dialogue content, and can improve the user's dialogue experience.
With reference to the first aspect, in a possible implementation, the method further includes: when a third operation acting on a task semantic entity displayed in the concept view is detected, the terminal device, in response to the third operation, displays the key information corresponding to the task semantic entity in the concept view.
With reference to the first aspect, in a possible implementation, after the terminal device, in response to the third operation, displays the key information corresponding to the task semantic entity in the concept view, the method further includes: when a fourth operation acting on the key information is detected and a user intention regarding the key information is obtained, the terminal device, in response to the fourth operation, triggers execution of a dialogue task that meets the user intention.
With reference to the first aspect, in a possible implementation, after the terminal device, in response to the fourth operation, triggers execution of the dialogue task that meets the user intention, the method further includes: the terminal device updates the key information in the concept view according to the result obtained by executing the dialogue task that meets the user intention.
In the above possible implementations, in addition to the semantic entities that exist in the dialogue data in the dialogue view, the knowledge graph subgraph displayed in the concept view also includes task semantic entities. The task semantic entities play a role in clarifying the functional boundary of the dialogue system, so that the user can learn the functions of the dialogue system from these task semantic entities.
With reference to the first aspect, in a possible implementation, the method further includes: when a new semantic entity that has a semantic relationship in the knowledge graph with a semantic entity in the historical dialogue data is identified, and the new semantic entity does not exist in the historical dialogue data, the terminal device initiates a dialogue based on the semantic entity in the historical dialogue data and the new semantic entity. The terminal device actively initiates a dialogue based on the association relationships among the concepts in the historical dialogue data, which plays a role in guiding the topic and makes the dialogue content richer.
According to a second aspect, another dialogue interaction method is provided. The method can be applied to a network device in a dialogue system, and includes: the network device generates, according to the dialogue data of a target dialogue, a knowledge graph subgraph corresponding to the target dialogue, where the knowledge graph subgraph corresponding to the target dialogue includes multiple semantic entities and the semantic relationships among the multiple semantic entities, the multiple semantic entities include a first semantic entity, and the first semantic entity is a semantic entity that exists in the dialogue data; the network device sends the knowledge graph subgraph corresponding to the target dialogue to a terminal device, where the knowledge graph subgraph corresponding to the target dialogue is used by the terminal device to display a dialogue view in a first area of a target dialogue user interface and a concept view in a second area of the target dialogue user interface, the dialogue view is used to display the dialogue data of the target dialogue, the concept view is used to display the knowledge graph subgraph corresponding to the target dialogue, and the target dialogue user interface is the graphical user interface corresponding to the target dialogue.
The target dialogue is a dialogue between two or more associated dialogue parties in the dialogue system, and the target dialogue user interface is the graphical user interface used to present the dialogue data issued by each of the two or more dialogue parties.
In this technical solution, the network device generates the knowledge graph subgraph corresponding to the target dialogue according to the dialogue data of the target dialogue and sends the generated knowledge graph subgraph to the terminal device, so that when displaying the dialogue user interface, the terminal device displays, in addition to the dialogue data of the target dialogue, the knowledge graph subgraph corresponding to the target dialogue. The knowledge graph subgraph corresponding to the target dialogue includes the semantic entities that exist in the dialogue data. These semantic entities amount to an abstract and summary of the dialogue data of the target dialogue and help the user quickly grasp an overview of the historical dialogue content, achieving the purpose of reviewing the historical dialogue content.
With reference to the second aspect, in a possible implementation, the method further includes: the network device updates the knowledge graph subgraph corresponding to the target dialogue according to new dialogue data and sends the updated knowledge graph subgraph to the terminal device, where the updated knowledge graph subgraph is used by the terminal device to update the concept view, and the updated knowledge graph subgraph includes the semantic entities that exist in the new dialogue data, or the semantic entities that exist in the new dialogue data together with the semantic entities associated with them.
第三方面,提供一种终端设备上的图形用户界面,上述终端设备具有显示屏、存储器以及一个或多个处理器,上述一个或多个处理器用于执行存储在上述存储器中的一个或多个计算机程序,该图形用户界面为目标对话对应的图形用户界面,该图形用户界面可包括:在图形用户界面的第一区域中显示对话视图,并在图形用户界面的第二区域中显示概念视图,对话视图用于显示目标对话的对话数据,概念视图用于显示目标对话对应的知识图谱子图,目标对话对应的知识图谱子图包括多个语义实体,以及,这多个语义实体中的各个语义实体相互之间的语义关系,多个语义实体包括第一语义实体,第一语义实体为目标对话的对话数据中存在的语义实体。
其中,目标对话为对话***中具备关联关系的对话双方或对话多方之间的对话,目标对话用户界面为用于展示该对话双方或对话多方各自发出的对话数据的图形用户界面。
结合第三方面,在一种可能的实现方式中,目标对话对应的知识图谱子图包括的多个语义实体还包括与第一语义实体相关联的一个或多个第二语义实体。
在一种可行的实施方式中,上述的第二语义实体可以包括在知识图谱中与第一语义实体相邻的语义实体。进一步地,第二语义实体包括这些在知识图谱中与第一语义实体相邻的语义实体中的部分语义实体。该部分语义实体可以为在知识图谱中与第一语义实体相邻的并且在对话过程中使用频率高于第一频率阈值的语义实体,其中,对话过程可以是指目标对话的对话过程,也可以是指整个对话***中的对话过程(即包括对话***中的多个对话的对话过程)。该部分语义实体也可以为在知识图谱中与第一语义实体相邻的并且基于用户画像确定的语义实体。在知识图谱中与第一语义实体相邻的语义实体中的部分语义实体不限于上述两种情况,本申请不做限制。
在一种可行的实施方式中,上述的第二语义实体也可以包括在知识图谱子图中与第一语义实体的路径距离小于第一距离阈值的语义实体,即在知识图谱子图中与第一语义实体邻近的语义实体。进一步地,第二语义实体可以包括这些与第一语义实体邻近的语义实体中的部分语义实体。该部分语义实体可以为与第一语义实体邻近的并且在对话过程中使用频率高于第二频率阈值的语义实体,其中,对话过程可以是指目标对话的对话过程,也可以是指整个对话***中的对话过程。该部分语义实体也可以为在知识图谱中与第一语义实体邻近的并且基于用户画像确定的语义实体。与第一语义实体邻近的语义实体中的部分语义实体不限于上述两种情况,本申请不做限制。
结合第三方面,在一种可能的实现方式中,在获取到新的对话数据的情况下,更新概念视图,更新后的概念视图用于显示根据新的对话数据更新的知识图谱子图,更新后的知识图谱子图包括新的对话数据中存在的语义实体,或,新的对话数据中存在的语义实体以及与新的对话数据中存在的语义实体相关联的语义实体。概念视图中显示的知识图谱子图会随着对话数据的产生而更新,实现了对话数据与知识图谱子图的同步;更新后的知识图谱子图中还包括与新的对话数据中存在的语义实体相关联的语义实体,起到了引导话题的作用。
结合第三方面,在一种可能的实现方式中,在知识图谱子图中的语义实体的数量大于第一数量的情况下,在知识图谱子图中删除一个或多个语义实体。
结合第三方面,在一种可能的实现方式中,在检测到作用于对话视图中显示的第一对话数据的第一操作的情况下,响应于第一操作,在概念视图中突出显示第三语义实体,第三语义实体包括第一对话数据中存在的语义实体,和/或,与第一对话数据中存在的语义实体相关联的语义实体。可选地,该第三语义实体也可以包括与第一对话数据的话题关联度高于关联度阈值的语义实体。
结合第三方面,在一种可能的实现方式中,在检测到作用于概念视图中显示的第四语义实体的第二操作的情况下,响应于第二操作,在对话视图中显示第二对话数据,第四语义实体为第二对话数据中存在的语义实体,或,与第二对话数据中存在的语义实体相关联的语义实体。可选地,该第二对话数据也可以为与第四语义实体的话题关联度高于关联度阈值的历史对话数据。
结合第三方面,在一种可能的实现方式中,在检测到作用于概念视图中显示的第四语义实体的第二操作的情况下,响应于第二操作,在概念视图中显示第二对话数据的摘要信息,第四语义实体为第二对话数据中存在的语义实体,或,与第二对话数据中存在的语义实体相关联的语义实体。进一步地,可以在概念视图中显示产生时间最晚的第二对话数据的摘要信息。
结合第三方面,在一种可能的实现方式中,在检测到作用于概念视图中显示的任务语义实体的第三操作的情况下,响应于第三操作,在概念视图中显示任务语义实体对应的关键信息。
结合第三方面,在一种可能的实现方式中,在响应于第三操作,在概念视图中显示任务语义实体对应的关键信息之后,在检测到作用于关键信息的第四操作并获取到针对关键信息的用户意图的情况下,响应于第四操作,触发执行符合用户意图的对话任务。
结合第三方面,在一种可能的实现方式中,在响应于第四操作,触发执行符合用户意图的对话任务之后,根据执行符合用户意图的对话任务得到的结果,在概念视图中更新关键信息。
结合第三方面,在一种可能的实现方式中,当识别到与历史对话数据中的语义实体在知识图谱中存在语义关系的新的语义实体,并且,所述新的语义实体不存在于历史对话数 据中,根据历史对话数据中的语义实体和新的语义实体发起对话。
第四方面,提供一种终端设备,该终端设备可包括显示屏、存储器以及一个或多个处理器,上述一个或多个处理器被用于执行存储在上述存储器中的一个或多个计算机程序,上述一个或多个处理器在执行该一个或多个计算机程序时,使得终端设备实现上述第一方面或第一方面的实现方式中的任意一种方法。
第五方面,提供另一种终端设备,该终端设备可包括一种装置,该装置可实现上述第一方面或第一方面的实现方式中的任意一种方法。
第六方面,提供一种网络设备,该网络设备可包括存储器以及一个或多个处理器,上述一个或多个处理器被用于执行存储在上述存储器中的一个或多个计算机程序,上述一个或多个处理器在执行该一个或多个计算机程序时,使得网络设备实现上述第二方面或第二方面的实现方式中的任意一种方法。
第七方面,提供另一种网络设备,该网络设备可包括一种装置,该装置可实现上述第二方面或第二方面的实现方式中的任意一种方法。
第八方面,提供一种包含指令的计算机程序产品,当上述计算机程序产品在终端设备上运行时,使得上述终端设备执行如第一方面或第一方面的实现方式中的任意一种方法。
第九方面,提供一种包含指令的计算机程序产品,当上述计算机程序产品在网络设备上运行时,使得上述网络设备执行如第二方面或第二方面的实现方式中的任意一种方法。
第十方面,提供一种计算机可读存储介质,包括指令,当上述指令在终端设备上运行时,使得上述终端设备执行如第一方面或第一方面的实现方式中的任意一种方法。
第十一方面,提供一种计算机可读存储介质,包括指令,当上述指令在网络设备上运行时,使得上述网络设备执行如第二方面或第二方面的实现方式中的任意一种方法。
第十二方面,提供一种通信***,该通信***可包括终端设备,还可以包括网络设备。其中终端设备可以为如第四方面或第五方面的终端设备,网络设备可以为如第六方面或第七方面的网络设备。
附图说明
图1是本申请实施例提供的对话***的***架构示意图;
图2是本申请实施例提供的终端设备的结构示意图;
图3A-图3F是本申请实施例提供的进入目标对话用户界面过程中终端设备上实现的一些图形用户界面;
图4A-图4H是本申请实施例提供的进入目标对话用户界面后终端设备上实现的一些图形用户界面;
图5A-图5B是本申请实施例提供的对话交互方法的一种流程示意图;
图6A-图6E是本申请实施例提供的对话交互方法的另一种流程示意图;
图7A-图7B是本申请实施例提供的对话交互方法的又一种流程示意图;
图8A-图8B是本申请实施例提供的对话交互方法的又一种流程示意图;
图9A-图9B是本申请实施例提供的对话交互方法的又一种流程示意图;
图10A-图10B是本申请实施例提供的对话交互方法的又一种流程示意图;
图11A-图11B是本申请实施例提供的对话交互方法的又一种流程示意图;
图12A-图12C是本申请实施例提供的对话交互方法的又一种流程示意图;
图13A-图13B是本申请实施例提供的对话交互方法的又一种流程示意图;
图14是本申请实施例提供的一种网络设备的结构框图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行描述。
本申请的技术方案可以应用于利用用户界面来展示对话数据的对话***。
本申请中,对话数据是对话***中的对话多方或对话双方就所处的对话场景或对话环境所发出的语音数据或文本数据等用于表达对话多方或对话双方各自的观点或思想或逻辑的数据,对话数据又可以称之为会话数据,聊天数据,问答数据,等等,本申请不做限制。
用户界面(user interface,UI)是应用程序或操作***与用户之间进行交互和信息交换的介质接口,它实现信息的内部形式与用户可以接受形式之间的转换。应用程序的用户界面是通过java、可扩展标记语言(extensible markup language,XML)等特定计算机语言编写的源代码,界面源代码在终端设备上经过解析,渲染,最终呈现为用户可以识别的内容,比如图片、文字、按钮等控件。控件(control)也称为部件(widget),是用户界面的基本元素,典型的控件有工具栏(toolbar)、菜单栏(menu bar)、文本框(text box)、按钮(button)、滚动条(scrollbar)、图片和文本。界面中的控件的属性和内容是通过标签或者节点来定义的,比如XML通过<Textview>、<ImgView>、<VideoView>等节点来规定界面所包含的控件。一个节点对应界面中一个控件或属性,节点经过解析和渲染之后呈现为用户可视的内容。此外,很多应用程序,比如混合应用(hybrid application)的界面中通常还包含有网页。网页,也称为页面,可以理解为内嵌在应用程序界面中的一个特殊的控件,网页是通过特定计算机语言编写的源代码,例如超文本标记语言(hyper text markup language,HTML),层叠样式表(cascading style sheets,CSS),java脚本(JavaScript,JS)等,网页源代码可以由浏览器或与浏览器功能类似的网页显示组件加载和显示为用户可识别的内容。网页所包含的具体内容也是通过网页源代码中的标签或者节点来定义的,比如HTML通过<p>、<img>、<video>、<canvas>来定义网页的元素和属性。用户界面常用的表现形式是GUI,是指采用图形方式显示的与计算机操作相关的用户界面。它可以是在终端设备的显示屏中显示的一个图标、窗口、控件等界面元素,其中控件可以包括图标、按钮、菜单、选项卡、文本框、对话框、状态栏、导航栏、Widget等可视的界面元素。
在一些实施例中,利用用户界面来展示对话数据的对话***可以为基于人机交互的对话***,基于人机交互的对话***中涉及的对话方可以为人与机器,即用户与设备,该设备可以为用户持有的设备。具体地,基于人机交互的对话***可以为面向个体用户,用于为个体用户提供服务的对话***,其可以为安装于终端设备上的各种辅助应用(application,APP),如Siri、Cortana、Alexa、Google Now或其他用于为独立的个体用户提供助理性质的服务的辅助APP。基于人机交互的对话***也可以为面向所有用户,用于为所有用户提供某种服务的对话***,其可以为各个企业或公司为解决员工或用户的问题而设计的各类客服助手、工作助手、智能机器人等,例如可以为阿里小蜜。
可选地,利用用户界面来展示对话数据的对话***也可以为基于即时通信的对话***,基于即时通信的对话***中涉及的对话方可以为两个或多个用户。基于即时通信的对话***为用于建立两个或多个用户之间的即时通信的通信***,具体可以为QQ、微信、钉钉、飞信等使用网络实时地传递对话数据的通信工具。
在一些可能的实施场景,该对话***的***架构可以如图1所示。该对话***10可以由终端设备101和网络设备102组成。终端设备101面向用户,可以与用户进行交互,终端设备101可以通过输入外设(如显示屏、麦克风等)获取用户发起的各种操作,并基于用户发起的操作向网络设备发起请求,以获取网络设备根据用户发起的操作所产生的响应,并通过输出外设(如显示屏、扬声器等)向用户输出该响应。例如,该终端设备为基于人机交互的对话***中的终端设备,则该终端设备可以获取用户输入的对话数据,将该对话数据发送给网络设备,然后接收该网络设备根据该对话数据产生的回复数据,并通过显示屏向用户显示该回复数据。具体地,该终端设备可以为手机、电脑、IPAD、电子阅读器等具备显示功能的设备。网络设备102用于为该对话***提供与对话相关的后台支持,网络设备102可以接收终端设备基于用户发起的操作所发起的请求,根据该请求执行相应操作并产生响应,将该响应返回给终端设备,以完成对话***与用户之间的交互。例如,该网络设备为基于即时通信的对话***,该网络设备可以接收第一终端设备发送的对话数据A,网络设备可以将该对话数据A发送给第二用户终端,该第二用户终端为该对话数据A的目的端,然后在接收到该第二用户终端发送给该第一用户终端的对话数据B的情况下,将该对话数据B发送给第一用户终端,以此完成第一用户终端与第二用户终端之间的对话交互。具体地,网络设备102可包括实时通信服务器、数据库服务器等,实时通信服务器可以用于与终端设备101交互,数据库服务器用于存储用以实现对话***所实现的功能的各种数据。例如,该对话***为基于人机交互的对话***,该基于人机交互的对话***利用知识图谱产生回复数据,则该数据库服务器可用于存储对话数据和用以产生回复数据的知识图谱。又如,该对话***为基于即时通信的对话***,则该数据库服务器可用于存储该即时通信***中的各个即时通信账号,以及各个即时通信账号相互之间的即时通信关系(如好友关系)。
在另一些可能的实施场景中,在该对话***为基于人机交互的对话***的情况下,该对话***还可以由终端设备这一独立设备组成。终端设备除了可以执行上述图1所示的***架构中的终端设备101执行的操作外,还可以执行上述图1所示的***架构中的网络设备102所执行的全部或部分操作。
在上述对话***中,用于实现与用户之间的交互和信息交换的用户界面在终端设备中进行显示,为便于理解,首先介绍本申请所涉及的终端设备。参见图2,图2示例性地示出了终端设备200的结构示意图。
终端设备200可以包括处理器210,存储器220,显示屏230,音频模块240,扬声器240A,受话器240B,麦克风240C、传感器模块250、通信部件260等,其中,传感器模块250可以包括压力传感器250A,指纹传感器250B,触摸传感器250C等。可以理解的是,本申请实施例示意的结构并不构成对终端设备200的具体限定。
处理器210可以包括一个或多个处理单元,例如:处理器210可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。在一些实施例中,终端设备200也可以包括一个或多个处理器210。
处理器210中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器210中的存储器为高速缓冲存储器。该存储器可以保存处理器210刚用过或循环使用的指令或数据。如果处理器210需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器210的等待时间,因而提高了终端设备200的效率。
在一些实施例中,处理器210可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口等。
存储器220可以用于存储一个或多个计算机程序,该一个或多个计算机程序包括指令。处理器210可以通过运行存储在存储器220的上述指令,从而使得终端设备200执行本申请一些实施例中所提供的对话交互的方法,以及各种功能应用以及数据处理等。存储器220可以包括存储程序区和存储数据区。其中,存储程序区可存储操作***;该存储程序区还可以存储一个或多个应用程序(比如图库、联系人等)等。存储数据区可存储终端设备200使用过程中所创建的数据(比如照片,联系人等)。此外,存储器220可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。
终端设备200通过GPU,显示屏230,以及应用处理器等可以实现显示功能。GPU为图像处理的微处理器,连接显示屏230和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器210可包括一个或多个GPU,其执行指令以生成或改变显示信息。
显示屏230用于显示图像,视频等。显示屏230包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode的,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,终端设备200可以包括2个或N个显示屏230,N为大于2的正整数。
终端设备200可以通过音频模块240,扬声器240A,受话器240B,麦克风240C以及应用处理器等实现音频功能。例如音乐播放,录音等。
音频模块240用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块240还可以用于对音频信号编码和解码。在一些实施例中,音频模块240可以设置于处理器210中,或将音频模块240的部分功能模块设置于处理器210中。扬声器240A,也称“喇叭”,用于将音频电信号转换为声音信号。终端设备200可以通过扬声器240A收听音乐,或收听免提通话。
受话器240B,也称“听筒”,用于将音频电信号转换成声音信号。当终端设备200接听电话或语音信息时,可以通过将受话器240B靠近人耳接听语音。
麦克风240C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。当拨打电话或发送语音信息时,用户可以通过人嘴靠近麦克风240C发声,将声音信号输入到麦克风240C。终端设备200可以设置至少一个麦克风240C。在另一些实施例中,终端设备200可以设置两个麦克风240C,除了采集声音信号,还可以实现降噪功能。在另一些实施例中,终端设备200还可以设置三个,四个或更多麦克风240C,实现采集声音信号,降噪,还可以识别声音来源,实现定向录音功能等。
压力传感器250A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器250A可以设置于显示屏230。压力传感器250A的种类很多,如电阻式压力传感器,电感式压力传感器,电容式压力传感器等。电容式压力传感器可以是包括至少两个具有导电材料的平行板。当有力作用于压力传感器250A,电极之间的电容改变。终端设备200根据电容的变化确定压力的强度。当有触摸操作作用于显示屏230,终端设备200根据压力传感器250A检测所述触摸操作强度。终端设备200也可以根据压力传感器250A的检测信号计算触摸的位置。在一些实施例中,作用于相同触摸位置,但不同触摸操作强度的触摸操作,可以对应不同的操作指令。例如:当有触摸操作强度小于第一压力阈值的触摸操作作用于短消息应用图标时,执行查看短消息的指令。当有触摸操作强度大于或等于第一压力阈值的触摸操作作用于短消息应用图标时,执行新建短消息的指令。
指纹传感器250B用于采集指纹。终端设备200可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。
触摸传感器250C,也可称触控面板或触敏表面。触摸传感器250C可以设置于显示屏230,由触摸传感器250C与显示屏230组成触摸屏,也称“触控屏”。触摸传感器250C用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏230提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器250C也可以设置于终端设备200的表面,与显示屏230所处的位置不同。
通信部件260可用于终端设备200与其他通信设备通信,其他通信设备例如可以为网络设备(如服务器)等。通信部件260可包括有线通信接口,例如为以太网口、光纤接口等。可选地,通信部件260还可以包括无线通信接口。具体实现中,该通信部件260可以包括射频接口以及射频电路,以用于实现无线通信接口所实现的功能。射频电路可包括收发机以及用于在无线通信时在自由空间中发送和接收电磁波的部件(如导体、导线等)等。
应理解的是,终端设备200可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
图2示例性所示的终端设备200可以通过显示屏230显示以下各个实施例中所描述的各个用户界面。终端设备200也可以通过触摸传感器250C在各个用户界面中检测触控操作,例如在各个用户界面中的点击操作(如在图标上的触摸操作、双击操作),又例如在各个用户界面中的向上或向下的滑动操作,或执行画圆圈手势的操作,等等。在一些实施例中,终端设备也可以通过除触摸传感器之外的其他输入外设检测用户在用户界面中的操作,例如,终端设备可以通过麦克风240C检测用户在用户界面中的语音操作。又如,终端设备还可以通过图2中未示出的摄像头检测用户在用户界面中的非触控的手势操作或动作操作。又如,终端设备还可以通过图2中未示出的鼠标、触控板等输入外设检测手势操作,如移动鼠标、点击鼠标等操作,不限于这里的描述。
接下来结合本申请的应用场景来描述终端设备200上实现的与对话***有关的图形用户界面的一些实施例。
在对话***中,对话双方或对话多方中的其中一个对话方可通过进入目标对话用户界面以进行与对话数据有关的操作,如发送对话数据,查看对话数据,删除对话数据,等等。先介绍进入目标对话用户界面过程中终端设备上实现的一些图形用户界面,参见图3A-图3F。
本申请中,对话用户界面是指终端设备200上的用于展示对话双方或对话多方各自发出的对话数据的图形用户界面。对于基于人机交互的对话***来说,该对话用户界面可以是终端设备200上的用于展示对话***与用户各自发出的对话数据的用户界面。对于基于即时通信的对话***来说,该对话用户界面可以是终端设备200上的用于展示两个或多个用户各自发出的对话数据的用户界面。目标对话用户界面为目标对话的对话用户界面,目标对话是指具备关联关系的对话双方或对话多方之间的对话。对于基于人机交互的对话***来说,目标对话指终端设备的持有用户或使用用户与对话***之间的对话,也即终端设备的持有用户或使用用户与终端设备之间的对话。对于基于即时通信的对话***来说,目标对话指的是存在即时通信关系的两个或多个即时通信用户之间的对话。例如,即时通信用户1与即时通信用户2、即时通信用户3以及即时通信用户4之间均存在好友的关系,即时通信用户1组建了一个即时通信群组,该即时通信群组包括即时通信用户1、即时通信用户2、即时通信用户3,则目标对话可以为即时通信用户1与即时通信用户2之间的单独对话,也可以为即时通信用户1与即时通信用户3之间的单独对话,也可以为即时通信用户1与即时通信用户4之间的单独对话,也可以是即时通信1、即时通信用户2、即时通信用户3之间的即时通信群组对话。
在一些实施例中,用户可以从用于应用程序菜单的用户界面进入目标对话用户界面。以下介绍用户从用于应用程序菜单的用户界面进入目标对话用户界面的过程中,终端设备上的一些图形用户界面。
首先参见图3A,图3A示例性示出了终端设备上的用于应用程序菜单的示例性图形用户界面31。图形用户界面31可包括:状态栏301,具有常用应用程序图标的托盘302,其他应用程序图标303。其中:
状态栏301可包括:移动通信信号(又可称为蜂窝信号)的一个或多个信号强度指示符304、无线高保真(wireless fidelity,Wi-Fi)信号的一个或多个信号强度指示符305,电池状态指示符306、时间指示符307等。
具有常用应用程序图标的托盘302可以用于显示在终端设备200上使用频率较高的或用户自行设置的或***默认设置的应用程序图标,例如图3A所示的电话图标308、联系人图标309、短信图标310、相机图标311。在一些实施例中,该常用应用程序图标的托盘302 还可用于显示对话***对应的应用程序(以下将对话***对应的应用程序称之为目标应用程序)的图标,例如可以用于显示一些基于即时通信的聊天工具(如钉钉、飞信)的图标等。
其他应用程序图标303指终端设备200上安装的除常用应用程序之外的应用程序的图标,例如图3A所示的微信(Wechat)的图标312、QQ的图标313、推特(Twitter)的图标314、脸书(Facebook)的图标315、邮箱的图标316、云服务的图标317、备忘录的图标318、支付宝的图标319、图库的图标320、设置的图标321。在一些实施例中,该其他应用程序图标303可以包括目标应用程序的图标,该目标应用程序的图标例如可以为图3A所示的微信的图标312、QQ的图标313等。
其他应用程序图标303可分布在多个页面,图形用户界面31还可包括页面指示符322,页面指示符322可用于指示用户当前浏览的是哪一个页面中的应用程序。用户可以左右滑动其他应用程序图标的区域,来浏览其他页面中的应用程序图标。
在一些实施例中,图3A示例性所示的图形用户界面31可以为主界面(Home screen)。
在其他一些实施例中,终端设备200还可以包括主屏幕键323。该主屏幕键323可以是实体按键,也可以是虚拟按键。该主屏幕键可用于接收用户的指令,将当前显示的用户界面返回到主界面,这样可以方便用户随时查看主屏幕。用户的指令具体可以是用户单次按下主屏幕键的操作指令,也可以是用户在短时间内连续两次按下主屏幕键的操作指令,还可以是用户在预订时间内长按主屏幕键的操作指令。在本申请其他一些实施例中,主屏幕键还可以集成指纹识别器,以便用于在按下主屏幕键的时候,随之进行指纹采集和识别。
接着请参见图3B-图3C,图3B-图3C示例性示出了用户从用于应用程序菜单的用户界面进入终端设备200上的目标对话用户界面时终端设备上实现的图形用户界面。
如图3B所示,当检测到在目标应用程序的图标(如微信的图标312)上的点击操作时,响应于该点击操作,终端设备200显示该目标应用程序的图形用户界面41。
如图3B所示,图形用户界面41可包括:状态栏401,标题栏402,选项导航栏403、页面内容显示区域404。其中:
状态栏401可参考图3A所示的用户界面31中的状态栏301,这里不再赘述。
标题栏402可包括:返回键416,以及当前页面指示符417,返回键416可用于返回菜单上一级。本领域技术人员可以理解,一个页面的逻辑上一级是固定的,在应用程序设计时便已确定。当前页面指示符417可用于指示当前页面,例如文本信息“微信”,不限于文本信息,当前页面指示符还可以是图标。
选项导航栏403用于显示目标应用程序的多个应用选项,选项导航栏403包括应用选项405(“微信”),应用选项406(“通讯录”),应用选项407(“发现”)和应用选项408(“我”)。
页面内容显示区域404用于显示被用户选中的应用选项的下一级菜单或内容。页面内容显示区域404显示的内容可随着被用户选中的应用选项的变化而变化。当终端设备200检测到在选项导航栏403中的应用选项的点击操作时,终端设备200可响应于该点击操作,在页面内容显示区域404显示该应用选项的下一级菜单或内容,并在标题栏显示该应用选项的标题。
页面内容显示区域404中显示的内容为应用选项405(“微信”)对应的内容,包括选项409(“QQ邮箱提醒”)、选项410(“订阅号”)、选项411(“XXX”)、选项412(“YYY”)、选项413(“小张”)、选项414(“小李”)、选项415(“小赵”)。其中,选项411、选项412、选项413以及选项414为对话选项。本申请中,应用选项405(“微信”)可以被称之为对话应用选项,对话应用选项对应的页面内容显示区域可用于显示一个或多个对话选项。在即时通信场景中,一个对话选项对应一个即时通信会话。可选地,对话应用选项还可以称为“朋友”(如目标应用程序为支付宝)、“消息”(如目标应用程序为QQ、淘宝等)、“聊天”,等等,不限于这里的描述。
如图3C所示,当检测到在对话应用选项(“微信”)对应的页面内容显示区域中的对话选项(如对话选项411)上的点击操作时,响应于该点击操作,终端设备200显示目标对话用户界面51。目标对话用户界面51可包括:状态栏501,标题栏502、对话区域503以及对话输入区域504。其中:
状态栏501可参考图3A所示的图形用户界面31中的状态栏301,标题栏502可参考图3B所示的图形用户界面41中的标题栏402,这里不再赘述。
对话区域503可包括对话视图506和概念视图505。本申请中,对话视图506在目标对话用户界面51中占据的区域可称为第一区域,概念视图505在目标对话用户界面51中占据的区域可称为第二区域。对话视图506用于显示目标对话的对话数据。概念视图505用于显示目标对话对应的知识图谱子图。目标对话对应的知识图谱子图可包括多个语义实体,以及,这多个语义实体中的各个语义实体相互之间的语义关系,多个语义实体可包括目标对话的对话数据中存在的语义实体。关于目标对话对应的知识图谱子图的各种情况,可参考后续描述。
对话输入区域504为持有或使用该终端设备200的用户输入对话数据的区域,持有或使用该终端设备200的用户可以通过文字和/或语音的方式输入对话数据。
结合图3B-图3C可知,用户通过多次点击操作依次选择要显示的图形用户界面的方式实现了从用于应用程序菜单的图形用户界面进入目标对话用户界面。不限于通过多次点击操作依次选择要显示的图形用户界面的方式,在可选实施方式中,用户还可以通过其他方式依次选择要显示的图形用户界面的方式以实现从用于应用程序菜单的图形用户界面进入目标对话用户界面,例如还可以通过双击、画圈、语音等方式依次选择要显示的图形用户界面,本申请不做限制。另外,对于具体选择的次数,即通过几次才能进入目标对话用户界面,其与目标应用程序的用户界面设计有关,本申请不做限制。
再请参见图3D,图3D示例性示出了用户从用于应用程序菜单的用户界面进入终端设备200上的目标对话用户界面时终端设备上实现的图形用户界面。
如图3D所示,当检测到用户在主屏幕键的按压操作并且该按压操作的持续的时间长度大于时间长度阈值时,响应于该按压操作,终端设备200显示目标对话用户界面51。目标对话用户界面51的介绍可参考图3C对应的介绍,这里不再赘述。
结合图3D可知,终端设备通过长按主屏幕键直接唤起目标对话用户界面的方式实现了从用于应用程序菜单的用户界面进入目标对话用户界面。不限于长按的方式,在可选实施方式中,用户还可以通过其他方式直接唤起对话用户界面以实现从用于应用程序菜单的用户界面进入对话用户界面,例如还可以通过在用于应用程序菜单的用户界面画圈唤起、 双击或三击主屏幕键唤起、语音唤起等方式唤起目标对话用户界面,本申请不做限制。
可选地,用户还可以通过终端设备200上显示的其他用户界面进入目标对话用户界面,本申请不做限制,该其他用户界面可以为终端设备上的其他不为目标应用程序的用户界面。例如,该其他用户界面可以为终端设备上的备忘录的用户界面。
在一些实施例中,在进入目标对话用户界面之前,已经有目标对话的对话数据在对话***中产生,对话***具有显示产生于本次对话之前的一次或多次对话的对话数据的功能。其中,一次对话的开始和结束可以通过是否进入目标对话用户界面或目标应用程序的开启和关闭来衡量,即,进入目标对话用户界面则代表一次对话的开始,退出目标对话用户界面则代表一次对话的结束;又或者,目标应用程序开启则代表一次对话的开始,目标应用程序关闭则代表一次对话的结束。终端设备200上显示的目标对话用户界面还可以参见图3E,图3E示例性示出了终端设备200上实现的一种示例性目标对话用户界面。如图3E所示,目标对话用户界面51可包括状态栏501,标题栏502,对话区域503以及对话输入区域504。其中,状态栏501、标题栏502以及对话输入区域504可参考图3C对应的描述。图3E所示的目标对话用户界面51与图3C或图3D所示的目标对话用户界面51的不同之处在于,对话区域503中的对话视图506显示有历史对话数据507(“深圳今天天气怎么样啊?”,“深圳今天晴转多云,温度为16-28摄氏度”),历史对话数据507为产生于进入目标对话用户界面之前的对话数据,对话区域503中的概念视图505显示有知识图谱子图508,知识图谱子图508包括历史对话数据507中存在的语义实体(“深圳”,“天气”,“温度”)。
在一些实施例中,对话***不具有显示产生于本次对话之前的一次或多次对话的对话数据的功能,或者,在进入目标对话用户界面之前,还未有目标对话的对话数据在对话***中产生,其中,一次对话的定义可参考前述描述。终端设备200上显示的目标对话用户界面还可以参见图3F,图3F示例性示出了终端设备200上实现的一种示例性目标对话用户界面。如图3F所示,目标对话用户界面51可包括状态栏501,标题栏502,对话区域503以及对话输入区域504。其中,状态栏501、标题栏502以及对话输入区域504可参考图3C对应的描述。图3F所示的目标对话用户界面51与图3E所示的目标对话用户界面51的不同之处在于,对话区域503中的对话视图506未显示有对话数据,对话区域503中的概念视图505显示有知识图谱子图509,知识图谱子图509可以称之为初始的知识图谱子图,初始的知识图谱子图可包括多个初始的语义实体(“深圳”,“天气”,“温度”,“深圳大学”,“马化腾”,“腾讯”,“华为”,“5G”)。其中,初始的语义实体可以为以下一种或多种:
初始的语义实体为产生于本次对话之前的一次或多次对话的对话数据中存在的语义实体;
初始的语义实体为对话***中热度较高的语义实体;
初始的语义实体为与用户日程中待办事项相关的语义实体;
初始的语义实体为基于用户的用户画像确定的语义实体。
不限于上述几种情况,在可选实施方式中,初始的语义实体还可以有其他的情况,本申请不做限制。有关于以上所述的几种初始的语义实体的具体介绍,请参见后续方法实施例的描述。
图3C-图3F示例性示出了目标对话用户界面的一些可能的情况。不限于上述情况,在可选的实施方式中,目标对话用户界面还可以包括视图切换按钮,该视图切换按钮起的作用可以是切换对话区域中显示的视图种类,即通过该视图切换按钮,可以使对话区域中只显示对话视图,或者,只显示概念视图,或者,显示对话视图和概念视图。可选地,该视图切换按钮所起的作用也可以是开启或关闭概念视图,即通过该视图切换按钮,可以关闭概念视图,使对话区域中只显示对话视图,或者,开启概念视图,使对话区域中显示对话视图和概念视图。可选地,该视图切换按钮还可以是图标、选项栏、悬浮窗等界面元素。在可选的实施方式中,目标对话用户界面51也可以不含图3C-图3F所示的标题栏502。对于进入目标对话用户界面时目标对话用户界面具体是何种呈现,本申请不做限制。
应理解的是,上述图3A-图3F所示的图形用户界面仅为本申请为了解释进入目标对话用户界面过程中终端设备上实现的一些图形用户界面所举出的几种示例,不对本申请造成限制。
进入目标对话用户界面后,用户可以进行与对话数据有关的操作,目标对话用户界面显示的内容与用户操作相关。以下介绍进入目标对话用户界面后,终端设备上实现的一些图形用户界面。
在一些实施例中,对话视图和概念视图中显示的内容会随着新的对话数据在对话***中的产生而发生变化。参见图4A,图4A示例性示出了新的对话数据产生时终端设备200上实现的图形用户界面。
图4A中的A1是进入目标对话用户界面时终端设备200上实现的目标对话用户界面。如图4A中的A2所示,当新的对话数据511(“97-98赛季NBA的MVP是谁?”,“是迈克尔乔丹”)在对话***中产生时,终端设备200获取到该新的对话数据511,终端设备200更新对话视图506和概念视图505,更新后的对话视图506显示有该新的对话数据511,更新后的概念视图505显示有知识图谱子图510,知识图谱子图510包括新的对话数据511中存在的语义实体(“NBA”,“MVP”,“迈克尔乔丹”)。
结合图4A可知,在获取到新的对话数据的情况下,终端设备更新对话视图和概念视图,更新后的对话视图中显示有该新的对话数据,更新后的概念视图显示有根据新的对话数据更新的知识图谱子图,更新后的知识图谱子图包括新的对话数据中存在的语义实体。
在一些实施例中,概念视图中显示的知识图谱子图除了对话数据中存在的语义实体外,还可以包括与对话数据中存在的语义实体相关联的语义实体。参见图4B,图4B示例性示出了终端设备上实现的显示包含与对话数据中存在的语义实体相关联的语义实体的图形用户界面。
如图4B所示,对话视图506中显示有对话数据512(“97-98赛季NBA的MVP是谁?”,“是迈克尔乔丹”),概念视图505中显示有知识图谱子图513,知识图谱子图513包括第一语义实体(“NBA”,“MVP”,“迈克尔乔丹”),第一语义实体为对话数据512中存在的语义实体,和与第一语义实体相关联的第二语义实体(“体育”,“篮球”,“足球”,“西甲”,“梅西”,“詹姆斯哈登”)。
结合图4B可知,概念视图中显示的知识图谱子图除了包括对话视图中显示的对话数据中存在的第一语义实体外,还可以包括第二语义实体,第二语义实体为与第一语义实体相关联的语义实体。关于第二语义实体具体有哪些情况,请参见后续方法实施例的描述。
在一些实施例中,概念视图中显示的知识图谱子图包括的语义实体和语义实体的数量可随着对话数据的变化而变化,该知识图谱子图的形态、知识图谱子图中的语义实体的展示方式也可以随着知识图谱子图中的语义实体的数量的变化而发生变化。参见图4C,图4C示例性示出了概念视图中显示的知识图谱子图随着对话数据的变化而变化时终端设备200上实现的一些图形用户界面。
如图4C中的C1所示,对话视图506中显示有对话数据513(“97-98赛季NBA的MVP是谁”,“是迈克尔乔丹”),对话数据513较少,概念视图505中显示有知识图谱子图514,知识图谱子图514中的语义实体的数量也较少,知识图谱子图514中的语义实体以较为稀松舒展的方式呈现在概念视图505中。
如图4C中的C2所示,对话视图506中显示有对话数据515,相较C1中的对话数据513,增加了新的对话数据516(“哈登也得过MVP?”,“没错,哈登是本赛季的MVP”,“哈登和乔丹都效力于芝加哥公牛队”),对话数据增多,概念视图505中显示有知识图谱子图517,相较C1中的知识图谱子图514,知识图谱子图517中的语义实体的数量增加,增加了“芝加哥”和“芝加哥公牛”这两个语义实体,相较于知识图谱子图514中的语义实体的展示方式,知识图谱子图517中的语义实体以相对更为密集的方式呈现在概念视图505中。
如图4C中的C3所示,对话视图506中已经显示过多轮对话数据,对话视图506中显示有对话数据518(“好想去看巴萨的比赛”,“好啊,现在刚好是西班牙的旅游季节,11月3日有巴萨的比赛,需要我为您订一张球票么?”,“好的”,“已经帮您定好了11月3日的巴萨比赛球票,前排的位置已经订满,已经尽可能为您挑选好的座位”,“那就帮我把酒店和机票一起订了吧”,“好的,帮您预订了11月2日的机票,以及三天的酒店,就住在球场附近,酒店的名称是巴萨X酒店,联系方式是1234567。”),对话数据增多,概念视图505中显示有知识图谱子图519,相较于C2中的知识图谱子图517,知识图谱子图519中的语义实体的数量进一步增加,相较于知识图谱子图517中的语义实体的展示方式,知识图谱子图519中的语义实体以平行铺设的方式呈现在概念视图505中。
如图4C中的C4所示,对话视图506中显示有对话数据520,相较C3中的对话数据518,增加了新的对话数据521(“巴塞罗那最近气候怎么样?”,“巴塞罗那最近气候不错,温度和湿度适宜,温度保持在8-17摄氏度”),对话数据进一步增多,概念视图505中显示有知识图谱子图522,相较于C3中的知识图谱子图519,删除了部分语义实体(“篮球”),新增了新的对话数据521中的语义实体(“气候”,“温度”,“湿度”),知识图谱子图522中的语义实体以平行铺设的方式呈现在概念视图505中。
结合图4C可知,随着对话数据的增多,概念视图中显示的语义实体的数量增多,当以稀松舒展的方式不能呈现数量较多的语义实体时,语义实体可以以密集紧凑的方式(如图4C中的C3和C4所示的平行铺设的方式)呈现在概念视图中,随着语义实体的进一步增多,当语义实体的数量较多时,可以删除知识图谱子图中的一个或多个语义实体,以使得新的对话数据中存在的语义实体得以呈现在概念视图中。关于删除语义实体的具体逻辑和方式,请参见后续方法实施例的描述。
应理解的是,图4C仅为本申请为了解释概念视图中显示的知识图谱子图包含的语义实体和语义实体的数量可随着对话数据的变化而发生变化以及在语义实体的数量较多时以密集紧凑的方式呈现语义实体所举出的示例,不对本申请造成限制。在可选的实施方式中,还可以有其他的密集紧凑的方式,例如,随着知识图谱子图中的语义实体的数量的增多,在概念视图中显示该知识图谱子图时,可以缩小部分或全部语义实体在概念视图中占据的区域以达到显示更多语义实体的效果,或者,还可以缩小部分或全部语义实体在概念视图上的距离以达到显示更多语义实体的效果,本申请不做限制。
在一些实施例中,概念视图与对话视图可以协同交互。参见图4D-图4F,图4D-图4F示例性示出了概念视图与对话视图协同交互时终端设备200上实现的图形用户界面。
如图4D所示,当检测到在对话视图506中显示的第一对话数据523(如对话数据“那就帮我把酒店和机票一起订了吧”)上的点击操作时,响应于该点击操作,终端设备200在概念视图505中高亮显示第三语义实体524(“巴塞罗那”,“旅游”,“酒店”,“机票”),第三语义实体524为与第一对话数据523相关的语义实体,关于第三语义实体的具体定义,可参考后续方法实施例的描述。
如图4E所示,当检测到在概念视图505中显示的第四语义实体525(如语义实体“巴塞罗那”)上的点击操作时,响应于该点击操作,终端设备200在对话视图506中显示第二对话数据526(“好啊,现在刚好是西班牙的旅游季节,11月3日正好有巴萨的比赛,需要我为您订一张机票么?”,“已经帮您定好了11月3日的巴萨比赛球票,前排的位置已经订满,已经尽可能为您挑选好的座位”,“好的,帮您预订了11月2日的机票,以及三天的酒店,就住在球场附近,酒店的名称是巴萨X酒店,联系方式是1234567”、“巴塞罗那气候怎么样”、“巴塞罗那最近气候不错,温度和湿度适宜,温度保持在8-17摄氏度”),第二对话数据526为与第四语义实体相关的对话数据,关于第二对话数据的具体定义,可参考后续方法实施例的描述。
如图4F所示,当检测到在概念视图505中显示的第四语义实体525(如语义实体“酒店”)的点击操作时,响应于该点击操作,终端设备200在概念视图505显示第二对话数据(“好的,帮您预订了11月2日的机票,以及三天的酒店,就住在球场附近,酒店的名称是巴萨X酒店,联系方式是1234567”)的摘要信息526(酒店名称:巴萨X酒店,联系方式:1234567)。第二对话数据为与第四语义实体525相关的对话数据,关于第二对话数据的具体定义,可参考后续方法实施例的描述.
结合图4D-图4F可知,对话视图中显示的对话数据和概念视图中显示的语义实体可以具备联动关系,当检测到作用于对应有语义实体的对话数据或对应有对话数据的语义实体的用户操作时,终端设备会联动显示与之相对应的语义实体或对话数据。关于确定与对话数据相对应的语义实体或与语义实体相对应的对话数据的方式,可参见后续描述。
应理解的是,图4D-图4F所示的图形用户界面仅为本申请为了解释概念视图与对话视图协同交互所举出的几种示例,不对本申请造成限制。在可选实施方式中,概念视图与对话视图协同交互还可以有更多的实现方式。例如,上述图4D-图4F涉及的点击操作也可以为双击操作、长按操作、语音指示操作等用于进行选中某一视图元素的用户操作。又如,除了上述图4D所涉及的高亮显示的方式外,还可以通过弹窗显示、悬浮窗显示或者单独显示(即只在概念视图中显示与用户选中的对话数据相关的语义实体)等方式对与用户选 中的对话数据相关的语义实体进行突出显示。又如,概念视图中显示的知识图谱子图还可以随着对话视图中的对话数据的切换而切换,切换得到的知识图谱子图对应于对话视图中显示的对话数据;对话视图中的对话数据也可以随着概念视图中的知识图谱子图的切换而切换,切换得到的对话数据对应于概念视图中显示的知识图谱子图。对于概念视图与对话视图协同交互的具体方式,本申请不做限制。
在一些实施例中,概念视图中还可以显示用于触发对话任务的任务语义实体,每个任务语义实体可对应一种或多种对话任务。参见图4G,图4G示例性示出了触发对话任务时终端设备200上实现的图形用户界面。
如图4G中的G1和G2所示,当检测到在概念视图505中显示的任务语义实体527(如“机票”)的点击操作时,响应于该点击操作,终端设备200在概念视图505中显示关键信息528(“航班号:xx1起飞时间:h1时m1分座位:待选”)。
如图4G中的G3和G4所示,当检测到在概念视图505中显示的关键信息528的点击操作和获取到对话数据529(“我想订国航的航班”),响应于该点击操作,终端设备200触发执行预订国航航班的机票这一符合对话数据529的用户意图的对话任务。在触发执行预订国航航班的机票这一符合对话数据529的用户意图的对话任务之后,终端设备200在概念视图505中更新关键信息528(“北京-巴塞罗那航班号:国航xxx起飞时间:h2时m2分座位:待选”)。
结合图4G可知,概念视图中除了可以用于显示知识图谱子图外,还可以用于显示该知识图谱子图中用于触发对话任务的任务语义实体。根据用户作用于任务语义实体的关键信息的操作和用户意图,终端设备响应于用户的操作,触发执行符合用户意图的对话任务。应理解的是,图4G所示的图形用户界面仅为本申请为了解释任务语义实体和触发任务语义实体对应的对话任务所举出的示例,不对本申请造成限制。在可选实施方式中,还可以有其他触发执行符合用户意图的对话任务的方式。例如,上述涉及的关键信息还可以以图标、按钮、悬浮窗、弹框等视图元素存在,点击该关键信息对应的视图元素触发显示该关键信息的下一级菜单或详细内容,进而通过依次点选的方式触发符合用户意图的对话任务。对于通过任务语义实体触发对话任务的具体方式,本申请不做限制。
在一些实施例中,在对话***为基于人机交互的对话***的情况下,对话***还可以主动发起。参见图4H,图4H示例性示出了对话***发起对话时终端设备200上实现的一些图形用户界面。如图4H所示,当识别到与历史对话数据530(“97-98赛季NBA的MVP是谁”,“是迈克尔乔丹”,“哈登也得过MVP?”)中的语义实体531(“哈登”、“乔丹”)在知识图谱子图中存在语义关系的新的语义实体532(“芝加哥公牛”),并且,新的语义实体532不存在于历史对话数据530中,终端设备根据历史对话数据530中存在的语义实体531和新的语义实体532发起对话,将发起的第三对话数据533(“哈登和乔丹都效力于芝加哥公牛队”)显示在对话视图506中。
结合图4H可知,当查询到历史对话中涉及的各个语义实体之间除存在历史对话中涉及的路径联系外,还存在历史对话中未涉及的路径联系时,终端设备还可以主动发起对话,从而生成引导话题的对话数据。
不限于上述图3A-图3F和图4A-图4H示出的图形用户界面实施例,在可选的实施方 式中,对话***对应的图形用户界面可以有其他的呈现方式,例如,目标对话用户界面中的概念视图和对话视图还可以以左右排列的方式呈现。又或者,目标对话用户界面中可不含有状态栏、标题栏等视图元素。对话***对应的图形用户界面具体是何种呈现方式,本申请不做限制。
结合图4A-图4H的图形用户界面实施例可知,本申请的方案基于知识图谱,在目标对话用户界面中增加了用于显示目标对话对应的知识图谱子图的概念视图,对话视图与概念视图之间的协同交互起到了回顾历史对话数据、引导话题走向、提示用户对话***的功能边界等作用,提高了用户的对话交互体验。以下介绍实现上述图形用户界面实施例的技术方案。
先对本申请技术方案中将涉及的一些概念进行说明。
1、知识图谱
知识图谱又可以称之为科学知识图谱,是以图的结构存储真实世界中存在的各种实体以及这些实体相互之间的关联的知识库。知识图谱由节点和边组成,其中,节点代表真实世界中存在的实体,边代表实体与实体之间的关联关系。本申请中,知识图谱可以是通用领域知识图谱,通用领域知识图谱又可以称之为开放领域知识图谱,是指包含有多种领域中的实体和关系,强调融合更多的实体,且侧重知识的广度的知识图谱,可应用于智能搜索等领域。本申请中,知识图谱也可以是垂直领域知识图谱,垂直领域知识图谱又可以称之为行业知识图谱,是指依靠特定行业的数据来构建的知识图谱,行业知识图谱侧重于知识的深度,可以理解为基于语义技术的行业知识库。
2、知识图谱子图
知识图谱子图是知识图谱的子图,即知识图谱子图是知识图谱的一部分,知识图谱子图中包含的节点和关系均来源于知识图谱,基于某种选择规则从知识图谱中选择一个或多个节点以及一个或多个关联关系进行组合即可形成知识图谱子图。本申请中,目标对话对应的知识图谱子图是根据预先建立的知识图谱和目标对话的对话数据确定的知识图谱子图,关于根据预先建立的知识图谱和目标对话的对话数据确定目标对话对应的知识图谱子图的具体实现方式,可参考后续描述。
3、语义实体和语义关系
语义实体,可以是指具有可区别性且独立存在的某个或某种事物,具体地,语义实体可以是指某一个人(如姚明),某一个城市(如深圳),某一本书(如名人传),某一种植物(如吊兰),等等,不限于这里的描述。也可以是指具有同种特性的实体的集合,为集合、类别、种类等的统称,如国家、民族、人物、地理等。还可以是指对具有可区别性且独立存在的某个或某种事物的描述或解释,或者,具有同种特性的实体的集合的描述或解释。一个语义实体在知识图谱或知识图谱子图中可以以一个节点的形式存在。
语义关系用于连接两个语义实体,用于描述两个实体之间的关联或内在特性,其表示了两个语义实体之间在真实世界中的关联。一种语义关系在知识图谱或知识图谱子图中可以以一条边的形式存在。
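为便于理解语义实体与语义关系在知识图谱中的组织方式,下面给出一段以三元组列表表示知识图谱的简化Python示意代码,其中的实体名称、关系名称与函数名均为说明用途的假设,并非对本申请所采用的数据结构的限定:

```python
# 以(头实体, 语义关系, 尾实体)三元组的列表表示一个简化的知识图谱
knowledge_graph = [
    ("迈克尔乔丹", "获得过", "MVP"),
    ("迈克尔乔丹", "效力于", "NBA"),
    ("NBA", "属于", "篮球"),
    ("篮球", "属于", "体育"),
]

def neighbors(graph, entity):
    """返回在知识图谱中与给定语义实体相邻(即存在语义关系)的语义实体及对应的语义关系。"""
    result = []
    for head, relation, tail in graph:
        if head == entity:
            result.append((relation, tail))
        elif tail == entity:
            result.append((relation, head))
    return result

print(neighbors(knowledge_graph, "NBA"))  # [('效力于', '迈克尔乔丹'), ('属于', '篮球')]
```

在该表示方式下,一个三元组对应知识图谱中的一条边,三元组中的头实体与尾实体对应两个节点。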
接下来介绍本申请提供的对话交互方法,该对话交互方法可实现在前述介绍的对话***上,其中,对话交互方法在终端设备侧的总流程可以为:终端设备在目标对话用户界面 的第一区域中显示对话视图,并在目标对话用户界面的第二区域中显示概念视图。
这里,关于目标对话用户界面、第一区域、对话视图、第二区域以及概念视图的说明,可参见前述图3C所示的图形用户界面实施例部分的有关描述,这里不再赘述。
目标对话用户界面的具体实现可参考前述图3C-图3F或图4A-图4H实施例中所示的目标对话用户界面51。
其中,对于终端设备上实现的目标对话用户界面的不同情况,对应具有不同实现流程的对话交互方法,以下展开描述。
一、用于实现终端设备在目标对话用户界面的概念视图中显示目标对话对应的知识图谱子图的对话交互方法的一些流程。终端设备在目标对话用户界面的概念视图中显示知识图谱子图的图形用户界面实施例可参见图3E-图3F、图4A-图4C以及图4H实施例。
1、图3F实施例对应的对话交互方法的实现流程,该实现流程可应用于进入目标对话用户界面时终端设备在目标对话用户界面的概念视图中显示目标对话对应的知识图谱子图的场景中。
图3F所示的实施例对应的一种对话交互方法的流程示意图可以如图5A所示,该流程可适用于由网络设备和终端设备组成的对话***,具体包括如下步骤:
S511,网络设备生成目标对话对应的知识图谱子图。
这里,关于目标对话的说明,可参考前述描述。示例性地,目标对话对应的知识图谱子图可以为图3F所示的知识图谱子图509。本申请实施例中,目标对话对应的知识图谱子图为初始的知识图谱子图,初始的知识图谱子图包括一个或多个初始的语义实体,初始的语义实体可以为前述图3F实施例中描述的初始的语义实体中的任意一种或多种。
具体地,产生于本次对话之前的一次或多次对话的对话数据中存在的语义实体具体可以为用户对话历史中经常提及的语义实体,也即经常出现在历史对话记录中的语义实体,此处的历史对话记录是指目标对话对应的产生于当前对话之前的对话记录。例如,目标对话为即时通信用户A和即时通信用户B之间的对话,那么,产生于本次对话之前的一次或多次对话的对话数据中存在的语义实体为经常出现在即时通信用户A和即时通信用户B的历史对话记录中的语义实体。又如,目标对话为用户与对话***之间的对话,那么,产生于本次对话之前的一次或多次对话的对话数据中存在的语义实体为经常出现在用户与对话***的历史对话记录中的语义实体。这里,“经常”的含义可以是指出现或被提及的频率超过预设的频率阈值,关于频率阈值的取值,本申请不做限制。
具体地,对话***中热度较高的语义实体具体可以为在对话***中被使用该对话***的大多数用户经常提及的语义实体,也即经常出现在大多数用户的历史对话记录中的语义实体,此处的历史对话记录是指目标对话对应的产生于当前对话之前的大多数用户的对话记录。例如,对话***为基于即时通信的对话***,那么,对话***中热度较高的语义实体为经常出现在使用该对话***的大多数即时通信用户的历史对话记录中的语义实体。又如,对话***为基于人机交互的对话***,那么,对话***中热度较高的语义实体为经常出现在使用该对话***的所有用户的历史对话记录中的语义实体。这里,“大多数用户”的含义可以是指与使用该对话***的所有用户的比例超过第一比例的用户,其中,第一比例为大于二分之一的比例值。“经常”的含义可以是指出现或被提及的频率超过预设的频率阈 值,关于频率阈值的取值,本申请不做限制。
在初始的语义实体为上述两种情况下,对话***具有保存产生于本次对话之前的一次或多次对话的对话数据的功能。具体实现中,网络设备可根据对话***中保存的产生于本次对话之前的一次或多次对话的对话数据,确定初始的语义实体,再根据对话***中保存的知识图谱和该初始的语义实体,生成目标对话对应的知识图谱子图。
具体地,与用户日程中待办事项相关的语义实体具体可以为记录在终端设备上的备忘录、便签、待办事项、记事本等用于记录用户的日程安排或计划的应用程序上的计划或安排中的语义实体。例如,终端设备上的待办事项记录有用户未来几天的日程安排,则与用户日程中待办事项相关的语义实体可以为该未来几天的日程安排中存在的语义实体,如会议时间、会议室、联系人等。
具体地,基于用户的用户画像确定的语义实体具体可以为根据与用户的日常行为(如购物行为、搜索行为、外出记录、运动记录等)有关的数据确定的符合用户某一方面特征的语义实体。例如,根据与用户的日常有关的数据确定用户经常去健身房,则基于用户的用户画像确定的语义实体可以为与健身有关的语义实体,如跑步机、健美操等。
在初始的语义实体为上述两种情况下,终端设备可以采集用户记录的日程安排或计划,或者与用户的日常行为有关的数据,并将采集用户记录的日程安排或计划,或者与用户的日常行为有关的数据发送给网络设备。网络设备可根据该日程安排或计划或者与用户的日常行为有关的数据,确定初始的语义实体,再根据对话***中保存的知识图谱和该初始的语义实体,生成目标对话对应的知识图谱子图。
在一种具体实现方式中,根据对话***中保存的知识图谱和初始的语义实体,生成目标对话对应的知识图谱子图的方式可以为:根据该初始的语义实体查询该知识图谱,以确定该初始的语义实体相互之间的语义关系,根据该初始的语义实体和该初始的语义实体相互之间的语义关系,生成目标对话对应的知识图谱子图。
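为便于理解该实现方式,下面给出一段简化的Python示意代码,假设知识图谱以三元组列表的形式保存,代码中的实体、关系与函数名均为说明用途的假设,并非对本申请具体实现方式的限定:

```python
def build_subgraph(knowledge_graph, initial_entities):
    """根据初始的语义实体查询知识图谱,保留初始语义实体相互之间的语义关系,组合成知识图谱子图。"""
    entities = set(initial_entities)
    relations = [
        (head, relation, tail)
        for head, relation, tail in knowledge_graph
        if head in entities and tail in entities
    ]
    return relations

# 示例:以初始语义实体"深圳"、"天气"、"温度"生成初始的知识图谱子图
kg = [
    ("深圳", "查询", "天气"),
    ("天气", "包含", "温度"),
    ("深圳", "位于", "广东"),
]
print(build_subgraph(kg, ["深圳", "天气", "温度"]))
# [('深圳', '查询', '天气'), ('天气', '包含', '温度')]
```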
S512,网络设备将目标对话对应的知识图谱子图发送给终端设备。
示例性地,网络设备可以将生成的知识图谱子图直接发送给终端设备,或者通过其他网络设备发送给终端设备,或者将知识图谱子图存储在存储器或其他设备中由终端设备读取。
S513,终端设备在目标对话用户界面的概念视图中显示目标对话对应的知识图谱子图。
示例性地,终端设备在目标对话用户界面的概念视图中显示目标对话对应的知识图谱子图可以如图3F所示。
图3F实施例对应的另一种对话交互方法的流程示意图可以如图5B所示,该流程可适用于仅由终端设备组成的对话***,具体包括如下步骤:
S521,终端设备生成目标对话对应的知识图谱子图。
这里,关于目标对话的说明,可参考前述描述。目标对话对应的知识图谱子图的一种示例可以为图3F所示的知识图谱子图509。本申请实施例中,目标对话对应的知识图谱子图可参考步骤S511所描述的目标对话对应的知识图谱子图,此处不再赘述。
终端设备生成目标对话对应的知识图谱子图的具体实现方式,可参考前述步骤S511中网络设备生成目标对话对应的知识图谱子图的具体实现方式,此处不再赘述。
S522,终端设备在目标对话用户界面的概念视图中显示目标对话对应的知识图谱子图。
示例性地,终端设备在目标对话用户界面的概念视图中显示目标对话对应的知识图谱子图可以如图3F所示。
结合图3F,以及,图5A-图5B对应的任一方法实施例可知,在对话***不具有显示产生于本次对话之前的一次或多次对话的对话数据的功能,或者,在进入目标对话用户界面之前,还未有目标对话的对话数据在对话***中产生的情况下,终端设备在目标对话用户界面中显示目标对话对应的知识图谱子图,该目标对话对应的知识图谱子图为初始的知识图谱子图,初始的知识图谱子图中的语义实体可以起到引导对话话题的作用,增强了用户体验。
2、图3E、图4A-图4C以及图4H实施例对应的对话交互方法的实现流程。该实现流程可应用于进入目标对话用户界面后终端设备在目标对话用户界面的概念视图中显示目标对话对应的知识图谱子图的场景中。
图3E、图4A以及图4C实施例对应的一种对话交互方法的流程示意图可以如图6A所示,该流程可适用于由网络设备和终端设备组成的基于人机交互的对话***,具体包括如下步骤:
S611,终端设备获取用户输入的输入对话数据。
这里,用户输入的输入对话数据可以为语音数据,也可以为文本数据。具体地,终端设备可以通过麦克风采集声音信号,以获取用户输入的输入对话数据。终端设备也可以通过触摸屏或键盘等获取用户输入文字的操作,以获取用户输入的输入对话数据。示例性地,用户输入的输入对话数据可以为图3E所示的对话数据“深圳天气怎么样啊?”。
S612,终端设备将输入对话数据发送给网络设备,网络设备接收输入对话数据。
S613,网络设备根据输入对话数据生成回复对话数据。
具体地,网络设备可以识别输入对话数据中存在的语义实体,根据识别得到的语义实体查询对话***中保存的知识图谱,以确定该识别得到的语义实体相互之间的语义关系,然后将识别得到的语义实体和查询得到的语义关系输入预先训练得到的编码-解码(Encoder-Decoder)模型中,将该Encoder-Decoder模型输出的对话数据确定为回复对话数据。
其中,网络设备可以通过实体抽取的方式识别输入对话数据中存在的语义实体。实体抽取又可以称为命名实体学习(named entity learning)或命名实体识别(named entity recognition)。实体抽取的方式可以为基于规则与词典的方式、基于统计机器学习的方式或面向开放域的方式中的任意一种方式,本申请实施例不做限制。
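以基于规则与词典的方式为例,下面给出一段简化的实体抽取示意代码,其中的实体词典为假设数据,实际***也可以采用基于统计机器学习或面向开放域的方式,本示例不构成限定:

```python
def extract_entities(text, entity_dict):
    """基于词典的简单实体抽取:返回输入对话数据中出现的词典实体,较长的实体名优先匹配。"""
    found = []
    for entity in sorted(entity_dict, key=len, reverse=True):
        if entity in text and entity not in found:
            found.append(entity)
    return found

entity_dict = ["迈克尔乔丹", "NBA", "MVP", "篮球"]  # 假设的实体词典
print(extract_entities("97-98赛季NBA的MVP是谁?", entity_dict))  # ['NBA', 'MVP']
```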
示例性地,网络设备生成的回复对话数据可以为图3E所示的对话数据“深圳今天晴转多云,温度为16-28摄氏度”,或者,图4A-图4C所示的对话数据“是迈克尔乔丹”。
S614,网络设备根据输入对话数据和回复对话数据生成目标对话对应的知识图谱子图。
这里,目标对话对应的知识图谱子图包括输入对话数据和回复对话数据中存在的语义实体。网络设备可以识别输入对话数据和回复对话数据中存在的语义实体,然后根据识别得到的语义实体生成目标对话对应的知识图谱子图。根据识别得到的语义实体生成的目标对话对应的知识图谱子图的一种示例可以为图3E所示的知识图谱子图508。
网络设备识别输入对话数据和回复对话数据中存在的语义实体的具体实现方式,可参考步骤S613中网络设备可以通过实体抽取的方式识别输入对话数据中存在的语义实体的方式;网络设备根据识别得到的语义实体生成目标对话对应的知识图谱子图的具体实现方式可参考步骤S511中网络设备根据对话***中保存的知识图谱和初始的语义实体,生成目标对话对应的知识图谱子图的具体实现方式,此处不再赘述。
S615,网络设备将回复对话数据和目标对话对应的知识图谱子图发送给终端设备,终端设备接收回复对话数据和目标对话对应的知识图谱子图。
S616,终端设备在目标对话用户界面的对话视图中显示回复对话数据,并在目标对话用户界面的概念视图中显示目标对话对应的知识图谱子图。
示例性地,终端设备在目标对话用户界面的对话视图中显示回复对话数据,并在目标对话用户界面的概念视图中显示目标对话对应的知识图谱子图可以如图3E、图4A-图4C所示。
图3E、图4A以及图4C实施例对应的另一种对话交互方法的流程示意图可以如图6B所示,该流程可适用于仅由终端设备组成的基于人机交互的对话***,具体包括如下步骤:
S621,终端设备获取用户输入的输入对话数据。
这里,步骤S621的具体实现方式,可参考步骤S611的描述,此处不再赘述。
S622,终端设备根据输入对话数据生成回复对话数据。
S623,终端设备根据输入对话数据和回复对话数据生成目标对话对应的知识图谱子图。
这里,步骤S622~S623的具体实现方式,可参考步骤S613~S614的具体实现方式,此处不再赘述。
S624,终端设备在目标对话用户界面的对话视图中显示回复对话数据,并在目标对话用户界面的概念视图中显示目标对话对应的知识图谱子图。
示例性地,终端设备在目标对话用户界面的对话视图中显示回复对话数据,并在目标对话用户界面的概念视图中显示目标对话对应的知识图谱子图可以如图3E、图4A或图4C所示。
图3E、图4A以及图4C实施例对应的又一种对话交互方法的流程示意图可以如图6C所示,该流程可适用于由终端设备和网络设备组成的基于即时通信的对话***,具体包括如下步骤:
S631,终端设备获取用户输入的输入对话数据。
这里,步骤S631的具体实现方式,可参考步骤S611的描述,此处不再赘述。
S632,终端设备将输入对话数据发送给网络设备,网络设备接收输入对话数据。
S633,网络设备根据输入对话数据生成目标对话对应的知识图谱子图。
这里,网络设备根据输入对话数据生成目标对话对应的知识图谱子图的具体实现方式,可参考步骤S614中网络设备根据输入对话数据和回复对话数据生成目标对话对应的知识图谱子图的描述,此处不再赘述。
S634,网络设备将目标对话对应的知识图谱子图发送给终端设备,终端设备接收目标对话对应的知识图谱子图。
S635,终端设备在目标对话用户界面的概念视图中显示目标对话对应的知识图谱子图。
示例性地,终端设备在目标对话用户界面的概念视图中显示目标对话对应的知识图谱子图可以如图3E或图4A-图4C所示。
在一些可能的实施例中,在对话***为基于人机交互的对话***的情况下,对话***除了根据用户输入的输入对话数据生成对话回复数据外,对话***还可以主动生成发起对话。示例性地,对话***主动发起对话的图形用户界面实施例可以参考图4H。图4H实施例对应的一种对话交互方法的流程示意图可以如图6D所示,该流程可适用于由网络设备和终端设备组成的基于人机交互的对话***,具体包括如下步骤:
S641,网络设备生成第三对话数据。
这里,第三对话数据为网络设备主动发起的对话数据,也即对话***主动发起的对话数据。网络设备生成第三对话数据的具体实现方式,将在后续方法实施例中进行详细介绍,此处不做过多描述。
S642,网络设备根据第三对话数据生成目标对话对应的知识图谱子图。
这里,网络设备根据第三对话数据生成目标对话对应的知识图谱子图的具体实现方式,可参考步骤S614中网络设备根据输入对话数据和回复对话数据生成目标对话对应的知识图谱子图的描述,此处不再赘述。
S643,网络设备将目标对话对应的知识图谱子图和第三对话数据发送给终端设备,终端设备接收目标对话对应的知识图谱子图和第三对话数据。
S644,终端设备在目标对话用户界面的对话视图中显示第三对话数据,并在目标对话用户界面的概念视图中显示目标对话对应的知识图谱子图。
图4H实施例对应的另一种对话交互方法的流程示意图可以如图6E所示,该流程可适用于仅由终端设备组成的基于人机交互的对话***,具体包括如下步骤:
S651,终端设备生成第三对话数据。
这里,终端设备生成第三对话数据的具体实现方式,将在后续方法实施例中进行详细介绍,此处不做过多描述。
S652,终端设备根据第三对话数据生成目标对话对应的知识图谱子图。
这里,终端设备根据第三对话数据生成目标对话对应的知识图谱子图的具体实现方式,可参考步骤S614中网络设备根据输入对话数据和回复对话数据生成目标对话对应的知识图谱子图的描述,此处不再赘述。
S653,终端设备在目标对话用户界面的对话视图中显示第三对话数据,并在目标对话用户界面的概念视图中显示目标对话对应的知识图谱子图。
上述图6A-图6B实施例涉及的输入对话数据和回复对话数据、上述图6C实施例涉及的对话数据以及上述图6D-图6E实施例涉及的第三对话数据,可统称为对话数据。结合图3E、图4A、图4H以及图6A-图6E对应的任一方法实施例可知,目标对话对应的知识图谱子图中包括第一语义实体,第一语义实体为对话数据中存在的语义实体,第一语义实体相当于是对目标对话的对话数据的摘要和概括,有助于快速了解历史对话内容的概要,从而达到回顾历史对话内容的目的。在上述图6A-图6B实施例涉及的输入对话数据和回复对话数据、上述图6C实施例涉及的对话数据以及上述图6D-图6E涉及的第三对话数据为新的对话数据的情况下,结合图4C以及图6A-图6E对应的任一方法实施例可知,当获取到新的对话数据时,终端设备会更新概念视图,更新后的概念视图用于显示根据新的对话数据更新的知识图谱子图,更新后的知识图谱子图包括该新的对话数据中存在的语义实体。
在一些可能的实施例中,目标对话对应的知识图谱子图还可以包括与第一语义实体相关联的一个或多个第二语义实体。包括第二语义实体的目标对话对应的知识图谱子图的一种示例可以为图4B所示的知识图谱子图513。
具体地,与第一语义实体相关联的第二语义实体可以有以下情况:
在一种可能的情况中,第二语义实体可以包括在知识图谱中与第一语义实体相邻的语义实体,即在知识图谱中与第一语义实体存在语义关系的语义实体。示例性地,在知识图谱中与第一语义实体相邻的语义实体例如可以为图4B所示的语义实体“詹姆斯哈登”、“NBA”、“西甲”和“梅西”,其中,“詹姆斯哈登”、“西甲”、“NBA”和“梅西”为与语义实体“MVP”存在语义关系的语义实体。
进一步地,第二语义实体可以包括在知识图谱中与第一语义实体相邻的部分语义实体。
在一种可行的实施方式中,该在知识图谱中与第一语义实体相邻的部分语义实体可以为在对话过程中使用频率高于第一频率阈值的并且在知识图谱中与第一语义实体相邻的语义实体。其中,该使用频率可以是指在该目标对话中的使用频率,该在对话过程中使用频率高于第一频率阈值的并且在知识图谱中与第一语义实体相邻的语义实体是指与经常出现在目标对话对应的历史对话记录中的并且在知识图谱中与第一语义实体相邻的语义实体。该使用频率也可以是指在对话***中的所有对话中的使用频率,该在对话过程中使用频率高于第一频率阈值的并且在知识图谱中与第一语义实体相邻的语义实体是指经常出现在对话***中所有对话对应的历史对话记录中的并且在知识图谱中与第一语义实体相邻的语义实体。此处的历史对话记录为可以为目标对话对应的本次对话的历史对话记录,也可以为目标对话对应的所有历史对话记录(即本次对话的历史对话记录和产生于本次对话之前的历史对话记录)。
举例来对在对话过程中使用频率高于第一频率阈值的并且在知识图谱中与第一语义实体相邻的语义实体进行说明。例如,在知识图谱中与第一语义实体相邻的语义实体分别有“任正非”、“手机”、“5G”、“网络设备”、“荣耀”、“海思”。
(1)使用频率为在目标对话中的使用频率,第一频率阈值为20次/周。“任正非”在目标对话的历史对话记录中出现的频率为1次/周,“手机”在目标对话的历史对话记录中出现的频率为25次/周,“5G”在目标对话的历史对话记录中出现的频率为18次/周,“荣耀”在目标对话的历史对话记录中出现的频率为10次/周,“海思”在目标对话的历史对话记录中出现的频率为3次/周,则将语义实体“手机”确定为在对话过程中使用频率高于第一频率阈值的并且在知识图谱中与第一语义实体相邻的语义实体。
(2)使用频率为对话***中的所有对话中的使用频率,第一频率阈值为200次/天。“任正非”在目标对话的历史对话记录中出现的频率为10次/天,“手机”在目标对话的历史对话记录中出现的频率为250次/天,“5G”在目标对话的历史对话记录中出现的频率为300次/天,“荣耀”在目标对话的历史对话记录中出现的频率为220次/天,“海思”在目标对话的历史对话记录中出现的频率为30次/天,则将语义实体“手机”、“5G”和“荣耀”确定为在对话过 程中使用频率高于第一频率阈值的并且在知识图谱中与第一语义实体相邻的语义实体。
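针对上述以使用频率筛选相邻语义实体的例子,下面给出一段简化的筛选逻辑示意Python代码,其中的语义实体、使用频率以及频率阈值均沿用上述例子(2)中的假设取值,仅用于说明筛选过程:

```python
def filter_by_frequency(adjacent_entities, usage_frequency, threshold):
    """从在知识图谱中与第一语义实体相邻的语义实体中,筛选出使用频率高于频率阈值的语义实体作为第二语义实体。"""
    return [e for e in adjacent_entities if usage_frequency.get(e, 0) > threshold]

adjacent = ["任正非", "手机", "5G", "网络设备", "荣耀", "海思"]
# 对应上述例子(2):对话***所有对话中的使用频率(次/天),第一频率阈值为200次/天
frequency = {"任正非": 10, "手机": 250, "5G": 300, "荣耀": 220, "海思": 30}
print(filter_by_frequency(adjacent, frequency, 200))  # ['手机', '5G', '荣耀']
```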
在另一种可行的实施方式中,该在知识图谱中与第一语义实体相邻的部分语义实体可以为基于用户画像确定的并且在知识图谱中与第一语义实体相邻的语义实体。关于基于用户画像确定的语义实体的定义,可参考前述步骤S511的描述,此处不再赘述。
在知识图谱中与第一语义实体相邻的部分语义实体不限于上述两种可行的实施方式,关于具体将在知识图谱中与第一语义实体相邻的语义实体中的哪部分语义实体作为第二语义实体,本申请实施例不做限制。
在另一种可能的情况中,第二语义实体也可以包括在知识图谱中与第一语义实体的语义关系路径距离小于第一距离阈值的语义实体。其中,两个语义实体之间的语义关系路径距离可以用两个语义实体在知识图谱中的语义关系路径所包括的语义实体的数量来衡量,语义关系路径距离可以等于两个语义实体在知识图谱子图中的最短语义关系路径所包含的语义实体的数量减一。
举例对语义关系路径距离进行说明。例如,知识图谱的一部分如图4B中的知识图谱子图513所示,则语义实体“篮球”与语义实体“迈克尔乔丹”之间存在两条语义关系路径,分别为“篮球—NBA—迈克尔乔丹”和“篮球—NBA—MVP—迈克尔乔丹”,最短语义关系路径为“篮球—NBA—迈克尔乔丹”,则确定语义实体“篮球”与语义实体“迈克尔乔丹”之间的语义关系路径距离为2。
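语义关系路径距离可以通过图上的最短路径搜索得到,下面给出一段基于广度优先搜索的简化示意代码,图中的边取自上述示例中的部分语义关系,仅用于说明计算过程:

```python
from collections import deque

def path_distance(edges, start, end):
    """计算两个语义实体之间的语义关系路径距离:最短语义关系路径所包含的语义实体数量减一。"""
    adjacency = {}
    for a, b in edges:
        adjacency.setdefault(a, set()).add(b)
        adjacency.setdefault(b, set()).add(a)
    queue, visited = deque([(start, 0)]), {start}
    while queue:
        node, dist = queue.popleft()
        if node == end:
            return dist
        for neighbor in adjacency.get(node, ()):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append((neighbor, dist + 1))
    return None  # 两个语义实体之间不存在语义关系路径

edges = [("篮球", "NBA"), ("NBA", "迈克尔乔丹"), ("NBA", "MVP"), ("MVP", "迈克尔乔丹")]
print(path_distance(edges, "篮球", "迈克尔乔丹"))  # 2
```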
进一步地,第二语义实体可以包括在知识图谱中与第一语义实体的语义关系路径距离小于第一距离阈值的部分语义实体。在一种可行的实施方式中,该在知识图谱中与第一语义实体的语义关系路径距离小于第一距离阈值的部分语义实体可以为在对话过程中使用频率高于第二频率阈值的并且在知识图谱中与第一语义实体的语义关系路径距离小于第一距离阈值的语义实体。关于使用频率的说明,可参考前述描述,此处不再赘述。在另一种可行的实施方式中,该在知识图谱中与第一语义实体的语义关系路径距离小于第一距离阈值的部分语义实体可以为基于用户画像确定的并且在知识图谱中与第一语义实体的语义关系路径距离小于第一距离阈值的语义实体。关于语义关系路径距离的说明,可参考前述描述,此处不再赘述。
与第一语义实体相关联的第二语义实体不限于上述情况,具体将知识图谱中的何种语义实体确定为与第一语义实体相关联的语义实体,本申请实施例不做限制。
结合图4B以及图6A-图6E对应的任一方法实施例可知,目标对话对应的知识图谱除了包括第一语义实体外,还包括根据与第一语义实体相关联的第二语义实体,第二语义实体起到了引导对话话题的作用,可以增强用户的对话体验。
二、用于实现终端设备上显示的对话视图和概念视图协同交互的对话交互方法的一些流程,这些流程可应用于目标对话用户界面中已经显示有对话数据和知识图谱子图的场景中,即已经进行了一轮或多轮对话的场景中。终端设备上显示的对话视图和概念视图协同交互的图形用户界面实施例可参见图4D-图4G实施例。
1、图4D实施例对应的对话交互方法的实现流程。
图4D实施例对应的一种对话交互方法的流程示意图可以如图7A所示,该流程可适用于仅由终端设备组成的对话***,具体包括如下步骤:
S711,终端设备检测到作用于第一对话数据的第一操作。
第一对话数据为目标对话用户界面中的对话视图中显示的任一对话数据。示例性地,第一对话数据例如可以为图4D所示的对话数据“那就帮我把酒店和机票一起订了吧”。
作用于第一对话数据的第一操作具体是指选中第一对话数据的操作,第一操作可以有多种形式。例如,第一操作可以是在对话视图中点击该第一对话数据的操作,第一操作也可以是在对话视图中双击该第一对话数据的操作,第一操作还可以是在对话视图中拖拽该第一对话数据的操作,等等,不限于这里的描述。对于第一操作具体是何种形式,本申请实施例不做限制。
S712,终端设备根据第一对话数据确定第三语义实体。
这里,第三语义实体为目标对话用户界面的概念视图中显示的并且与第一对话数据相关或对应的语义实体。
在一种可行的实施方式中,与第一对话数据相关或对应的语义实体可以包括第一对话数据中存在的语义实体。终端设备可以识别该第一对话数据中存在的语义实体,将其确定为第三语义实体。关于终端设备识别第一对话数据中存在的语义实体,可参考前述步骤S613中网络设备识别输入对话数据中存在的语义实体的方式,此处不再赘述。
在另一种可行的实施方式中,与第一对话数据相关或对应的语义实体也可以包括与第一对话数据中存在的语义实体相关联的语义实体。关于与第一对话数据中存在的语义实体相关联的语义实体的概念,可参考前述对与第一语义实体相关联的第二语义实体的描述,此处不再赘述。终端设备可以识别该第一对话数据中存在的语义实体,然后将概念视图中显示的知识图谱子图中与第一对话数据中存在的语义实体相关联的语义实体确定为第三语义实体。
在又一种可行的实施方式中,与第一对话数据相关或对应的语义实体可以包括第一对话数据中存在的语义实体和与第一对话数据中存在的语义实体相关联的语义实体。
在又一种可行的实施方式中,与第一对话数据相关或对应的语义实体还可以包括话题标签与第一对话数据对应的话题标签的相似度高于关联度阈值的语义实体,即第三语义实体对应的话题标签与第一对话数据对应的话题标签的相似度高于关联度阈值。终端设备可以确定第一对话数据对应的话题标签,以及,分别确定概念视图中显示的知识图谱子图中的各个语义实体对应的话题标签,然后将该各个语义实体对应的话题标签与第一对话数据对应的话题标签进行相似度匹配,以确定话题标签与第一对话数据对应的话题标签的相似度高于关联度阈值的语义实体,进而将话题标签与第一对话数据对应的话题标签的相似度高于关联度阈值的语义实体确定为第三语义实体。在一种具体实现方式中,终端设备可以通过预先训练得到的话题识别器确定第一对话数据对应的话题标签和概念视图中显示的知识图谱子图中的各个语义实体对应的话题标签。
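下面给出一段根据话题标签相似度确定第三语义实体的简化示意代码。其中,话题识别器以直接给定的话题标签集合代替,相似度采用Jaccard系数衡量,语义实体、话题标签与关联度阈值均为说明用途的假设,并非对本申请所采用的话题识别器或相似度计算方式的限定:

```python
def jaccard(a, b):
    """以Jaccard系数衡量两组话题标签之间的相似度。"""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def find_related_entities(dialog_topics, entity_topics, threshold):
    """返回话题标签与第一对话数据的话题标签相似度高于关联度阈值的语义实体,作为第三语义实体。"""
    return [entity for entity, topics in entity_topics.items()
            if jaccard(dialog_topics, topics) > threshold]

dialog_topics = ["旅游", "出行", "预订"]  # 假设话题识别器对第一对话数据给出的话题标签
entity_topics = {"巴塞罗那": ["旅游", "城市"], "酒店": ["预订", "出行", "旅游"], "篮球": ["体育"]}
print(find_related_entities(dialog_topics, entity_topics, 0.3))  # ['酒店']
```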
与第一对话数据相关或对应的语义实体不限于上述描述,具体以概念视图中显示的何种语义实体作为与第一对话数据相关或对应的语义实体取决于对话***中的有关于概念视图中的语义实体与对话视图中的对话数据之间的对应关系的具体设计,本申请实施例不做限制。
S713,终端设备在目标对话用户界面的概念视图中突出显示第三语义实体。
这里,终端设备在目标对话用户界面的概念视图中突出显示第三语义实体可以是指以区别于目标对话用户界面中的概念视图中显示的其他语义实体的方式在目标对话用户界面的概念视图中显示该第三语义实体,该其他语义实体是指目标对话用户界面中的概念视图中显示的除去该第三语义实体以外的语义实体。
终端设备在目标对话用户界面的概念视图中突出显示第三语义实体的具体方式,可参考前述图4D实施例的有关描述,此处不再赘述。
示例性地,终端设备在目标对话用户界面的概念视图中突出显示第三语义实体时的目标对话用户界面可以如图4D中的D2所示。
图4D实施例对应的另一种对话交互方法的流程示意图可以如图7B所示,该流程可适用于由终端设备和网络设备组成的对话***,具体包括如下步骤:
S721,终端设备检测到作用于第一对话数据的第一操作。
这里,步骤S721的相关描述可参考步骤S711,此处不再赘述。
S722,终端设备向网络设备发送语义实体确认请求,语义实体确认请求用于请求获取待突出显示的语义实体,语义实体确认请求包括第一对话数据,网络设备接收语义实体确认请求。
S723,网络设备根据第一对话数据确定第三语义实体。
这里,有关于第三语义实体以及网络设备根据第一对话数据确定第三语义实体的具体实现方式,可参考步骤S712的描述,此处不再赘述。
S724,网络设备将第三语义实体发送给终端设备,终端设备接收第三语义实体。
S725,终端设备在目标对话用户界面的概念视图中突出显示第三语义实体。
这里,步骤S725的相关描述,可参考步骤S713,此处不再赘述。
结合图4D,以及,图7A-图7B对应的任一方法实施例可知,在进行了一轮或多轮对话后,目标对话用户界面中已经显示有对话数据和知识图谱子图,当对话视图中的对话数据被选中时,终端设备会在目标对话用户界面的概念视图中突出显示与该对话数据相关的语义实体,实现了对话视图与概念视图之间的协同交互,有助于帮助用户定位到具体的语义实体,提升了用户体验。
2、图4E实施例对应的对话交互方法的实现流程。
图4E实施例对应的一种对话交互方法的流程示意图可以如图8A所示,该流程可适用于仅由终端设备组成的对话***,具体包括如下步骤:
S811,终端设备检测到作用于第四语义实体的第二操作。
第四语义实体为目标对话用户界面中的概念视图中显示的语义实体。示例性地,第四语义实体例如可以为图4E所示的语义实体“巴塞罗那”。
作用于第四语义实体的第二操作具体是指选中第四语义实体的操作,第二操作可以有多种形式。例如,第二操作可以是在概念视图中点击该第四语义实体的操作,第二操作也可以是在概念视图中双击第四语义实体的操作,第二操作还可以是在概念视图中以第四语义实体为中心画圈的操作,第二操作还可以是在概念视图中拖拽该第四语义实体的操作,第二操作还可以是是语音控制第四语义实体的操作(即用户说出查看第四语义实体的语音指令),等等,不限于这里的描述。对于第二操作具体是何种形式,本申请实施例不做限制。
S812,终端设备根据第四语义实体确定第二对话数据。
这里,第二对话数据为与第四语义实体相关或对应的历史对话数据。历史对话数据是指已经在对话***中产生的目标对话的对话数据。这里的历史对话数据为目标对话对应的本次对话的历史对话数据,其具体是指该已经进行的一轮或多轮对话的对话数据。
在一种可行的实施方式中,与第四语义实体相关或对应的历史对话数据可以为存在该第四语义实体的历史对话数据,即第四语义实体存在于第二对话数据中。终端设备可以在已经进行的一轮或多轮对话的对话数据中查找存在第四语义实体的对话数据,将其确定为第二对话数据。具体实现中,终端设备可以将已经进行的一轮或多轮对话的对话数据对应的文本数据与第四语义实体进行比较,从而确定存在第四语义实体的历史对话数据。
在另一种可行的实施方式中,与第四语义实体相关或对应的历史对话数据也可以为存在与第四语义实体相关联的语义实体的历史对话数据,即与第四语义实体相关联的语义实体存在于第二对话数据中,关于与第四语义实体相关联的语义实体的概念,可参考前述对与第一语义实体相关联的第二语义实体的描述。终端设备可以在已经进行的一轮或多轮对话的对话数据中查找存在与第四语义实体相关联的语义实体的对话数据,将其确定为第二对话数据。
在又一种可行的实施方式中,与第四语义实体相关或对应的历史对话数据还可以为话题标签与第四语义实体对应的话题标签的相似度高于关联度阈值的历史对话数据,即第二对话数据对应的话题标签与第四语义实体对应的话题标签的相似度高于关联度阈值。终端设备可以确定第四语义实体的话题标签,和各历史对话数据对应的话题标签,然后将各历史对话数据的话题标签与第四语义实体的话题标签进行相似度匹配,以确定话题标签与第四语义实体对应的话题标签的相似度高于关联度阈值的历史对话数据,进而将话题标签与第四语义实体对应的话题标签的相似度高于关联度阈值的历史对话数据确定为第二对话数据。关于确定第四语义实体的话题标签和各历史对话数据对应的话题标签的具体实现方式,可参考前述步骤S712中确定第一对话数据对应的话题标签和概念视图中显示的知识图谱子图中的各个语义实体对应的话题标签的方式,此处不再赘述。
与第四语义实体相关或对应的历史对话数据不限于上述描述,具体以何种历史对话数据作为与第四语义实体相关或对应的对话取决于对话***中的有关于概念视图中的语义实体与对话视图中的对话数据之间的对应关系的具体设计,本申请实施例不做限制。
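以“第四语义实体存在于第二对话数据中”这一实施方式为例,下面给出一段在历史对话数据中查找第二对话数据的简化示意代码,其中的历史对话数据为示例文本,仅用于说明查找过程:

```python
def find_dialog_data(history, entity):
    """在已经产生的历史对话数据中查找存在第四语义实体的对话数据,将其确定为第二对话数据。"""
    return [utterance for utterance in history if entity in utterance]

history = [
    "97-98赛季NBA的MVP是谁?",
    "是迈克尔乔丹",
    "好想去看巴萨的比赛",
    "巴塞罗那最近气候不错,温度和湿度适宜",
]
print(find_dialog_data(history, "巴塞罗那"))  # ['巴塞罗那最近气候不错,温度和湿度适宜']
```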
S813,终端设备在目标对话用户界面的对话视图中显示第二对话数据。
示例性地,终端设备在目标对话用户界面的对话视图中显示第二对话数据时的目标对话用户界面可以如图4E中的E2所示。
图4E实施例对应的另一种对话交互方法的流程示意图如图8B所示,该流程可适用于由终端设备和网络设备组成的对话***,具体包括如下步骤:
S821,终端设备检测到作用于第四语义实体的第二操作。
这里,步骤S821的相关描述可参考步骤S811,此处不再赘述。
S822,终端设备将第四语义实体发送给网络设备,网络设备接收第四语义实体。
S823,网络设备根据第四语义实体确定第二对话数据。
这里,有关于第二对话数据的描述以及网络设备根据第四语义实体确定第二对话数据 的具体实现方式,可参考步骤S812的描述,此处不再赘述。
S824,网络设备将第二对话数据发送给终端设备,终端设备接收第二对话数据。
S825,终端设备在目标对话用户界面的对话视图中显示第二对话数据。
示例性地,终端设备在目标对话用户界面的对话视图中显示第二对话数据时的目标对话用户界面可以如图4E中的E2所示。
结合图4E,以及,图8A-图8B对应的任一方法实施例可知,在进行了一轮或多轮对话后,目标对话用户界面中已经显示有对话数据和知识图谱子图,当概念视图中的语义实体被选中时,终端设备会显示与该语义实体相关的对话数据,实现了对话视图与概念视图之间的协同交互,有助于用户定位历史对话内容,提升了用户体验。
3、图4F实施例对应的对话交互方法的实现流程。
图4F实施例对应的一种对话交互方法的流程示意图可以如图9A所示,该流程可适用于仅由终端设备组成的对话***,具体包括如下步骤:
S911,终端设备检测到作用于第四语义实体的第二操作。
这里,步骤S911的相关描述可参考步骤S811,此处不再赘述。
S912,终端设备根据第四语义实体确定第二对话数据。
这里,有关于第二对话数据的描述,可参考步骤S812的描述,此处不再赘述。产生时间最晚的第二对话数据是指与第四语义实体的话题关联度高于关联度阈值的历史对话数据中最新的历史对话数据。该产生时间最晚的第二对话数据可以为一个,也可以为多个。
具体实现中,终端设备可以根据步骤S812描述的方式确定第二对话数据。
S913,终端设备在目标对话用户界面的概念视图中显示第二对话数据的摘要信息。
这里,第二对话数据的摘要信息为对第二对话数据的内容概括或内容总结,其用于简明扼要地描述第二对话数据,反映第二对话数据的主要内容。
具体实现中,终端设备可以识别第二对话数据的主要内容,以确定第二对话数据的摘要信息。其中,识别第二对话数据的主要内容的方法,本申请不做限制。例如,可通过预先训练得到的摘要信息提取模型识别第二对话数据的主要内容。
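作为一种简化的示意,下面给出一段基于模板匹配提取关键字段作为摘要信息的Python代码,其中的模板与字段名仅针对图4F示例作出假设,实际***可采用预先训练得到的摘要信息提取模型,本示例不构成限定:

```python
import re

def extract_summary(dialog_data):
    """从第二对话数据中抽取酒店名称、联系方式等关键字段,作为摘要信息。"""
    patterns = {"酒店名称": r"酒店的名称是([^,。]+)", "联系方式": r"联系方式是([^,。]+)"}
    summary = {}
    for field, pattern in patterns.items():
        match = re.search(pattern, dialog_data)
        if match:
            summary[field] = match.group(1)
    return summary

text = "好的,帮您预订了三天的酒店,就住在球场附近,酒店的名称是巴萨X酒店,联系方式是1234567。"
print(extract_summary(text))  # {'酒店名称': '巴萨X酒店', '联系方式': '1234567'}
```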
示例性地,终端设备在目标对话用户界面的概念视图中显示第二对话数据的摘要信息时的目标对话用户界面可以如图4F中的F2所示。
进一步地,终端设备可以在目标对话用户界面的概念视图中显示产生时间最晚的第二对话数据的摘要信息。
图4F实施例对应的另一种对话交互方法的流程示意图如图9B所示,该流程可适用于由终端设备和网络设备组成的对话***,具体包括如下步骤:
S921,终端设备检测到作用于第四语义实体的第二操作。
这里,步骤S921的相关描述可参考步骤S811,此处不再赘述。
S922,终端设备将第四语义实体发送给网络设备,网络设备接收第四语义实体。
S923,网络设备根据第四语义实体确定第二对话数据。
这里,有关于第二对话数据以及网络设备根据第四语义实体确定第二对话数据的具体实现方式,可参考步骤S912的描述,此处不再赘述。
S924,网络设备将第二对话数据发送给终端设备,终端设备接收第二对话数据。
S925,终端设备在目标对话用户界面的概念视图中显示第二对话数据的摘要信息。
这里,步骤S925的相关描述可参考步骤S913,此处不再赘述。
结合图4F,以及,图9A-图9B对应的任一方法实施例可知,在进行了一轮或多轮对话后,目标对话用户界面中已经显示有对话数据和知识图谱子图,当概念视图中的语义实体被选中时,终端设备会显示与该语义实体相关的对话数据的摘要信息,有助于帮助用户快速了解与语义实体相关的对话数据的主要内容。
4、图4G实施例对应的对话交互方法的实现流程。
图4G实施例对应的一种对话交互方法的流程示意图可以如图10A所示,该流程可适用于由终端设备和网络设备组成的对话***,具体包括如下步骤:
1)触发显示任务语义实体对应的关键信息的步骤,触发显示任务语义实体对应的关键信息的步骤包括步骤S1011~S1012。
S1011,终端设备检测到作用于任务语义实体的第三操作。
这里,任务语义实体为目标对话用户界面中的概念视图中显示的语义实体,一个任务语义实体可用于触发一个或多个对话任务,任务语义实体用于指示对话***的功能边界。具体地,任务语义实体可以为用于描述各种出行工具的语义实体,如飞机、火车、汽车等,也可以为与各种出行工具有关的语义实体,如机票、车票、船票等,用于描述各种出行工具的语义实体或与各种出行工具有关的语义实体可以用于指示对话***具备的与出行有关的对话任务,如预订机票/车票/船票,取消机票/车票/船票等。任务语义实体也可以为用于描述某种预期发生的事务的语义实体,如旅行、会议、餐饮等,也可以为与该预期发生的事务相关的语义实体,如酒店、会议室、各种旅游景点或餐饮店的名称等,用于描述某种预期发生的事务的语义实体或与该预期发生的事务相关的语义实体可以用于指示对话***具备的“计划”类对话任务,如预订酒店、预订会议室、预订门票、导航、预订酒店包间等。任务语义实体不限于这里的描述,具体哪些语义实体可以用作任务语义实体,以及任务语义实体对应对话***中的哪一个或多个对话任务,本申请不做限制。
作用于任务语义实体的第三操作具体是指选中任务语义实体的操作。第三操作可以有多种形式。第三操作的具体形式可以参考作用于第四语义实体的第二操作的形式,此处不再赘述。对于第三操作具体是何种形式,本申请实施例不做限制。
S1012,终端设备在目标对话用户界面的概念视图中显示任务语义实体对应的关键信息。
任务语义实体对应的关键信息是指该任务语义实体对应的对话任务的各个槽位和槽位上的取值。其中,槽位是指对话任务对应的各种核心信息(如时间、地理位置)等,槽位上的取值为该核心信息的具体内容。例如,任务语义实体对应的对话任务为订机票,那么订机票这一对话任务的槽位可包括“航空公司”、“起飞时间”、“座位号”、“登机口”等核心信息,槽位上的取值可包括航空公司的具体内容、起飞时间的具体内容、座位号的具体内容以及登机口的具体内容,等等。
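任务语义实体对应的关键信息可以采用“槽位-取值”的结构来组织,下面给出一段以预订机票任务为例的简化示意代码,其中的槽位名称与取值均为示例假设,并非对关键信息具体表示方式的限定:

```python
# 任务语义实体"机票"对应的对话任务及其关键信息(槽位与槽位上的取值)
ticket_task = {
    "任务语义实体": "机票",
    "对话任务": "预订机票",
    "槽位": {
        "航空公司": None,        # 尚未填充取值的槽位
        "起飞时间": "h1时m1分",
        "座位号": "待选",
        "登机口": None,
    },
}

def fill_slot(task, slot, value):
    """根据用户意图为对话任务的某个槽位填充取值,例如“我想订国航的航班”对应填充航空公司槽位。"""
    task["槽位"][slot] = value
    return task

fill_slot(ticket_task, "航空公司", "国航")
print(ticket_task["槽位"]["航空公司"])  # 国航
```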
示例性地,终端设备在目标对话用户界面的概念视图中显示任务语义实体对应的关键信息时的目标对话用户界面可以如图4G中的G2所示。
可选地,在触发显示任务语义实体对应的关键信息之后,还可以触发该任务语义实体 对应的对话任务。
2)触发任务语义实体对应的对话任务的步骤,触发任务语义实体对应的对话任务的步骤包括步骤S1013~S1015。
S1013,终端设备检测到作用于任务语义实体对应的关键信息的第四操作,并获取针对任务语义实体的关键信息的用户意图。
作用于关键信息的第四操作是指选中任务语义实体对应的关键信息的操作。第四操作可以有多种形式。第四操作的具体形式可以参考作用于第四语义实体的第二操作的形式,此处不再赘述。对于第四操作具体是何种形式,本申请实施例不做限制。
在一种可能的实现方式中,可以通过在检测到第四操作之后获取用户输入的对话数据以获取针对该关键信息的用户意图,其中,该用户输入的对话数据可以为用户输入的语音数据,也可以为用户输入的文本数据。例如,图4G所示的点击操作即为第四操作,通过获取对话数据“我想订国航”即获取到针对关键信息的用户意图。
在另一种可能的实现方式中,也可以根据第四操作获取针对该关键信息的用户意图。例如,该第四操作为语音控制的操作(即用户说出与关键信息有关的语音指令),则可以获取该语音操作的操作对应的语音内容,以获取针对该关键信息的用户意图。
S1014,终端设备向网络设备发送对话任务执行请求,对话任务执行请求用于请求网络设备执行符合用户意图的对话任务。
具体地,终端设备可以将该针对任务语义实体的关键信息的用户意图所对应的对话数据发送给网络设备。
例如,该用户意图为“将会议时间从上午九点改为上午十点”,那么根据该用户意图确定的符合用户意图的对话任务为“修改会议时间”,对话任务的具体内容为“修改会议时间为上午十点”。
S1015,网络设备执行符合用户意图的对话任务。
具体地,网络设备根据该针对任务语义实体的关键信息的用户意图所对应的对话数据执行符合用户意图的对话任务。
可选地,在执行符合用户意图的对话任务后,终端设备还可以更新任务语义实体对应的关键信息。
3)更新任务语义实体对应的关键信息的步骤,更新任务语义实体对应的关键信息的步骤包括S1016~S1017。
S1016,网络设备将执行符合用户意图的对话任务的结果发送给终端设备,终端设备接收执行符合用户意图的对话任务的结果。
S1017,终端设备根据执行符合用户意图的对话任务的结果,在目标对话用户界面的概念视图中更新任务语义实体对应的关键信息。
这里,终端设备在目标对话用户界面的概念视图中更新任务语义实体对应的关键信息,是指终端设备根据执行符合用户意图的对话任务的结果,在该任务语义实体对应的关键信息中添加该执行符合用户意图的对话任务的结果,或者,利用该符合用户意图的对话任务的结果替换与该结果相对应的原始结果。
例如,如图4G中的G3所示,原始结果为“航班号:xx1起飞时间:h1时m1分”,经 过执行符合用户意图的对话任务(即航班改签)后,得到的结果为“航班号:国航xx2起飞时间:h2时m2分”,则利用执行该符合用户意图的对话任务的结果替换与该结果相对应的原始结果,即利用“航班号:国航xxx起飞时间:h2时m2分”替换“航班号:xx1起飞时间:h1时m1分”,替换后的目标对话用户界面如图4G中的G4所示。
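上述替换过程可以理解为利用对话任务的执行结果对关键信息中对应字段的覆盖更新,下面给出一段简化的示意代码,其中的字段与取值沿用上文示例中的假设:

```python
def update_key_info(key_info, task_result):
    """根据执行符合用户意图的对话任务得到的结果,替换关键信息中与之对应的原始取值。"""
    key_info.update(task_result)
    return key_info

key_info = {"航班号": "xx1", "起飞时间": "h1时m1分", "座位": "待选"}
task_result = {"航班号": "国航xx2", "起飞时间": "h2时m2分"}  # 改签后得到的结果
print(update_key_info(key_info, task_result))
# {'航班号': '国航xx2', '起飞时间': 'h2时m2分', '座位': '待选'}
```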
图4G实施例对应的另一种对话交互方法的流程示意图如图10B所示,该流程可适用于仅由终端设备组成的对话***,具体包括如下步骤:
S1021,终端设备检测到作用于任务语义实体的第三操作。
S1022,终端设备在目标对话用户界面的概念视图中显示任务语义实体对应的关键信息。
S1023,终端设备检测到作用于任务语义实体对应的关键信息的第四操作,并获取针对任务语义实体的关键信息的用户意图。
这里,步骤S1021~S1023的相关描述可参考步骤S1011~S1013,此处不再赘述。
S1024,终端设备执行符合用户意图的对话任务。
这里,有关于符合用户意图的对话任务的具体描述,以及,终端设备执行符合用户意图的对话任务的具体实现方式,可参考步骤S1014~S1015的描述,此处不再赘述。
S1025,终端设备根据执行符合用户意图的对话任务的结果,在目标对话用户界面的概念视图中更新任务语义实体对应的关键信息。
这里,终端设备根据执行符合用户意图的对话任务的结果,在目标对话用户界面的概念视图中更新任务语义实体对应的关键信息的具体实现方式,可参考步骤S1017的描述,此处不再赘述。
结合图4G,以及,图10A-图10B对应的任一方法实施例可知,概念视图中显示的知识图谱子图除了包括对话视图中的对话数据中存在的语义实体外,还包括了用于触发对话任务的任务语义实体,这些任务语义实体起到了指示对话***的功能边界的作用,使得用户可以根据这些任务语义实体获知对话***具备的功能。
三、用于实现其他功能的对话交互方法的实现流程,这些流程可应用于目标对话用户界面中已经显示有对话数据和知识图谱子图的场景中,即已经进行了一轮或多轮对话的场景中。终端设备上实现的其他功能的图形用户界面实施例可参见图4C和图4H实施例。
1、用于实现对话***发起对话数据的对话交互方法的流程。对话***发起对话数据时终端设备上显示的图形用户界面实施例可参见图4H实施例。
图4H实施例对应的一种对话交互方法的示意图可以如图11A所示,该流程可适用于由网络设备和终端设备组成的基于人机交互的对话***,具体包括如下步骤:
S1111,网络设备检测到在知识图谱中存在语义关系的第五语义实体和第六语义实体,第五语义实体存在于历史对话数据中,第六语义实体不存在于历史对话数据中。
历史对话数据是指已经在对话***中产生的目标对话的对话数据。这里的历史对话数据可以是指目标对话对应的本次对话的历史对话数据,关于一次对话的定义和说明,可参考前述描述,此处不再赘述。这里的历史对话数据也可以是目标对话对应的所有历史对话数据(即本次对话的历史对话数据和产生于本次对话之前的历史对话数据)。
这里,在知识图谱中存在语义关系的第五语义实体和第六语义实体可以有如下情况:
第一种情况,在历史对话数据中存在一个语义实体,该语义实体与另一个不存在于历史对话数据中的语义实体存在语义关系,那么,该语义实体可称为第五语义实体,该另一个不存在于历史对话数据中的语义实体可称为第六语义实体。
第二种情况,在历史对话数据中存在至少两个语义实体,该至少两个语义实体与历史对话数据中的同一个语义实体存在语义关系,并且,该至少两个语义实体均与另一个不存在于历史对话数据中的语义实体存在语义关系,那么,该至少两个语义实体可称为第五语义实体,该另一个不存在于历史对话数据中的语义实体可称为第六语义实体。
举例来对上述两种情况的第五语义实体和第六语义实体进行说明。参见图4C中的C2,历史对话数据为“97-98赛季NBA的MVP是谁?”和“是迈克尔乔丹”,存在于历史对话数据中的语义实体为“NBA”、“MVP”、“迈克尔乔丹”。
根据上述第一种情况的定义,语义实体“NBA”与语义实体“篮球”、“迈克尔乔丹”存在语义关系,语义实体“篮球”不存在于历史对话数据中,则语义实体“NBA”为第五语义实体,语义实体“篮球”为与语义实体“NBA”存在语义关系的第六语义实体;语义实体“MVP”与语义实体“詹姆斯哈登”、“迈克尔乔丹”、“梅西”、“西甲”存在语义关系,语义实体“詹姆斯哈登”、“梅西”、“西甲”不存在于历史对话数据中,则语义实体“MVP”为第五语义实体,语义实体“詹姆斯哈登”、“梅西”、“西甲”为与语义实体“MVP”存在语义关系的第六语义实体。
根据上述第二种情况的定义,语义实体“NBA”与“迈克尔乔丹”均与“MVP”存在语义关系,假设在知识图谱中,语义实体“NBA”与“迈克尔乔丹”还与语义实体“比尔卡赖特”存在语义关系,则语义实体“NBA”和“迈克尔乔丹”为第五语义实体,语义实体“比尔卡赖特”为与语义实体“NBA”和“迈克尔乔丹”存在语义关系的第六语义实体。
不限于上述情况,在知识图谱中存在语义关系的第五语义实体和第六语义实体还可以为其他情况,本申请实施例不做限制。
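以上述第一种情况为例,下面给出一段检测第五语义实体与第六语义实体的简化示意代码,其中的知识图谱三元组与历史对话数据中的语义实体均为示例假设,检测得到的结果可进一步输入Encoder-Decoder模型以生成第三对话数据:

```python
def detect_new_entities(knowledge_graph, history_entities):
    """检测与历史对话数据中的语义实体存在语义关系、但本身不存在于历史对话数据中的语义实体,
    返回(第五语义实体, 语义关系, 第六语义实体)三元组的列表。"""
    history = set(history_entities)
    candidates = []
    for head, relation, tail in knowledge_graph:
        if head in history and tail not in history:
            candidates.append((head, relation, tail))
        elif tail in history and head not in history:
            candidates.append((tail, relation, head))
    return candidates

kg = [("NBA", "包含球队", "芝加哥公牛"), ("迈克尔乔丹", "效力于", "芝加哥公牛"), ("NBA", "评选", "MVP")]
history_entities = ["NBA", "MVP", "迈克尔乔丹"]
print(detect_new_entities(kg, history_entities))
# [('NBA', '包含球队', '芝加哥公牛'), ('迈克尔乔丹', '效力于', '芝加哥公牛')]
```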
S1112,网络设备根据第五语义实体、第六语义实体以及第五语义实体与第六语义实体之间的语义关系,生成第三对话数据。
具体地,网络设备可以将第五语义实体、第六语义实体以及第五语义实体与第六语义实体之间的语义关系输入预先训练得到的Encoder-Decoder模型中,将Encoder-Decoder模型输出的数据确定为第三对话数据。第三对话数据为对话***主动发起的对话数据。
S1113,网络设备将第三对话数据发送给终端设备,终端设备接收第三对话数据。
S1114,终端设备在目标对话用户界面的对话视图中显示第三对话数据。
示例性地,终端设备在目标对话用户界面的对话视图中显示第三对话数据可参考图4C中的C2,其中,对话数据“哈登和乔丹都效力于芝加哥公牛队”为第三对话数据。
图4H实施例对应的另一种对话交互方法的示意图可以如图11B所示,该流程可适用于仅由终端设备组成的基于人机交互的对话***,具体包括如下步骤:
S1121,终端设备在知识图谱中检测到存在语义关系的第五语义实体和第六语义实体,第五语义实体存在于历史对话数据中,第六语义实体不存在于历史对话数据中。
这里,有关于第五语义实体、第六语义实体以及历史对话数据的定义和说明,可参考步骤S1111的有关描述,此处不再赘述。
S1122,终端设备根据第五语义实体、第六语义实体以及第五语义实体与第六语义实体 之间的语义关系,生成第三对话数据。
这里,步骤S1122的具体实现方式,可参考步骤S1112的具体实现方式,此处不再赘述。
S1123,终端设备在目标对话用户界面的对话视图中显示第三对话数据。
结合图4C,以及,图11A-图11B对应的任一方法实施例可知,基于人机交互的对话***还可以根据历史对话数据中各个概念之间的关联关系,主动发起对话数据,对话***主动发起的第三对话数据起到了引导话题的作用,使得对话内容更加丰富。
2、用于实现删除语义实体的对话交互方法的实现流程。
删除语义实体的一种对话交互方法的流程可以如图12A所示,该流程可适用于由网络设备和终端设备组成的对话***,具体包括如下步骤:
S1211,网络设备生成目标对话对应的知识图谱子图。
S1212,网络设备将目标对话对应的知识图谱子图发送给终端设备,终端设备接收目标对话对应的知识图谱子图。
这里,步骤S1211~S1212的具体实现方式可参考前述步骤S611~S615或步骤S631~S634或步骤S641~S643的描述,此处不再赘述。
S1213,在目标对话对应的知识图谱子图中的语义实体的数量大于第一数量的情况下,终端设备在目标对话对应的知识图谱子图中删除一个或多个语义实体。
这里,第一数量可以为终端设备上显示的概念视图中能够显示的语义实体的最大数量,第一数量的数值与终端设备上显示的概念视图的尺寸有关,其中,终端设备上显示的概念视图的尺寸越大,第一数量的数值则越大。
本申请实施例中,在目标对话对应的知识图谱子图中删除语义实体的过程中,终端设备可以删除目标对话对应的知识图谱子图中的以下一种或多种语义实体:
a1、未出现在历史对话数据中的语义实体,其中,历史对话数据可以是指本次对话的历史对话数据,也可以是指已经产生在对话***中的目标对话对应的所有历史对话数据。未出现在历史对话数据中的语义实体是指这些语义实体未存在于历史对话数据中,也即历史对话数据中未涉及的语义实体。
a2、在目标对话对应的知识图谱子图中与第七语义实体的语义关系路径距离大于第二距离阈值的语义实体,第七语义实体为对话视图中显示的最新的对话数据中存在的语义实体,该对话视图中显示的最新的对话数据为目标对话对应的在对话***中产生时间最晚的一个或多个对话数据。其中,有关于语义关系路径距离的相关定义,可参考前述步骤S614的有关描述,此处不再赘述。
a3、目标对话对应的知识图谱子图中在对话过程中使用频率最低的一个或多个语义实体,关于对话过程中的使用频率的定义和说明,可参考前述的有关描述,此处不再赘述。
不限于上述情况,对于在目标对话对应的知识图谱子图中删除语义实体的过程中,终端设备具体删除目标对话对应的知识图谱子图中的哪些语义实体,本申请实施例不做限制。
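以上述a1与a3两种删除策略的组合为例,下面给出一段简化的示意代码,其中的语义实体、使用频率与第一数量均为示例假设,并非对删除逻辑的限定:

```python
def prune_entities(entities, history_entities, usage_frequency, max_count):
    """当知识图谱子图中的语义实体数量大于第一数量(max_count)时,
    先删除未出现在历史对话数据中的语义实体(策略a1),若仍超出,再删除使用频率最低的语义实体(策略a3)。"""
    kept = list(entities)
    if len(kept) > max_count:
        kept = [e for e in kept if e in history_entities] or kept
    if len(kept) > max_count:
        kept = sorted(kept, key=lambda e: usage_frequency.get(e, 0), reverse=True)[:max_count]
    return kept

entities = ["篮球", "NBA", "MVP", "迈克尔乔丹", "巴塞罗那", "酒店"]
history_entities = {"NBA", "MVP", "迈克尔乔丹", "巴塞罗那", "酒店"}
usage = {"NBA": 3, "MVP": 2, "迈克尔乔丹": 1, "巴塞罗那": 5, "酒店": 4}
print(prune_entities(entities, history_entities, usage, 4))
# ['巴塞罗那', '酒店', 'NBA', 'MVP'],即删除了“篮球”和使用频率最低的“迈克尔乔丹”
```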
S1214,终端设备在目标对话用户界面的概念视图中显示删除语义实体后的目标对话对应的知识图谱子图。
示例性地,终端设备在目标对话用户界面的概念视图中显示删除语义实体后的目标对话对应的知识图谱子图时的目标对话用户界面可以如图4C中的C4所示。
删除概念视图中显示的语义实体的另一种对话交互方法的流程可以如图12B所示,该方法可适用于由网络设备和终端设备组成的对话***,具体包括如下步骤:
S1221,网络设备生成目标对话对应的知识图谱子图。
这里,步骤S1221的具体实现方式可参考前述步骤S611~S614或步骤S631~S634或步骤S641~S642的描述,此处不再赘述。
S1222,在目标对话对应的知识图谱子图中的语义实体的数量大于第一数量的情况下,网络设备在目标对话对应的知识图谱子图中删除一个或多个语义实体。
这里,在目标对话对应的知识图谱子图中删除语义实体的过程中,网络设备可以删除的目标对话对应的知识图谱子图中的语义实体可参考步骤S1213的描述,此处不再赘述。
S1223,网络设备将删除语义实体后的目标对话对应的知识图谱子图发送给终端设备,终端设备接收删除语义实体后的目标对话对应的知识图谱子图。
S1224,终端设备在目标对话用户界面的概念视图中显示删除语义实体后的目标对话对应的知识图谱子图。
示例性地,终端设备在目标对话用户界面的概念视图中显示删除语义实体后的目标对话对应的知识图谱子图时的目标对话用户界面可以如图4C中的C4所示。
删除概念视图中显示的语义实体的又一种对话交互方法的流程可以如图12C所示,该流程可适用于仅由终端设备组成的对话***,具体包括如下步骤:
S1231,终端设备生成目标对话对应的知识图谱子图。
这里,步骤S1231的具体实现方式可参考前述步骤S621~S623或步骤S651~S652的描述,此处不再赘述。
S1232,在目标对话对应的知识图谱子图中的语义实体的数量大于第一数量的情况下,终端设备在目标对话对应的知识图谱子图中删除一个或多个语义实体。
S1233,终端设备在目标对话用户界面的概念视图中显示删除语义实体后的目标对话对应的知识图谱子图。
这里,步骤S1232~S1233的具体实现方式,可参考前述步骤S1213~S1214的描述,此处不再赘述。
结合图4C,以及,图12A-图12C对应的任一方法实施例可知,当目标对话对应的知识图谱子图中的语义实体的数量过多时,通过删除目标对话对应的知识图谱子图中的语义实体,实现了对语义实体的动态删除,保持了目标对话用户界面的简洁。
3、用于实现调整语义实体在概念视图中展示方式的对话交互方法的流程。
调整语义实体在概念视图中的展示方式的一种对话交互方法的流程可以如图13A所示,该流程可适用于由网络设备和终端设备组成的对话***,具体包括如下步骤:
S1311,网络设备生成目标对话对应的知识图谱子图。
S1312,网络设备将目标对话对应的知识图谱子图发送给终端设备,终端设备接收目标对话对应的知识图谱子图。
这里,步骤S1311~S1312的具体实现方式可参考前述步骤S611~S615或步骤S631~S634 或步骤S641~S643的描述,此处不再赘述。
S1313,在目标对话对应的知识图谱子图中的语义实体的数量大于第二数量的情况下,终端设备在目标对话用户界面的概念视图中以密集紧凑的方式显示目标对话对应的知识图谱子图中的语义实体。
这里,第二数量小于步骤S1213中的第一数量。在目标对话用户界面的概念视图中以密集紧凑的方式显示目标对话对应的知识图谱子图中的语义实体具体是指,通过改变语义实体在概念视图中占据的区域的尺寸、语义实体在概念视图中占据的区域的位置、两个语义实体在概念视图中的距离中的一种或多种方式,使得更多的语义实体能够得以完全显示在目标对话用户界面的概念视图中。
示例性地,在目标对话用户界面的概念视图中以密集紧凑的方式显示目标对话对应的知识图谱子图中的语义实体可以是如图4C中的C3或C4所示的以平行铺设的方式显示目标对话对应的知识图谱子图。
调整语义实体在概念视图中的展示方式的另一种对话交互方法的流程可以如图13B所示,该流程可适用于仅由终端设备组成的对话***,具体包括如下步骤:
S1321,终端设备生成目标对话对应的知识图谱子图。
这里,步骤S1321的具体实现方式可参考前述步骤S621~S623或步骤S651~S652的描述,此处不再赘述。
S1322,在目标对话对应的知识图谱子图中的语义实体的数量大于第二数量的情况下,终端设备在目标对话用户界面的概念视图中以密集紧凑的方式显示目标对话对应的知识图谱子图中的语义实体。
这里,步骤S1322的具体实现方式,可参考步骤S1313的描述,此处不再赘述。
结合图4C,以及图13A-图13B对应的任一方法实施例可知,当目标对话对应的知识图谱子图中的语义实体的数量较多时,通过在目标对话用户界面的概念视图中以密集紧凑的方式显示目标对话对应的知识图谱子图中的语义实体,保证了概念视图的可视化效果。
上述详细描述了本申请的方法,为了更好地实施本申请的方法,下面还提供了本申请的其他装置。
参见图14,图14是本申请实施例提供的一种网络设备的结构框图,如图所示,该网络设备1400可包括处理器1401、存储器1402以及通信接口1403和任意其他类似或合适的部件,这些部件可在一个或多个通信总线上通信,该总线可以为内存总线、外设总线,等等。
处理器1401可以是通用处理器,例如中央处理器(central processing unit,CPU),处理器1401还可包括硬件芯片,上述硬件芯片可以是以下一种或多种的组合:专用集成电路(application specific integrated circuit,ASIC)、现场可编程逻辑门阵列(field programmable gate array,FPGA)、复杂可编程逻辑器件(complex programmable logic device,CPLD)。处理器1401可处理通信接口1403接收到的数据,处理器1401还可处理将被发送到通信接口1403以通过有线传输介质传送的数据。
本申请实施例中,处理器1401可用于读取和执行计算机可读指令。具体的,处理器1401可用于调用存储于存储器1402中的程序,例如本申请的一个或多个实施例提供的对 话交互方法在网络设备侧的实现程序,并执行该程序包含的指令。
存储器1402与处理器1401耦合,用于存储各种软件程序和/或多组指令。具体实现中,存储器1402可包括高速随机存取的存储器,并且也可包括非易失性存储器,例如一个或多个磁盘存储设备、闪存设备或其他非易失性固态存储设备。存储器1402中内置有操作***,例如Linux、Windows等操作***。存储器1402还可以内置网络通信程序,该网络通信程序可用于与其他设备进行通信。
在本申请的一些实施例中,存储器1402可用于存储本申请的一个或多个实施例提供的对话交互方法在网络设备侧的实现程序。关于本申请提供的对话交互方法的实现,请参考前述方法实施例。
通信接口1403可用于网络设备1400与其他设备通信,例如终端设备等。通信接口1403可包括有线通信接口,例如可以为以太网接口、光纤接口等。可选地,通信接口1403还可以包括无线通信接口。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机指令可以存储在计算机可读存储介质中,或者通过所述计算机可读存储介质进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是半导体介质(例如SSD)等。
本领域普通技术人员可以意识到,结合本申请中所公开的实施例描述的各示例的模块及方法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
需说明,本申请实施例所涉及的第一、第二、第三以及各种数字编号仅为描述方便进行的区分,并不用来限制本申请实施例的范围。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (30)

  1. 一种对话交互方法,其特征在于,包括:
    终端设备在目标对话用户界面的第一区域中显示对话视图,并在所述目标对话用户界面的第二区域中显示概念视图,所述目标对话用户界面为目标对话对应的图形用户界面,所述对话视图用于显示所述目标对话的对话数据,所述概念视图用于显示目标对话对应的知识图谱子图,所述知识图谱子图包括多个语义实体,以及,所述多个语义实体中的各个语义实体相互之间的语义关系,所述多个语义实体包括第一语义实体,所述第一语义实体为所述对话数据中存在的语义实体。
  2. 根据权利要求1所述的方法,其特征在于,所述多个语义实体还包括与所述第一语义实体相关联的一个或多个第二语义实体。
  3. 根据权利要求2所述的方法,其特征在于,所述第二语义实体包括在知识图谱中与所述第一语义实体相邻的语义实体或在所述知识图谱中与所述第一语义实体的路径距离小于第一距离阈值的语义实体。
  4. 根据权利要求1-3任一项所述的方法,其特征在于,所述方法还包括:
    在获取到新的对话数据的情况下,所述终端设备更新所述概念视图,更新后的所述概念视图用于显示根据所述新的对话数据更新的知识图谱子图,更新后的知识图谱子图包括所述新的对话数据中存在的语义实体,或,所述新的对话数据中存在的语义实体以及与所述新的对话数据中存在的语义实体相关联的语义实体。
  5. 根据权利要求1-4任一项所述的方法,其特征在于,所述方法还包括:
    在所述知识图谱子图中的语义实体的数量大于第一数量的情况下,所述终端设备在所述知识图谱子图中删除一个或多个语义实体。
  6. 根据权利要求1-5任一项所述的方法,其特征在于,所述方法还包括:
    在检测到作用于所述对话视图中显示的第一对话数据的第一操作的情况下,所述终端设备响应于所述第一操作,在所述概念视图中突出显示第三语义实体,所述第三语义实体包括所述第一对话数据中存在的语义实体,和/或,与所述第一对话数据中存在的语义实体相关联的语义实体。
  7. 根据权利要求1-6任一项所述的方法,其特征在于,所述方法还包括:
    在检测到作用于所述概念视图中显示的第四语义实体的第二操作的情况下,所述终端设备响应于所述第二操作,在所述对话视图中显示第二对话数据,所述第四语义实体为所述第二对话数据中存在的语义实体,或,与所述第二对话数据中存在的语义实体相关联的语义实体。
  8. 根据权利要求1-7任一项所述的方法,其特征在于,所述方法还包括:
    在检测到作用于所述概念视图中显示的第四语义实体的第二操作的情况下,所述终端设备响应于所述第二操作,在所述概念视图中显示第二对话数据的摘要信息,所述第四语义实体为所述第二对话数据中存在的语义实体,或,与所述第二对话数据中存在的语义实体相关联的语义实体。
  9. 根据权利要求1-8任一项所述的方法,其特征在于,所述方法还包括:
    在检测到作用于所述概念视图中显示的任务语义实体的第三操作的情况下,所述终端设备响应于所述第三操作,在所述概念视图显示所述任务语义实体对应的关键信息。
  10. 根据权利要求9所述的方法,其特征在于,所述终端设备响应于所述第三操作,在所述概念视图显示所述任务语义实体对应的关键信息之后,还包括:
    在检测到作用于所述关键信息的第四操作并获取到针对所述关键信息的用户意图的情况下,所述终端设备响应于所述第四操作,触发执行符合所述用户意图的对话任务。
  11. 根据权利要求10所述的方法,其特征在于,所述终端设备响应于所述第四操作,触发执行符合所述用户意图的对话任务之后,还包括:
    所述终端设备根据执行所述符合所述用户意图的对话任务得到的结果,在所述概念视图中更新所述关键信息。
  12. 根据权利要求1-11任一项所述的方法,其特征在于,所述方法还包括:
    当识别到与历史对话数据中的语义实体在知识图谱中存在语义关系的新的语义实体,并且,所述新的语义实体不存在于所述历史对话数据中,所述终端设备根据所述历史对话数据中的语义实体和所述新的语义实体发起对话。
  13. 一种对话交互方法,其特征在于,包括:
    网络设备根据目标对话的对话数据生成目标对话对应的知识图谱子图,所述知识图谱子图包括多个语义实体,以及,所述多个语义实体中的各个语义实体相互之间的语义关系,所述多个语义实体包括第一语义实体,所述第一语义实体为所述对话数据中存在的语义实体;
    所述网络设备将所述知识图谱子图发送给终端设备,所述知识图谱子图被所述终端设备用于在目标对话用户界面的第一区域中显示对话视图,并在所述目标对话用户界面的第二区域中显示概念视图,所述对话视图用于显示所述对话数据,所述概念视图用于显示所述知识图谱子图,所述目标对话用户界面为目标对话对应的图形用户界面。
  14. 根据权利要求13所述的方法,其特征在于,所述方法还包括:
    所述网络设备根据新的对话数据更新所述目标对话对应的知识图谱子图,并将更新后的知识图谱子图发送给所述终端设备,所述更新后的知识图谱子图被所述终端设备用于更新所述概念视图,所述更新后的知识图谱子图包括所述新的对话数据中存在的语义实体,或,所述新的对话数据中存在的语义实体以及与所述新的对话数据中存在的语义实体相关联的语义实体。
  15. 一种终端设备上的图形用户界面,所述终端设备具有显示屏、存储器以及一个或多 个处理器,所述一个或多个处理器用于执行存储在所述存储器中的一个或多个计算机程序,其特征在于,所述图形用户界面为目标对话对应的图形用户界面,所述图形用户界面包括:
    在所述图形用户界面的第一区域中显示对话视图,并在所述图形用户界面的第二区域中显示概念视图,所述对话视图用于显示所述目标对话的对话数据,所述概念视图用于显示目标对话对应的知识图谱子图,所述知识图谱子图包括多个语义实体,以及,所述多个语义实体中的各个语义实体相互之间的语义关系,所述多个语义实体包括第一语义实体,所述第一语义实体为所述对话数据中存在的语义实体。
  16. 根据权利要求15所述的图形用户界面,其特征在于,所述多个语义实体还包括与所述第一语义实体相关联的一个或多个第二语义实体。
  17. 根据权利要求16所述的图形用户界面,其特征在于,所述第二语义实体包括在知识图谱中与所述第一语义实体相邻的语义实体或在所述知识图谱中与所述第一语义实体的路径距离小于第一距离阈值的语义实体。
  18. 根据权利要求15-17任一项所述的图形用户界面,其特征在于,所述图形用户界面还包括:
    在获取到新的对话数据的情况下,更新所述概念视图,更新后的所述概念视图用于显示根据所述新的对话数据更新的知识图谱子图,更新后的知识图谱子图包括所述新的对话数据中存在的语义实体,或,所述新的对话数据中存在的语义实体以及与所述新的对话数据中存在的语义实体相关联的语义实体。
  19. 根据权利要求15-18任一项所述的图形用户界面,其特征在于,所述图形用户界面还包括:
    在所述知识图谱子图中的语义实体的数量大于第一数量的情况下,在所述更新后的知识图谱子图中删除一个或多个语义实体。
  20. 根据权利要求15-19任一项所述的图形用户界面,其特征在于,所述图形用户界面还包括:
    在检测到作用于所述对话视图中显示的第一对话数据的第一操作的情况下,响应于所述第一操作,在所述概念视图中突出显示第三语义实体,所述第三语义实体包括所述第一对话数据中存在的语义实体,和/或,与所述第一对话数据中存在的语义实体相关联的语义实体。
  21. 根据权利要求15-20任一项所述的图形用户界面,其特征在于,所述图形用户界面还包括:
    在检测到作用于所述概念视图中显示的第四语义实体的第二操作的情况下,响应于所述第二操作,在所述对话视图中显示第二对话数据,所述第四语义实体为所述第二对话数据中存在的语义实体,或,与所述第二对话数据中存在的语义实体相关联的语义实体。
  22. 根据权利要求15-21任一项所述的图形用户界面,其特征在于,所述图形用户界面还包括:
    在检测到作用于所述概念视图中显示的第四语义实体的第二操作的情况下,响应于所述第二操作,在所述概念视图中显示第二对话数据的摘要信息,所述第四语义实体为所述第二对话数据中存在的语义实体,或,与所述第二对话数据中存在的语义实体相关联的语义实体。
  23. 根据权利要求15-22任一项所述的图形用户界面,其特征在于,所述图形用户界面还包括:
    在检测到作用于所述概念视图中显示的任务语义实体的第三操作的情况下,响应于所述第三操作,在所述概念视图显示所述任务语义实体对应的关键信息。
  24. 根据权利要求23所述的图形用户界面,其特征在于,所述响应于所述第三操作,在所述概念视图显示所述任务语义实体对应的关键信息之后,还包括:
    在检测到作用于所述关键信息的第四操作并获取到针对所述关键信息的用户意图的情况下,响应于所述第四操作,触发执行符合所述用户意图的对话任务。
  25. 根据权利要求24所述的图形用户界面,其特征在于,所述响应于所述第四操作,触发执行符合所述用户意图的对话任务之后,还包括:
    根据执行所述符合所述用户意图的对话任务得到的结果,在所述概念视图中更新所述关键信息。
  26. 根据权利15-25任一项所述的图形用户界面,其特征在于,所述图形用户界面还包括:
    当识别到与历史对话数据中的语义实体在知识图谱中存在语义关系的新的语义实体,并且,所述新的语义实体不存在于所述历史对话数据中,所述终端设备根据所述历史对话数据中的语义实体和所述新的语义实体发起对话。
  27. 一种终端设备,其特征在于,包括显示屏、存储器以及一个或多个处理器,所述一个或多个处理器用于执行存储在所述存储器中的一个或多个计算机程序,其中,所述一个或多个程序被存储在所述存储器中;所述一个或多个处理器在执行所述一个或多个程序时,使得所述终端设备实现如权利要求1-12任一项所述的方法。
  28. 一种网络设备,其特征在于,包括存储器以及一个或多个处理器,所述一个或多个处理器用于执行存储在所述存储器中的一个或多个计算机程序,其中,所述一个或多个程序被存储在所述存储器中;所述一个或多个处理器在执行所述一个或多个程序时,使得所述网络设备实现如权利要求13或14所述的方法。
  29. 一种计算机可读存储介质,包括指令,其特征在于,当所述指令在终端设备上运行时,使得所述终端设备执行如权利要求1-12中任一项所述的方法。
  30. 一种计算机可读存储介质,包括指令,其特征在于,当所述指令在网络设备上运行时,使得所述网络设备执行如权利要求13所述的方法。