CN111488444A - Dialogue method and device based on scene switching, electronic equipment and storage medium


Info

Publication number
CN111488444A
CN111488444A (application CN202010286144.3A)
Authority
CN
China
Prior art keywords
information, task, conversation, target, dialogue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010286144.3A
Other languages
Chinese (zh)
Inventor
刘进步
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Zhuiyi Technology Co Ltd
Original Assignee
Shenzhen Zhuiyi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Zhuiyi Technology Co Ltd filed Critical Shenzhen Zhuiyi Technology Co Ltd
Priority to CN202010286144.3A priority Critical patent/CN111488444A/en
Publication of CN111488444A publication Critical patent/CN111488444A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30: Information retrieval of unstructured textual data
    • G06F 16/33: Querying
    • G06F 16/332: Query formulation
    • G06F 16/3329: Natural language query formulation or dialogue systems
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00: Handling natural language data
    • G06F 40/20: Natural language analysis
    • G06F 40/205: Parsing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a dialogue method and apparatus based on scene switching, an electronic device, and a storage medium, which relate to the field of human-computer interaction. The method comprises the following steps: receiving first dialogue information in a first task scene of a target dialogue task, and triggering entry into a second task scene based on the first dialogue information, wherein the target dialogue task comprises the first task scene and the second task scene; if the first dialogue information triggers a specified event in the second task scene, triggering entry into the first task scene from the second task scene; receiving second dialogue information in the first task scene, and triggering entry into the second task scene based on the second dialogue information; and if the second dialogue information does not trigger the specified event in the second task scene, obtaining a dialogue result based on the second dialogue information and outputting the dialogue result. The method and apparatus can improve the flexibility of the dialogue and expand the range of dialogue services.

Description

Dialogue method and device based on scene switching, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of human-computer interaction, and more particularly, to a dialog method and apparatus based on scene switching, an electronic device, and a storage medium.
Background
With the rapid development of science and technology, human-computer interaction technology has permeated many aspects of daily life. At present, many consulting and booking services provided to users are increasingly implemented through robot dialogue.
However, in current robot dialogue methods the dialogue scene is often single: when the question posed by the user falls outside the dialogue range of the current scene, or the requirement information selected by the user conflicts with the actual situation, the dialogue cannot continue, and the user's dialogue needs cannot be met.
Disclosure of Invention
In view of the foregoing problems, the present application provides a dialog method, an apparatus, an electronic device, and a storage medium based on scene switching.
In a first aspect, an embodiment of the present application provides a dialogue method based on scene switching, where the method includes: receiving first dialogue information in a first task scene of a target dialogue task, and triggering entry into a second task scene based on the first dialogue information, wherein the target dialogue task comprises the first task scene and the second task scene; if the first dialogue information triggers a specified event in the second task scene, triggering entry into the first task scene from the second task scene; receiving second dialogue information in the first task scene, and triggering entry into the second task scene based on the second dialogue information; and if the second dialogue information does not trigger the specified event in the second task scene, obtaining a dialogue result based on the second dialogue information and outputting the dialogue result.
Further, obtaining a dialogue result based on the second dialogue information includes: acquiring shared information corresponding to the second dialogue information from a sharing platform as target shared information, where the sharing platform includes a plurality of pieces of shared information obtained through a plurality of dialogue tasks other than the target dialogue task; and obtaining a dialogue result based on the target shared information.
Further, the sharing platform presets an effective range corresponding to each piece of shared information, and obtaining the dialogue result based on the target shared information includes: acquiring the effective range corresponding to the target shared information; judging whether that effective range falls within a specified range; and if the effective range corresponding to the target shared information is within the specified range, obtaining the dialogue result based on the target shared information.
Further, the sharing platform presets an effective duration corresponding to each piece of shared information, and obtaining the dialogue result based on the target shared information includes: acquiring the current time and the effective duration corresponding to the target shared information; and if the current time is within the effective duration corresponding to the target shared information, obtaining the dialogue result based on the target shared information.
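The two validity checks above (effective range and effective duration) can be sketched as follows. This is a hedged illustration, not the patented implementation: the names `SharedInfo`, `is_usable`, and the idea that the effective range is a task-scope string are assumptions introduced for the example.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class SharedInfo:
    """A piece of shared information with its preset validity constraints."""
    value: str                      # the shared information itself
    scope: str                      # assumed form of the "effective range"
    created_at: datetime            # when the information was produced
    effective_duration: timedelta   # preset effective duration


def is_usable(info: SharedInfo, required_scope: str, now: datetime) -> bool:
    """Return True only when the shared information is within both the
    specified range and its effective duration, as the method requires."""
    in_scope = info.scope == required_scope
    in_time = now <= info.created_at + info.effective_duration
    return in_scope and in_time
```

For instance, a departure city collected by a flight-booking task could be reused by a weather-query task only while both checks pass; once either fails, the dialogue falls back to asking the user again.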
Further, before receiving the first dialogue information in the first task scenario of the target conversation task and triggering entry into the second task scenario based on the first dialogue information, the method further includes: acquiring current input information; and determining the target conversation task according to the current input information.
Further, determining a target conversation task according to the current input information includes: acquiring a plurality of pieces of standard input information and a plurality of conversation tasks, where the standard input information and the conversation tasks are in one-to-one correspondence; comparing the current input information with each piece of standard input information for similarity, to obtain similarity scores between the current input information and each piece of standard input information; taking the standard input information with the highest similarity score to the current input information as target input information; and acquiring, from the plurality of conversation tasks, the conversation task corresponding to the target input information as the target conversation task.
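The similarity selection described above can be sketched as follows. The patent does not specify a similarity measure, so this example assumes a simple character-level ratio (`difflib.SequenceMatcher`); the task names and standard inputs are hypothetical.

```python
from difflib import SequenceMatcher

# Hypothetical one-to-one mapping of standard input information to tasks.
STANDARD_INPUTS = {
    "book a meeting room": "meeting_room_task",
    "book a flight": "flight_booking_task",
    "check the weather": "weather_query_task",
}


def select_target_task(current_input: str) -> str:
    """Score the current input against every standard input and return the
    conversation task whose standard input scores highest."""
    scores = {
        std: SequenceMatcher(None, current_input.lower(), std).ratio()
        for std in STANDARD_INPUTS
    }
    best = max(scores, key=scores.get)
    return STANDARD_INPUTS[best]
```

A production system would more likely use an embedding-based or intent-classification model, but the one-to-one lookup structure stays the same.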
Further, determining a target conversation task according to the current input information includes: if the running conversation task is detected, the running conversation task is obtained as the current conversation task; acquiring historical input information corresponding to the current conversation task; and if the historical input information is matched with the current input information, determining that the current conversation task is the target conversation task.
Further, determining a target conversation task according to the current input information, further comprising: if the historical input information is not matched with the current input information, performing intention identification on the current input information to obtain an intention identification result; if the intention recognition result does not meet the preset condition, determining the current conversation task as the target conversation task; and if the intention recognition result meets the preset condition, acquiring a conversation task corresponding to the intention recognition result as a target conversation task.
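The fallback logic in the two paragraphs above can be condensed into a short sketch. The "preset condition" is assumed here to be a confidence-score cutoff, and all names (`determine_target_task`, `CONFIDENCE_THRESHOLD`) are illustrative rather than taken from the patent.

```python
CONFIDENCE_THRESHOLD = 0.8  # assumed form of the "preset condition"


def determine_target_task(current_task, history_input, current_input,
                          recognize_intent):
    """Reuse the running task when the new input matches its history;
    otherwise run intent recognition and switch only when confident."""
    if history_input == current_input:       # matches the running task
        return current_task
    intent_task, score = recognize_intent(current_input)
    if score < CONFIDENCE_THRESHOLD:         # preset condition not met
        return current_task                  # stay in the running task
    return intent_task                       # switch to the recognized task
```

`recognize_intent` stands in for whatever intent-recognition model the system uses; it is assumed to return a `(task, score)` pair.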
Further, obtaining a dialog result based on the second dialog information, and outputting the dialog result, including: acquiring an initial dialogue result corresponding to the second dialogue information; triggering to enter a third task scene based on the initial conversation result; and if the confirmation information is received in the third task scene, determining that the initial conversation result is a conversation result, and outputting the conversation result.
Further, the first task scenario is a word slot collection scenario, the second task scenario is an information selection scenario, and the third task scenario is an information confirmation scenario.
In a second aspect, an embodiment of the present application provides a dialogue apparatus based on scene switching, where the apparatus includes: a first dialogue information receiving module, a scene switching module, a second dialogue information receiving module, and a dialogue result output module. The first dialogue information receiving module is used for receiving first dialogue information in a first task scene of a target dialogue task and triggering entry into a second task scene based on the first dialogue information, the target dialogue task comprising the first task scene and the second task scene. The scene switching module is used for triggering entry into the first task scene from the second task scene if the first dialogue information triggers a specified event in the second task scene. The second dialogue information receiving module is used for receiving second dialogue information in the first task scene and triggering entry into the second task scene based on the second dialogue information. The dialogue result output module is used for obtaining a dialogue result based on the second dialogue information and outputting the dialogue result if the second dialogue information does not trigger the specified event in the second task scene.
Further, the dialog result output module includes: a target shared information acquisition unit and a conversation result acquisition unit. The target shared information acquiring unit is used for acquiring shared information corresponding to the second dialogue information from a shared platform as target shared information, wherein the shared platform comprises a plurality of pieces of shared information, and the plurality of pieces of shared information are obtained through a plurality of dialogue tasks except the target dialogue task. The conversation result acquisition unit is used for obtaining a conversation result based on the target sharing information.
Further, the dialog device based on scene switching further includes: the device comprises a current input information acquisition module and a target conversation task determination module. The current input information acquisition module is used for acquiring current input information. And the target conversation task determining module is used for determining a target conversation task according to the current input information.
In a third aspect, an embodiment of the present application provides an electronic device, which includes: a memory, one or more processors, and one or more application programs. The one or more processors are coupled with the memory; the one or more application programs are stored in the memory and configured to be executed by the one or more processors so as to perform the method of the first aspect described above.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, in which program code is stored, and the program code can be called by a processor to execute the method according to the first aspect.
According to the dialogue method and apparatus based on scene switching, the electronic device, and the storage medium, first dialogue information is received in a first task scene of a target dialogue task, and entry into a second task scene is triggered based on the first dialogue information, where the target dialogue task comprises the first task scene and the second task scene; the dialogue task can thus be completed by operating a plurality of task scenes alternately, ensuring the completion degree of the dialogue task. If the first dialogue information triggers the specified event in the second task scene, entry into the first task scene is triggered from the second task scene. Triggering the specified event indicates that the first dialogue information conflicts with the actual situation: for example, when the first dialogue information aims to book an airplane flight, but the second task scene detects that the flight booked in the first dialogue information has no seats left, the first dialogue information conflicts with reality and the dialogue cannot continue on that basis. Therefore, second dialogue information can be received in the first task scene, and entry into the second task scene is triggered based on the second dialogue information, so that the dialogue task is carried out again with the second dialogue information. If the second dialogue information does not trigger the specified event in the second task scene, a dialogue result is obtained based on the second dialogue information and output.
For example, if it is detected in the second task scene that the flight booked in the second dialogue information is not full, that flight may be taken as the booked flight, and the dialogue result, i.e., the information that the airplane booking for that flight succeeded, is output, so that the dialogue task is effectively completed. Thus, the method and apparatus switch among a plurality of task scenes by introducing an event-trigger mechanism, which improves the flexibility of the dialogue process, widens the application range of the dialogue task, improves its completion degree, and meets users' dialogue needs.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 shows a schematic diagram of an application environment suitable for the embodiment of the present application.
Fig. 2 shows a flowchart of a dialog method based on scene switching according to a first embodiment of the present application.
Fig. 3 shows a schematic diagram of a dialog interaction interface provided in the first embodiment of the present application.
Fig. 4 shows a flowchart of a dialog method based on scene switching according to a second embodiment of the present application.
Fig. 5 shows a schematic diagram of a conversation process of a conversation robot provided in a second embodiment of the present application.
Fig. 6 is a flowchart illustrating a dialog method based on scene switching according to a third embodiment of the present application.
Fig. 7 shows a flowchart of a dialog method based on scene switching according to a fourth embodiment of the present application.
Fig. 8 is a flowchart illustrating a dialog method based on scene switching according to a fifth embodiment of the present application.
Fig. 9 is a flowchart illustrating a dialog method based on scene switching according to a sixth embodiment of the present application.
Fig. 10 is a diagram illustrating classification of trigger events according to the sixth embodiment of the present application.
Fig. 11 shows a flowchart of a dialog method based on scene switching according to a seventh embodiment of the present application.
Fig. 12 is a flowchart illustrating a dialog method based on scene switching according to an eighth embodiment of the present application.
Fig. 13 shows a block diagram of a dialog device based on scene switching according to a ninth embodiment of the present application.
Fig. 14 is a block diagram of an electronic device for executing a dialog method based on scene switching according to a tenth embodiment of the present application.
Fig. 15 is a storage unit according to an eleventh embodiment of the present application, configured to store or carry program code for implementing a dialog method based on scene switching according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
With the development of science and technology, artificial intelligence technology has become increasingly widespread, and services such as bookings and consultations in daily life have shifted from human service to machine service. At present, in most cases a user can complete operations such as booking an airplane flight, booking a meeting room, and querying the weather simply by talking with a conversation robot.
However, current conversation robots have a single conversation scene. The conversation process between the user and the robot can be regarded as a word-slot collection process, so most conversation robots include only a word-slot collection scene. In practice, however, conversation interaction may involve other scenes. For example, some scenarios require the user to make a selection, such as choosing a suitable booking time from a selectable time period in a conference-room booking dialog. Others require the user to provide confirmation, such as confirming whether a phone number or flight is correct during a phone-credit recharge or a ticket booking. Such scenarios cannot be realized by relying on the word-slot collection process alone, so a single scene cannot meet the conversation needs of the user.
The inventor found that if a plurality of scenes are provided in a conversation task and used alternately, the completion degree of the conversation can be effectively improved, the diversity of the conversation ensured, and the user's conversation needs met.
However, the inventor also found in actual research that switching among multiple scenes mostly follows a preset flow: a task-driven conversation proceeds according to a designed flow, and scenes are switched in a preset order. Such a preset flow has low flexibility and is ill-suited to complex service scenes; for example, some scenario-style voice games involve a large number of conditional jumps and loops, which a preset flow cannot easily handle.
To address these problems, the inventor proposes, in the embodiments of the present application, a dialog method and apparatus based on scene switching, an electronic device, and a storage medium. The method can provide a plurality of task scenes in a conversation task, thereby improving the completion degree of the conversation and ensuring its diversity; in addition, switching among the scenes is realized by introducing an event-trigger mechanism, improving both scene-switching efficiency and conversation flexibility.
The following describes in detail a dialog method, an apparatus, an electronic device, and a storage medium based on scene switching provided in embodiments of the present application with specific embodiments.
First embodiment
Referring to fig. 1, fig. 1 is a schematic diagram illustrating an application environment suitable for the embodiment of the present application. The dialog method based on scene switching provided by the embodiment of the present application may be applied to the interactive system 100 shown in fig. 1. The interactive system 100 comprises a terminal device 101 and a server 102, wherein the server 102 is in communication connection with the terminal device 101. The server 102 may be a conventional server or a cloud server, and is not limited herein.
The terminal device 101 may be various electronic devices that have a display screen, a data processing module, a camera, an audio input/output function, and the like, and support data input, including but not limited to a smart phone, a tablet computer, a laptop portable computer, a desktop computer, a self-service terminal, a wearable electronic device, and the like. Specifically, the data input may be inputting voice based on a voice module provided on the electronic device, inputting characters based on a character input module, and the like.
The terminal device 101 may have a client application installed on it, and the user may interact with the server 102 based on the client application (for example, an APP or a WeChat applet); the conversation robot in this embodiment is likewise a client application configured on the terminal device 101. A user may register a user account with the server 102 based on the client application and communicate with the server 102 based on that account. For example, the user logs in to the user account in the client application and inputs information through it, which may be text information, voice information, and the like; after receiving the information input by the user, the client application sends it to the server 102, so that the server 102 can receive, process, and store the information, and the server 102 may also return corresponding output information to the terminal device 101 according to the received information.
In some embodiments, the apparatus for processing the data to be recognized may also be disposed on the terminal device 101, so that the terminal device 101 can interact with the user without relying on a communication connection with the server 102; in this case, the interactive system 100 may include only the terminal device 101.
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating a dialog method based on scene switching according to an embodiment of the present application. As shown in fig. 2, the method may include:
s110, receiving first dialogue information in a first task scene in a target dialogue task, and triggering to enter a second task scene based on the first dialogue information, wherein the target dialogue task comprises the first task scene and the second task scene.
As an example, the target conversation task may be a conference-room booking task, where the target conversation task may include a first task scenario, a second task scenario, and possibly other task scenarios besides these two. Specifically, the first task scenario may be a word-slot collection scenario, and the second task scenario may be an information selection scenario. When the conversation robot starts a conversation by invoking the target conversation task, it may first enter the word-slot collection scenario to receive the first dialogue information and collect word slots from it. For example, if the first dialogue information input by the user is "book a meeting room tomorrow afternoon", then "tomorrow afternoon" may be collected as a word slot. After a certain word slot is collected, the intention of the conversation becomes clearer, but necessary information such as the meeting place and the specific meeting time still needs to be determined; otherwise the conference-room booking cannot be completed. Since the first dialogue information lacks the necessary information to complete the booking task, it may trigger a missing-necessary-information event so as to enter the second task scenario, i.e., the information selection scenario, to supplement the information the target conversation task still needs. Optionally, the conversation robot may receive the first dialogue information through keyboard input, audio input, and the like; the specific receiving manner is not limited here.
It should be noted that the missing-necessary-information event may be associated in advance with entry into the second task scenario; when the event is triggered, the conversation enters the second task scenario automatically.
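The pre-association of events with scene transitions can be sketched as a simple lookup table. This is an illustrative minimal sketch, not the patented implementation; the class and event names (`DialogTask`, `missing_required_info`, `booking_conflict`) are assumptions.

```python
class DialogTask:
    """A dialogue task whose scene transitions are driven by named events
    that were associated with target scenes in advance."""

    def __init__(self):
        self.scene = "word_slot_collection"  # first task scenario
        self._transitions = {
            # missing necessary info -> enter the information selection scene
            "missing_required_info": "information_selection",
            # the "specified event" (a conflict) -> back to slot collection
            "booking_conflict": "word_slot_collection",
        }

    def trigger(self, event: str) -> str:
        """Switch to the scene pre-associated with `event` and return it."""
        self.scene = self._transitions[event]
        return self.scene
```

Because transitions are driven by events rather than a fixed flow order, new jumps (including loops back to an earlier scene) only require adding entries to the table, which is the flexibility advantage the application claims over preset flows.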
In the information selection scenario, a conversation robot configured in the terminal device 101 (hereinafter, may be referred to as an electronic device) may prompt the user to select a reservation location of a conference room, and prompt the user to select a reservation time of the conference room. As an example, as shown in fig. 3, a conversation robot on an electronic device can provide selectable booking places: first meeting room, second meeting room, third meeting room, etc., and then, for example, provide a selectable schedule: 15:00, 16:00, 17:00, and so forth.
Specifically, the target conversation task may be determined as follows: the conversation robot performs intention recognition on the information input by the user to obtain an intention recognition result, then judges from that result whether the input accurately hits one of the conversation tasks; if so, the hit conversation task is taken as the target conversation task, so that the user continues the subsequent conversation based on it.
Among the plurality of conversation tasks, different conversation tasks are used for completing different conversations according to different purposes of the user. For example, the conversation task may be a conversation task for booking an airplane flight, a conversation task for booking a meeting room, a conversation task for inquiring weather, and the like.
S120, if the first dialogue information triggers a specified event in the second task scene, triggering entry into the first task scene from the second task scene.
In the second task scenario, the user may supplement the first dialogue information by selecting from the necessary information provided there. If the necessary information selected by the user triggers a specified event, entry into the first task scenario is triggered from the second task scenario so that dialogue information can be received again.
It should be noted that the specified event may be associated in advance with switching to the first task scenario; when the specified event is triggered, the conversation robot automatically switches the current scene of the target conversation task to the first task scenario.
As an example, the specified event may be that the meeting booking time or the meeting booking place conflicts with the actual situation. For example, suppose the meeting booking time selected by the user is 15:00 and the selected booking place is the second meeting room; if the conversation robot detects through a script that the second meeting room has already been booked by someone else at 15:00, the meeting booking time conflicts with the actual situation and the specified event is triggered. The conversation robot may then switch the target conversation task from the second task scenario to the first task scenario to collect dialogue information again.
S130, receiving second dialogue information in the first task scene, and triggering to enter the second task scene based on the second dialogue information.
As an example, the conversation robot may receive dialogue information again, i.e., the second dialogue information, in the first task scenario. When the second dialogue information lacks necessary information (such as the meeting booking time or the meeting booking place), the missing-necessary-information event is triggered so as to enter the second task scenario, where the user can supplement the second dialogue information with the necessary information.
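The check for whether dialogue information still lacks necessary word slots can be sketched as below. The slot names and the event name are hypothetical; the patent only requires that some missing necessary information fires the event that enters the selection scenario.

```python
# Word slots assumed necessary for the conference-room booking example.
REQUIRED_SLOTS = {"meeting_time", "meeting_location"}


def missing_slots(collected: dict) -> set:
    """Return the required word slots not yet filled by the dialogue info."""
    return {slot for slot in REQUIRED_SLOTS if not collected.get(slot)}


def next_event(collected: dict):
    """Fire the missing-necessary-information event while any slot is empty;
    return None once the dialogue information is complete."""
    return "missing_required_info" if missing_slots(collected) else None
```

Under this sketch, "book a meeting room tomorrow afternoon" fills only the time slot, so the event fires and the selection scenario prompts for a place; once both slots are filled, no event fires and the task can proceed to a result.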
And S140, if the second dialogue information does not trigger the specified event in the second task scene, obtaining a dialogue result based on the second dialogue information, and outputting the dialogue result.
In some embodiments, if the second dialog information does not trigger the specified event in the second task scenario, i.e. the conference reservation time in the second dialog information, none of the conference reservation locations conflict with the actual situation. The dialog result may be derived based on second dialog information, for example, the second dialog information including the conference subscription time: tomorrow 15:00, meeting booking location: a second conference room, then a dialog result may be generated based on the second dialog information: "successfully book the second meeting room of tomorrow 15: 00", and outputs the conversation result through the conversation robot. Alternatively, the output means may be displayed on a screen of the electronic device. Or the electronic equipment can play through an audio playing device.
In this embodiment, the first dialogue information is received in the first task scene of the target dialogue task, and entry into the second task scene is triggered based on the first dialogue information, where the target dialogue task includes the first task scene and the second task scene, so that the dialogue task can be completed by alternately running multiple task scenes, which ensures the completion degree of the dialogue task. If the first dialogue information triggers the specified event in the second task scene, entry into the first task scene is triggered, since triggering the specified event indicates that the first dialogue information conflicts with the actual situation. For example, when the first dialogue information aims to book a flight, but the second task scene detects that the flight booked in the first dialogue information has no seats left, the first dialogue information conflicts with reality and the task cannot continue on its basis. Therefore, the second dialogue information can be received in the first task scene, and entry into the second task scene is triggered based on the second dialogue information, so that the dialogue task proceeds again with the second dialogue information. If the second dialogue information does not trigger the specified event in the second task scene, the dialogue result is obtained based on the second dialogue information and output. For example, if the second task scene detects that the flight booked in the second dialogue information is not full, that flight may be taken as the booked flight, and the dialogue result, i.e., the information that the flight booking succeeded, is output, so that the dialogue task is effectively completed.
Therefore, by introducing an event trigger mechanism to switch among multiple task scenes, the method and device improve the flexibility of the dialogue flow, widen the application range of dialogue tasks, improve the completion degree of dialogue tasks, and meet users' dialogue requirements.
Second embodiment
Referring to fig. 4, fig. 4 is a schematic flowchart illustrating a dialog method based on scene switching according to an embodiment of the present application. The method may comprise the steps of:
S210, receiving first dialogue information in a first task scene in a target dialogue task, and triggering to enter a second task scene based on the first dialogue information, wherein the target dialogue task comprises the first task scene and the second task scene.
And S220, if the first dialogue information triggers a specified event in the second task scene, triggering to enter the first task scene from the second task scene.
And S230, receiving second dialogue information in the first task scene, and triggering to enter the second task scene based on the second dialogue information.
The specific implementation of S210 to S230 can refer to S110 to S130, and therefore is not described herein.
S240, if the second session information does not trigger the specified event in the second task scenario, obtaining an initial session result corresponding to the second session information.
The manner of obtaining the initial dialogue result corresponding to the second dialogue information may refer to the manner of generating the dialogue result based on the second dialogue information in S140. For example, the target dialogue task is booking a conference room, and the second dialogue information includes the conference reservation location (the second meeting room) and the conference reservation time (tomorrow 16:00). The conference reservation location and the conference reservation time may be extracted from the second dialogue information, and an initial dialogue result may be generated based on them, for example: "please confirm whether to book the second meeting room for tomorrow 16:00".
And S250, triggering to enter a third task scene based on the initial dialog result.
In some embodiments, the third task scenario may be an information confirmation scenario for the user to confirm whether the initial dialog result is correct.
And S260, if the confirmation information is received in the third task scene, determining that the initial conversation result is a conversation result, and outputting the conversation result.
In some embodiments, the dialogue robot may output the initial dialogue result in the third task scene so that the user can confirm whether it is the result the user intended. If the user's confirmation information is received in the third task scene, the initial dialogue result is determined to be the final dialogue result and is output. If no confirmation information is received, or the user's rejection information is received, in the third task scene, the initial dialogue result is not the result the user intended; it is therefore not determined as the dialogue result, and may be deleted.
In some embodiments, if no confirmation information input by the user is received in the third task scenario or a rejection information input by the user is received, the conversation robot may switch from the third task scenario to the first task scenario in response to a trigger event to re-collect the conversation information.
Alternatively, the confirmation information received from the user may be fingerprint confirmation information, voiceprint confirmation information, short message confirmation information, gesture confirmation information, password confirmation information, and the like.
Optionally, the first task scene is a word slot collection scene, in which the conversation robot performs word slot collection on the dialogue information input by the user. The second task scene is an information selection scene, in which the conversation robot can provide an information selection interface so that the user can accurately supplement the necessary information in the dialogue information. The third task scene is an information confirmation scene, in which whether the dialogue result is correct can be confirmed based on the user's confirmation information. The target dialogue task can be flexibly processed by alternately using the word slot collection scene, the information selection scene, and the information confirmation scene.
In this embodiment, by configuring the third task scenario, the user can confirm whether the dialog result provided by the dialog robot meets the user's requirement, and when the user's requirement is met, the dialog result is output, so that the accuracy of the dialog result can be ensured.
In some embodiments, entry into the second task scenario is not triggered if the first dialog information includes information necessary to complete the target dialog task. As an example, for example, the target conversation task is a task of booking a conference room, and the first conversation information input by the user is "booking a second conference room at 16:00 pm tomorrow", where the first conversation information includes two necessary information, i.e., a conference booking place and a conference booking time. The conversation robot may perform word slot collection on the two necessary information and detect whether there is a conflict between the two necessary information and the actual situation through the script, and if there is no conflict, may generate a conversation result directly based on the first conversation information.
As one way, when implementing the dialogue method based on scene switching according to this embodiment, the specific flow may be as shown in fig. 5. First, the dialogue robot receives the current input information from the user, performs intent recognition on it to obtain an intent recognition result, and performs dialogue task scheduling according to the intent recognition result, i.e., determines a target dialogue task from a plurality of dialogue tasks according to the intent recognition result. Then, during the running of the target dialogue task, the appropriate scene is switched to according to the dialogue information input by the user, the dialogue is completed to obtain a dialogue result, and finally the dialogue robot replies the dialogue result to the user.
It should be noted that scene switching is what gives the conversation robot the ability to handle complex services. When the dialogue flow of the conversation robot is designed, the scene entered when the dialogue starts, the next scene after a scene finishes running, the scene when the dialogue ends, and the like can be configured. Because scene switching can be abstracted as state transitions in a state machine, a finite state machine can be adopted to integrate the scenes and realize arbitrary switching between them.
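The finite-state-machine abstraction above can be sketched as a transition table keyed by (current scene, event). The scene and event names below are hypothetical; the patent only states that scenes are integrated into a state machine with configurable transitions.

```python
# (current scene, event) -> next scene. Illustrative names only.
TRANSITIONS = {
    ("slot_collection",   "slots_filled"): "info_selection",
    ("info_selection",    "conflict"):     "slot_collection",
    ("info_selection",    "ok"):           "info_confirmation",
    ("info_confirmation", "confirmed"):    "end",
    ("info_confirmation", "rejected"):     "slot_collection",
}

def next_scene(current, event):
    """Look up the configured transition; stay in place for unknown events."""
    return TRANSITIONS.get((current, event), current)

print(next_scene("info_selection", "conflict"))  # back to slot collection
```

Because the table is plain data, the transitions can come from a configuration file, matching the configurable start/next/end scenes described above.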
Specifically, the conversation robot can accurately control the dialogue flow through events triggered during the corresponding dialogue flow. Events are generated automatically when the dialogue flow runs to a specific stage or the dialogue meets a specific condition, and corresponding event handlers can be registered for different events in the conversation robot. When an event is triggered, the dialogue robot checks whether there is a corresponding event listener, and if so, the event handler is invoked to handle the current event. At present, an event handler can be registered by writing a configuration file or a script; when it is called, the conversation robot passes the current dialogue state to it as a parameter, the event handler determines the execution flow according to the input state and returns modification information for the dialogue state and switching information for the next dialogue scene, and the dialogue platform selects an appropriate dialogue execution flow according to the returned information. This configuration mode also greatly expands a developer's control over the dialogue flow.
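The register-then-dispatch mechanism can be sketched as below. This is a minimal illustration under assumed names (`on`, `fire`, the `missing_slot` event are invented): a handler receives the current dialogue state and returns the modified state plus the next scene, as the paragraph describes.

```python
handlers = {}

def on(event):
    """Register a handler for an event name (decorator form)."""
    def register(fn):
        handlers[event] = fn
        return fn
    return register

def fire(event, state):
    """Check for a listener; if present, let it modify the state and
    choose the next scene, otherwise leave the dialogue unchanged."""
    if event in handlers:
        return handlers[event](state)
    return state, None  # no listener: unchanged state, no scene switch

@on("missing_slot")
def handle_missing(state):
    state["prompt"] = "please supply the missing information"
    return state, "info_selection"

new_state, scene = fire("missing_slot", {})
print(scene)  # the scene the handler switched to
```

The dialogue platform would then use the returned scene name to pick the next execution flow.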
It should be noted that, within one dialogue, the same scene may need to present different information. For example, in a meeting room booking task, the information selection scene must be used both to select a suitable time period and to select a meeting room size, and this information is dynamic, i.e., cannot be fixed in a configuration file. The dialogue robot can therefore be configured to separate the computation unit from the data unit and load data dynamically. Each scene only provides a standard operation execution flow; the data required for its operation can be read in through a configuration file (static data) or obtained by parsing the return value of an event handler (dynamic data). This realizes the orthogonalization of computation and data in the dialogue, and expands the processing capacity of the business while reusing the computation unit (scene).
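The computation/data separation can be illustrated as follows: one selection routine (the "computation unit") is reused with option lists that come either from a static configuration or from an event handler's return value. The function and variable names are illustrative assumptions.

```python
def run_selection_scene(options, choice_index):
    """Standard execution flow of the information selection scene:
    present options, return the user's choice. Only the data varies."""
    return options[choice_index]

# Static data, as if read from a configuration file.
static_options = ["09:00", "15:00", "16:00"]

# Dynamic data, as if parsed from an event handler's return value.
dynamic_options = ["first meeting room", "second meeting room"]

print(run_selection_scene(static_options, 1))   # time period selection
print(run_selection_scene(dynamic_options, 1))  # room selection, same scene
```

The same scene code serves both selections, which is exactly the reuse the paragraph describes.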
Third embodiment
Referring to fig. 6, fig. 6 is a schematic flowchart illustrating a dialog method based on scene switching according to an embodiment of the present application. The method may comprise the steps of:
S310, receiving first dialogue information in a first task scene in the target dialogue task, and triggering to enter a second task scene based on the first dialogue information, wherein the target dialogue task comprises the first task scene and the second task scene.
S320, if the first dialogue information triggers a specified event in the second task scene, triggering entry into the first task scene from the second task scene.
S330, receiving second dialogue information in the first task scene, and triggering to enter the second task scene based on the second dialogue information.
The specific implementation of S310 to S330 may refer to S210 to S230, and therefore is not described herein.
And S340, if the second dialogue information does not trigger the specified event in the second task scene, acquiring shared information corresponding to the second dialogue information from the shared platform as target shared information, wherein the shared platform comprises a plurality of shared information, and the plurality of shared information is obtained through a plurality of dialogue tasks except the target dialogue task.
As an example, there may be a certain connection between different ones of the plurality of session tasks, such as weather information for a destination being required in the ticket booking task, and weather information for the destination being required in the weather query task. Therefore, the dialogue robot can input the weather information of the destination as the shared information into the sharing platform in the previously completed air ticket booking task, and when the dialogue robot executes the weather inquiry task and the second dialogue information is the weather of the destination, the weather information of the destination, namely the target shared information corresponding to the second dialogue information can be directly acquired from the sharing platform without inquiring the weather word slot of the destination from the user.
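The sharing platform described above can be sketched as a key-value store that one task publishes to and another reads from, sparing the user a repeated question. The function names and the weather key are assumptions for illustration.

```python
shared = {}  # the sharing platform's store

def publish(key, value):
    """Called by a completed task (e.g. ticket booking) to share information."""
    shared[key] = value

def lookup(key):
    """Called by another task (e.g. weather query) instead of asking the user."""
    return shared.get(key)

publish("destination_weather", "sunny")   # written by the ticket booking task
print(lookup("destination_weather"))      # read by the weather query task
```

If the lookup misses, the task would fall back to collecting the word slot from the user as usual.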
And S350, obtaining a conversation result based on the target shared information, and outputting the conversation result.
As an example, when the target sharing information is destination weather, the conversation robot may output a conversation result by using weather report information including the destination weather as the conversation result.
In this embodiment, by inputting the shared information from different dialogue tasks into the sharing platform, the conversation robot gains fine-grained information sharing capability: information sharing between dialogue tasks can be handled appropriately and distinctly, and other dialogue tasks can select suitable content from the sharing platform as shared information to assist the running of the current task, improving the user's interaction experience.
Fourth embodiment
Referring to fig. 7, fig. 7 is a flowchart illustrating a dialog method based on scene switching according to an embodiment of the present application. The method may comprise the steps of:
S410, receiving first dialogue information in a first task scene in a target dialogue task, and triggering to enter a second task scene based on the first dialogue information, wherein the target dialogue task comprises the first task scene and the second task scene.
And S420, if the first session information triggers a specified event in the second task scene, triggering to enter the first task scene from the second task scene.
S430, receiving second dialogue information in the first task scene, and triggering to enter the second task scene based on the second dialogue information.
And S440, if the second session information does not trigger the specified event in the second task scene, acquiring shared information corresponding to the second session information from the shared platform as target shared information, wherein the shared platform comprises a plurality of shared information, and the plurality of shared information is obtained through a plurality of session tasks except the target session task.
The specific implementation of S410 to S440 can refer to S310 to S340, and therefore will not be described herein.
The sharing platform is preset with an effective range corresponding to each sharing information.
If a certain shared information can only be applied to a specified conversation task in a plurality of conversation tasks, the effective range corresponding to the shared information is the specified conversation task. Optionally, the valid range corresponding to the shared information may also be a specified type of conversation task, and only the specified type of conversation task can apply the shared information.
S450, obtaining an effective range corresponding to the target sharing information.
In some embodiments, after each piece of shared information is input to the sharing platform, the sharing platform may establish a corresponding dialogue task relationship table for it. Specifically, one or more dialogue tasks may be associated according to the attributes of the shared information, and the associated tasks are taken as the effective range corresponding to that shared information, thereby building its dialogue task relationship table. Therefore, when the conversation robot acquires the target shared information, it can obtain the effective range corresponding to the target shared information, i.e., the one or more dialogue tasks associated with it, from the dialogue task relationship table.
S460, determining whether the valid range corresponding to the target sharing information is within the designated range.
As an example, the specified range may be a plurality of preset dialogue tasks. When the dialogue tasks corresponding to the specified range completely include the one or more dialogue tasks associated with the target shared information, i.e., every dialogue task associated with the target shared information can be found among the dialogue tasks of the specified range, it may be determined that the effective range corresponding to the target shared information is within the specified range.
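The containment test above is simply "every associated task appears in the specified range", which can be sketched as follows (task names are hypothetical):

```python
def in_specified_range(associated_tasks, specified_range):
    """True when every dialogue task associated with the shared
    information can be found in the preset specified range."""
    return all(task in specified_range for task in associated_tasks)

specified = {"weather_query", "ticket_booking"}
print(in_specified_range({"weather_query"}, specified))          # usable
print(in_specified_range({"hotel_booking"}, specified))          # not usable
```

Only when the check passes is the dialogue result derived from the target shared information, as in S470.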
And S470, if the effective range corresponding to the target shared information is in the designated range, obtaining a conversation result based on the target shared information, and outputting the conversation result.
The specific implementation of obtaining the dialog result based on the target sharing information in S470 may refer to S350 and is not described herein.
In this embodiment, by obtaining the effective range corresponding to the target sharing information and determining whether the effective range corresponding to the target sharing information is within the specified range, when the effective range corresponding to the target sharing information is within the specified range, the session result is obtained based on the target sharing information, so that the target sharing information can be ensured to be used only in a proper session task, and the use efficiency of the sharing information is improved.
Fifth embodiment
Referring to fig. 8, fig. 8 is a flowchart illustrating a dialog method based on scene switching according to an embodiment of the present application. The method may comprise the steps of:
S510, receiving first dialogue information in a first task scene in a target dialogue task, and triggering to enter a second task scene based on the first dialogue information, wherein the target dialogue task comprises the first task scene and the second task scene.
S520, if the first dialogue information triggers the specified event in the second task scene, triggering entry into the first task scene from the second task scene.
S530, receiving second dialogue information in the first task scene, and triggering to enter the second task scene based on the second dialogue information.
And S540, if the second dialogue information does not trigger the specified event in the second task scene, acquiring shared information corresponding to the second dialogue information from the shared platform as target shared information, wherein the shared platform comprises a plurality of shared information, and the plurality of shared information is obtained through a plurality of dialogue tasks except the target dialogue task.
The specific implementation of S510 to S540 can refer to S410 to S440, and therefore is not described herein.
The sharing platform is preset with effective duration corresponding to each piece of sharing information.
In some embodiments, after each piece of shared information is input to the sharing platform, the sharing platform may respectively establish a corresponding valid duration relationship table for each piece of shared information, and specifically, may associate one valid duration according to an identifier of one piece of shared information. From the valid duration relation table, the corresponding valid duration can be found according to the identifier of the shared information.
And S550, acquiring the current time corresponding to the target sharing information and the effective duration corresponding to the target sharing information.
In some embodiments, the conversation robot may query the identifier of the target shared information, and then find the valid duration corresponding to the target shared information according to the identifier of the target shared information and the valid duration relation table.
In other embodiments, the valid duration relationship table may be established by associating the attribute of the target shared information with the valid duration, at this time, the conversation robot may query the attribute of the target shared information, for example, if the target shared information is a weather condition of a certain city, the attribute of the target shared information is weather information, and the valid duration corresponding to the weather information may be acquired from the valid duration relationship table according to the weather information.
And S560, if the current time is within the effective duration corresponding to the target shared information, obtaining a conversation result based on the target shared information, and outputting the conversation result.
As an example, for example, if the valid duration corresponding to the target sharing information is 15:00 to 16:00, and if the current time is 15:30, the current time is within the valid duration corresponding to the target sharing information, the dialog result may be obtained based on the target sharing information, and the dialog result may be output. Otherwise, the conversation robot may delete the target shared information.
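The effective-duration check in the 15:00–16:00 example can be sketched with standard time comparisons. The table, key, and function name are illustrative assumptions.

```python
from datetime import time

# Effective duration per shared-information key, as if loaded from the
# sharing platform's valid-duration relationship table.
validity = {"destination_weather": (time(15, 0), time(16, 0))}

def usable(key, now):
    """True when the current time falls inside the effective duration
    corresponding to the target shared information."""
    start, end = validity[key]
    return start <= now <= end

print(usable("destination_weather", time(15, 30)))  # within duration
print(usable("destination_weather", time(16, 30)))  # expired
```

When the check fails, the robot would discard the shared information and collect it afresh, as described above.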
It is considered that shared information like weather is updated in real time and is only valid for a certain period of time. In this embodiment, by acquiring the current time corresponding to the target sharing information and the effective duration corresponding to the target sharing information, if the current time is within the effective duration corresponding to the target sharing information, the session result is obtained based on the target sharing information, so that the validity and accuracy of the sharing information can be ensured.
Sixth embodiment
Referring to fig. 9, fig. 9 is a schematic flowchart illustrating a dialog method based on scene switching according to an embodiment of the present application. The method may comprise the steps of:
S610, acquiring current input information.
In some embodiments, the conversation robot may receive text information input by a user via a keyboard of the electronic device and treat the text information as current input information. Or receiving the voice information of the user, performing semantic recognition on the voice information, and taking the result of the semantic recognition as the current input information.
And S620, determining a target conversation task according to the current input information.
In some embodiments, the conversation robot may calculate a similarity score between the current input information and the standard input information of each conversation task using a deep neural network, and may also determine whether the current input information accurately hits the conversation task in combination with a text matching model. Thereby determining the hit conversation task as the target conversation task.
In some embodiments, for open-ended expressions from the user, the dialogue robot may first use the intent recognition model to determine whether the user's current input information matches an existing purposeful dialogue task or an open-ended dialogue task. If it matches an existing purposeful dialogue task, that task may throw a specific scene event, and the dialogue robot can selectively process some events based on it, giving the dialogue task the ability to respond to some of the user's open-ended expressions.
It should be noted that, as shown in fig. 10, various events may be triggered during the switching of different scenes, and the conversation robot may respond selectively to some of them, so as to accurately control the dialogue flow, such as whether a scene is triggered, the switching order of scenes, and so on. According to event level, events may comprise two levels: general events triggered by every scene, and specific events triggered while different scenes run. The general scene events include a scene entry event and a scene exit event. Among the specific scene events, for example, the word slot collection scene includes an event before filling a word slot, an event for refilling a word slot, and an event after filling a word slot, and the information confirmation scene includes a corresponding information confirmation event.
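The two event levels can be enumerated as plain data, for instance as below; the identifiers are illustrative renderings of the events named above, not the patent's own naming.

```python
# General events: triggered by every scene.
GENERAL_EVENTS = {"scene_enter", "scene_exit"}

# Specific events: triggered while a particular scene runs.
SPECIFIC_EVENTS = {
    "slot_collection":   {"before_fill", "refill", "after_fill"},
    "info_confirmation": {"info_confirm"},
}

def event_level(scene, event):
    """Classify an event as general, scene-specific, or unknown."""
    if event in GENERAL_EVENTS:
        return "general"
    if event in SPECIFIC_EVENTS.get(scene, set()):
        return "specific"
    return "unknown"

print(event_level("slot_collection", "refill"))  # a scene-specific event
```

A dispatcher could use the level to decide which handlers are eligible for a given event.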
S630, receiving the first dialogue information in the first task scene in the target dialogue task, and triggering to enter the second task scene based on the first dialogue information, wherein the target dialogue task comprises the first task scene and the second task scene.
And S640, if the first session information triggers a specified event in the second task scene, triggering to enter the first task scene from the second task scene.
S650, receiving second dialogue information in the first task scene, and triggering to enter the second task scene based on the second dialogue information.
And S660, if the second dialogue information does not trigger the specified event in the second task scene, obtaining a dialogue result based on the second dialogue information, and outputting the dialogue result.
The embodiments of S620 to S660 refer to S110 to S140, and therefore are not described herein.
In the embodiment, by acquiring the current input information and determining the target conversation task according to the current input information, a suitable conversation task can be called to process the current conversation.
Seventh embodiment
Referring to fig. 11, fig. 11 is a schematic flowchart illustrating a dialog method based on scene switching according to an embodiment of the present application. The method may comprise the steps of:
S710, acquiring current input information.
S720, acquiring a plurality of standard input information and a plurality of dialogue tasks, wherein the plurality of standard input information and the plurality of dialogue tasks are in one-to-one correspondence.
In some embodiments, the standard input information may be one or more sentences or one or more phrases. Each standard input information may correspond to a dialog task.
And S730, respectively comparing the similarity of the current input information with the plurality of standard input information to obtain similarity scores between the current input information and the plurality of standard input information.
In some embodiments, the dialogue robot may obtain the similarity score between a piece of standard input information and the current input information from the number of its phrases or sentences that cover the current input information. As an example, suppose the plurality of standard input information includes first, second, and third standard input information, where the first standard input information includes the phrases "Beijing" and "weather"; the second standard input information includes the phrase "weather"; and the third standard input information includes the phrase "Shenzhen". If the current input information is "what is the weather in Beijing tomorrow", the first standard input information covers the most phrases and its similarity score with the current input information (hereinafter, the first similarity) is the highest. The similarity score between the second standard input information and the current input information is lower than the first similarity, and the similarity score between the third standard input information and the current input information is zero. By analogy, a similarity score between each piece of standard input information and the current input information can be obtained.
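The phrase-coverage scoring in this example can be sketched directly; the dictionary keys naming the tasks are invented for illustration.

```python
def coverage_score(phrases, current_input):
    """Score = number of the standard input's phrases found in the
    current input information."""
    return sum(1 for p in phrases if p in current_input)

# Standard input information per dialogue task (illustrative names).
standards = {
    "weather_in_beijing": ["Beijing", "weather"],   # first standard input
    "weather_generic":    ["weather"],              # second standard input
    "shenzhen_task":      ["Shenzhen"],             # third standard input
}

def best_match(current_input):
    """Return the standard input with the highest similarity score."""
    return max(standards, key=lambda k: coverage_score(standards[k], current_input))

print(best_match("what is the weather in Beijing tomorrow"))
```

This picks the target input information of S740; ties would need a tie-breaking rule the patent does not specify.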
And S740, acquiring the standard input information with the highest similarity score with the current input information from the plurality of standard input information as the target input information.
As an example, when only the first standard input information, the second standard input information, and the third standard input information are included in the plurality of standard input information, the first standard input information may be used as the target input information.
S750, a dialog task corresponding to the target input information is acquired from the plurality of dialog tasks as a target dialog task.
As an example, if the conversation task corresponding to the first standard input information is a conversation task of a weather query, the conversation task of the weather query is taken as a target conversation task.
S760, receiving first dialogue information in a first task scene in the target dialogue task, and triggering to enter a second task scene based on the first dialogue information, wherein the target dialogue task comprises the first task scene and the second task scene.
And S770, if the first session information triggers a specified event in the second task scene, triggering to enter the first task scene from the second task scene.
S780, receiving the second session information in the first task scenario, and triggering to enter the second task scenario based on the second session information.
And S790, if the second dialogue information does not trigger the specified event in the second task scene, obtaining a dialogue result based on the second dialogue information, and outputting the dialogue result.
The embodiments of S760 to S790 may refer to S110 to S140, and therefore are not described herein.
In the embodiment, the standard input information closest to the current input information can be found according to the similarity between the current input information and the standard input information, and the conversation task corresponding to the standard input information is used as the target conversation task, so that the target conversation task can be accurately hit.
Eighth embodiment
Referring to fig. 12, fig. 12 is a schematic flowchart illustrating a dialog method based on scene switching according to an embodiment of the present application. The method may comprise the steps of:
S810, acquiring current input information.
And S820, if the running conversation task is detected, acquiring the running conversation task as the current conversation task.
In some implementations, the conversation robot can detect whether a dialogue task is running and, if so, determine which dialogue task it is, e.g., a weather-inquiry dialogue task, and take that running task as the current dialogue task.
And S830, acquiring historical input information corresponding to the current conversation task.
In some embodiments, the conversation robot may retrieve, from a historical conversation database, the input information that occurred in the current conversation task and use it as the historical input information corresponding to the current conversation task.
And S840, if the historical input information is matched with the current input information, determining that the current conversation task is the target conversation task.
In some embodiments, the conversation robot may determine whether the historical input information includes the current input information; if so, it determines that the historical input information matches the current input information, and determines the currently running weather-query conversation task as the target conversation task.
In other embodiments, the conversation robot may obtain a similarity between the historical input information and the current input information, and determine that the historical input information matches the current input information when the similarity between the historical input information and the current input information exceeds a similarity threshold.
S850, receiving first dialogue information in a first task scene in the target dialogue task, and triggering to enter a second task scene based on the first dialogue information, wherein the target dialogue task comprises the first task scene and the second task scene.
S860, if the first dialogue information triggers a specified event in the second task scene, triggering to enter the first task scene from the second task scene.
S870, receiving second dialogue information in the first task scene, and triggering to enter the second task scene based on the second dialogue information.
And S880, if the second dialogue information does not trigger the specified event in the second task scene, obtaining a dialogue result based on the second dialogue information, and outputting the dialogue result.
For the specific implementation of S850 to S880, reference may be made to S110 to S140, which is not repeated herein.
In this embodiment, the conversation robot checks whether the running conversation task matches the current input information; if it does, that task is directly used as the target conversation task, which effectively improves dialogue efficiency.
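The two matching strategies described in this embodiment (containment, and similarity against a threshold) might look like this in outline; the threshold value and the string-ratio metric are assumptions for illustration.

```python
# Sketch of the history matching in S820-S840: decide whether a running
# conversation task's history matches the current input, trying the
# containment strategy first and the similarity strategy second.
from difflib import SequenceMatcher

SIMILARITY_THRESHOLD = 0.8   # assumed value; not specified in the text

def history_matches(history, current):
    # Strategy 1: the historical input information includes the current input.
    if any(current in h for h in history):
        return True
    # Strategy 2: similarity to some historical input exceeds the threshold.
    return any(
        SequenceMatcher(None, h, current).ratio() > SIMILARITY_THRESHOLD
        for h in history
    )
```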
In some embodiments, if the historical input information does not match the current input information, intention recognition is performed on the current input information to obtain an intention recognition result. Here, the intention recognition result may be obtained by inputting the current input information into an intention recognition model, and may be an intention such as "query weather", "book an air ticket", or "purchase goods". If the intention recognition result does not satisfy a preset condition, the current conversation task is determined to be the target conversation task. As an example, if the intention recognition model does not recognize the current input information, no intention recognition result exists, indicating that the intention of the current dialogue information is ambiguous, and it may be determined that the intention recognition result does not satisfy the preset condition. If the intention recognition result satisfies the preset condition, the conversation task corresponding to the intention recognition result is acquired as the target conversation task. As an example, when the intention recognition model recognizes the current input information, an intention recognition result exists, indicating that the intention of the current dialogue information is clear, and the corresponding conversation task may be found from a conversation task database according to the intention recognition result and used as the target conversation task.
In this embodiment, if the historical input information does not match the current input information, intention recognition is performed on the current input information to determine whether the intention is clear. If the intention is clear, the conversation task corresponding to the intention is found and used as the target conversation task; if the intention is unclear, the current conversation task is continued so that further dialogue information can be acquired. This improves the dialogue efficiency of the conversation robot.
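A minimal sketch of this fallback follows, with the intention recognition model replaced by a hypothetical keyword lookup; a real system would use a trained classifier, and the task table here is invented for illustration.

```python
# Sketch of the intention-recognition fallback: when history does not match,
# try to recognize an intent; an unclear intent keeps the current conversation
# task, while a clear intent selects the corresponding task from a task table.

INTENT_TO_TASK = {               # hypothetical conversation task database
    "query_weather": "weather_query_task",
    "book_air_ticket": "ticket_booking_task",
}

def recognize_intent(text):
    # Hypothetical stand-in for the intention recognition model.
    if "weather" in text:
        return "query_weather"
    if "ticket" in text:
        return "book_air_ticket"
    return None                  # no result: the intention is ambiguous

def choose_task(current_input, current_task):
    intent = recognize_intent(current_input)
    if intent is None:               # preset condition not met: keep current task
        return current_task
    return INTENT_TO_TASK[intent]    # clear intent: switch to its task
```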
Ninth embodiment
Referring to fig. 13, fig. 13 is a block diagram illustrating a dialog device based on scene switching according to an embodiment of the present application. The device 900 is applied to an electronic device with a display screen or another image output device; the electronic device may be, for example, a smart phone, a tablet computer, a projector, or a wearable intelligent terminal.
As will be explained below with reference to the block diagram of fig. 13, the dialog device 900 based on scene switching includes: a first dialogue information receiving module 910, a scene switching module 920, a second dialogue information receiving module 930, and a dialogue result output module 940. The first dialogue information receiving module 910 is configured to receive first dialogue information in a first task scenario of a target dialogue task, and trigger entry into a second task scenario based on the first dialogue information, where the target dialogue task includes the first task scenario and the second task scenario. The scene switching module 920 is configured to trigger entry into the first task scenario from the second task scenario if the first dialogue information triggers the specified event in the second task scenario. The second dialogue information receiving module 930 is configured to receive second dialogue information in the first task scenario, and trigger entry into the second task scenario based on the second dialogue information. The dialogue result output module 940 is configured to, if the second dialogue information does not trigger the specified event in the second task scenario, obtain a dialogue result based on the second dialogue information and output the dialogue result.
Optionally, the dialog result output module 940 includes: a target shared information acquisition unit and a conversation result acquisition unit. The target shared information acquiring unit is used for acquiring shared information corresponding to the second dialogue information from a shared platform as target shared information, wherein the shared platform comprises a plurality of pieces of shared information, and the plurality of pieces of shared information are obtained through a plurality of dialogue tasks except the target dialogue task. The conversation result acquisition unit is used for obtaining a conversation result based on the target sharing information.
Optionally, the sharing platform presets an effective range corresponding to each piece of shared information, and the conversation result acquisition unit includes:
and the effective range acquiring subunit is used for acquiring an effective range corresponding to the target shared information.
And a judgment subunit, configured to judge whether the valid range corresponding to the target shared information is within the specified range.
And a first conversation result obtaining subunit, configured to obtain a conversation result based on the target shared information if the valid range corresponding to the target shared information is within the specified range.
Optionally, the sharing platform presets an effective duration corresponding to each piece of shared information, and the conversation result acquisition unit includes:
and the effective duration obtaining subunit is used for obtaining the current time corresponding to the target sharing information and the effective duration corresponding to the target sharing information.
And the second conversation result acquisition subunit is used for acquiring a conversation result based on the target shared information if the current time is within the effective duration corresponding to the target shared information.
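The two validity checks performed by these subunits (effective range within the specified range, and current time within the effective duration) can be sketched together; the field names and the timestamp convention are assumptions for illustration.

```python
# Sketch of the shared-information checks: target shared information is used
# only if its effective range falls within the specified range and the
# current time is still within its effective duration.

def shared_info_usable(info, specified_range, now):
    in_range = info["valid_range"] in specified_range              # range check
    in_time = now <= info["created_at"] + info["valid_duration"]   # duration check
    return in_range and in_time

# Hypothetical shared information produced by another conversation task.
info = {"valid_range": "weather", "created_at": 0.0, "valid_duration": 3600.0}
```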
Optionally, the dialog apparatus 900 based on scene switching further includes:
and the current input information acquisition module is used for acquiring current input information.
And the target conversation task determining module is used for determining the target conversation task according to the current input information.
Optionally, the target conversation task determining module includes:
and the standard input information and conversation task acquisition unit is used for acquiring a plurality of standard input information and a plurality of conversation tasks, and the plurality of standard input information and the plurality of conversation tasks are in one-to-one correspondence.
And the similarity comparison unit is used for comparing the similarity of the current input information with the plurality of standard input information respectively to obtain similarity scores between the current input information and the plurality of standard input information respectively.
And a target input information determination unit configured to acquire, as target input information, standard input information having a highest similarity score with the current input information among the plurality of standard input information.
And a target dialogue task acquisition unit configured to acquire a dialogue task corresponding to the target input information from the plurality of dialogue tasks as a target dialogue task.
Optionally, the target conversation task determining module includes:
and the current conversation task acquiring unit is used for acquiring the running conversation task as the current conversation task if the running conversation task is detected.
And the historical input information acquisition unit is used for acquiring the historical input information corresponding to the current conversation task.
And the first target conversation task determining unit is used for determining that the current conversation task is the target conversation task if the historical input information is matched with the current input information.
Optionally, the target dialog task determination module further includes:
and the intention identification unit is used for identifying the intention of the current input information to obtain an intention identification result if the historical input information is not matched with the current input information.
And the second target conversation task determining unit is used for determining the current conversation task as the target conversation task if the intention recognition result does not meet the preset condition.
And the third target conversation task determining unit is used for acquiring a conversation task corresponding to the intention recognition result as the target conversation task if the intention recognition result meets the preset condition.
Optionally, the dialog result output module 940 includes:
and the initial dialogue result acquisition unit is used for acquiring an initial dialogue result corresponding to the second dialogue information.
And the third task scene triggering unit is used for triggering to enter a third task scene based on the initial dialog result.
And the conversation result determining unit is used for determining that the initial conversation result is the conversation result and outputting the conversation result if the confirmation information is received in the third task scene.
Optionally, the first task scenario is a word slot collection scenario, the second task scenario is an information selection scenario, and the third task scenario is an information confirmation scenario.
Optionally, the dialog apparatus 900 based on scene switching further includes: the device comprises a current input information acquisition module and a target conversation task determination module. The current input information acquisition module is used for acquiring current input information. And the target conversation task determining module is used for determining a target conversation task according to the current input information.
The dialog apparatus 900 based on scene switching provided in this embodiment of the application is used to implement the corresponding dialog method based on scene switching in the foregoing method embodiments, and has the beneficial effects of the corresponding method embodiments, and is not described herein again.
It can be clearly understood by those skilled in the art that the dialog device 900 based on scene switching according to the embodiment of the present application can implement each process in the foregoing method embodiment, and for convenience and brevity of description, the specific working processes of the device 900 and the module described above may refer to corresponding processes in the foregoing method embodiment, and are not described herein again.
In the embodiments provided in the present application, the coupling, direct coupling, or communication connection between the modules shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the device 900 and its modules may be in an electrical, mechanical, or other form.
In addition, each functional module in the embodiments of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
Tenth embodiment
Referring to fig. 14, a block diagram of an electronic device 1000 according to an embodiment of the present disclosure is shown. The electronic device 1000 may be an electronic device capable of running an application, such as a smart phone or a tablet computer. The electronic device 1000 in the present application may include one or more of the following components: a processor 1010, a memory 1020, and one or more applications, wherein the one or more applications may be stored in the memory 1020 and configured to be executed by the one or more processors 1010, the one or more programs configured to perform a method as described in the aforementioned method embodiments.
The processor 1010 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 1010 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like, wherein the CPU mainly handles the operating system, user interface, applications, and so on; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It can be understood that the modem may also be implemented as a separate communication chip instead of being integrated into the processor 1010.
The memory 1020 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 1020 may be used to store instructions, programs, code, sets of code, or sets of instructions. The memory 1020 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the foregoing method embodiments, and the like. The data storage area may store data created by the electronic device 1000 during use (e.g., phone book, audio and video data, chat log data), and the like.
Eleventh embodiment
Referring to fig. 15, a block diagram of a computer-readable storage medium according to an embodiment of the present application is shown. The computer-readable storage medium 1100 has stored therein program code that can be invoked by a processor to perform the methods described in the method embodiments above.
The computer-readable storage medium 1100 may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read-Only Memory), an EPROM, a hard disk, or a ROM. Optionally, the computer-readable storage medium 1100 includes a non-volatile computer-readable storage medium. The computer-readable storage medium 1100 has storage space for program code 1110 for performing any of the method steps of the methods described above. The program code may be read from or written into one or more computer program products. The program code 1110 may, for example, be compressed in a suitable form.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications and replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (15)

1. A method for dialog based on scene change, the method comprising:
receiving first dialogue information under a first task scene in a target dialogue task, and triggering to enter a second task scene based on the first dialogue information, wherein the target dialogue task comprises the first task scene and the second task scene;
if the first dialogue information triggers a specified event in the second task scene, triggering from the second task scene to enter the first task scene;
receiving second dialogue information under the first task scene, and triggering to enter the second task scene based on the second dialogue information;
and if the second dialogue information does not trigger the specified event in the second task scene, obtaining a dialogue result based on the second dialogue information and outputting the dialogue result.
2. The method of claim 1, wherein obtaining the dialog result based on the second dialog information comprises:
acquiring shared information corresponding to the second dialogue information from a shared platform as target shared information, wherein the shared platform comprises a plurality of pieces of shared information, and the plurality of pieces of shared information are obtained through a plurality of dialogue tasks except the target dialogue task;
and obtaining the conversation result based on the target sharing information.
3. The method according to claim 2, wherein the sharing platform presets a valid range corresponding to each sharing information, and the obtaining the dialog result based on the target sharing information comprises:
acquiring an effective range corresponding to the target sharing information;
judging whether the effective range corresponding to the target shared information is in a specified range or not;
and if the effective range corresponding to the target sharing information is within the specified range, obtaining the conversation result based on the target sharing information.
4. The method according to claim 2, wherein the sharing platform presets an effective duration corresponding to each piece of sharing information, and the obtaining the dialog result based on the target sharing information comprises:
acquiring current time corresponding to the target sharing information and effective duration corresponding to the target sharing information;
and if the current time is within the effective duration corresponding to the target shared information, obtaining the conversation result based on the target shared information.
5. The method according to any one of claims 1 to 4, wherein before receiving first dialogue information in a first task scene in a target dialogue task and triggering to enter a second task scene based on the first dialogue information, the method further comprises:
acquiring current input information;
and determining a target conversation task according to the current input information.
6. The method of claim 5, wherein determining a target conversation task based on the current input information comprises:
acquiring a plurality of standard input information and a plurality of conversation tasks, wherein the plurality of standard input information and the plurality of conversation tasks are in one-to-one correspondence;
respectively carrying out similarity comparison on the current input information and the plurality of standard input information to obtain similarity scores between the current input information and the plurality of standard input information;
acquiring standard input information with the highest similarity score between the plurality of standard input information and the current input information as target input information;
and acquiring a conversation task corresponding to the target input information from the plurality of conversation tasks as the target conversation task.
7. The method of claim 5, wherein determining a target conversation task based on the current input information comprises:
if the running conversation task is detected, the running conversation task is obtained as the current conversation task;
acquiring historical input information corresponding to the current conversation task;
and if the historical input information is matched with the current input information, determining that the current conversation task is the target conversation task.
8. The method of claim 7, wherein determining a target conversation task based on the current input information further comprises:
if the historical input information is not matched with the current input information, performing intention identification on the current input information to obtain an intention identification result;
if the intention recognition result does not meet the preset condition, determining the current conversation task as the target conversation task;
and if the intention recognition result meets a preset condition, acquiring a conversation task corresponding to the intention recognition result as the target conversation task.
9. The method according to any one of claims 1 to 8, wherein the obtaining a dialog result based on the second dialog information and outputting the dialog result comprises:
acquiring an initial dialogue result corresponding to the second dialogue information;
triggering to enter a third task scene based on the initial dialog result;
and if the confirmation information is received in the third task scene, determining the initial conversation result as the conversation result, and outputting the conversation result.
10. The method of claim 9, wherein the first task scenario is a word slot collection scenario, the second task scenario is an information selection scenario, and the third task scenario is an information confirmation scenario.
11. A scene-switching based dialog device, comprising:
the device comprises a first dialogue information receiving module, a second dialogue information receiving module and a first dialogue processing module, wherein the first dialogue information receiving module is used for receiving first dialogue information in a first task scene in a target dialogue task and triggering to enter a second task scene based on the first dialogue information, and the target dialogue task comprises the first task scene and the second task scene;
a scene switching module, configured to trigger to enter the first task scene from the second task scene if the first dialogue information triggers a specified event in the second task scene;
a second dialogue information receiving module, configured to receive second dialogue information in the first task scene, and trigger to enter the second task scene based on the second dialogue information;
and a dialogue result output module, configured to obtain a dialogue result based on the second dialogue information and output the dialogue result if the second dialogue information does not trigger the specified event in the second task scene.
12. The dialog device of claim 11 wherein the dialog result output module comprises:
a target shared information acquiring unit, configured to acquire shared information corresponding to the second dialogue information from a sharing platform as target shared information, wherein the sharing platform comprises a plurality of pieces of shared information, and the plurality of pieces of shared information are obtained through a plurality of dialogue tasks other than the target dialogue task;
and the conversation result acquisition unit is used for obtaining the conversation result based on the target sharing information.
13. The scene-cut based dialog device of claim 11, further comprising:
the current input information acquisition module is used for acquiring current input information;
and the target conversation task determining module is used for determining a target conversation task according to the current input information.
14. An electronic device, comprising:
a memory;
one or more processors coupled with the memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications configured to perform the method of any of claims 1-10.
15. A computer-readable storage medium, having stored thereon program code that can be invoked by a processor to perform the method according to any one of claims 1 to 10.
CN202010286144.3A 2020-04-13 2020-04-13 Dialogue method and device based on scene switching, electronic equipment and storage medium Pending CN111488444A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010286144.3A CN111488444A (en) 2020-04-13 2020-04-13 Dialogue method and device based on scene switching, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN111488444A true CN111488444A (en) 2020-08-04

Family

ID=71811753

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010286144.3A Pending CN111488444A (en) 2020-04-13 2020-04-13 Dialogue method and device based on scene switching, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111488444A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115002274A (en) * 2022-05-07 2022-09-02 Oppo广东移动通信有限公司 Control method and device, electronic equipment and computer readable storage medium
CN115082134A (en) * 2022-08-23 2022-09-20 深圳市人马互动科技有限公司 Marketing method, device, system, equipment and medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107886948A (en) * 2017-11-16 2018-04-06 百度在线网络技术(北京)有限公司 Voice interactive method and device, terminal, server and readable storage medium storing program for executing
WO2018157700A1 (en) * 2017-03-02 2018-09-07 腾讯科技(深圳)有限公司 Method and device for generating dialogue, and storage medium
CN109299320A (en) * 2018-10-30 2019-02-01 上海智臻智能网络科技股份有限公司 A kind of information interacting method, device, computer equipment and storage medium
CN109887483A (en) * 2019-01-04 2019-06-14 平安科技(深圳)有限公司 Self-Service processing method, device, computer equipment and storage medium
CN109902163A (en) * 2019-02-28 2019-06-18 百度在线网络技术(北京)有限公司 A kind of intelligent response method, apparatus, equipment and storage medium
CN109992655A (en) * 2019-03-29 2019-07-09 深圳追一科技有限公司 Intelligent customer service method, apparatus, equipment and storage medium
CN110096579A (en) * 2019-04-23 2019-08-06 南京硅基智能科技有限公司 A kind of more wheel dialogue methods
CN110472030A (en) * 2019-08-08 2019-11-19 网易(杭州)网络有限公司 Man-machine interaction method, device and electronic equipment
CN110750626A (en) * 2018-07-06 2020-02-04 ***通信有限公司研究院 Scene-based task-driven multi-turn dialogue method and system


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination