CN116861861A - Text processing method and device, electronic equipment and storage medium - Google Patents

Text processing method and device, electronic equipment and storage medium Download PDF

Info

Publication number
CN116861861A
Authority
CN
China
Prior art keywords
text
user
expression
input
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310828752.6A
Other languages
Chinese (zh)
Inventor
沈星辰
赵力
范敏虎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baidu China Co Ltd
Original Assignee
Baidu China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Baidu China Co Ltd filed Critical Baidu China Co Ltd
Priority to CN202310828752.6A priority Critical patent/CN116861861A/en
Publication of CN116861861A publication Critical patent/CN116861861A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/166Editing, e.g. inserting or deleting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/332Query formulation
    • G06F16/3329Natural language query formulation or dialogue systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35Clustering; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482Interaction with lists of selectable items, e.g. menus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/166Editing, e.g. inserting or deleting
    • G06F40/186Templates
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Machine Translation (AREA)

Abstract

The disclosure provides a text processing method and apparatus, an electronic device, and a storage medium, relating to the field of artificial intelligence and in particular to natural language processing, deep learning, and related technologies. The implementation scheme is as follows: acquiring a first text input by a user in an input interface; judging, based on the user's behavior information in the input interface, whether the user has a target intent to optimize the expression of the first text; in response to determining that the user has the target intent, optimizing the expression of the first text to obtain a second text; and presenting the second text in the input interface.

Description

Text processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of artificial intelligence, and in particular, to the technical fields of natural language processing, deep learning, and the like, and in particular, to a text processing method and apparatus, an electronic device, a computer readable storage medium, and a computer program product.
Background
Artificial intelligence (AI) is the discipline that studies how to make computers simulate certain human thought processes and intelligent behaviors (e.g., learning, reasoning, thinking, and planning), and it spans both hardware-level and software-level techniques. Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, and the like; artificial intelligence software technologies mainly include computer vision, speech recognition, natural language processing, machine learning/deep learning, big data processing, and knowledge graph technologies, among others.
Text editing is a common function of applications. For example, a user may edit and publish an article through an application with a content-publishing function, or edit and send a message to others through an application with an instant-messaging function, and so on.
The approaches described in this section are not necessarily approaches that have been previously conceived or pursued. Unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, the problems mentioned in this section should not be considered as having been recognized in any prior art unless otherwise indicated.
Disclosure of Invention
The present disclosure provides a text processing method and apparatus, an electronic device, a computer readable storage medium, and a computer program product.
According to an aspect of the present disclosure, there is provided a text processing method including: acquiring a first text input by a user in an input interface; judging whether the user has a target intention for optimizing the expression mode of the first text or not based on the behavior information of the user in the input interface; in response to determining that the user has the target intent, optimizing the expression of the first text to obtain a second text, wherein the content of the second text is different from the content of the first text, and the semantics of the second text are the same as the semantics of the first text; and presenting the second text in the input interface.
According to an aspect of the present disclosure, there is provided a text processing apparatus including: the first acquisition module is configured to acquire a first text input by a user in the input interface; a judging module configured to judge whether the user has a target intention to optimize the expression of the first text based on behavior information of the user in the input interface; an optimization module configured to optimize an expression of the first text to obtain a second text in response to determining that the user has the target intent, wherein the content of the second text is different from the content of the first text and the semantics of the second text is the same as the semantics of the first text; and a first presentation module configured to present the second text in the input interface.
According to an aspect of the present disclosure, there is provided an electronic apparatus including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method described above.
According to an aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the above-described method.
According to an aspect of the present disclosure, there is provided a computer program product comprising computer program instructions which, when executed by a processor, implement the above-described method.
According to one or more embodiments of the present disclosure, text editing efficiency can be improved.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The accompanying drawings illustrate exemplary embodiments and, together with the description, serve to explain exemplary implementations of the embodiments. The illustrated embodiments are for exemplary purposes only and do not limit the scope of the claims. Throughout the drawings, identical reference numerals designate similar, but not necessarily identical, elements.
FIG. 1 illustrates a schematic diagram of an exemplary system in which various methods described herein may be implemented, in accordance with an embodiment of the present disclosure;
FIG. 2 illustrates a flow chart of a text processing method according to an embodiment of the present disclosure;
FIGS. 3A-3H illustrate schematic diagrams of an input interface according to an embodiment of the present disclosure;
FIG. 4 shows a schematic diagram of a text processing process according to an embodiment of the present disclosure;
FIG. 5 shows a block diagram of a text processing device according to an embodiment of the present disclosure; and
FIG. 6 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the present disclosure, the use of the terms "first," "second," and the like to describe various elements is not intended to limit the positional relationship, timing relationship, or importance relationship of the elements, unless otherwise indicated, and such terms are merely used to distinguish one element from another element. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, they may also refer to different instances based on the description of the context.
The terminology used in the description of the various illustrated examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, the elements may be one or more if the number of the elements is not specifically limited. Furthermore, the term "and/or" as used in this disclosure encompasses any and all possible combinations of the listed items. "plurality" means two or more.
In the technical solution of the present disclosure, the collection, storage, and use of any user personal information involved all comply with the relevant laws and regulations and do not violate public order and good customs.
When a user edits text, the same meaning can be conveyed in several different ways. Different expressions often convey different emotions and leave different impressions on readers. For example, suppose user A wants to decline a dinner invitation from user B. Expression 1, "Can't go, working overtime!", and expression 2, "So sorry, I have to work overtime tonight T_T", will leave user B with very different feelings. The tone of expression 1 is relatively stiff, and user B may feel unwelcome or even offended. Expression 2 is gentler in tone and conveys regret and apology with the emoticon T_T, so user B is more likely to feel sincerity and friendliness. Compared with expression 1, expression 2 reflects higher emotional intelligence and is more conducive to communicating with others.
To express their views accurately, give readers a good reading experience, and avoid misunderstanding, users may repeatedly ponder and revise the expression of their text. This is time-consuming and makes text editing inefficient.
To address the above problems, embodiments of the present disclosure provide a text processing method that can automatically identify a user's need to polish text and polish the text the user has input, thereby assisting the user in editing, reducing editing time, and improving text editing efficiency. Embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
Fig. 1 illustrates a schematic diagram of an exemplary system 100 in which various methods and apparatus described herein may be implemented, in accordance with an embodiment of the present disclosure. Referring to fig. 1, the system 100 includes one or more client devices 101, 102, 103, 104, 105, and 106, a server 120, and one or more communication networks 110 coupling the one or more client devices to the server 120. Client devices 101, 102, 103, 104, 105, and 106 may be configured to execute one or more applications.
In embodiments of the present disclosure, client devices 101, 102, 103, 104, 105, and 106, and server 120 may run one or more services or software applications that enable execution of text processing methods.
In some embodiments, server 120 may also provide other services or software applications, which may include non-virtual environments and virtual environments. In some embodiments, these services may be provided as web-based services or cloud services, for example, provided to users of client devices 101, 102, 103, 104, 105, and/or 106 under a software as a service (SaaS) model.
In the configuration shown in fig. 1, server 120 may include one or more components that implement the functions performed by server 120. These components may include software components, hardware components, or a combination thereof that are executable by one or more processors. A user operating client devices 101, 102, 103, 104, 105, and/or 106 may in turn utilize one or more client applications to interact with server 120 to utilize the services provided by these components. It should be appreciated that a variety of different system configurations are possible, which may differ from system 100. Accordingly, FIG. 1 is one example of a system for implementing the various methods described herein and is not intended to be limiting.
The client devices 101, 102, 103, 104, 105, and/or 106 may provide interfaces that enable a user of the client device to interact with the client device. The client device may also output information to the user via the interface. Although fig. 1 depicts only six client devices, those skilled in the art will appreciate that the present disclosure may support any number of client devices.
Client devices 101, 102, 103, 104, 105, and/or 106 may include various types of computer devices, such as portable handheld devices, general purpose computers (such as personal computers and laptop computers), workstation computers, wearable devices, smart screen devices, self-service terminal devices, service robots, vehicle-mounted devices, gaming systems, thin clients, various messaging devices, sensors or other sensing devices, and the like. These computer devices may run various types and versions of software applications and operating systems, such as MICROSOFT Windows, Apple iOS, UNIX-like operating systems, or Linux and Linux-like operating systems; or various mobile operating systems, such as MICROSOFT Windows Mobile OS, iOS, Windows Phone, and Android. Portable handheld devices may include cellular telephones, smart phones, tablet computers, personal digital assistants (PDAs), and the like. Wearable devices may include head mounted displays (such as smart glasses) and other devices. The gaming system may include various handheld gaming devices, Internet-enabled gaming devices, and the like. The client device is capable of executing a variety of different applications, such as various Internet-related applications, communication applications (e.g., email applications), and Short Message Service (SMS) applications, and may use a variety of communication protocols.
Network 110 may be any type of network known to those skilled in the art that may support data communications using any of a number of available protocols, including but not limited to TCP/IP, SNA, IPX, etc. For example only, the one or more networks 110 may be a Local Area Network (LAN), an ethernet-based network, a token ring, a Wide Area Network (WAN), the internet, a virtual network, a Virtual Private Network (VPN), an intranet, an extranet, a blockchain network, a Public Switched Telephone Network (PSTN), an infrared network, a wireless network (e.g., bluetooth, wi-Fi), and/or any combination of these and/or other networks.
The server 120 may include one or more general purpose computers, special purpose server computers (e.g., PC (personal computer) servers, UNIX servers, midrange servers), blade servers, mainframe computers, server clusters, or any other suitable arrangement and/or combination. The server 120 may include one or more virtual machines running a virtual operating system, or other computing architectures involving virtualization (e.g., one or more flexible pools of logical storage devices that may be virtualized to maintain the virtual storage devices of the server). In various embodiments, the server 120 may run one or more services or software applications that provide the functionality described below.
The computing units in server 120 may run one or more operating systems including any of the operating systems described above as well as any commercially available server operating systems. Server 120 may also run any of a variety of additional server applications and/or middle tier applications, including HTTP servers, FTP servers, CGI servers, JAVA servers, database servers, etc.
In some implementations, server 120 may include one or more applications to analyze and consolidate data feeds and/or event updates received from users of client devices 101, 102, 103, 104, 105, and/or 106. Server 120 may also include one or more applications to display data feeds and/or real-time events via one or more display devices of client devices 101, 102, 103, 104, 105, and/or 106.
In some implementations, the server 120 may be a server of a distributed system or a server that incorporates a blockchain. The server 120 may also be a cloud server, or an intelligent cloud computing server or intelligent cloud host employing artificial intelligence technology. A cloud server is a host product in a cloud computing service system that overcomes the drawbacks of traditional physical hosts and virtual private server (VPS) services, namely high management difficulty and weak business scalability.
The system 100 may also include one or more databases 130. In some embodiments, these databases may be used to store data and other information. For example, one or more of databases 130 may be used to store information such as audio files and video files. Database 130 may reside in various locations. For example, the database used by the server 120 may be local to the server 120, or may be remote from the server 120 and may communicate with the server 120 via a network-based or dedicated connection. Database 130 may be of different types. In some embodiments, the database used by server 120 may be, for example, a relational database. One or more of these databases may store, update, and retrieve data in response to commands.
In some embodiments, one or more of databases 130 may also be used by applications to store application data. The databases used by the application may be different types of databases, such as key value stores, object stores, or conventional stores supported by the file system.
The system 100 of fig. 1 may be configured and operated in various ways to enable application of the various methods and apparatus described in accordance with the present disclosure.
For purposes of embodiments of the present disclosure, in the example of FIG. 1, client devices 101-106 may include applications with text editing functionality (i.e., text editing applications), such as shopping applications for editing and posting merchandise reviews, instant messaging applications for editing and sending messages, office applications for editing documents, and so on. Client devices 101-106 may also include input method applications. The input method application is invoked when the user edits text in a text editing application; the user inputs text into the text editing application by operating the input method application, thereby accomplishing text editing.
Accordingly, the server 120 may be a server corresponding to the input method application in the client device. The server 120 may include a service program that provides text input services to the user based on data (including word stock, expressions, symbols, etc.) stored in the database 130.
Fig. 2 shows a flow chart of a text processing method 200 according to an embodiment of the disclosure. The subject of execution of the various steps of method 200 is typically a client device, such as client devices 101-106 shown in FIG. 1. In some embodiments, the execution subject of method 200 may also be a server, such as server 120 shown in fig. 1.
As shown in fig. 2, the method 200 includes steps S210-S240.
In step S210, a first text input by a user in an input interface is acquired.
In step S220, it is determined whether the user has a target intention to optimize the expression of the first text based on the behavior information of the user in the input interface.
In step S230, in response to determining that the user has a target intention, the expression of the first text is optimized to obtain a second text. The content of the second text is different from the content of the first text and the semantics of the second text are the same as the semantics of the first text.
In step S240, a second text is presented in the input interface.
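Steps S210-S240 can be sketched in Python as follows. This is a minimal illustration only: `has_target_intent` and `optimize_expression` are hypothetical placeholders (the patent does not specify their implementations), and the thresholds and the softening rewrite are invented for the example.

```python
def has_target_intent(first_text, behavior):
    # Placeholder for step S220: a dwell-time-plus-length heuristic.
    # The threshold values here are illustrative, not from the claims.
    return behavior.get("dwell_time_s", 0) > 3 and len(first_text) > 10

def optimize_expression(first_text):
    # Placeholder for step S230: a real system would rewrite the text so
    # its content differs while its semantics stay the same; here we
    # merely prepend a softening phrase for illustration.
    return "So sorry, " + first_text

def process_text(first_text, behavior):
    """Sketch of method 200. S210 corresponds to the caller obtaining
    first_text; S240 to the caller presenting the returned second text."""
    if not has_target_intent(first_text, behavior):   # S220
        return None
    return optimize_expression(first_text)            # S230
```

The `None` return models the case where no target intent is detected, in which case nothing new is presented in the input interface.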
According to embodiments of the present disclosure, the user's need to polish the first text (i.e., the target intent) is automatically recognized based on the user's behavior information in the input interface; the first text is then automatically polished, and the polished second text is displayed for the user to select. This assists the user in editing text, reduces editing time, and improves text editing efficiency.
The steps of method 200 are described in detail below.
In embodiments of the present disclosure, an input interface refers to a text editing application interface that includes an input method application interface and a text input box. For example, the user uses the input method application to edit text in the text editing application, and then the current interface in the text editing application is the input interface. The text editing application may be any application having a text editing function, such as an instant messaging application, a shopping application, an office application, and the like. In the input interface, a user inputs corresponding first text into the text input box by operating in the input method application interface. In some embodiments, the first text may be entered by copy-and-paste or voice.
Fig. 3A shows a schematic diagram of an input interface according to an embodiment of the present disclosure. As shown in fig. 3A, the user currently performing text editing is user a, and the input interface is a chat interface with user B in the instant messaging application. The input interfaces include a text input box 302, an input method application interface 304, and a history chat window 306. The input method application interface 304 includes a plurality of keyboard buttons. User a enters a corresponding first text into text input box 302 by operating in input method application interface 304. By clicking the "send" button, the first text in text input box 302 may be sent to user B. After successful transmission, the first text will be displayed in the historic chat window 306.
In some cases, the expression of the first text entered by the user may not be good enough, so the user may have the target intent to optimize the expression of the first text, i.e., a need to polish the first text. In step S220, whether the user has this target intent may be determined based on the user's behavior information in the input interface.
According to some embodiments, the behavior information of the user in the input interface includes a dwell time after the user inputs the first text in the input interface. Accordingly, in step S220, it may be determined whether the user has a target intention to optimize the expression of the first text based at least on the stay time after the user inputs the first text.
According to this embodiment, whether the user intends to polish the text can be judged from the dwell time after the user inputs the first text, which improves intent recognition efficiency.
According to some embodiments, in response to the dwell time being greater than a first threshold (e.g., 3 s), it is determined that the user has a target intent to optimize the expression of the first text.
The dwell time after the user inputs the first text reflects the gap between finishing editing and sending (or posting) the text, so this embodiment can quickly identify the user's polishing intent. If the dwell time is long (greater than the first threshold), the user has not sent (or posted) the text promptly after entering it; the user may be pondering the first text during this period and thinking about how to revise it, so it can be determined that the user has the target intent to polish the first text. Conversely, if the dwell time is short (less than or equal to the first threshold), the first text was sent (or posted) soon after being entered, suggesting the user is confident in the wording; it can then be determined that the user does not have the target intent to polish the first text.
According to some embodiments, the behavior information of the user in the input interface further comprises information of the first text entered by the user, such as the length of the first text. Accordingly, a target intent of the user to optimize the expression of the first text may be determined in response to the dwell time being greater than a first threshold and the length of the first text being greater than a second threshold (e.g., 10). It is understood that the length of the first text refers to the number of characters included in the first text.
According to this embodiment, the user's polishing intent can be identified quickly and accurately. If the dwell time after the user inputs the first text is long and the first text is long (its length is greater than the second threshold), the user may be considering how to revise the first text, and the first text already expresses complete semantics; it can therefore be determined that the user has the target intent to polish the first text. Conversely, if the dwell time is short or the first text is short (its length is less than or equal to the second threshold), the user may be confident in the expression, or the first text may not yet express complete semantics and is thus not ready to be polished; it can then be determined that the user does not have the target intent to polish the first text.
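The combined dwell-time-and-length condition above can be expressed directly. The threshold constants below use the example values from the text (3 s and 10 characters) but are otherwise hypothetical.

```python
DWELL_THRESHOLD_S = 3.0   # example first threshold (seconds), per the text
LENGTH_THRESHOLD = 10     # example second threshold (number of characters)

def intent_from_dwell_and_length(dwell_time_s, first_text):
    # Target intent is inferred only when BOTH the dwell time and the
    # text length exceed their thresholds; a short dwell or a short text
    # means no polishing is attempted.
    return dwell_time_s > DWELL_THRESHOLD_S and len(first_text) > LENGTH_THRESHOLD
```

Both comparisons are strict, matching the text's "greater than" versus "less than or equal to" phrasing.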
According to some embodiments, a first set of semantic types associated with the target intent may be preset. The first set may include, for example, several semantic types with high demands on emotionally intelligent expression, such as declining others, thanking others, or declining overtime. The semantic type of the first text is determined by performing semantic recognition on the first text. In response to the semantic type of the first text belonging to the preset first set of semantic types, it is determined that the user has the target intent to polish the first text.
There are a number of ways to identify the semantic type of the first text. According to some embodiments, the first text may be input into a trained semantic recognition model to obtain a semantic type of the first text output by the semantic recognition model. It will be appreciated that the semantic recognition model is a classification model for implementing semantic classification of text that classifies a first text into one or more semantic types in a pre-set second set of semantic types. It should be noted that the first set of semantic types is a subset of the second set of semantic types. The number of semantic types with target intent included in the first set of semantic types is less than or equal to the number of semantic types included in the second set of semantic types. The semantic recognition model may be implemented, for example, as a neural network.
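The subset relation between the two semantic type sets, and the intent check based on the classifier's output, can be illustrated as follows. The type labels are hypothetical stand-ins for the examples in the text (declining others, thanking others, declining overtime).

```python
# Hypothetical semantic type labels. The first set (types carrying the
# target intent) must be a subset of the second set (all types the
# semantic recognition model can output).
SECOND_SET = {"decline_invitation", "thank_someone", "decline_overtime",
              "smalltalk", "ask_question"}
FIRST_SET = {"decline_invitation", "thank_someone", "decline_overtime"}
assert FIRST_SET <= SECOND_SET  # subset relation required by the text

def has_polish_intent(predicted_type):
    """True when the model's predicted semantic type falls in the first set."""
    return predicted_type in FIRST_SET
```

A real system would obtain `predicted_type` from the trained semantic recognition model described above.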
According to some embodiments, the similarity of the first text to each semantic type in the first set of semantic types may be calculated separately. In response to the maximum of these similarities being greater than a similarity threshold, it is determined that the first text belongs to the corresponding semantic type and that the user has a target intention to polish the first text.
According to some embodiments, the similarity of the first text to each semantic type in the second set of semantic types may be calculated separately, and the semantic type with the greatest similarity is determined as the semantic type of the first text. If that semantic type also belongs to the first set of semantic types, it is determined that the user has a target intention to polish the first text.
The similarity of the first text to a semantic type may be their literal similarity, such as the edit distance or the maximum number of consecutive matching characters, or the cosine distance of their embedding vectors. The embedding vectors of the first text and the semantic type may be derived, for example, by a trained text representation model: the first text or the semantic type is input into the text representation model, which outputs the corresponding embedding vector.
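Both families of similarity measures mentioned here can be sketched in a few lines; the implementations below are generic illustrations, not the concrete choices of the disclosure:

```python
import math

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance, one of the literal similarity measures mentioned."""
    dp = list(range(len(b) + 1))  # one row of the DP table
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            # deletion, insertion, substitution (cost 0 if characters match)
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def cosine_similarity(u, v) -> float:
    """Cosine similarity between two embedding vectors (1 - cosine distance)."""
    dot = sum(x * y for x, y in zip(u, v))
    norm_u = math.sqrt(sum(x * x for x in u))
    norm_v = math.sqrt(sum(y * y for y in v))
    return dot / (norm_u * norm_v)
```

In practice the embedding vectors would come from the trained text representation model; here any two equal-length numeric sequences can be compared.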
According to some embodiments, the above-described embodiment of identifying the target intention based on the dwell time after inputting the first text and the above-described embodiment of identifying the target intention using the preset first set of semantic types may be combined. For example, the semantic type of the first text may first be identified; the specific manner of identifying it may refer to the description above and is not repeated here. It is then determined that the user has a target intention to polish the first text in response to either of the following conditions being met:
Condition 1: the semantic type of the first text belongs to the preset first set of semantic types. As described above, the first set of semantic types is preset and comprises a plurality of semantic types with target intention.
Condition 2: the semantic type of the first text does not belong to the first set of semantic types, but the dwell time after the user inputs the first text is greater than the first threshold and the length of the first text is greater than the second threshold.
According to this embodiment, the user's text-polishing intention is identified by combining the semantics of the first text with the user's behavior information in the input interface, which can improve the accuracy of intention recognition.
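Conditions 1 and 2 above combine into a single decision, which can be sketched as follows (names, default thresholds, and the example semantic types are assumptions for illustration):

```python
def has_target_intent(semantic_type: str, first_set: set,
                      dwell_time_s: float, text_len: int,
                      first_threshold: float = 3.0,
                      second_threshold: int = 8) -> bool:
    # Condition 1: the semantic type belongs to the preset first set.
    if semantic_type in first_set:
        return True
    # Condition 2: fall back to the behavioral signals in the input interface.
    return dwell_time_s > first_threshold and text_len > second_threshold
```

Condition 1 short-circuits the check, so texts with a tactful-expression semantic type trigger polishing regardless of dwell time.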
According to some embodiments, it may be determined that the user has a target intention to polish the first text in response to the user's interactive operation on a target component in the input interface. According to this embodiment, the user can actively trigger text polishing through an interactive operation on the target component, thereby improving text editing efficiency.
For example, as shown in FIG. 3A, a target component 308 is included in the input interface. Clicking the target component 308 pops up tab 310 in the input interface, as shown in FIG. 3B. If the user selects the "high-EQ expression" option in tab 310, the user is considered to actively express a need to polish the first text, and it is determined that the user has a target intention to polish the first text. A dialog function panel (not shown in FIG. 3B) is then popped up in the input method application. The user can converse with the input method application in the dialog function panel to input an expression requirement, for example, "How do I gently decline a friend when I can't go to Disney because of overtime?". This expression requirement can serve as the first text. In step S230, the input method application obtains a response text corresponding to the expression requirement by calling the trained dialog model, and uses the response text as the optimized second text.
In step S230, in response to determining that the user has a target intention to polish the first text, the expression of the first text is optimized to obtain a second text. It should be noted that the optimization in step S230 refers to optimization of the text content that keeps the semantics unchanged. That is, the content of the second text differs from the content of the first text, but the semantics of the two are the same. In the embodiments of the present disclosure, two texts having the same semantics means that they belong to the same semantic type.
It will be appreciated that, in response to determining that the user does not have a target intention to polish the first text, the expression of the first text need not be optimized.
According to some embodiments, step S230 may include steps S232 and S234.
In step S232, the semantic type of the first text is identified.
In step S234, the expression of the first text is optimized based on the semantic type obtained in step S232 to obtain the second text.
According to this embodiment, the expression of the first text is optimized based on its semantic type, so that the semantics of the text remain unchanged before and after optimization and the user's meaning is not distorted.
According to some embodiments, step S234 may include step S2341.
In step S2341, the first text and its semantic type are input into the trained dialog model to obtain a second text that is output by the dialog model.
The dialog model is a large language model trained on a large-scale natural language corpus, which can understand what the user says and generate reasonable answers based on context. The dialog model may be, for example, GPT (Generative Pre-trained Transformer), ChatGPT, or a similar large language model.
For example, the first text in FIG. 3A, "I can't go, I have to work overtime tomorrow!", is concatenated with its semantic type "declining others". The concatenation result, "Declining others: I can't go, I have to work overtime tomorrow!", is input into the dialog model to obtain the second text output by the dialog model, e.g., "I'm so sorry, I can't make it because I have to work overtime."
According to this embodiment, directly inputting the first text and its semantic type into the dialog model to obtain the second text can improve text-polishing efficiency.
According to some embodiments, step S234 may include steps S2342-S2344.
In step S2342, a first query template corresponding to the semantic type of the first text is acquired. The first query template includes a first slot to be filled. The first query template is used to guide the trained dialog model to optimize the expression of text belonging to the semantic type.
In step S2343, at least the first text is filled into the first slot to obtain a first query text.
In step S2344, the first query text is entered into the dialog model to obtain a second text that is output by the dialog model.
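Steps S2342 and S2343 amount to a template lookup followed by slot filling. A minimal sketch, in which the template strings and the `"{text}"` slot notation are hypothetical stand-ins for the preset first query templates:

```python
# Hypothetical preset templates keyed by semantic type; "{text}" marks the first slot.
FIRST_QUERY_TEMPLATES = {
    "declining others": 'How to gently decline others? Optimize the expression of "{text}".',
    "thanking others": 'Please express the thanks in "{text}" more warmly.',
}

def build_first_query(first_text: str, semantic_type: str) -> str:
    template = FIRST_QUERY_TEMPLATES[semantic_type]  # step S2342: fetch the template
    return template.format(text=first_text)          # step S2343: fill the first slot

# Step S2344 would send the resulting first query text to the trained dialog model.
```

Keeping the templates per semantic type is what lets the same first text be polished differently depending on whether it declines, thanks, or apologizes.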
According to this embodiment, the first query template corresponding to the semantic type is used to guide the dialog model in optimizing the first text, which can improve the polishing effect on the first text.
The first query template corresponding to a semantic type is preset. There may be one or more first query templates; where there are several, at least one of them may be selected to generate the first query text.
For example, in the embodiment shown in FIG. 3A, the semantic type of the first text, "I can't go, I have to work overtime tomorrow!", is "declining others". The first query template corresponding to this semantic type may be, for example, "How to gently decline others? Optimize the expression of '____'.", "How to rewrite '____' into a tactful, high-EQ refusal?", "Please express the refusal '____' more gently.", etc. The underline in the first query template indicates the first slot to be filled.
A first query template is selected and the first text is filled into its first slot, generating the first query text "How to gently decline others? Optimize the expression of 'I can't go, I have to work overtime tomorrow!'."
The first query text "How to gently decline others? Optimize the expression of 'I can't go, I have to work overtime tomorrow!'." is input into the dialog model, and the answer text of the dialog model is obtained: "Thank you very much for your invitation, but I have to work overtime that day and cannot make it. I am really sorry." The answer text is taken as the second text.
According to some embodiments, the first text may be input by the user in response to historical chat content of a target contact. Accordingly, in step S2343, the first text and the historical chat content it responds to may both be filled into the first slots to obtain the first query text. According to this embodiment, the historical chat content enriches the semantics of the first text, so that the first query text contains more complete semantic information, which improves the polishing effect on the first text.
For example, in the embodiment shown in FIG. 3A, the first text, "I can't go, I have to work overtime tomorrow!", is user A's response to user B's historical chat content, "Dear, let's go to Disney together tomorrow". The first query template may be, for example, "How to gently decline (1)? Optimize the expression of (2).", where (1) is the first slot for filling in the historical chat content, and (2) is the second first slot for filling in the first text. Filling the first text and the historical chat content it responds to into the corresponding slots yields the first query text: "How to gently decline 'Dear, let's go to Disney together tomorrow'? Optimize the expression of 'I can't go, I have to work overtime tomorrow!'." This first query text is input into the dialog model, and the second text output by the dialog model is obtained: "Thank you so much for inviting me to Disney, but I have been very busy with work lately and need to work overtime, so I cannot join. I am really sorry. I would love to go with you, but the circumstances do not allow it right now. I hope you have a wonderful time!"
According to some embodiments, only the historical chat content responded to by the first text may be filled into the first slot to obtain the first query text. For example, the first query template may be "How to gently decline '____'?", where the first slot is used to fill in the historical chat content. Filling the historical chat content "Dear, let's go to Disney together tomorrow" into the first slot yields the first query text "How to gently decline 'Dear, let's go to Disney together tomorrow'?".
According to some embodiments, step S234 may include steps S2345-S2347.
In step S2345, a plurality of expression texts corresponding to the semantic types of the first text are acquired.
In step S2346, the similarity between the first text and any one of the plurality of expression texts is calculated.
In step S2347, at least one expression text having the highest similarity to the first text is determined as the second text.
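Steps S2345 to S2347 amount to ranking the preset expression texts by similarity and keeping the best ones. A minimal sketch, with a toy character-overlap measure standing in for the literal or embedding-based similarities the disclosure describes:

```python
def char_overlap(a: str, b: str) -> float:
    """Toy similarity: Jaccard overlap of character sets (illustrative only)."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / max(len(sa | sb), 1)

def pick_second_texts(first_text: str, expression_texts: list, k: int = 1) -> list:
    """Steps S2346-S2347: rank preset expression texts, keep the top k."""
    ranked = sorted(expression_texts,
                    key=lambda t: char_overlap(first_text, t), reverse=True)
    return ranked[:k]
```

Because the candidates are curated in advance, this retrieval route guarantees the quality of the second text at the cost of less variety than generation.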
According to this embodiment, determining the second text from a plurality of preset expression texts can improve the controllability and retrieval efficiency of the second text and guarantee its quality.
According to some embodiments, the plurality of expression texts corresponding to a semantic type are tactful, high-EQ texts that belong to that semantic type and can give the reader a good reading experience. It will be appreciated that the plurality of expression texts corresponding to each semantic type are preset.
According to some embodiments, in step S2346, the similarity of the first text and an expression text may be their literal similarity, such as the edit distance or the maximum number of consecutive matching characters, or the cosine distance of their embedding vectors. The embedding vectors of the first text and the expression text may be derived, for example, by a trained text representation model: the first text or the expression text is input into the text representation model, which outputs the corresponding embedding vector.
It will be appreciated that one or more second texts may be determined in step S2347.
The above steps S2341, S2342-S2344, and S2345-S2347 give three ways of generating the second text. According to some embodiments, one or more of the three ways may be selected to generate the second text. Where several ways are selected, in step S240 the second texts generated by each way may be aggregated and presented to the user through the input interface.
The user may select a second text presented in the input interface. In response to the user's selection operation, the first text in the current text input box is replaced with the selected second text. Further, the user may edit the second text in the current text input box through the input method application to obtain the target text for sending or publishing.
According to some embodiments, in step S240, the second text may be directly presented in the input interface without additional operations by the user.
According to further embodiments, in step S240, a prompt that the second text has been generated may be issued to the user, and the second text is displayed when a target interactive operation by the user in the input interface is detected.
According to some embodiments, step S240 may include steps S241-S243.
In step S241, a prompt for guiding the user to acquire the second text is generated based on the semantic type of the first text. According to some embodiments, the prompt may include a semantic type of the first text.
In step S242, a prompt component including the prompt is displayed in the input interface.
In step S243, the second text is presented in response to the user' S interactive operation with the alert component.
According to the above embodiment, the prompt component informs the user that the second text has been generated, and the second text is displayed only when the user operates the prompt component. The second text is thus shown only when the user needs it, avoiding the adverse effect on the user's text editing efficiency of displaying it unnecessarily.
For example, the expression of the first text shown in FIG. 3A, "I can't go, I have to work overtime tomorrow!", is optimized to obtain two second texts:
1. I'm very sorry, I can't make it because I have to work overtime.
2. Thank you so much for inviting me to Disney, but I have been very busy with work lately and need to work overtime, so I cannot join. I am really sorry. I would love to go with you, but the circumstances do not allow it right now. I hope you have a wonderful time!
The semantic type of the first text is "declining others", from which a prompt guiding the user to obtain the second text, "How to decline others tactfully", is generated, and a prompt component 312 containing the prompt is displayed in the input interface, as shown in FIG. 3C. After the user clicks the prompt component 312 in the input interface shown in FIG. 3C, the dialog function panel 313 pops up in the input method application, as shown in FIG. 3D. Area 314 in the dialog function panel 313 shows the two optimized second texts. The user may select a second text by clicking, double-clicking, long-pressing, etc. In response to the user's selection, the first text in text input box 302 is replaced with the selected second text, as shown in FIG. 3E.
According to some embodiments, the method 200 further comprises steps S250-S270.
In step S250, a demand text input by the user in the input interface is acquired. The demand text indicates the direction of optimization of the second text by the user.
In step S260, the second text is optimized based on the demand text to obtain a third text.
In step S270, a third text is presented in the input interface.
According to this embodiment, the user can express a directional optimization requirement for the second text by conversing with the input method application and then obtain the optimized third text, further improving text editing efficiency.
According to some embodiments, the demand text may be user-defined or selected by the user from a preset set of demand texts. The demand text indicates the user's optimization requirement for the second text. According to some embodiments, the demand text may indicate an expression style the user desires for the second text, such as "enthusiastic", "cute", "formal", and so on. According to other embodiments, the demand text may also indicate the user's requirement for the length, content, etc. of the second text, e.g., "make it shorter", "add emoticons", etc. According to some embodiments, the demand text may be entered by text or by speech.
According to some embodiments, the second text and the demand text may be input into a trained dialog model to obtain the third text output by the dialog model. According to this embodiment, directly inputting the second text and the demand text into the dialog model to obtain the third text can improve text editing efficiency.
According to some embodiments, step S260 may include steps S262-S266.
In step S262, a second query template is acquired. Wherein the second query template includes a second slot for filling in the second text and the demand text, the second query template for directing the trained dialog model to optimize the second text along an optimization direction indicated by the demand text.
For example, the second query template may be "Modify (1) to a (2) style", "Modify (1) to (2)", and so on. Here, (1) is the first second slot, used to fill in the second text, and (2) is the other second slot, used to fill in the demand text.
In step S264, the second text and the demand text are filled into the second slot to obtain a second query text.
In step S266, the second query text is input into the dialog model to obtain a third text output by the dialog model.
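Steps S262 to S266 mirror the first-template flow but with two slots. A minimal sketch, in which the template wording and names are illustrative assumptions:

```python
# Hypothetical second query template; the two placeholders are the two second slots.
SECOND_QUERY_TEMPLATE = 'Please modify "{second_text}" to be {demand}.'

def build_second_query(second_text: str, demand_text: str) -> str:
    # Step S264: fill both second slots to obtain the second query text.
    return SECOND_QUERY_TEMPLATE.format(second_text=second_text, demand=demand_text)

# Step S266 would send the result to the dialog model to obtain the third text.
```

The demand text steers the direction of the rewrite while the embedded second text fixes its content, so the model optimizes along the user's stated axis only.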
According to the embodiment, the second text is optimized by using the second query template to guide the dialogue model, so that the optimizing effect of the second text can be improved.
It should be noted that the user may conduct multiple rounds of dialog with the input method application in its dialog function panel to express multiple optimization requirements, i.e., to input multiple demand texts. Accordingly, the second text may be optimized multiple times according to the successive demand texts.
For example, as shown in FIGS. 3D and 3E, a dialog box 316, a voice input component 318, and a plurality of preset demand texts 320 are displayed in the dialog function panel 313. The preset demand texts 320 indicate expression styles of the text, including "formal", "enthusiastic", and "cute". The user may select one of the demand texts 320 as the target expression style of the second text, or may input a customized demand text into the dialog box 316 by text or voice.
As shown in FIG. 3F, the user selects the "enthusiastic" style among the preset demand texts 320. Accordingly, by calling the dialog model, the input method application optimizes the current second text, "Thank you so much for inviting me to Disney, but I have been very busy with work lately and need to work overtime, so I cannot join. I am really sorry. I would love to go with you, but the circumstances do not allow it right now. I hope you have a wonderful time!", based on the demand text "enthusiastic style", and obtains the optimized third text 322: "Thank you so much for inviting me to Disney with you, I am truly delighted! But my work has been very busy lately and I need to work overtime, so I cannot join, and I am really so sorry. I hope you have a super pleasant time!", as shown in FIG. 3G.
The user then enters the demand text "make it shorter" in dialog box 316. By calling the dialog model, the input method application takes the third text 322 as the second text currently to be optimized and, based on the current demand text "make it shorter", optimizes it to obtain the new third text 324: "I am really sorry, I cannot join the Disney trip. Work has been busy lately and I have to work overtime. Hope you have fun!", as shown in FIG. 3H.
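The multi-round flow just described can be sketched as a simple loop in which each round's output becomes the next round's input; the query wording and the stub model below are illustrative assumptions:

```python
from typing import Callable, Iterable

def refine(text: str, demand_texts: Iterable[str],
           dialog_model: Callable[[str], str]) -> str:
    """Apply each demand text in turn; the latest output is re-optimized next round."""
    for demand in demand_texts:
        query = f'Please modify "{text}" to be {demand}.'
        text = dialog_model(query)  # the round's answer is the next round's input
    return text

# Stub model for illustration; a real system calls the trained dialog model here.
result = refine("base text", ["more enthusiastic", "shorter"], lambda q: q)
```

With the identity stub, `result` simply accumulates the queries, which makes the chaining of rounds visible; a real dialog model would return a rewritten text each round instead.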
According to some embodiments, the user may initiate a dialog with the input method application through an interactive operation on the target component before entering the first text. The input method application then presents the dialog function panel to the user. The user can input an expression requirement in the dialog function panel, for example, "How do I gently decline a friend when I can't go to Disney because of overtime?". This expression requirement can serve as the first text. The input method application obtains a response text corresponding to the expression requirement by calling the trained dialog model and uses the response text as the optimized second text.
Further, the user may input an optimization requirement for the second text, i.e., a demand text, in the dialog function panel. The input method application obtains a response text corresponding to the demand text by calling the dialog model and uses it as the optimized third text.
Fig. 4 shows a schematic diagram of a text processing process 400 according to an embodiment of the disclosure. In the embodiment shown in fig. 4, the user may enter the dialog function panel in the input method application by means of passive or active touch. The user may then perform text optimization in the dialog function panel by conducting a dialog with the dialog model of the input method application.
As shown in fig. 4, the process of the user passively touching the dialog function panel is as follows:
in step S401, the user edits or copies the text to obtain a first text.
In step S402, intention recognition is performed on the first text, and it is determined whether the user has a target intention to optimize the expression of the first text.
In step S403, in response to the user having the target intent, a prompt component (e.g., prompt component 312 in fig. 3C) is generated in the input method application interface for guiding the user to obtain the optimized second text. The prompt component includes a prompt generated based on the semantic type of the first text.
In step S404, in response to the user' S interactive operation (e.g., click, long press, etc.) with the prompt component, the dialog function panel is opened.
The process of actively touching the dialogue function panel by the user is as follows:
in step S405, the user enters the dialog function panel by performing an interactive operation (e.g., clicking, long press, etc.) on a target component in the input method application interface (e.g., target component 308 in FIGS. 3A-3C). The user can perform a dialogue with the input method application in the dialogue function panel, actively input his own query as a first text in the dialogue box, and obtain a response text output by the dialogue model as a second text.
In step S406, the generated one or more second texts are presented in the dialog function panel (e.g., as shown in fig. 3D).
In step S407, the user may select a certain second text by a click operation and display the selection result in a text input box of the text editing application (for example, as shown in fig. 3E).
Steps S408-S412 may be further performed for the second text presented in the dialog function panel.
In step S408, in response to the user's interactive operation on a stop component in the dialog function panel, the text generation process is stopped, i.e., the call to the dialog model is stopped.
In step S409, in response to the additional text input by the user, by calling the dialogue model, the second text is modified based on the additional text, and a third text is generated.
In step S410, in response to the user's selection of a preset style (e.g., enthusiastic, cute, formal, etc.) in the dialog function panel, the second text is modified based on the preset style to generate a third text.
In step S411, in response to a customized requirement (e.g., "make it shorter", "add emoticons", etc.) entered by the user in the dialog function panel, the second text is modified based on that requirement to generate a third text.
In step S412, a new second text is generated by recalling the dialog model in response to user interaction with the refresh component in the dialog function panel.
According to an embodiment of the present disclosure, there is also provided a text processing apparatus. Fig. 5 shows a block diagram of a text processing device 500 according to an embodiment of the present disclosure. As shown in fig. 5, the apparatus 500 includes a first obtaining module 510, a judging module 520, a first optimizing module 530, and a first displaying module 540.
The first acquisition module 510 is configured to acquire a first text entered by a user in an input interface.
The determination module 520 is configured to determine whether the user has a target intention to optimize the expression of the first text based on the behavior information of the user in the input interface.
The first optimization module 530 is configured to optimize the expression of the first text to obtain the second text in response to determining that the user has a target intent. Wherein the content of the second text is different from the content of the first text and the semantics of the second text are the same as the semantics of the first text.
The first presentation module 540 is configured to present the second text in the input interface.
According to the embodiments of the present disclosure, the user's polishing requirement (i.e., target intention) for the first text is automatically recognized based on the user's behavior information in the input interface; the first text is then automatically polished, and the polished second text is presented to the user for selection. This assists the user in editing text, reduces the time the user spends editing, and improves text editing efficiency.
According to some embodiments, the determining module 520 is further configured to: and judging whether the user has the target intention or not at least based on the stay time after the user inputs the first text.
According to some embodiments, the determining module 520 is further configured to: in response to the dwell time being greater than a first threshold, it is determined that the user has the target intent.
According to some embodiments, the determining module 520 is further configured to: in response to the dwell time being greater than a first threshold and the length of the first text being greater than a second threshold, it is determined that the user has the target intent.
According to some embodiments, the determining module 520 includes: an identification unit configured to identify a semantic type of the first text; and a determining unit configured to determine that the user has the target intention in response to any one of the following conditions being satisfied: the semantic types belong to a preset first semantic type set, wherein the first semantic type set comprises a plurality of semantic types with target intents; or the semantic type does not belong to the first set of semantic types, the dwell time period is greater than a first threshold, and the length of the first text is greater than a second threshold.
According to some embodiments, the determining module 520 is further configured to: responsive to the user's interaction with a target component in the input interface, it is determined that the user has the target intent.
According to some embodiments, the first optimization module 530 includes: a first acquisition unit configured to acquire a semantic type of the first text; and an optimizing unit configured to optimize the expression of the first text based on the semantic type to obtain the second text.
According to some embodiments, the optimization unit is further configured to: the first text and the semantic type are input into a trained dialog model to obtain the second text output by the dialog model.
According to some embodiments, the optimization unit comprises: the first acquisition subunit is configured to acquire a first query template corresponding to the semantic type, wherein the first query template comprises a first slot to be filled, and the first query template is used for guiding a trained dialogue model to optimize the expression mode of the text belonging to the semantic type; a filling subunit configured to fill at least the first text into the first slot to obtain a first query text; and an input subunit configured to input the first query text into the dialog model to obtain the second text output by the dialog model.
According to some embodiments, the first text is entered by the user in response to historical chat content of the target contact, and wherein the filling subunit is further configured to: and filling the first text and the historical chat content of the first text response into the first slot so as to obtain the first query text.
According to some embodiments, the optimization unit comprises: the second acquisition subunit is configured to acquire a plurality of expression texts corresponding to the semantic types; a calculating subunit configured to calculate a similarity between the first text and any one of the plurality of expression texts, respectively; and a determining subunit configured to determine, as the second text, at least one expression text having the highest similarity to the first text.
According to some embodiments, the first presentation module 540 comprises: a generation unit configured to generate a prompt for guiding the user to acquire the second text based on the semantic type of the first text; a first display unit configured to display a prompt component containing the prompt in the input interface; and a second presentation unit configured to present the second text in response to an interactive operation of the prompt component by the user.
According to some embodiments, the apparatus 500 further comprises: the second acquisition module is configured to acquire a required text input by the user in the input interface, wherein the required text indicates the optimization direction of the user on the second text; the second optimizing module is configured to optimize the second text based on the required text to obtain a third text; and a second presentation module configured to present the third text in the input interface.
According to some embodiments, the second optimization module is further configured to: the second text and the required text are input into a trained dialogue model to obtain the third text output by the dialogue model.
According to some embodiments, the second optimization module comprises: a second obtaining unit configured to obtain a second query template, wherein the second query template includes a second slot for filling the second text and the demand text, and the second query template is used for guiding the trained dialogue model to optimize the second text along the optimization direction indicated by the demand text; a filling unit configured to fill the second text and the demand text into the second slot to obtain a second query text; and an input unit configured to input the second query text into the dialog model to obtain the third text output by the dialog model.
It should be appreciated that the various modules and units of the apparatus 500 shown in fig. 5 may correspond to the various steps of the method 200 described with reference to fig. 2. Thus, the operations, features and advantages described above with respect to the method 200 are equally applicable to the apparatus 500 and the modules and units included therein. For brevity, certain operations, features and advantages are not described again here.
Although specific functions are discussed above with reference to specific modules, it should be noted that the functions of the various modules discussed herein may be divided into multiple modules and/or at least some of the functions of the multiple modules may be combined into a single module.
It should also be appreciated that various techniques may be described herein in the general context of software and hardware elements or program modules. The various units described above with respect to fig. 5 may be implemented in hardware or in hardware combined with software and/or firmware. For example, the units may be implemented as computer program code/instructions configured to be executed by one or more processors and stored in a computer-readable storage medium. Alternatively, these units may be implemented as hardware logic/circuitry. For example, in some embodiments, one or more of the modules 510-540 may be implemented together in a system on chip (SoC). The SoC may include an integrated circuit chip (including one or more components of a processor (e.g., a central processing unit (CPU), microcontroller, microprocessor, digital signal processor (DSP), etc.), memory, one or more communication interfaces, and/or other circuitry), and may optionally execute received program code and/or include embedded firmware to perform functions.
There is also provided, in accordance with an embodiment of the present disclosure, an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor, the memory storing instructions executable by the at least one processor to enable the at least one processor to perform the text processing methods of the embodiments of the present disclosure.
According to an embodiment of the present disclosure, there is also provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the text processing method of the embodiment of the present disclosure.
There is also provided, in accordance with an embodiment of the present disclosure, a computer program product comprising computer program instructions which, when executed by a processor, implement the text processing method of the embodiments of the present disclosure.
Referring to fig. 6, a block diagram of an electronic device 600 will now be described. The electronic device 600 may be the server or the client of the present disclosure and is an example of a hardware device to which aspects of the present disclosure may be applied. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile apparatuses, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing apparatuses. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the electronic device 600 includes a computing unit 601 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 602 or a computer program loaded from a storage unit 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the electronic device 600 can also be stored. The computing unit 601, ROM 602, and RAM 603 are connected to each other by a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
A number of components in the electronic device 600 are connected to the I/O interface 605, including: an input unit 606, an output unit 607, a storage unit 608, and a communication unit 609. The input unit 606 may be any type of device capable of inputting information to the electronic device 600; it may receive input numeric or character information and generate key signal inputs related to user settings and/or function control of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touch screen, a trackpad, a trackball, a joystick, a microphone, and/or a remote control. The output unit 607 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, video/audio output terminals, vibrators, and/or printers. The storage unit 608 may include, but is not limited to, magnetic disks and optical disks. The communication unit 609 allows the electronic device 600 to exchange information/data with other devices over a computer network, such as the internet, and/or various telecommunications networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers, and/or chipsets, such as Bluetooth devices, 802.11 devices, Wi-Fi devices, WiMAX devices, cellular communication devices, and/or the like.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 601 performs the various methods and processes described above, such as method 200. For example, in some embodiments, the method 200 may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 600 via the ROM 602 and/or the communication unit 609. One or more of the steps of the method 200 described above may be performed when a computer program is loaded into RAM 603 and executed by the computing unit 601. Alternatively, in other embodiments, computing unit 601 may be configured to perform method 200 by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), a middleware component (e.g., an application server), or a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include local area networks (LANs), wide area networks (WANs), the internet, and blockchain networks.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that steps may be reordered, added, or deleted using the various forms of flow shown above. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, as long as the desired results of the technical solutions of the present disclosure can be achieved; no limitation is imposed herein.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the foregoing methods, systems, and apparatus are merely illustrative embodiments or examples, and that the scope of the present disclosure is not limited by these embodiments or examples but only by the granted claims and their equivalents. Various elements of the embodiments or examples may be omitted or replaced with equivalents. Further, the steps may be performed in an order different from that described in the present disclosure, and various elements of the embodiments or examples may be combined in various ways. Importantly, as technology evolves, many of the elements described herein may be replaced by equivalent elements that appear after the present disclosure.

Claims (33)

1. A text processing method, comprising:
acquiring a first text input by a user in an input interface;
determining, based on behavior information of the user in the input interface, whether the user has a target intent to optimize the expression of the first text;
in response to determining that the user has the target intent, optimizing the expression of the first text to obtain a second text, wherein the content of the second text is different from the content of the first text, and the semantics of the second text are the same as the semantics of the first text; and
presenting the second text in the input interface.
2. The method of claim 1, wherein the determining whether the user has a target intent to optimize the expression of the first text based on behavior information of the user in the input interface comprises:
determining whether the user has the target intent based at least on a dwell time after the user inputs the first text.
3. The method of claim 2, wherein the determining whether the user has the target intent based at least on a dwell time after the user entered the first text comprises:
determining that the user has the target intent in response to the dwell time being greater than a first threshold.
4. The method of claim 2, wherein the determining whether the user has the target intent based at least on a dwell time after the user entered the first text comprises:
determining that the user has the target intent in response to the dwell time being greater than a first threshold and a length of the first text being greater than a second threshold.
5. The method of claim 2, wherein the determining whether the user has the target intent based at least on a dwell time after the user entered the first text comprises:
identifying a semantic type of the first text; and
determining that the user has the target intent in response to any of the following conditions being met:
the semantic type belongs to a preset first semantic type set, wherein the first semantic type set comprises a plurality of semantic types with the target intent; or
the semantic type does not belong to the first semantic type set, the dwell time is greater than a first threshold, and a length of the first text is greater than a second threshold.
6. The method of claim 1, wherein the determining whether the user has a target intent to optimize the expression of the first text based on behavior information of the user in the input interface comprises:
determining that the user has the target intent in response to an interactive operation of the user on a target component in the input interface.
7. The method of any of claims 1-6, wherein optimizing the expression of the first text to obtain a second text comprises:
acquiring a semantic type of the first text; and
optimizing the expression of the first text based on the semantic type to obtain the second text.
8. The method of claim 7, wherein optimizing the expression of the first text based on the semantic type to obtain the second text comprises:
inputting the first text and the semantic type into a trained dialogue model to obtain the second text output by the dialogue model.
9. The method of claim 7, wherein optimizing the expression of the first text based on the semantic type to obtain the second text comprises:
acquiring a first query template corresponding to the semantic type, wherein the first query template comprises a first slot to be filled, and the first query template is used for guiding a trained dialogue model to optimize the expression of text belonging to the semantic type;
filling at least the first text into the first slot to obtain a first query text; and
inputting the first query text into the dialogue model to obtain the second text output by the dialogue model.
10. The method of claim 9, wherein the first text is input by the user in response to historical chat content of a target contact, and wherein filling at least the first text into the first slot to obtain the first query text comprises:
filling the first text and the historical chat content to which the first text responds into the first slot to obtain the first query text.
11. The method of claim 7, wherein optimizing the expression of the first text based on the semantic type to obtain the second text comprises:
acquiring a plurality of expression texts corresponding to the semantic type;
calculating a similarity between the first text and each of the plurality of expression texts; and
determining, as the second text, at least one expression text having the highest similarity to the first text.
12. The method of any of claims 1-11, wherein presenting the second text in the input interface comprises:
generating a prompt for guiding the user to acquire the second text based on the semantic type of the first text;
displaying a prompt component containing the prompt in the input interface; and
displaying the second text in response to an interactive operation of the user on the prompt component.
13. The method of any of claims 1-12, further comprising:
acquiring a required text input by the user in the input interface, wherein the required text indicates an optimization direction of the user for the second text;
optimizing the second text based on the required text to obtain a third text; and
displaying the third text in the input interface.
14. The method of claim 13, wherein the optimizing the second text based on the demand text to obtain a third text comprises:
inputting the second text and the required text into a trained dialogue model to obtain the third text output by the dialogue model.
15. The method of claim 13, wherein the optimizing the second text based on the demand text to obtain a third text comprises:
obtaining a second query template, wherein the second query template comprises a second slot for filling the second text and the required text, and the second query template is used for guiding a trained dialogue model to optimize the second text along an optimization direction indicated by the required text;
filling the second text and the required text into the second slot to obtain a second query text; and
inputting the second query text into the dialogue model to obtain the third text output by the dialogue model.
16. A text processing apparatus, comprising:
a first acquisition module configured to acquire a first text input by a user in an input interface;
a determination module configured to determine, based on behavior information of the user in the input interface, whether the user has a target intent to optimize the expression of the first text;
a first optimization module configured to optimize the expression of the first text to obtain a second text in response to determining that the user has the target intent, wherein the content of the second text is different from the content of the first text, and the semantics of the second text are the same as the semantics of the first text; and
a first presentation module configured to present the second text in the input interface.
17. The apparatus of claim 16, wherein the determination module is further configured to:
determine whether the user has the target intent based at least on a dwell time after the user inputs the first text.
18. The apparatus of claim 17, wherein the determination module is further configured to:
determine that the user has the target intent in response to the dwell time being greater than a first threshold.
19. The apparatus of claim 17, wherein the determination module is further configured to:
determine that the user has the target intent in response to the dwell time being greater than a first threshold and a length of the first text being greater than a second threshold.
20. The apparatus of claim 17, wherein the determination module comprises:
an identification unit configured to identify a semantic type of the first text; and
a determination unit configured to determine that the user has the target intent in response to any one of the following conditions being satisfied:
the semantic type belongs to a preset first semantic type set, wherein the first semantic type set comprises a plurality of semantic types with the target intent; or
the semantic type does not belong to the first semantic type set, the dwell time is greater than a first threshold, and a length of the first text is greater than a second threshold.
21. The apparatus of claim 16, wherein the determination module is further configured to:
determine that the user has the target intent in response to an interactive operation of the user on a target component in the input interface.
22. The apparatus of any of claims 16-21, wherein the first optimization module comprises:
a first acquisition unit configured to acquire a semantic type of the first text; and
an optimizing unit configured to optimize the expression of the first text based on the semantic type to obtain the second text.
23. The apparatus of claim 22, wherein the optimization unit is further configured to:
input the first text and the semantic type into a trained dialogue model to obtain the second text output by the dialogue model.
24. The apparatus of claim 22, wherein the optimizing unit comprises:
a first acquisition subunit configured to acquire a first query template corresponding to the semantic type, wherein the first query template comprises a first slot to be filled, and the first query template is used for guiding a trained dialogue model to optimize the expression of text belonging to the semantic type;
a filling subunit configured to fill at least the first text into the first slot to obtain a first query text; and
an input subunit configured to input the first query text into the dialogue model to obtain the second text output by the dialogue model.
25. The apparatus of claim 24, wherein the first text is input by the user in response to historical chat content of a target contact, and wherein the filling subunit is further configured to:
fill the first text and the historical chat content to which the first text responds into the first slot to obtain the first query text.
26. The apparatus of claim 22, wherein the optimizing unit comprises:
a second acquisition subunit configured to acquire a plurality of expression texts corresponding to the semantic type;
a calculating subunit configured to calculate a similarity between the first text and each of the plurality of expression texts; and
a determining subunit configured to determine, as the second text, at least one expression text having the highest similarity to the first text.
27. The apparatus of any of claims 16-26, wherein the first presentation module comprises:
a generation unit configured to generate, based on the semantic type of the first text, a prompt for guiding the user to acquire the second text;
a first display unit configured to display a prompt component containing the prompt in the input interface; and
a second display unit configured to display the second text in response to an interactive operation of the user on the prompt component.
28. The apparatus of any of claims 16-27, further comprising:
a second acquisition module configured to acquire a required text input by the user in the input interface, wherein the required text indicates an optimization direction of the user for the second text;
a second optimization module configured to optimize the second text based on the required text to obtain a third text; and
a second presentation module configured to present the third text in the input interface.
29. The apparatus of claim 28, wherein the second optimization module is further configured to:
input the second text and the required text into a trained dialogue model to obtain the third text output by the dialogue model.
30. The apparatus of claim 28, wherein the second optimization module comprises:
a second obtaining unit configured to obtain a second query template, wherein the second query template includes a second slot for filling with the second text and the required text, and the second query template is used for guiding a trained dialogue model to optimize the second text along the optimization direction indicated by the required text;
a filling unit configured to fill the second text and the required text into the second slot to obtain a second query text; and
an input unit configured to input the second query text into the dialogue model to obtain the third text output by the dialogue model.
31. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-15.
32. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-15.
33. A computer program product comprising computer program instructions, wherein the computer program instructions, when executed by a processor, implement the method of any one of claims 1-15.
CN202310828752.6A 2023-07-06 2023-07-06 Text processing method and device, electronic equipment and storage medium Pending CN116861861A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310828752.6A CN116861861A (en) 2023-07-06 2023-07-06 Text processing method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN116861861A true CN116861861A (en) 2023-10-10

Family

ID=88218533



Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111832275A (en) * 2020-09-21 2020-10-27 北京百度网讯科技有限公司 Text creation method, device, equipment and storage medium
CN112163075A (en) * 2020-09-27 2021-01-01 北京乐学帮网络技术有限公司 Information recommendation method and device, computer equipment and storage medium
CN112328892A (en) * 2020-11-24 2021-02-05 北京百度网讯科技有限公司 Information recommendation method, device, equipment and computer storage medium
CN113360001A (en) * 2021-05-26 2021-09-07 北京百度网讯科技有限公司 Input text processing method and device, electronic equipment and storage medium
CN114548110A (en) * 2021-12-29 2022-05-27 北京百度网讯科技有限公司 Semantic understanding method and device, electronic equipment and storage medium
CN115658747A (en) * 2022-10-20 2023-01-31 深圳市汇川技术股份有限公司 Operation guiding method and device, electronic equipment and readable storage medium
CN115729549A (en) * 2022-11-30 2023-03-03 网易(杭州)网络有限公司 Method and device for generating user interaction interface, storage medium and electronic device
CN115840802A (en) * 2022-11-28 2023-03-24 蚂蚁财富(上海)金融信息服务有限公司 Service processing method and device
CN115964462A (en) * 2022-12-30 2023-04-14 北京百度网讯科技有限公司 Dialogue content processing method, and training method and device of dialogue understanding model
CN116028606A (en) * 2023-01-04 2023-04-28 西安电子科技大学 Human-machine multi-round dialogue rewriting method based on transform pointer extraction



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination