CN117424956A - Setting item processing method and device, electronic equipment and storage medium - Google Patents

Setting item processing method and device, electronic equipment and storage medium

Info

Publication number
CN117424956A
Authority
CN
China
Prior art keywords
text
processed
setting item
setting
prompt
Prior art date
Legal status
Pending
Application number
CN202311378419.6A
Other languages
Chinese (zh)
Inventor
陈科鑫
张晓帆
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202311378419.6A
Publication of CN117424956A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/30 Semantic analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/279 Recognition of textual entities
    • G06F 40/289 Phrasal analysis, e.g. finite state techniques or chunking
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M 1/7243 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M 1/72436 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for text messaging, e.g. short messaging services [SMS] or e-mails
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72448 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Machine Translation (AREA)

Abstract

The embodiment of the application discloses a setting item processing method, a setting item processing device, electronic equipment and a storage medium. The method comprises the following steps: acquiring a text to be processed; acquiring a text analysis result corresponding to the text to be processed; determining a target prompt word corresponding to the text to be processed based on the text to be processed, the text analysis result and a tree-shaped setting knowledge base structure corresponding to the electronic equipment; inputting the text to be processed and the target prompt word into a pre-trained setting item recommendation model, and acquiring the recommended setting item corresponding to the text to be processed output by the setting item recommendation model; and executing the recommended setting item to finish the setting corresponding to the recommended setting item. According to the method, the target prompt word of the text to be processed is determined based on the tree-shaped setting knowledge base structure and the text analysis result, which helps the setting item recommendation model better understand the user's intention and provide more accurate, humanized and intelligent service for the user.

Description

Setting item processing method and device, electronic equipment and storage medium
Technical Field
The application belongs to the technical field of intelligent voice, and particularly relates to a setting item processing method, a setting item processing device, electronic equipment and a storage medium.
Background
With the continuous development of science and technology, the smart phone has become an important part of modern life and can help us complete almost any task, such as communication, entertainment, work, daily life and learning. As the functions of the mobile phone become richer and richer, mobile phone manufacturers have launched smart phone voice assistants to make the phone easier to use; through voice recognition technology, a voice assistant can help the user complete various tasks such as making a call, sending a short message, querying the weather and setting a reminder. However, when some settings are completed through the smart phone voice assistant, the voice assistant may not accurately recognize the entered user description, so that the corresponding setting item cannot be accurately matched.
Disclosure of Invention
In view of the above, the present application proposes a setting item processing method, apparatus, electronic device, and storage medium to address the above problems.
In a first aspect, an embodiment of the present application provides a setting item processing method, which is applied to an electronic device, where the method includes: acquiring a text to be processed; acquiring a text analysis result corresponding to the text to be processed; determining a target prompt word corresponding to the text to be processed based on the text to be processed, the text analysis result and a tree-shaped setting knowledge base structure corresponding to the electronic equipment; inputting the text to be processed and the target prompt word into a pre-trained setting item recommendation model, and acquiring a recommendation setting item corresponding to the text to be processed, which is output by the setting item recommendation model; and executing the recommended setting item to finish the setting corresponding to the recommended setting item.
In a second aspect, an embodiment of the present application provides a setting item processing apparatus, which is running in an electronic device, where the apparatus includes: the text acquisition unit is used for acquiring a text to be processed; the result acquisition unit is used for acquiring a text analysis result corresponding to the text to be processed; the determining unit is used for determining a target prompt word corresponding to the text to be processed based on the text to be processed, the text analysis result and a tree-shaped setting knowledge base structure corresponding to the electronic equipment; the output unit is used for inputting the text to be processed and the target prompt word into a pre-trained setting item recommendation model and obtaining a recommendation setting item corresponding to the text to be processed, which is output by the setting item recommendation model; and the execution unit is used for executing the recommended setting items to finish the setting corresponding to the recommended setting items.
In a third aspect, embodiments of the present application provide an electronic device including one or more processors and a memory; one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the methods described above.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having program code stored therein, wherein the above-described method is performed when the program code is run.
The embodiment of the application provides a setting item processing method, a setting item processing device, electronic equipment and a storage medium. Acquiring a text to be processed and a text analysis result corresponding to the text to be processed, determining a target prompt word corresponding to the text to be processed based on the text to be processed, the text analysis result and a tree-shaped setting knowledge base structure corresponding to the electronic equipment, inputting the text to be processed and the target prompt word into a pre-trained setting item recommendation model, acquiring a recommendation setting item corresponding to the text to be processed, which is output by the setting item recommendation model, and finally executing the recommendation setting item to complete setting corresponding to the recommendation setting item. By the method, the setting items are managed through the tree-shaped setting knowledge base structure, and the target prompt words of the text to be processed are determined based on the tree-shaped setting knowledge base structure and the text analysis result, so that the setting item recommendation model can be helped to better understand the intention of the user, and more accurate, humanized and intelligent service is provided for the user.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a method for processing setting items according to an embodiment of the present application;
FIG. 2 is a flowchart of a method for setting item processing according to another embodiment of the present application;
FIG. 3 is a schematic diagram showing a process of outputting text analysis results according to another embodiment of the present application;
FIG. 4 is a flowchart of a method for setting item processing according to still another embodiment of the present application;
FIG. 5 is a schematic diagram of a tree-like configuration knowledge base in accordance with a further embodiment of the present application;
fig. 6 is a schematic diagram illustrating the generation principle of the tree-like setting knowledge base structure in step S330-step S370 according to another embodiment of the present application.
FIG. 7 is a schematic diagram of a preset hint word template according to yet another embodiment of the present application;
FIG. 8 is a flowchart of a setting item processing method according to still another embodiment of the present application;
FIG. 9 shows a schematic view of a scenario in a further embodiment of the present application;
FIG. 10 shows a schematic structural diagram of a process application described in steps S410-S480 in a further embodiment of the present application;
fig. 11 is a block diagram showing a configuration of a setting item processing apparatus according to an embodiment of the present application;
fig. 12 shows a block diagram of an electronic device for executing the setting item processing method according to the embodiment of the present application in the embodiment of the present application;
fig. 13 shows a storage unit for storing or carrying program codes for implementing the setting item processing method according to the embodiment of the present application in the embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
With the increasing maturity of voice interaction technology, the application scenarios of voice assistants are becoming wider and wider. A voice assistant can conduct intelligent dialogue and instant question-and-answer interaction with the user, and can also recognize the user's voice commands so that the intelligent terminal executes the events corresponding to those commands. Taking a mobile phone as an example of an intelligent terminal, when a user wants to set the phone volume, the user only needs to say to the phone "adjust the volume to 50 percent"; if the voice assistant receives and recognizes the voice command input by the user, the phone can immediately execute the command and help the user complete the setting. In this way, the user does not need to adjust the phone volume manually, and the phone can be used more conveniently and quickly.
The principle of such mobile phone setting is as follows. First, the user description converted into text is processed, and the central description word in the user description is extracted; for example, for "phone, please help me set it to mute", the center word is "set to mute". Meanwhile, the related setting scheme designs a setting item knowledge base, which stores the description words of the currently supported setting items, such as "mobile phone mute", "open flashlight", and the like. The voice assistant then matches the center word against the setting item knowledge base with a conventional text matching algorithm to obtain a matching result and executes it. Because the foregoing method extracts the center word with rules, the center word may not be extracted accurately for some complicated user descriptions, resulting in a failed match or a wrong setting item being matched. For example, if the user gives a very colloquial description, the voice assistant cannot accurately recognize and execute it.
Accordingly, the inventors propose a setting item processing method, apparatus, electronic device, and storage medium in the present application. Acquiring a text to be processed and a text analysis result corresponding to the text to be processed, determining a target prompt word corresponding to the text to be processed based on the text to be processed, the text analysis result and a tree-shaped setting knowledge base structure corresponding to the electronic equipment, inputting the text to be processed and the target prompt word into a pre-trained setting item recommendation model, acquiring a recommendation setting item corresponding to the text to be processed, which is output by the setting item recommendation model, and finally executing the recommendation setting item to complete setting corresponding to the recommendation setting item. By the method, the setting items are managed through the tree-shaped setting knowledge base structure, and the target prompt words of the text to be processed are determined based on the tree-shaped setting knowledge base structure and the text analysis result, so that the setting item recommendation model can be helped to better understand the intention of the user, and more accurate, humanized and intelligent service is provided for the user.
Embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Referring to fig. 1, a method for processing a setting item, provided in an embodiment of the present application, is applied to an electronic device, and includes:
step S110: and acquiring a text to be processed.
In the embodiment of the application, the text to be processed is a description related to a setting item input by a user and used for setting the electronic device, for example, the text to be processed may be "volume of the electronic device is adjusted to the maximum".
As one way, an intelligent voice assistant is provided in the electronic device, so when the user needs to make some settings on the electronic device, the intelligent voice assistant can convert the related description of the setting item input by the user into text, thereby obtaining the text to be processed. The intelligent voice assistant can collect the related description of the setting input by the user by calling the sound collection device in the electronic device.
Alternatively, the text to be processed is acquired in response to a setting instruction. The setting instruction is an instruction triggered when the intelligent voice assistant is started. When it is detected that the intelligent voice assistant is started, it can be determined that the setting instruction is triggered, and acquisition of the text to be processed begins.
Step S120: and obtaining a text analysis result corresponding to the text to be processed.
In the embodiment of the application, the text analysis result refers to the analysis result corresponding to the text to be processed that is output by a pre-trained large language model. The pre-trained large language model can understand the fuzzy semantics of the user even when the user does not know how to describe the setting item functions.
It is appreciated that the large language model (Large Language Model, LLM) in embodiments of the present application is an artificial intelligence technique that simulates human thinking patterns and language abilities through extensive data training. Large language models typically employ deep learning techniques to learn complex relationships and patterns in data through a multi-layer neural network. During the training process, the large language model continually adjusts its network parameters to better fit the data. A large language model has strong logical analysis capability and language understanding capability, can deeply analyze complex problems and give reasonable solutions. Compared with traditional rule-based approaches, large language models can better handle fuzziness, uncertainty and complexity. In addition, a large language model has good generalization capability and quickly adapts to new fields and new tasks. Interaction with the large language model is natural language interaction: a corresponding prompt word (prompt) is prepared to ask the large language model a question, and the large language model then answers according to the prompt word.
That is, in this embodiment of the present application, when the text to be processed is input into the pre-trained large language model, the prompt word corresponding to the text to be processed needs to be input, so that the large language model may output the text analysis result corresponding to the text to be processed according to the input corresponding prompt word.
Step S130: and determining target prompt words corresponding to the text to be processed based on the text to be processed, the text analysis result and the tree-shaped setting knowledge base structure corresponding to the electronic equipment.
In this embodiment of the present application, the tree-like setting repository structure corresponding to the electronic device is a tree-like organization structure, where the tree-like setting repository structure includes related setting items of all application programs in the electronic device and inheritance relationships between different setting items.
The target prompt word is the set prompt word input into the pre-trained set item recommendation model. The target prompt word is used for indicating how to output an output result meeting the requirements of the user by the setting item recommendation model.
When determining the target prompt words corresponding to the text to be processed based on the text to be processed, the text analysis result and the tree-shaped setting knowledge base structure corresponding to the electronic equipment, the text to be processed, the text analysis result and the tree-shaped setting knowledge base structure can be assembled according to a preset prompt word template to obtain the target prompt words corresponding to the text to be processed. The preset prompting word template is a preset template for assembling the target prompting word.
Step S140: inputting the text to be processed and the target prompt word into a pre-trained setting item recommendation model, and acquiring a recommendation setting item corresponding to the text to be processed, which is output by the setting item recommendation model.
In the embodiment of the application, the pre-trained setting item recommendation model is also a large language model. The recommended setting items are the setting items which are output by the setting item recommendation model and are related to the text to be processed. Wherein the number of recommended setting items may be at least one.
After the text to be processed and the target prompt word are obtained, the text to be processed and the target prompt word can be input into a pre-trained setting item recommendation model, so that the setting item recommendation model can recommend corresponding setting items according to natural language description of a user, namely the pre-trained setting item recommendation model can output corresponding setting items based on the input text to be processed and the target prompt word.
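Purely as an illustrative sketch of this step (not part of the disclosure), the call to the setting item recommendation model might be organized as follows; the function names and the assumption that the model returns one recommended setting item per line are hypothetical.

    # Hypothetical sketch of step S140. The recommendation model backend and its
    # output format are assumptions made only for illustration.
    from typing import List

    def query_recommendation_model(model_input: str) -> str:
        """Placeholder for the pre-trained setting item recommendation model."""
        raise NotImplementedError("Model backend is implementation-specific.")

    def recommend_setting_items(text_to_process: str, target_prompt: str) -> List[str]:
        model_input = f"{target_prompt}\n\nText to be processed: {text_to_process}"
        raw_output = query_recommendation_model(model_input)
        # Assume one recommended setting item per output line.
        return [line.strip() for line in raw_output.splitlines() if line.strip()]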
Step S150: and executing the recommended setting item to finish the setting corresponding to the recommended setting item.
In the embodiment of the application, after obtaining the recommended setting item, the electronic device may directly execute the setting instruction corresponding to the recommended setting item to complete the setting corresponding to the recommended setting item.
As one way, the recommended setting item is executed in response to a confirmation instruction. The confirmation instruction may be an instruction triggered when a preset operation acting on the recommended setting item is detected. The preset operation acting on the recommended setting item may be any one of a click operation, a slide operation, or a long press operation, and is not particularly limited herein.
According to the setting item processing method, a text to be processed and a text analysis result corresponding to the text to be processed are obtained, then a target prompt word corresponding to the text to be processed is determined based on the text to be processed, the text analysis result and a tree-shaped setting knowledge base structure corresponding to the electronic equipment, the text to be processed and the target prompt word are input into a pre-trained setting item recommendation model, a recommendation setting item corresponding to the text to be processed, which is output by the setting item recommendation model, is obtained, and finally the recommendation setting item is executed to complete setting corresponding to the recommendation setting item. By the method, the setting items are managed through the tree-shaped setting knowledge base structure, and the target prompt words of the text to be processed are determined based on the tree-shaped setting knowledge base structure and the text analysis result, so that the setting item recommendation model can be helped to better understand the intention of the user, and more accurate, humanized and intelligent service is provided for the user.
Referring to fig. 2, a method for processing a setting item, provided in an embodiment of the present application, is applied to an electronic device, and includes:
step S210: and acquiring a text to be processed.
Step S220: and acquiring a plurality of prompt words corresponding to the text to be processed.
It can be understood that a large language model can naturally understand the user's description. However, precisely because of this strong understanding capability, if it is not constrained, the large language model may generate content that goes beyond the analysis result of the text to be processed, and the generated result may not even be related to the current task. Therefore, corresponding prompt words need to be set to constrain the analysis result of the text to be processed generated by the large language model.
In the embodiment of the application, the plurality of prompt words are respectively used for instructing the large language model to output different analysis results of the text to be processed; that is, each prompt word is used to instruct the large language model to output one kind of analysis result of the text to be processed.
As one way, the intention analysis prompt word, the text analysis prompt word, the emotion analysis prompt word, the scene analysis prompt word and the personalized analysis prompt word corresponding to the text to be processed are obtained.
Wherein the intent analysis prompt is used for indicating the large language model to infer what task the user wants to complete; the text analysis prompt word is used for indicating the large language model to extract key information in the text to be processed; the emotion analysis prompt word is used for indicating the large language model to infer the current emotion state of the user; the scene analysis prompt word is used for indicating the large language model to infer the scene where the user is currently located; the personalized analysis prompt word is used for indicating the large language model to infer the personalized requirements of the user.
The multiple prompt words in the embodiment of the application all follow a standard single-round prompt word template. The single-round prompt word template is: "The background of the current task is xxx; the known user information is xxx; the query of the user is xxx; please answer the following question according to the known conditions: xxx". The "xxx" parts are the content that needs to be filled in according to the acquired text to be processed. For example, the prompt word may be set to: the background of the current task is to infer the user's intention; the known user information is "I want to turn off the phone sound"; the user's query is "I want to turn off the phone sound"; please answer the following question according to the known conditions: "What task does the user currently want to complete?".
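As a sketch only, the single-round prompt word template described above could be filled in programmatically as follows; the field names and the exact English wording of the template are assumptions made for illustration.

    # Hypothetical sketch of filling the standard single-round prompt word template.
    SINGLE_ROUND_TEMPLATE = (
        "The background of the current task is {background}; "
        "the known user information is {user_info}; "
        "the query of the user is {user_query}; "
        "please answer the following question according to the known conditions: {question}"
    )

    def fill_single_round_prompt(background: str, user_info: str,
                                 user_query: str, question: str) -> str:
        return SINGLE_ROUND_TEMPLATE.format(
            background=background,
            user_info=user_info,
            user_query=user_query,
            question=question,
        )

    # Example corresponding to the intent analysis prompt word above.
    intent_prompt = fill_single_round_prompt(
        background="to infer the user's intention",
        user_info='"I want to turn off the phone sound"',
        user_query='"I want to turn off the phone sound"',
        question="What task does the user currently want to complete?",
    )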
Step S230: inputting the text to be processed and the plurality of prompt words into a pre-trained text analysis model, and obtaining a text analysis result corresponding to the text to be processed, which is output by the text analysis model.
In this embodiment of the present application, the pre-trained text analysis model is the foregoing large language model, and is configured to output, according to a text to be processed and a plurality of prompt words, a text analysis result corresponding to the text to be processed.
As one way, the text analysis results include a user intent analysis result, a user text analysis result, a user emotion analysis result, a user scene analysis result, and a user individuation analysis result; inputting the text to be processed and the plurality of prompt words into a pre-trained text analysis model, and obtaining a text analysis result corresponding to the text to be processed, which is output by the text analysis model, wherein the text analysis result comprises: inputting the text to be processed and the intention analysis prompt word into the text analysis model to obtain a user intention analysis result output by the text analysis model; inputting the text to be processed and the text analysis prompt word into the text analysis model to obtain a user text analysis result output by the text analysis model; inputting the text to be processed and the emotion analysis prompt word into the text analysis model to obtain a user emotion analysis result output by the text analysis model; inputting the text to be processed and the scene analysis prompt word into the text analysis model to obtain a user scene analysis result output by the text analysis model; inputting the text to be processed and the personalized prompt word into the text analysis model, and obtaining a user personalized analysis result output by the text analysis model.
Wherein, user intent analysis: the text analysis model can analyze the prompt words according to the input text to be processed and the intention to infer what task the user wants to complete. For example: when the user speaks "I want to turn off the phone sound," the text analysis model may infer that the user wants to set the phone to mute.
User text analysis: The text analysis model can deeply analyze the text to be processed according to the input text to be processed and the text analysis prompt word, and extract the key information in the text to be processed. For example, when the user says "I want to turn off the phone sound", the text analysis model will extract the two keywords "turn off" and "phone sound".
User emotion analysis: the text analysis model can infer the current emotion state of the user according to the input text to be processed and emotion analysis prompt words. For example, when the user speaks "noisy", the text analysis model may infer that the user is currently in an anger state.
User scene analysis: the text analysis model can infer the current scene of the user according to the input text to be processed and the scene analysis prompt word. For example, when the user speaks "i am now at movie theatre", the text analysis model may infer that the user is currently in a scene of watching a movie.
User personalized analysis: the text analysis model can infer the personalized requirements of the user according to the input text to be processed and personalized analysis prompt words. For example: when the user says "me likes quiet", the text analysis model may infer that the user likes a quiet environment.
The five analysis results are output by the text analysis model according to the input text to be processed and the corresponding prompt words. Specifically, the text to be processed and the corresponding prompt words are respectively input to the text analysis model to carry out five questions, so that the text analysis model outputs the user intention analysis result, the user text analysis result, the user emotion analysis result, the user scene analysis result and the user personalized analysis result.
For example, as shown in fig. 3, a user query (corresponding to the aforementioned text to be processed) and a corresponding background prompt (corresponding to the aforementioned intent analysis prompt, text analysis prompt, emotion analysis prompt, scene analysis prompt, personalized analysis prompt, etc.) are input into a large language model, so that the large language model can output a user intent analysis result, a user text analysis result, a user emotion analysis result, a user scene analysis result, and a user personalized analysis result according to the input user query and corresponding background prompt.
Optionally, the text to be processed and the five prompt words in the embodiment of the application may be input into the text analysis model at the same time, and then the text analysis model may output five analysis results corresponding to the text to be processed at the same time.
Optionally, the text to be processed and the five prompt words may be respectively combined and sequentially input into the text analysis model, so that the text analysis model may sequentially output five analysis results corresponding to the text to be processed. The text to be processed and the five kinds of prompt words can be respectively combined, namely, the text to be processed and one kind of prompt word are respectively used as a combination and input into the text analysis model.
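A minimal sketch of step S230 is given below under stated assumptions: the five prompt words are paired with the text to be processed one combination at a time, the backgrounds used for the five prompt words are paraphrases of the descriptions above, and the model call is a placeholder rather than a real API.

    # Hypothetical sketch of step S230. The backgrounds used for the five prompt
    # words and the model interface are assumptions for illustration.
    from typing import Dict

    PROMPT_BACKGROUNDS: Dict[str, str] = {
        "user_intent": "infer what task the user wants to complete",
        "user_text": "extract the key information in the text to be processed",
        "user_emotion": "infer the user's current emotional state",
        "user_scene": "infer the scene the user is currently in",
        "user_personalization": "infer the user's personalized requirements",
    }

    def ask_text_analysis_model(text_to_process: str, prompt_word: str) -> str:
        """Placeholder for one single-round question to the text analysis model."""
        raise NotImplementedError("Model backend is implementation-specific.")

    def get_text_analysis_result(text_to_process: str) -> Dict[str, str]:
        # Each combination of the text and one prompt word is input separately,
        # yielding the five analysis results in turn.
        return {
            kind: ask_text_analysis_model(text_to_process, background)
            for kind, background in PROMPT_BACKGROUNDS.items()
        }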
Step S240: and determining target prompt words corresponding to the text to be processed based on the text to be processed, the text analysis result and the tree-shaped setting knowledge base structure corresponding to the electronic equipment.
Step S250: inputting the text to be processed and the target prompt word into a pre-trained setting item recommendation model, and acquiring a recommendation setting item corresponding to the text to be processed, which is output by the setting item recommendation model.
Step S260: and executing the recommended setting item to finish the setting corresponding to the recommended setting item.
According to the setting item processing method, the setting items are managed through the tree-shaped setting knowledge base structure, and the target prompt words of the text to be processed are determined based on the tree-shaped setting knowledge base structure and the text analysis result, so that the setting item recommendation model can be helped to better understand the intention of a user, and more accurate, humanized and intelligent service is provided for the user.
Referring to fig. 4, a method for processing a setting item, provided in an embodiment of the present application, is applied to an electronic device, and includes:
step S310: and acquiring a text to be processed.
Step S320: and obtaining a text analysis result corresponding to the text to be processed.
Step S330: initializing a setting item knowledge base with only one root node, wherein the setting item knowledge base comprises a plurality of setting items in the electronic equipment.
In the embodiment of the application, the setting item knowledge base may include the setting items of all application programs in the electronic device. An application program in the electronic device may be a native application program of the electronic device or a third-party application program, which is not specifically limited herein. A setting item refers to a configurable setting option of the electronic device.
A root node refers to a node that can summarize all settings of an electronic device.
Step S340: and acquiring a first prompt word corresponding to each setting item in the plurality of setting items, wherein the first prompt word is used for indicating to acquire a function analysis result of each setting item.
In the embodiment of the application, the first prompt word is a prompt word for each setting item, set through the single-round prompt word template. The first prompt word is used to instruct the large language model to output the function of each setting item.
Step S350: and inputting the setting item knowledge base and the first prompt word corresponding to each setting item into a pre-trained large model, and obtaining a functional analysis result corresponding to each setting item output by the large model.
In the embodiment of the application, for each setting item in the setting item knowledge base, a single-round question form can be used to perform function analysis based on the large language model. That is, the prompt word of each setting item is set through the single-round prompt word template, and the large language model then outputs the function analysis result of each setting item according to that prompt word. The function analysis result characterizes the function of each setting item.
Step S360: and acquiring a second prompt word corresponding to each setting item in the plurality of setting items, wherein the second prompt word is used for indicating the insertion position of each setting item.
In this embodiment of the present application, the second prompt word is also a prompt word for each setting item set through the single-round prompt word template. The second prompt word is used to instruct the large language model to output at which position of the tree-shaped setting knowledge base structure each setting item should be inserted, whether a new branch needs to be added to the tree-shaped knowledge base structure, and so on.
Step S370: and inputting the setting item knowledge base and the second prompt word corresponding to each setting item into the large model, and obtaining a tree-shaped setting knowledge base structure corresponding to the electronic equipment output by the large model.
In the embodiment of the present application, the large model refers to the foregoing large language model. The tree-shaped setting knowledge base not only comprises the names of all setting items, but also comprises the inheritance relations among different setting items.
The original item name of each setting item and the corresponding function analysis result can be input into the large language model together for a new round of single-round questioning, so that the large language model judges at which position of the tree-shaped setting knowledge base structure the current setting item should be inserted and whether a new branch of the tree-shaped knowledge base structure needs to be added; these steps are iterated until a complete tree-shaped knowledge base structure is generated. Here, iterating refers to traversing each setting item to determine the insertion position of every setting item.
The method has the advantages that a complete and accurate tree structure can be quickly generated, manual intervention is not needed, and a large language model can accurately judge where each setting item should be inserted and whether a new tree knowledge base structure is needed. In this way, a complete and accurate tree structure can be quickly generated, providing a better experience for the user. In addition, the method has strong expandability, new setting items can be added to the setting item knowledge base continuously along with the time without regenerating the whole tree structure, and the new setting items can be quickly inserted into the correct positions by only carrying out functional analysis on the newly added setting items and inputting the newly added setting items into a large language model for judgment.
The tree-shaped setting knowledge base structure in the embodiment of the application can be as shown in fig. 5. In fig. 5, the mobile phone setting root node is at the root; its child nodes are the system setting node and the three-party application setting nodes, where there may be multiple three-party application setting nodes. The next level is a subdivided level; for example, the system setting node can be divided into a screen-related setting node, a network-related setting node and the like, and the screen-related setting node can be further divided into nodes such as personalized screen settings, and so on. A rectangular box in fig. 5 represents an abstract node that does not correspond to a concrete setting item; an abstract node must have corresponding child nodes. A rounded rectangular box corresponds to a real setting item node, that is, to a real setting item such as "mute mode". The child nodes of an abstract node can be abstract nodes or non-abstract nodes, and a non-abstract node has no child nodes and can only correspond to a real setting item.
When a specific setting item is represented, the traditional representation only gives the name of the setting item, whereas the tree-shaped setting knowledge base structure in the embodiment of the application can represent the inheritance relation of the node. For example, for the mute setting, the conventional expression is "mute setting", while in this application it may be denoted as "handset setting root node" - "system setting" - "sound related setting" - "reduce sound related" - "mute mode". This can better help the large language model understand the user's intention and provide more accurate, humanized and intelligent service for the user.
In the embodiment of the present application, the generation principle of the tree-shaped setting knowledge base structure described in step S330 to step S370 may be as shown in fig. 6. A setting item knowledge base with only one root node is initialized. For each setting item in the setting item knowledge base, function analysis is first performed based on the large language model in the form of a single-round question, to obtain the function analysis of the setting item; then the original item name and the function analysis of the setting item are input into the large language model together for a new single-round question, so that the large model determines where the current setting item node should be inserted into the tree-shaped setting knowledge base structure and whether a new branch needs to be added, iterating until a complete tree-shaped knowledge base structure is generated.
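The generation principle just described can be sketched as follows; this is a hypothetical illustration only, and the node structure, the two single-round questions and their parsing are assumptions not specified by the disclosure.

    # Hypothetical sketch of steps S330-S370: building the tree-shaped setting
    # knowledge base structure by iterating over the setting items. The two model
    # calls below stand in for the single-round questions described in the text.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class SettingNode:
        name: str
        is_abstract: bool = False              # abstract nodes only group other nodes
        children: List["SettingNode"] = field(default_factory=list)
        parent: Optional["SettingNode"] = None

        def add_child(self, child: "SettingNode") -> None:
            child.parent = self
            self.children.append(child)

        def path(self) -> str:
            """Inheritance-style representation, for example
            handset setting root node - system setting - ... - mute mode."""
            parts, node = [], self
            while node is not None:
                parts.append(node.name)
                node = node.parent
            return " - ".join(reversed(parts))

    def analyze_function(setting_item: str) -> str:
        """First single-round question: what is the function of this setting item?"""
        raise NotImplementedError

    def decide_insertion(root: SettingNode, setting_item: str, function: str) -> SettingNode:
        """Second single-round question: under which (possibly newly added) node
        should this setting item be inserted? Returns that parent node."""
        raise NotImplementedError

    def build_tree(setting_items: List[str]) -> SettingNode:
        root = SettingNode("handset setting root node", is_abstract=True)
        for item in setting_items:             # iterate until every item is placed
            function = analyze_function(item)
            parent = decide_insertion(root, item, function)
            parent.add_child(SettingNode(item))
        return root

Under these assumptions, the path() method reproduces the inheritance-style representation of a setting item, such as the mute mode example given above.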
Step S380: and assembling the text to be processed, the text analysis result and the tree-shaped setting knowledge base structure according to a preset prompt word template to obtain a target prompt word corresponding to the text to be processed, wherein the target prompt word comprises a role description prompt word, a resource set description prompt word, a resource prompt word, an output evaluation prompt word, a task description prompt word and a return format requirement prompt word.
In the embodiment of the application, after the analysis result of the text to be processed and the tree-shaped setting knowledge base structure are obtained, due to the longer text length corresponding to the analysis result and the tree-shaped setting knowledge base structure, proper prompt assembly is needed.
Specifically, a preset prompt word template is used for assembling the text to be processed, the text analysis result and the tree-shaped setting knowledge base structure. The preset prompting word template consists of a role description prompting word, a resource set description prompting word, a resource prompting word, an output evaluation prompting word, a task description prompting word and a return format requirement prompting word. For example, the foregoing schematic diagram of the preset alert word template may be shown in fig. 7.
Each part of the preset hint word template contains the following contents:
Role description prompt word: fill in the role and task that the large language model should play for the current task, telling the large language model the background of the current task. In the embodiment of the application, the large language model needs to be told that the role it plays at this moment is a setting item matching assistant, which recommends a suitable setting service for the user in the current setting.
Resource set description prompt word: fill in the resource set required by the current task, telling the large language model all the resources required by the current task. In the embodiment of the application, the large language model needs to be told that the resources required by the current task include the text to be processed, the tree-shaped setting knowledge base structure, the text analysis result and the like; that is, telling the large language model what the content of the text to be processed is, which setting items are included in the tree-shaped setting knowledge base structure, the effect of each setting item, how to traverse the tree-shaped setting knowledge base structure, and so on.
Resource prompt word: filling in the resource prompt needed by the current task and telling the large language model how to use the resources needed by the current task. In the embodiment of the application, here, a large language model needs to be told how to use the resources such as the text to be processed, the tree-shaped setting knowledge base structure, the text analysis result and the like.
Output evaluation prompt word: fill in the output evaluation standard of the current task, telling the large language model how to evaluate the output result. In the embodiment of the application, the large language model needs to be told that the output result should meet the requirements of the user and should provide more accurate, humanized and intelligent service for the user.
Task description prompt word: and filling in the concrete description of the current task, and telling the large language model what kind of work the current task needs to complete. In the embodiment of the application, a large language model needs to be told, and the current task is to recommend a proper mobile phone setting item for a user.
Return format requirement prompt word: fill in the return format requirements of the current task, telling the large language model how to return the result. In the embodiment of the present application, the large language model needs to be told that the returned result should be a clear and understandable text description that meets the needs of the user, together with a corresponding list of setting items, so as to facilitate subsequent execution.
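By way of a non-authoritative sketch, the assembly of the six parts into the target prompt word could look like the following; every sentence inside the template is an assumed paraphrase of the descriptions above, not the actual template of the disclosure.

    # Hypothetical sketch of step S380: assembling the target prompt word from the
    # six parts of the preset prompt word template. All wording is illustrative.
    from typing import Dict

    def assemble_target_prompt(text_to_process: str,
                               text_analysis_result: Dict[str, str],
                               tree_knowledge_base: str) -> str:
        parts = [
            # role description prompt word
            "You act as a setting item matching assistant and recommend a "
            "suitable setting service for the user.",
            # resource set description prompt word
            "Resources for this task: the text to be processed, the tree-shaped "
            "setting knowledge base structure and the text analysis results.",
            # resource prompt word
            f"Text to be processed: {text_to_process}\n"
            f"Tree-shaped setting knowledge base structure: {tree_knowledge_base}\n"
            f"Text analysis results: {text_analysis_result}",
            # output evaluation prompt word
            "The output should meet the user's needs and be accurate, humanized "
            "and intelligent.",
            # task description prompt word
            "The current task is to recommend suitable mobile phone setting "
            "items for the user.",
            # return format requirement prompt word
            "Return a clear, understandable text description together with a "
            "list of setting items for subsequent execution.",
        ]
        return "\n\n".join(parts)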
Step S390: inputting the text to be processed and the target prompt word into a pre-trained setting item recommendation model, and acquiring a recommendation setting item corresponding to the text to be processed, which is output by the setting item recommendation model.
Step S391: and executing the recommended setting item to finish the setting corresponding to the recommended setting item.
The setting item processing method described above can support more setting item content and even cover the setting items of third-party applications, providing more choices for the user and improving the user experience.
Referring to fig. 8, a method for processing a setting item, provided in an embodiment of the present application, is applied to an electronic device, and includes:
step S410: and acquiring a text to be processed.
Step S420: and carrying out accurate matching on the text to be processed.
In the embodiment of the application, the accurate matching refers to matching the center word of the text to be processed with the setting item knowledge base through a traditional text matching algorithm to obtain a matching result, and executing corresponding operation according to the matching result. The matching result can comprise a hit and a miss, wherein the hit refers to matching to a corresponding setting item; a miss refers to not matching to a corresponding setting item.
When the matching result is hit, executing the hit setting item; when the matching result is a miss, the recommended setting item corresponding to the text to be processed may be determined through the procedure described in step S430 to step S480.
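A minimal sketch of the exact matching step is shown below, assuming a rule-based center word extractor and a flat setting item knowledge base; both are placeholders for illustration only.

    # Hypothetical sketch of steps S410-S420: exact matching of the center word
    # against the setting item knowledge base, with a miss triggering the
    # LLM-based flow of steps S430-S480.
    from typing import Iterable, Optional

    def extract_center_word(text_to_process: str) -> str:
        """Rule-based extraction of the central description word (assumed)."""
        raise NotImplementedError

    def exact_match(text_to_process: str, knowledge_base: Iterable[str]) -> Optional[str]:
        center_word = extract_center_word(text_to_process)
        for setting_item in knowledge_base:
            if center_word == setting_item:    # hit: matched to a setting item
                return setting_item
        return None                            # miss: fall back to steps S430-S480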
Step S430: and if the text is not hit, acquiring a text analysis result corresponding to the text to be processed.
In the embodiment of the application, when the matching result is a miss, a text analysis result corresponding to the text to be processed can be further determined through a large language model.
Step S440: and determining target prompt words corresponding to the text to be processed based on the text to be processed, the text analysis result and the tree-shaped setting knowledge base structure corresponding to the electronic equipment.
Step S450: inputting the text to be processed and the target prompt word into a pre-trained setting item recommendation model, and acquiring a recommendation setting item corresponding to the text to be processed, which is output by the setting item recommendation model.
Fig. 9 is a schematic view of a scenario of an embodiment of the present application. When the user vaguely expresses "my eyes are uncomfortable", the embodiment of the application intelligently matches the two setting items "eye protection mode" and "font size adjustment" for the user. When the user more implicitly expresses a desire to share mobile data traffic with others, the embodiment of the application intelligently recommends the setting item "open hotspot".
Step S460: and displaying a plurality of recommended setting items.
In the embodiment of the present application, after obtaining the corresponding setting item recommendation results, the electronic device may not execute them automatically, because there may be multiple setting item recommendation results and, compared with conventional exact matching, some uncertainty. Therefore, after the setting item recommendation results are obtained, the multiple recommended setting items can be displayed in the electronic device.
Step S470: and determining a target setting item from a plurality of recommended setting items.
In the embodiment of the application, the user may select the recommended setting item to be executed, that is, determine the target setting item, from the multiple recommended setting items through a preset operation. The preset operation may be any one of a click operation, a slide operation, or a long press operation acting on the recommended setting item.
Step S480: and executing the target setting item to finish the setting corresponding to the target setting item.
In the embodiment of the application, after the user determines the target setting item, the electronic device starts to automatically execute the target setting item so as to complete the setting corresponding to the target setting item.
The process described in step S410-step S480 may be applied to the structure shown in fig. 10, which is mainly composed of a precision matching module, a prompt engineering module, a confirmation module, and an execution setting result module. The precise matching module is mainly used for executing the step S410 and the step S420, the prompt engineering module is mainly used for executing the step S430-the step S450 based on the large language model technology, the confirmation module is mainly used for executing the step S460 and the step S470, and the execution setting result module is mainly used for executing the step S480.
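Under the assumption that the four modules expose simple function interfaces (an assumption made only for this sketch), the structure of fig. 10 could be composed as follows.

    # Hypothetical sketch of the module structure for steps S410-S480. Every
    # module body is a placeholder; only the control flow reflects the text above.
    from typing import List, Optional

    def exact_matching_module(text_to_process: str) -> Optional[str]:
        """Steps S410-S420: traditional text matching; returns None on a miss."""
        raise NotImplementedError

    def prompt_engineering_module(text_to_process: str) -> List[str]:
        """Steps S430-S450: text analysis, target prompt assembly, recommendation."""
        raise NotImplementedError

    def confirmation_module(recommended_items: List[str]) -> Optional[str]:
        """Steps S460-S470: display the recommended setting items and return the
        target setting item selected by the user (or None)."""
        raise NotImplementedError

    def execute_setting_result_module(setting_item: str) -> None:
        """Step S480: execute the setting corresponding to the setting item."""
        raise NotImplementedError

    def process_setting_request(text_to_process: str) -> None:
        hit = exact_matching_module(text_to_process)
        if hit is not None:
            execute_setting_result_module(hit)
            return
        recommended = prompt_engineering_module(text_to_process)
        target = confirmation_module(recommended)
        if target is not None:
            execute_setting_result_module(target)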
The setting item processing method provided by the application determines whether the user needs the recommended setting item to be executed through the user's selection operation; this interaction mode is convenient and misoperation can be avoided.
Referring to fig. 11, a setting item processing apparatus 500 provided in an embodiment of the present application runs in an electronic device, where the apparatus 500 includes:
a text obtaining unit 510, configured to obtain a text to be processed.
And a result obtaining unit 520, configured to obtain a text analysis result corresponding to the text to be processed.
As one way, the result obtaining unit 520 is configured to obtain a plurality of prompt words corresponding to the text to be processed; inputting the text to be processed and the plurality of prompt words into a pre-trained text analysis model, and obtaining a text analysis result corresponding to the text to be processed, which is output by the text analysis model.
Further, the text analysis results comprise user intention analysis results, user text analysis results, user emotion analysis results, user scene analysis results and user personalized analysis results; the result obtaining unit 520 is specifically configured to obtain an intent analysis prompt word, a text analysis prompt word, an emotion analysis prompt word, a scene analysis prompt word, and a personalized analysis prompt word corresponding to the text to be processed; inputting the text to be processed and the intention analysis prompt word into the text analysis model to obtain a user intention analysis result output by the text analysis model; inputting the text to be processed and the text analysis prompt word into the text analysis model to obtain a user text analysis result output by the text analysis model; inputting the text to be processed and the emotion analysis prompt word into the text analysis model to obtain a user emotion analysis result output by the text analysis model; inputting the text to be processed and the scene analysis prompt word into the text analysis model to obtain a user scene analysis result output by the text analysis model; inputting the text to be processed and the personalized prompt word into the text analysis model, and obtaining a user personalized analysis result output by the text analysis model.
As another way, the result obtaining unit 520 is specifically configured to perform accurate matching on the text to be processed; and if the text is not hit, acquiring a text analysis result corresponding to the text to be processed.
A determining unit 530, configured to determine a target prompt word corresponding to the text to be processed based on the text to be processed, the text analysis result, and a tree-shaped setting knowledge base structure corresponding to the electronic device.
As one way, the determining unit 530 is specifically configured to initialize a setting item repository having only one root node, where the setting item repository includes a plurality of setting items in the electronic device; acquiring a first prompt word corresponding to each setting item in the plurality of setting items, wherein the first prompt word is used for indicating to acquire a function analysis result of each setting item; inputting the setting item knowledge base and a first prompt word corresponding to each setting item into a pre-trained large model, and obtaining a functional analysis result corresponding to each setting item output by the large model; acquiring a second prompting word corresponding to each setting item in the plurality of setting items, wherein the second prompting word is used for indicating the insertion position of each setting item; and inputting the setting item knowledge base and the second prompt word corresponding to each setting item into the large model, and obtaining a tree-shaped setting knowledge base structure corresponding to the electronic equipment output by the large model.
As another way, the determining unit 530 is specifically configured to assemble the text to be processed, the text analysis result, and the tree-shaped setting knowledge base structure according to a preset prompting word template, so as to obtain a target prompting word corresponding to the text to be processed, where the target prompting word includes a role description prompting word, a resource set description prompting word, a resource prompting word, an output evaluation prompting word, a task description prompting word, and a return format requirement prompting word.
And an output unit 540, configured to input the text to be processed and the target prompt word into a pre-trained setting item recommendation model, and obtain a recommended setting item corresponding to the text to be processed output by the setting item recommendation model.
And the execution unit 550 is configured to execute the recommended setting item to complete the setting corresponding to the recommended setting item.
As one way, the execution unit 550 is specifically configured to display a plurality of the recommended setting items; determining a target setting item from a plurality of recommended setting items; and executing the target setting item to finish the setting corresponding to the target setting item.
It should be noted that, in the present application, the device embodiment and the foregoing method embodiment correspond to each other, and specific principles in the device embodiment may refer to the content in the foregoing method embodiment, which is not described herein again.
An electronic device provided in the present application will be described with reference to fig. 12.
Referring to fig. 12, based on the above setting item processing method and apparatus, an embodiment of the present application further provides another electronic device 800 capable of executing the above setting item processing method. The electronic device 800 includes one or more (only one is shown in the figure) processors 802, a memory 804, and a network module 806 that are coupled to each other. The memory 804 stores a program capable of executing the content of the foregoing embodiments, and the processor 802 can execute the program stored in the memory 804.
The processor 802 may include one or more processing cores. The processor 802 connects various parts of the entire electronic device 800 by using various interfaces and lines, and performs various functions of the electronic device 800 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 804 and invoking the data stored in the memory 804. Optionally, the processor 802 may be implemented in at least one hardware form of digital signal processing (Digital Signal Processing, DSP), field programmable gate array (Field-Programmable Gate Array, FPGA), and programmable logic array (Programmable Logic Array, PLA). The processor 802 may integrate one or a combination of a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; and the modem is used to handle wireless communication. It can be understood that the modem may alternatively not be integrated into the processor 802 but be implemented by a separate communication chip.
The memory 804 may include a random access memory (Random Access Memory, RAM) or a read-only memory (Read-Only Memory, ROM). The memory 804 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 804 may include a program storage area and a data storage area. The program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the foregoing method embodiments, and the like. The data storage area may store data created by the electronic device 800 in use (such as a phonebook, audio and video data, and chat log data), and the like.
The network module 806 is configured to receive and transmit electromagnetic waves and implement mutual conversion between electromagnetic waves and electrical signals, so as to communicate with a communication network or other devices, for example, with another electronic device. The network module 806 may include various existing circuit elements for performing these functions, such as an antenna, a radio frequency transceiver, a digital signal processor, an encryption/decryption chip, a subscriber identity module (SIM) card, and memory. The network module 806 may communicate with various networks such as the Internet, an intranet, or a wireless network, or may communicate with other devices via a wireless network. The wireless network may be a cellular telephone network, a wireless local area network, or a metropolitan area network. For example, the network module 806 may exchange information with a base station.
Referring to fig. 13, a block diagram of a computer readable storage medium according to an embodiment of the present application is shown. The computer readable storage medium 900 has stored therein program code that can be invoked by a processor to perform the methods described in the method embodiments described above.
The computer readable storage medium 900 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read-only memory), an EPROM, a hard disk, or a ROM. Optionally, the computer readable storage medium 900 includes a non-transitory computer-readable storage medium. The computer readable storage medium 900 has storage space for program code 910 that performs any of the method steps described above. The program code can be read from or written into one or more computer program products. The program code 910 may be compressed, for example, in a suitable form.
According to the setting item processing method and apparatus, the electronic device, and the storage medium provided by the embodiments of the present application, a text to be processed and a text analysis result corresponding to the text to be processed are obtained; a target prompt word corresponding to the text to be processed is then determined based on the text to be processed, the text analysis result, and a tree-shaped setting knowledge base structure corresponding to the electronic device; the text to be processed and the target prompt word are input into a pre-trained setting item recommendation model to obtain a recommended setting item corresponding to the text to be processed output by the setting item recommendation model; and finally the recommended setting item is executed to complete the setting corresponding to the recommended setting item. In this way, the setting items are managed through the tree-shaped setting knowledge base structure, and the target prompt word of the text to be processed is determined based on the tree-shaped setting knowledge base structure and the text analysis result, which helps the setting item recommendation model better understand the user's intention and thus provide more accurate, humanized, and intelligent services for the user.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above embodiments, which are merely illustrative rather than restrictive. Enlightened by the present invention, a person of ordinary skill in the art may devise many other forms without departing from the spirit of the present invention and the scope of the claims, all of which fall within the protection of the present invention.

Claims (10)

1. A setting item processing method, characterized by being applied to an electronic device, the method comprising:
acquiring a text to be processed;
acquiring a text analysis result corresponding to the text to be processed;
determining a target prompt word corresponding to the text to be processed based on the text to be processed, the text analysis result and a tree-shaped setting knowledge base structure corresponding to the electronic equipment;
inputting the text to be processed and the target prompt word into a pre-trained setting item recommendation model, and acquiring a recommendation setting item corresponding to the text to be processed, which is output by the setting item recommendation model;
and executing the recommended setting item to finish the setting corresponding to the recommended setting item.
2. The method according to claim 1, wherein the obtaining the text analysis result corresponding to the text to be processed includes:
acquiring a plurality of prompt words corresponding to the text to be processed;
inputting the text to be processed and the plurality of prompt words into a pre-trained text analysis model, and obtaining a text analysis result corresponding to the text to be processed, which is output by the text analysis model.
3. The method of claim 2, wherein the text analysis results include user intent analysis results, user text analysis results, user emotion analysis results, user scene analysis results, and user personalization analysis results; the obtaining the plurality of prompt words corresponding to the text to be processed comprises the following steps:
acquiring intention analysis prompt words, text analysis prompt words, emotion analysis prompt words, scene analysis prompt words and personalized analysis prompt words corresponding to the text to be processed;
inputting the text to be processed and the plurality of prompt words into a pre-trained text analysis model, and obtaining a text analysis result corresponding to the text to be processed, which is output by the text analysis model, wherein the text analysis result comprises:
inputting the text to be processed and the intention analysis prompt word into the text analysis model to obtain a user intention analysis result output by the text analysis model;
inputting the text to be processed and the text analysis prompt word into the text analysis model to obtain a user text analysis result output by the text analysis model;
inputting the text to be processed and the emotion analysis prompt word into the text analysis model to obtain a user emotion analysis result output by the text analysis model;
inputting the text to be processed and the scene analysis prompt word into the text analysis model to obtain a user scene analysis result output by the text analysis model;
and inputting the text to be processed and the personalized analysis prompt word into the text analysis model to obtain a user personalized analysis result output by the text analysis model.
4. The method according to claim 1, wherein before the determining a target prompt word corresponding to the text to be processed based on the text to be processed, the text analysis result and a tree-shaped setting knowledge base structure corresponding to the electronic device, the method further comprises:
initializing a setting item knowledge base with only one root node, wherein the setting item knowledge base comprises a plurality of setting items in the electronic equipment;
acquiring a first prompt word corresponding to each setting item in the plurality of setting items, wherein the first prompt word is used for indicating that a function analysis result of each setting item is to be acquired;
inputting the setting item knowledge base and the first prompt word corresponding to each setting item into a pre-trained large model, and obtaining the function analysis result corresponding to each setting item output by the large model;
acquiring a second prompt word corresponding to each setting item in the plurality of setting items, wherein the second prompt word is used for indicating an insertion position of each setting item;
and inputting the setting item knowledge base and the second prompt word corresponding to each setting item into the large model, and obtaining a tree-shaped setting knowledge base structure corresponding to the electronic equipment output by the large model.
5. The method according to claim 1 or 4, wherein the determining, based on the text to be processed, the text analysis result, and the tree-like setting knowledge base structure corresponding to the electronic device, the target prompt word corresponding to the text to be processed includes:
and assembling the text to be processed, the text analysis result and the tree-shaped setting knowledge base structure according to a preset prompt word template to obtain a target prompt word corresponding to the text to be processed, wherein the target prompt word comprises a role description prompt word, a resource set description prompt word, a resource prompt word, an output evaluation prompt word, a task description prompt word and a return format requirement prompt word.
6. The method according to claim 1, wherein the obtaining the text analysis result corresponding to the text to be processed includes:
performing exact matching on the text to be processed; and
if the exact matching does not hit any setting item, acquiring the text analysis result corresponding to the text to be processed.
7. The method of claim 1, wherein there are a plurality of the recommended setting items, and the executing the recommended setting item to finish the setting corresponding to the recommended setting item comprises:
displaying the plurality of recommended setting items;
determining a target setting item from the plurality of recommended setting items;
and executing the target setting item to finish the setting corresponding to the target setting item.
8. A setting item processing apparatus, applied to an electronic device, the apparatus comprising:
the text acquisition unit is used for acquiring a text to be processed;
the result acquisition unit is used for acquiring a text analysis result corresponding to the text to be processed;
the determining unit is used for determining a target prompt word corresponding to the text to be processed based on the text to be processed, the text analysis result and a tree-shaped setting knowledge base structure corresponding to the electronic equipment;
The output unit is used for inputting the text to be processed and the target prompt word into a pre-trained setting item recommendation model and obtaining a recommendation setting item corresponding to the text to be processed, which is output by the setting item recommendation model;
and the execution unit is used for executing the recommended setting items to finish the setting corresponding to the recommended setting items.
9. An electronic device, comprising: one or more processors; and a memory, wherein one or more programs are stored in the memory and are configured to be executed by the one or more processors to perform the method of any one of claims 1-7.
10. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a program code, wherein the program code, when being executed by a processor, performs the method of any of claims 1-7.
CN202311378419.6A 2023-10-23 2023-10-23 Setting item processing method and device, electronic equipment and storage medium Pending CN117424956A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311378419.6A CN117424956A (en) 2023-10-23 2023-10-23 Setting item processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117424956A true CN117424956A (en) 2024-01-19

Family

ID=89522383

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311378419.6A Pending CN117424956A (en) 2023-10-23 2023-10-23 Setting item processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117424956A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117787422A (en) * 2024-02-27 2024-03-29 四川金信石信息技术有限公司 Switching operation task extraction method and system
CN117787422B (en) * 2024-02-27 2024-04-26 四川金信石信息技术有限公司 Switching operation task extraction method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination