CN117666812A - Prompt word processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN117666812A
CN117666812A (application CN202311340790.3A)
Authority
CN
China
Prior art keywords
target
prompt word
text
prompt
word
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311340790.3A
Other languages
Chinese (zh)
Inventor
纪炼锋
闫思桃
张钰鑫
李鹏鹏
张晨
张博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baidu.com Times Technology (Beijing) Co., Ltd.
Original Assignee
Baidu.com Times Technology (Beijing) Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Baidu.com Times Technology (Beijing) Co., Ltd.
Priority to CN202311340790.3A
Publication of CN117666812A
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233Character input methods
    • G06F3/0237Character input methods using prediction or retrieval techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The disclosure provides a prompt word processing method and device, an electronic device, and a storage medium, relating to the field of artificial intelligence and in particular to the technical field of natural language processing. The specific implementation scheme is as follows: in response to a first instruction, a target prompt word is loaded, the target prompt word being used to guide a target model to generate output information meeting a target requirement; the prompt word text of the target prompt word and a corresponding test result are displayed in a first debugging interface, where the content of the prompt word text includes a natural language description of the target requirement and the test result characterizes whether output information generated by the target model based on the prompt word text meets the target requirement. The disclosed technique improves both the generation quality and the generation efficiency of prompt words.

Description

Prompt word processing method and device, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of natural language processing in the field of artificial intelligence, and in particular relates to a method and a device for processing a prompt word, electronic equipment and a storage medium.
Background
At present, natural language processing technology based on large models is widely applied in numerous scenarios such as semantic search and intelligent assistants, enabling more convenient and efficient human-computer interaction and information processing.
Inputting prompt words (prompts) into a model can guide the model's output, but how to generate reasonable prompt words efficiently is a problem that still needs to be solved.
Disclosure of Invention
The disclosure provides a prompt word processing method, a device, electronic equipment and a storage medium.
According to a first aspect of the present disclosure, there is provided a prompt word processing method, including:
responding to a first instruction, loading a target prompt word, wherein the target prompt word is used for guiding a target model to generate output information meeting target requirements; and displaying a prompt word text of the target prompt word and a corresponding test result in a first debugging interface, wherein the content of the prompt word text comprises natural language description aiming at the target requirement, and the test result is used for representing whether output information generated by the target model based on the prompt word text meets the target requirement or not.
According to a second aspect of the present disclosure, there is provided a prompt word processing apparatus, including:
the first response unit is used for responding to the first instruction and loading target prompt words, and the target prompt words are used for guiding the target model to generate output information meeting target requirements; the first processing unit is used for displaying a prompt word text of the target prompt word and a corresponding test result in a first debugging interface, wherein the content of the prompt word text comprises natural language description aiming at the target requirement, and the test result is used for representing whether output information generated by the target model based on the prompt word text meets the target requirement or not.
According to a third aspect of the present disclosure, there is provided an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect of the present disclosure.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of the first aspect of the present disclosure.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising: a computer program which, when executed by a processor, implements the method of the first aspect of the present disclosure.
According to the technology disclosed by the disclosure, the generation quality and the generation efficiency of the prompt words are improved.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a flow chart of a prompt word processing method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a prompt word addition page provided in an embodiment of the present disclosure;
FIG. 3 is a schematic flow chart of generating test results according to an embodiment of the disclosure;
FIG. 4 is a schematic diagram of a first debug interface provided by an embodiment of the present disclosure;
FIG. 5 is a flow chart of another method for generating test results according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of another first debug interface provided by an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of yet another first debug interface provided by embodiments of the present disclosure;
FIG. 8 is a flowchart of a prompt word processing method according to an embodiment of the present disclosure;
FIG. 9 is a flowchart of a specific implementation of step S806;
FIG. 10 is a flowchart of processing a target prompt word according to an embodiment of the present disclosure;
fig. 11 is a block diagram of a prompt word processing device according to an embodiment of the disclosure;
FIG. 12 is a block diagram of an electronic device for implementing the prompt word processing method of embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Currently, in natural language processing technology based on large models, inputting a prompt word into the model can guide the model's output so that the content it produces is more accurate and better matches user needs. However, because of how large models work, the internal process by which a prompt word is handled is opaque, so the effect of a prompt word can only be evaluated by feeding it into the model and examining what the model actually outputs.
The disclosure provides a method, a device, electronic equipment and a storage medium for processing a prompt word, which are applied to the technical field of natural language processing in the field of artificial intelligence so as to achieve the effect of improving the generation efficiency and quality of the prompt word.
In the technical solution of the disclosure, the collection, storage, use, processing, transmission, provision, disclosure and other handling of users' personal information comply with the relevant laws and regulations and do not violate public order and good customs.
In order for the reader to more fully understand the principles of the implementations of the present disclosure, embodiments will now be further described and refined in conjunction with the following fig. 1-12.
Fig. 1 is a flowchart of a prompt word processing method provided by an embodiment of the disclosure. The method may be executed by a prompt word processing apparatus, for example by a terminal device or a server provided with such an apparatus. As shown in fig. 1, the method includes the following steps:
s101, loading target prompt words in response to a first instruction, wherein the target prompt words are used for guiding a target model to generate output information meeting target requirements.
Illustratively, taking a terminal device as the execution subject of the method of this embodiment, the terminal device displays an interactive interface by running a target application program for managing prompt word engineering; the target application program may be a client or a browser, depending on its software architecture. The terminal device then loads a target prompt word in response to a first instruction entered by the user on the interactive interface. A prompt word (prompt) is a piece of prompting information input to a model to describe a requirement, so as to guide the model to output specific content; it may be a question, a passage of text, or even a text description carrying a set of parameters. In short, it tells the model "what to do and how to do it" in order to improve the model's behaviour and performance. In this step, the target prompt word is the prompt word determined by the first instruction and is used to guide the corresponding target model to generate output information meeting the target requirement; for example, a target prompt word P1 guides the model to summarize the user attribute At_1 from the user input. The target prompt word is a set of several items of information stored in the form of a "project and task", including the specific prompt word text, the target model to which the prompt word applies, a prompt word identifier, and other information. Loading the target prompt word in this step can therefore also be understood as obtaining the identifier, the prompt word text and the applicable target model corresponding to the target prompt word.
Further, loading the target prompt word can be implemented at least by creating a new prompt word or by selecting an existing one. In one possible implementation, the user clicks a "new prompt word" control in the interactive interface of the target application program, so that the terminal device displays a prompt word addition page for creating a new prompt word; then, in response to the first instruction input by the user, the identifier of the prompt word, the text content of the prompt word and the target model to which the prompt word applies are obtained, and the target prompt word is generated. In another possible implementation, at least one already created prompt word is displayed in the interactive interface of the target application program, and one of them is selected as the target prompt word to be loaded in response to the first instruction input by the user.
Fig. 2 is a schematic diagram of a prompt word addition page provided in an embodiment of the present disclosure. As shown in fig. 2, the prompt word addition page contains four controls for entering information, shown as control #1, control #2, control #3 and control #4, together with a "new prompt word" control that triggers the creation of a new prompt word. Control #1 is used to input the prompt word title; control #2 is used to input the system prompt; control #3 is used to input the user prompt; and control #4 is used to input the target model. The system prompt corresponds to a first text segment of the prompt word text and the user prompt corresponds to a second text segment of the prompt word text, where the first text segment characterizes the data format of the output information and the second text segment characterizes the logic for processing the input information so as to meet the target requirement. After controls #1 to #4 have been configured by the terminal device in response to the first instruction, the "new prompt word" control is triggered, which completes the creation of the new prompt word and thus yields the target prompt word.
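As a concrete illustration of the information gathered by controls #1 to #4, a minimal Python sketch of a prompt-word record follows; the class and field names are assumptions made for illustration and are not defined by the disclosure.

```python
from dataclasses import dataclass, field
import uuid


@dataclass
class PromptWord:
    """Illustrative record for one target prompt word (a 'project and task' entry)."""
    title: str          # control #1: prompt word title
    system_prompt: str  # control #2: first text segment -- data format of the output information
    user_prompt: str    # control #3: second text segment -- logic for processing the input information
    target_model: str   # control #4: target model the prompt word applies to
    prompt_id: str = field(default_factory=lambda: uuid.uuid4().hex)  # prompt word identifier

    @property
    def prompt_text(self) -> str:
        """Full prompt word text handed to the target model."""
        return f"{self.system_prompt}\n{self.user_prompt}"


# Example corresponding to the "new prompt word" flow of Fig. 2 (contents are illustrative)
prompt_1 = PromptWord(
    title="prompt#1",
    system_prompt="Output the result as a Json format file.",
    user_prompt="Summarize the central idea of the section {user input}.",
    target_model="target-model-v1",
)
```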
S102, in a first debugging interface, a prompt word text of a target prompt word and a corresponding test result are displayed, wherein the content of the prompt word text comprises natural language description aiming at target requirements, and the test result is used for representing whether output information generated by a target model based on the prompt word text meets the target requirements or not.
In an exemplary embodiment, after the first instruction for loading the target prompt word has been received, the prompt word text of the target prompt word and the corresponding test result are displayed in a first debugging interface of the target application program. On the one hand, the prompt word text is the main body content of the target prompt word, and its content includes a natural language description of the target requirement, so as to guide the model to output information that meets that requirement. In one possible implementation, the prompt word text is static text, such as "get the current date", which needs no further description.
In another possible implementation, the prompt word text includes at least one input parameter, where an input parameter represents input information for the target model, and the target requirement is achieved by processing that input information. For example, the prompt word text contains the following content: "Summarize the central idea of the section {user input}." Here "{user input}" is an input parameter, for example the information entered by the user when using functions such as an "intelligent assistant" or "semantic search". After the target model processes the "user input" according to the prompt word text, its output is the output information meeting the target requirement, namely a description of the central idea of the "user input". Because the prompt word text is written in natural language, a developer can design and modify it conveniently.
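A minimal sketch of how target input information might be substituted into such an input parameter; the "{user input}" placeholder syntax is taken from the example above, while the function name and regular expression are illustrative assumptions.

```python
import re


def fill_prompt(prompt_text: str, inputs: dict) -> str:
    """Replace each {parameter} placeholder in the prompt word text with target input information."""
    def substitute(match: re.Match) -> str:
        name = match.group(1)
        return str(inputs.get(name, match.group(0)))  # leave unknown parameters untouched
    return re.sub(r"\{([^{}]+)\}", substitute, prompt_text)


model_input = fill_prompt(
    "Summarize the central idea of the section {user input}.",
    {"user input": "XXXXXXXXXX"},
)
print(model_input)  # Summarize the central idea of the section XXXXXXXXXX.
```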
Further, in one possible implementation, the prompt word text includes at least a first text segment and/or a second text segment, where the first text segment characterizes the data format of the output information and the second text segment characterizes the logic for processing the input information to meet the target requirement; the prompt word text is presented in plain text or in a markup language format. For example, the content of the first text segment is "output as a Json format file", and the content of the second text segment is "# Your role: you are a robot for intent recognition and information extraction; you can extract user requirements from the {user input} sentence; user requirements include ……", and so on. The markup language is, for example, Markdown, a lightweight markup language that expresses text formatting through designated markup symbols. The presentation mode of the prompt word text can be switched through a preset control so that it is shown either as plain text or in the markup language; the specific implementation is not repeated here. It should also be understood that the contents of the first and second text segments given here are merely exemplary and are not limited.
On the other hand, the test result is the result of an effect test performed by the terminal device on the prompt word text, i.e., it indicates whether the output information generated by the target model based on the prompt word text meets the target requirement. In one possible implementation, the test result is obtained after the terminal device calls the target model based on the prompt word text; depending on where the target model is deployed, the test process is executed on the terminal device side or on another server outside the terminal device. In a possible implementation, before the test result corresponding to the target prompt word is displayed, the method further includes automatically testing the target prompt word to generate the test result. Fig. 3 is an exemplary flow chart for generating the test result and, as shown in fig. 3, includes:
and S1011, acquiring a target test case corresponding to the target prompt word.
Step S1012, obtaining a test result based on the target model and the target test case.
Illustratively, after the target prompt word is determined, the corresponding target test case is obtained based on the name or the identification number of the target prompt word. A test case is a pre-generated script for testing a prompt word and contains preset input information together with the corresponding expected output. After the target test case has been obtained from the identification information of the target prompt word, the input information in the test case is combined with the target prompt word and fed to the target model as input data, and the output information of the target model is obtained (for example, by intercepting the Json file output by the target model). The output information is then compared with the expected output in the target test case to obtain the test result. The implementation and triggering of test cases are known to those skilled in the art and are not described in detail here.
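A sketch of the automatic test of steps S1011-S1012 under two assumptions: test cases pair preset input information with an expected output, and the target model is reachable through a generic call_model callable that returns parsed output information (a stand-in, not an API defined by the disclosure).

```python
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class TestCase:
    """Pre-generated test case: preset input information plus the corresponding expected output."""
    case_id: str
    inputs: Dict[str, str]            # e.g. {"user input": "..."}
    expected_output: Dict[str, str]   # e.g. {"Begin": "N1", "End": "99"}


def run_test_case(prompt_text: str,
                  case: TestCase,
                  call_model: Callable[[str], Dict[str, str]]) -> bool:
    """S1011/S1012: build the model input from the prompt word text and the test case,
    call the target model, and compare its output information with the expected output."""
    model_input = prompt_text
    for name, value in case.inputs.items():
        model_input = model_input.replace("{" + name + "}", value)
    output_info = call_model(model_input)          # assumed callable, e.g. parses the model's Json output
    return output_info == case.expected_output     # test result: target requirement met or not
```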
Fig. 4 is a schematic diagram of a first debugging interface provided by an embodiment of the present disclosure. As shown in fig. 4, the first debugging interface 400 includes a first area 401 and a second area 402: the first area 401 displays the prompt word text of the target prompt word, and the second area 402 displays the test result. Referring to the figure, when the target prompt word prompt#1 is loaded, the prompt word text of prompt#1 is displayed in the first area of the first debugging interface, and its content includes at least the following:
"summarize the central idea of the section { user input }. "
At the same time or afterwards, the test result is displayed in the second area of the first debugging interface, for example "success": true as shown in the figure.
This test result indicates that the output information generated by the target model based on the prompt word text meets the target requirement. Conversely, a test result of "success": false would indicate that the output information generated by the target model based on the prompt word text does not meet the target requirement. The test result is obtained after the terminal device performs an automatic test based on the target test case corresponding to the target prompt word; the test process can be executed locally on the terminal device or on an external server, as detailed in the embodiment shown in fig. 3.
In another possible implementation, the values of the input parameters in the prompt word text may be supplied manually by the developer user. Fig. 5 is a schematic flow chart of another way of generating the test result provided by an embodiment of the present disclosure and, as shown in fig. 5, includes:
step S1013, in response to the second instruction, obtaining target input information corresponding to the input parameter.
Step S1014, obtaining a test result based on the target input information and the prompt word text.
In one exemplary implementation, the second instruction indicates the target test case corresponding to the target prompt word; that is, the target test case is obtained through the second instruction input by the user, and the target input information is obtained from that test case, as detailed in the embodiment shown in fig. 4 and not repeated here. In another possible implementation, the second instruction is used to input a target character string, i.e., a manually configured input parameter value. The target prompt word is then tested automatically to generate the test result.
Fig. 6 is a schematic diagram of another first debugging interface provided by an embodiment of the present disclosure. As shown in fig. 6, the first debugging interface 600 includes a first area 601 and a second area 602. The second area 602 provides a control for entering the target input information corresponding to the target parameter, shown as an editable text box 603; the developer user can enter the target input information through this control, shown as "XXXXXXXXXX" in the figure. The terminal device acquires the target input information and inserts it into the target parameter of the prompt word text to generate the model input; the target parameter is, for example, the "user input" in the prompt word text, and the corresponding model input is the text "Summarize the central idea of the section XXXXXXXXXX". The developer then clicks a test control in the second area that triggers the test function for the target prompt word, so that the terminal device obtains the test result based on the target model and the prompt word text with the input information inserted.
Further, the input parameter values manually configured by the developer and the output information generated from them may be saved as a test case for the target prompt word. This is done through a user-triggered operation: specifically, in response to a third instruction, a test case corresponding to the target prompt word is generated from the target input information and the output information corresponding to it. This enables quick generation of test cases and improves their reuse rate.
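A minimal sketch of how such a test case might be persisted in response to the third instruction, assuming one JSON file per case; the function name, file layout and directory are illustrative assumptions.

```python
import json
from pathlib import Path


def save_test_case(prompt_id: str, case_id: str,
                   target_inputs: dict, output_info: dict,
                   case_dir: str = "test_cases") -> Path:
    """Third-instruction flow: persist a manually debugged input/output pair as a reusable test case."""
    case = {
        "prompt_id": prompt_id,          # target prompt word the case belongs to
        "inputs": target_inputs,         # target input information entered in the second area
        "expected_output": output_info,  # output information accepted as the expected output
    }
    path = Path(case_dir) / f"{prompt_id}_{case_id}.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(case, ensure_ascii=False, indent=2), encoding="utf-8")
    return path


save_test_case("prompt1", "case_1",
               {"user input": "I want to target the crowd above N1"},
               {"Begin": "N1", "End": "99"})
```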
Then, based on the test result displayed in the first debugging interface, the developer user can judge whether the currently loaded target prompt word is reasonable. If it is unreasonable (does not meet the target requirement), the target prompt word can be modified and optimized; if it is reasonable (meets the target requirement), it can be saved as an adapted prompt word of the target model, so that the target model can perform the function guided by the target prompt word.
Further, in order to show the effect of the target prompt word more clearly, the first output information generated by the target model based on the prompt word text may be displayed in the first debugging interface together with the final test result; the first output information is compared with the expected output in the target test case to obtain the test result. The target requirement includes at least one sub-requirement, and the first output information includes at least one output parameter, where each output parameter represents the result of processing an input parameter in the prompt word text according to a sub-requirement.
Fig. 7 is a schematic diagram of yet another first debugging interface provided by an embodiment of the present disclosure. As shown in fig. 7, the first debugging interface 700 includes a first area 701 and a second area 702, and the target requirement includes sub-requirement #1, sub-requirement #2 and sub-requirement #3, which are described respectively by text segments p1, p2 and p3 in the prompt word text of the target prompt word prompt#1 displayed in the first area 701. Specifically, as shown in the figure, the prompt word text also includes a text segment p0 that defines the "task role", with the content "You are a senior information delivery optimizer and can extract [attribute At_1] targeting information from {user input}." The content of the text box 703 is the target input information corresponding to the input parameter, "I want to target the crowd above N1".
Further, the content of the text segment p1 is: "[attribute At_1] is a range that includes a start attribute Begin and an end attribute End." The content of the text segment p2 is: "The start attribute Begin and the end attribute End are obtained from {user input}." The content of the text segment p3 is: "If {user input} contains 'below', the start attribute Begin is 18; if {user input} contains 'above', the end attribute End is 99."
Correspondingly, based on this prompt word text, the first output information is displayed in the second area 702 of the first debugging interface 700 and includes the output parameter para_1: "Begin=N1" and the output parameter para_2: "End=99".
In this step of the embodiment, by further displaying the first output information containing at least one output parameter in the first debugging interface, the execution of each sub-requirement of the prompt word text is made visible, which further improves the test effect of the target prompt word and provides a basis for its subsequent optimization.
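For illustration only, the behaviour described by sub-requirements p1-p3 can be mirrored in plain Python; in the disclosure this extraction is performed by the target model guided by the prompt word text, so the sketch merely shows what the output parameters para_1 and para_2 are expected to contain.

```python
import re


def extract_attribute_range(user_input: str) -> dict:
    """Plain-Python mirror of sub-requirements p1-p3; in the disclosure this extraction is
    performed by the target model, not by code."""
    match = re.search(r"(\d+)", user_input)   # the value mentioned in the user input, e.g. 25
    value = match.group(1) if match else ""
    begin, end = value, value
    if "below" in user_input:                 # p3: "below" present -> start attribute Begin is 18
        begin = "18"
    if "above" in user_input:                 # p3: "above" present -> end attribute End is 99
        end = "99"
    return {"Begin": begin, "End": end}       # p1: [attribute At_1] is the range (Begin, End)


print(extract_attribute_range("I want to target the crowd above 25"))
# {'Begin': '25', 'End': '99'} -- matching the para_1 / para_2 output shown in Fig. 7
```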
In the embodiment of the disclosure, a target prompt word is loaded in response to a first instruction, the target prompt word being used to guide a target model to generate output information meeting a target requirement; the prompt word text of the target prompt word and the corresponding test result are displayed in the first debugging interface, where the content of the prompt word text includes a natural language description of the target requirement and the test result characterizes whether the output information generated by the target model based on the prompt word text meets the target requirement. Because the test result corresponding to the target prompt word is displayed as soon as the target prompt word is loaded, the developer user can quickly judge the quality of the target prompt word and then directly save or modify it. This simplifies the prompt word optimization workflow, improves the efficiency of generating and optimizing prompt words, and improves the quality of the prompt words.
Fig. 8 is a flow chart of a method for processing a prompt word according to an embodiment of the disclosure, where the embodiment is an alternative embodiment based on the foregoing embodiment. As shown in fig. 8, the method includes the steps of:
s801, loading a target prompt word in response to a first instruction, wherein the target prompt word is used for guiding a target model to generate output information meeting target requirements.
S802, displaying a test case set aiming at a target prompt word, wherein the test case set comprises at least two alternative test cases, and the alternative test cases are used for verifying whether target output parameters in first output information generated based on the prompt word text have expected parameter values.
S803, responding to the fourth instruction, and determining a target test case from at least two alternative test cases.
Illustratively, after the target prompt word is loaded, the terminal device may, automatically or when triggered by the user, display a test case set for the target prompt word. The test case set contains a plurality of alternative test cases which, like the test cases described in the previous embodiment, contain input information and the corresponding expected output. For example, an alternative test case Case_1 contains input information #1 with the content "I want to target the crowd above N1", and the corresponding expected output in Case_1 is "Begin=N1" and "End=99"; Begin and End are target output parameters in the first output information generated from the prompt word text, and N1 and 99 are the expected parameter values of those target output parameters. When Case_1 is selected as the target test case in response to the fourth instruction, the input information "I want to target the crowd above N1" in Case_1 is combined with the prompt word text as the model input for the test, the corresponding first output information is obtained, and it is then verified whether the parameter values of the target output parameters in the first output information match the expected parameter values of the expected output in Case_1, thereby automatically testing the target prompt word.
Optionally, before step S802, the method further includes:
step S800, a first auxiliary prompt word is obtained, and a target test case is automatically generated based on the first auxiliary model and the first auxiliary prompt word, wherein the first auxiliary prompt word is used for representing a rule for generating the test case matched with the target prompt word based on the first auxiliary model.
For example, in one possible implementation, the alternative test cases are extracted and written in advance by a developer user on a terminal device or a server, and the terminal device obtains them directly based on the identifier of the target prompt word. In another possible implementation, the alternative test cases are generated automatically by the first auxiliary model. The first auxiliary model is a pre-trained model for generating test cases; guided by the first auxiliary prompt word, it can generate test cases that match the target prompt word. The first auxiliary prompt word characterizes the rule for generating, based on the first auxiliary model, test cases matching the target prompt word. The specific implementation of the first auxiliary model is not limited here.
In this step of the embodiment, by acquiring the first auxiliary prompt word and dynamically generating test cases matching the target prompt word based on the first auxiliary model and the first auxiliary prompt word, accurate testing of the target prompt word is achieved. In the scenario of this embodiment, the target prompt word may be modified dynamically through user operations, so a pre-generated test case may yield inaccurate test results once the target prompt word has changed; dynamically generating test cases that match the current target prompt word therefore further improves the accuracy of the test.
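A sketch of step S800 under the assumption that the first auxiliary model is reachable through a generic call_aux_model callable returning a Json string; the wording of the first auxiliary prompt word shown here is an illustrative assumption.

```python
import json
from typing import Callable, List

# First auxiliary prompt word -- the wording below is an illustrative assumption.
FIRST_AUX_PROMPT = (
    "You generate test cases for prompt words. Given the prompt word text below, "
    "return {n} test cases as a Json list; each item must contain an \"inputs\" object "
    "and an \"expected_output\" object.\n\nPrompt word text:\n{prompt_text}"
)


def generate_candidate_cases(prompt_text: str, n: int,
                             call_aux_model: Callable[[str], str]) -> List[dict]:
    """Step S800: guide the first auxiliary model with the first auxiliary prompt word and
    parse its Json reply into alternative test cases (call_aux_model is an assumed callable)."""
    reply = call_aux_model(FIRST_AUX_PROMPT.format(n=n, prompt_text=prompt_text))
    return json.loads(reply)
```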
S804, obtaining first output information based on the target model and the target test case.
S805, obtaining a test result according to the expected output information and the first output information corresponding to the target test case.
After the target test case is obtained, the target prompt word is tested based on the target test case and the corresponding target model, so that first output information output by the target model can be obtained, and a specific implementation manner of the first output information is described in the previous embodiment and is not repeated here.
The terminal device then compares the expected output information contained in the target test case with the first output information to obtain the test result. In one possible implementation, when the expected output information and the first output information are expressed as parameter values, the test result can be obtained through a consistency comparison of those values. For example, if the output parameter para_1=N in the expected output information and para_1=M in the first output information, then when N equals M the two are consistent and the test result indicates that the output information generated by the target model based on the prompt word text meets the requirement, i.e., the test passes; otherwise, when N is not equal to M, the test does not pass. In yet another possible implementation, when the expected output information and the first output information are expressed as text, they may further be compared semantically. For this case, step S805 is specifically implemented as follows:
s8051, acquiring a first language model, wherein the first language model is used for judging consistency of at least two input texts based on semantics.
S8052, processing the expected output information corresponding to the target test case and the first output information based on the first language model to obtain the test result.
The first language model is a model with a semantic extraction function, or a model containing a functional unit that implements such a function. The semantics of the expected output information and of the first output information are extracted by the first language model and compared in the semantic dimension to obtain the test result. This avoids misjudgments caused by outputs that have the same meaning but are phrased differently, and improves the accuracy of the test result when the expected output information and the first output information are expressed as text.
Of course, it can be understood that the two ways of obtaining the test result may be used independently or together: first output information expressed as text is processed by the first language model, while first output information expressed as parameter values is checked for numerical consistency. The specific combination is set according to actual needs and is not limited here.
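A sketch combining the two comparison paths described above: exact consistency for parameter-value output and a semantic check for text output, with the first language model represented by an injected semantically_consistent callable (an assumption, not an interface defined by the disclosure).

```python
from typing import Callable, Dict


def compare_outputs(expected: Dict[str, str], actual: Dict[str, str],
                    semantically_consistent: Callable[[str, str], bool]) -> bool:
    """S805: parameter-style values are compared for exact consistency; text-style values fall
    back to the first language model, injected here as the semantically_consistent callable."""
    if expected.keys() != actual.keys():
        return False
    for name, expected_value in expected.items():
        actual_value = actual[name]
        if expected_value == actual_value:    # literal / numerical consistency, e.g. para_1 = N on both sides
            continue
        if not semantically_consistent(expected_value, actual_value):
            return False                      # neither literally nor semantically consistent
    return True
```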
S806, in the first debugging interface, the prompt word text of the target prompt word and the corresponding test result are displayed.
The specific implementation manner of displaying the prompt word text and the corresponding test result in the first debug interface is described in the embodiment shown in fig. 1, and will not be described herein.
In one possible implementation, the target test case includes at least two sub-test cases. The specific implementation manner of step S804 includes: and obtaining a first output result corresponding to each sub-test case based on at least two sub-test cases. Correspondingly, the specific implementation manner of step S805 includes: obtaining the test passing rate corresponding to the target prompt word according to the first output result corresponding to each sub-test case; and obtaining a test result according to the test passing rate.
Specifically, when the target test case includes a plurality of sub-test cases, the target prompt word may be tested with each of them in turn to further improve the test accuracy, yielding a plurality of corresponding first output results; these are generated in the same way as in the previous step and the details are not repeated. Each first output result is compared with the expected output information of its sub-test case to obtain a sub-test result, which contains, for example, first information representing "pass" or second information representing "fail". The test passing rate is then obtained based on the proportion of first information to second information, and the test result is derived from the passing rate: for example, if the passing rate is greater than or equal to a passing-rate threshold, the test result is "pass"; otherwise it is "fail".
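A minimal sketch of aggregating sub-test results into a test passing rate and an overall test result; the passing-rate threshold value used here is an assumed example.

```python
def test_result_from_sub_cases(sub_results: list, pass_rate_threshold: float = 0.8) -> dict:
    """Aggregate the sub-test results into a test passing rate and an overall test result;
    the 0.8 threshold is an assumed example value."""
    if not sub_results:
        return {"pass_rate": 0.0, "success": False}
    pass_rate = sum(1 for passed in sub_results if passed) / len(sub_results)
    return {"pass_rate": pass_rate, "success": pass_rate >= pass_rate_threshold}


print(test_result_from_sub_cases([True, True, False, True]))
# {'pass_rate': 0.75, 'success': False}
```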
Optionally, after step S806, the method further includes:
S807, displaying optimized text corresponding to the prompt word text, wherein the content of the optimized text comprises optimized suggestions aiming at the prompt word text and/or results of sentence optimization on the prompt word text.
For example, after the test result corresponding to the target prompt word is generated and displayed, when the test result represents "failed" as a Boolean value, or represents as a percentage that the reasonableness of the target prompt word is below a threshold, the terminal device may further display optimized text corresponding to the prompt word text, so as to provide modification suggestions for the developer user or to directly modify the unreasonable content in the prompt word text. Specifically, in one possible implementation, the optimized text includes modification suggestions for the target prompt word; for example, the following optimized text is displayed in the first debugging interface:
"advice 1, ambiguous, how should" above "and" below "be handled if both are contained in the user input? It is required to specify explicitly in the prompt word text. "
"suggestion 2, lack of consideration of boundary conditions, if the user input does not include" above "or" below, "how this should be handled, requires additional examples. "
In another possible implementation, the optimized text includes a modification result for the target prompt word, which may contain the content modified for the target prompt word, corresponding modification marks, and so on; this is not described in detail. Of course, the optimized text may also combine both forms, i.e., modification suggestions for some of the content of the prompt word text and modification results for other parts.
Further, step S807 can be used to automatically optimize the target prompt word according to the test result after the test result has been generated. Specifically, as shown in fig. 9, step S807 is implemented as follows:
s8071, obtaining a second auxiliary prompt word, wherein the second auxiliary prompt word is used for representing a rule for generating an optimized text aiming at the prompt word text based on the second auxiliary model.
S8072, generating optimized text corresponding to the prompt word text based on the second auxiliary model and the second auxiliary prompt word.
S8073, displaying the optimized text.
Illustratively, the modification of the prompt word text may be produced by a pre-trained second auxiliary model, which generates the corresponding optimized text for the input prompt word text under the guidance of the second auxiliary prompt word; the second auxiliary prompt word characterizes the rule for generating, based on the second auxiliary model, optimized text for the prompt word text. Similarly to the first auxiliary model, the second auxiliary model is a pre-trained model for optimizing prompt word text; guided by the second auxiliary prompt word, it can identify ambiguous, conflicting or vague parts of the prompt word text, modify them or generate suggestions for them, and output the optimized text corresponding to the prompt word text.
In this step of the embodiment, the optimized text is generated with the second auxiliary model and the second auxiliary prompt word, and the prompt word text is modified and optimized on that basis, which further improves the quality of the target prompt word and the efficiency of its iterative optimization.
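A sketch of steps S8071-S8072 under the same assumption as before, i.e. that the second auxiliary model is reachable through a generic call_aux_model callable; the wording of the second auxiliary prompt word is illustrative only.

```python
from typing import Callable

# Second auxiliary prompt word -- the wording below is an illustrative assumption.
SECOND_AUX_PROMPT = (
    "You review prompt words. Point out ambiguous, conflicting or incomplete parts of the "
    "prompt word text below and, for each, give a modification suggestion or a rewritten "
    "sentence:\n\n{prompt_text}"
)


def generate_optimized_text(prompt_text: str,
                            call_aux_model: Callable[[str], str]) -> str:
    """Steps S8071-S8072: guide the second auxiliary model with the second auxiliary prompt
    word to obtain optimized text for the loaded prompt word text (call_aux_model is assumed)."""
    return call_aux_model(SECOND_AUX_PROMPT.format(prompt_text=prompt_text))
```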
Optionally, after step S807, further includes:
s808, responding to the fifth instruction, and modifying the first text segment and/or the second text segment in the prompt word text to generate the optimized prompt word text.
S809, displaying the prompt word text and the optimized prompt word text based on paragraph comparison in the second debugging interface.
For example, based on specific user needs, the terminal device may modify the prompt word text according to a fifth instruction input by the user, for example by modifying the first text segment and/or the second text segment of the prompt word text, thereby generating the optimized prompt word text. The steps (S807-S808) of this embodiment may be performed after step S806, i.e., the prompt word text of the target prompt word is optimized based on the optimized text generated and displayed in the steps above, in combination with the automatically generated modification suggestions, to produce the optimized prompt word text. Of course, in another possible implementation, the steps (S807-S808) of this embodiment may also be performed at any time after the target prompt word has been loaded (step S801); this is not specifically limited here.
Specifically, after the first text segment and/or the second text segment of the prompt word text has been modified in response to the fifth instruction, the prompt word text and the optimized prompt word text may be displayed in the second debugging interface as a paragraph comparison: the two texts are aligned line by line and the differences between them are shown in the same row, for example by highlighting.
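A minimal sketch of such a paragraph comparison using Python's standard difflib module (a general-purpose choice; the disclosure does not name a specific diff mechanism).

```python
import difflib


def paragraph_comparison(prompt_text: str, optimized_text: str) -> str:
    """Align the prompt word text and the optimized prompt word text line by line and mark the
    rows that differ; the result here is an HTML table, standing in for the highlighted view
    of the second debugging interface."""
    original_lines = prompt_text.splitlines()
    optimized_lines = optimized_text.splitlines()
    return difflib.HtmlDiff().make_table(
        original_lines, optimized_lines,
        fromdesc="prompt word text", todesc="optimized prompt word text",
    )
```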
Fig. 10 is a flowchart of processing a target prompt word according to an embodiment of the disclosure; the embodiment is described in more detail below with reference to fig. 10. First, in response to the first instruction, the target prompt word is loaded in the first debugging interface, as shown in fig. 10. Then, the first auxiliary model is guided by the first auxiliary prompt word to generate a test case set containing a plurality of alternative test cases; a target test case is determined in response to the fourth instruction; and target input information is obtained in response to the second instruction. Next, first output information is obtained based on the target model and the target test case, the first output information is compared with the expected output information using the first language model, and the test result is obtained and displayed. Optimized text corresponding to the prompt word text is then generated based on the second auxiliary model and the second auxiliary prompt word, the prompt word text is modified in response to the fifth instruction to generate the optimized prompt word text, and the prompt word text and the optimized prompt word text are displayed in the second debugging interface as a paragraph comparison.
In this embodiment, the loading, testing and optimization of the target prompt word are carried out as a streamlined flow through the interactive interface of the terminal device, providing linked functions for quickly creating, testing in real time and automatically optimizing the target prompt word. The creation and optimization of model prompt words can thus be completed independently on a single terminal device, which improves the efficiency of creating and iteratively optimizing model prompt words, improves the quality of the prompt words, and in turn improves model performance.
Of course, it can be understood that the method provided in the above embodiments of the present disclosure may also be executed by other electronic devices, such as a server, as the execution subject. For example, the server executes the data processing steps of the above embodiments and sends the information generated in each step to the terminal device for display, so as to implement the steps of the above embodiments; the implementation process and principle are similar to those with the terminal device as the execution subject and are not repeated.
Corresponding to the prompt word processing method of the above embodiments, fig. 11 is a block diagram of a prompt word processing apparatus provided in an embodiment of the present disclosure; for ease of explanation, only the parts related to the embodiment of the present disclosure are shown. Referring to fig. 11, the prompt word processing apparatus 1100 includes:
A first response unit 1101, configured to load, in response to a first instruction, a target prompt word, where the target prompt word is used to guide the target model to generate output information that meets a target requirement;
the first processing unit 1102 is configured to display, in the first debug interface, a prompt word text of a target prompt word and a corresponding test result, where the content of the prompt word text includes a natural language description for a target requirement, and the test result is used to characterize whether output information generated by the target model based on the prompt word text meets the target requirement.
In some embodiments, the prompt word text includes at least one input parameter, the input parameter is used to characterize input information for the target model, and the target requirement is achieved by processing the input information.
In some embodiments, the prompt word processing apparatus 1100 further includes: a second response unit, configured to obtain, in response to the second instruction, target input information corresponding to the input parameter, and obtain a test result based on the target input information and the prompt word text; the second instruction is used to indicate the target test case corresponding to the target prompt word or to input a target character string.
In some embodiments, the prompt word processing apparatus 1100 further includes: a third response unit, configured to generate, in response to the third instruction, a test case corresponding to the target prompt word according to the target input information and the output information corresponding to the target input information.
In some embodiments, the prompt word processing apparatus 1100 further includes: a second processing unit 1102, configured to display, in the first debugging interface, first output information generated by the target model based on the prompt word text; the target requirement includes at least one sub-requirement, and the first output information includes at least one output parameter, where each output parameter represents the result of processing an input parameter in the prompt word text according to a sub-requirement.
In some embodiments, the first processing unit 1101 further comprises: the first processing module is used for acquiring a target test case corresponding to the target prompt word; and the second processing module is used for obtaining a test result based on the target model and the target test case.
In some embodiments, the second processing module comprises: the first sub-processing module is used for obtaining first output information based on the target model and the target test case; and the second sub-processing module is used for obtaining a test result according to the expected output information and the first output information corresponding to the target test case.
In some embodiments, the second sub-processing module is specifically configured to: acquire a first language model, where the first language model is used to judge the consistency of at least two input texts based on semantics; and process, based on the first language model, the expected output information corresponding to the target test case and the first output information to obtain the test result.
In some embodiments, the first processing module comprises: the third sub-processing module is used for displaying a test case set aiming at the target prompt word, wherein the test case set comprises at least two alternative test cases, and the alternative test cases are used for verifying whether target output parameters in first output information generated based on the prompt word text have expected parameter values or not; and responding to the fourth instruction, and determining a target test case from at least two alternative test cases.
In some embodiments, the first processing module comprises: the fourth sub-processing module is used for acquiring a first auxiliary prompt word, and the first auxiliary prompt word is used for representing a rule for generating a test case matched with the target prompt word based on the first auxiliary model; and automatically generating alternative test cases based on the first auxiliary model and the first auxiliary prompt word.
In some embodiments, the target test case includes at least two child test cases; a second processing module comprising: the fifth sub-processing module is used for obtaining a first output result corresponding to each sub-test case based on at least two sub-test cases; the sixth sub-processing module is used for obtaining the test passing rate corresponding to the target prompt word according to the first output result corresponding to each sub-test case; and the seventh sub-processing module is used for obtaining a test result according to the test passing rate.
In some embodiments, at least one of the following is included: the text of the prompt word at least comprises a first text segment and/or a second text segment, wherein the first text segment is used for representing the data format of the output information; the second text segment is used for representing logic for processing the input information to meet target requirements; the prompt word text is presented in plain text or markup language format.
In some embodiments, the prompt word processing apparatus 1100 further includes: a fourth response unit, configured to modify, in response to the fifth instruction, the first text segment and/or the second text segment in the prompt word text to generate optimized prompt word text, and to display the prompt word text and the optimized prompt word text as a paragraph comparison in the second debugging interface.
In some embodiments, the prompt word processing apparatus 1100 further includes: a third processing unit, configured to display optimized text corresponding to the prompt word text, where the content of the optimized text includes optimization suggestions for the prompt word text and/or the result of sentence-level optimization of the prompt word text.
In some embodiments, the third processing unit comprises: the third processing module is used for acquiring a second auxiliary prompt word, and the second auxiliary prompt word is used for representing a rule for generating an optimized text aiming at the prompt word text based on the second auxiliary model; and the fourth processing module is used for generating optimized text corresponding to the prompt word text based on the second auxiliary model and the second auxiliary prompt word.
The prompt word processing apparatus provided in fig. 11 may perform the steps involved in the foregoing corresponding method embodiments, and its implementation principle and technical effects are similar, which are not described herein again.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
According to an embodiment of the present disclosure, the present disclosure further provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the aspects provided in any one of the embodiments described above.
According to an embodiment of the present disclosure, there is also provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the solution provided by any one of the above embodiments.
According to an embodiment of the present disclosure, the present disclosure also provides a computer program product, comprising: a computer program stored in a readable storage medium, from which at least one processor of an electronic device can read the computer program; the at least one processor executes the computer program to cause the electronic device to perform the solution provided by any one of the embodiments described above.
Fig. 12 shows a schematic block diagram of an example electronic device 1200 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 12, the device 1200 includes a computing unit 1201, which may perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1202 or a computer program loaded from a storage unit 1208 into a Random Access Memory (RAM) 1203. In the RAM 1203, various programs and data required for the operation of the device 1200 may also be stored. The computing unit 1201, the ROM 1202, and the RAM 1203 are connected to each other via a bus 1204. An input/output (I/O) interface 1205 is also connected to the bus 1204.
Various components in the device 1200 are connected to the I/O interface 1205, including: an input unit 1206, such as a keyboard, a mouse, etc.; an output unit 1207, such as various types of displays, speakers, and the like; a storage unit 1208, such as a magnetic disk, an optical disk, or the like; and a communication unit 1209, such as a network card, a modem, a wireless communication transceiver, etc. The communication unit 1209 allows the device 1200 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 1201 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 1201 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, Digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The computing unit 1201 performs the various methods and processes described above, such as the prompt word processing method. For example, in some embodiments, the prompt word processing method can be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 1208. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 1200 via the ROM 1202 and/or the communication unit 1209. When the computer program is loaded into the RAM 1203 and executed by the computing unit 1201, one or more steps of the prompt word processing method described above may be performed. Alternatively, in other embodiments, the computing unit 1201 may be configured to perform the prompt word processing method in any other suitable way (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general purpose computer, a special purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host, which is a host product in a cloud computing service system and overcomes the defects of difficult management and weak service scalability found in traditional physical hosts and Virtual Private Server (VPS) services. The server may also be a server of a distributed system or a server that incorporates a blockchain.
It should be appreciated that steps may be reordered, added, or deleted using the various forms of flows shown above. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions of the present disclosure can be achieved; no limitation is imposed herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (33)

1. A method for processing a prompt word, comprising:
responding to a first instruction, loading a target prompt word, wherein the target prompt word is used for guiding a target model to generate output information meeting target requirements;
and displaying a prompt word text of the target prompt word and a corresponding test result in a first debugging interface, wherein the content of the prompt word text comprises natural language description aiming at the target requirement, and the test result is used for representing whether output information generated by the target model based on the prompt word text meets the target requirement or not.
2. The method of claim 1, wherein the prompt word text includes at least one input parameter, the input parameter being used to characterize input information for the target model, the target requirement being achieved by processing the input information.
3. The method of claim 2, prior to displaying the prompt word text of the target prompt word and the corresponding test result, the method further comprising:
responding to a second instruction, and obtaining target input information corresponding to the input parameters;
obtaining the test result based on the target input information and the prompt word text;
the second instruction is used for indicating a target test case corresponding to the target prompt word or inputting a target character string.
4. The method of claim 3, the method further comprising:
and responding to a third instruction, and generating a test case corresponding to the target prompt word according to the target input information and output information corresponding to the target input information.
5. The method of claim 1, further comprising:
displaying first output information generated by the target model based on the prompt word text in the first debugging interface;
wherein the target requirement includes at least one sub-requirement;
the first output information comprises at least one output parameter, and the output parameter represents a result of processing the input parameter in the prompt word text based on the sub-requirement.
6. The method of claim 1, further comprising:
acquiring a target test case corresponding to the target prompt word;
and obtaining the test result based on the target model and the target test case.
7. The method of claim 6, the obtaining the test result based on the target model and the target test case, comprising:
obtaining first output information based on the target model and the target test case;
and obtaining the test result according to the expected output information corresponding to the target test case and the first output information.
8. The method of claim 7, wherein the obtaining the test result according to the expected output information corresponding to the target test case and the first output information includes:
acquiring a first language model, wherein the first language model is used for judging consistency of at least two input texts based on semantics;
and processing the expected output information corresponding to the target test case and the first output information based on the first language model to obtain the test result.
9. The method of claim 6, further comprising:
displaying a test case set aiming at the target prompt word, wherein the test case set comprises at least two alternative test cases, and the alternative test cases are used for verifying whether target output parameters in first output information generated based on the prompt word text have expected parameter values or not;
the obtaining the target test case corresponding to the target prompt word comprises the following steps:
and responding to a fourth instruction, and determining the target test case from the at least two alternative test cases.
10. The method of claim 6, further comprising, prior to obtaining the target test case corresponding to the target prompt word:
acquiring a first auxiliary prompt word, wherein the first auxiliary prompt word is used for representing a rule for generating a test case matched with the target prompt word based on a first auxiliary model;
and automatically generating the target test case based on the first auxiliary model and the first auxiliary prompt word.
11. The method of claim 6, the target test case comprising at least two sub-test cases; the obtaining the test result based on the target model and the target test case includes:
based on the at least two sub-test cases, obtaining a first output result corresponding to each sub-test case;
obtaining the test passing rate corresponding to the target prompt word according to the first output result corresponding to each sub test case;
and obtaining the test result according to the test passing rate.
12. The method of claim 1, comprising at least one of:
the prompt word text at least comprises a first text segment and/or a second text segment, wherein the first text segment is used for representing the data format of the output information; the second text segment is used for representing logic for processing input information to meet the target requirement;
the prompt word text is presented in plain text or markup language format.
13. The method of claim 12, further comprising:
responding to a fifth instruction, and modifying the first text segment and/or the second text segment in the prompt word text to generate an optimized prompt word text;
the method further comprises the steps of:
and displaying the prompt word text and the optimized prompt word text based on paragraph comparison in a second debugging interface.
14. The method of claim 1, further comprising:
and displaying an optimized text corresponding to the prompt word text, wherein the content of the optimized text comprises an optimization suggestion aiming at the prompt word text and/or a sentence optimization result of the prompt word text.
15. The method of claim 14, the displaying the optimized text corresponding to the prompt word text, comprising:
acquiring a second auxiliary prompt word, wherein the second auxiliary prompt word is used for representing a rule for generating an optimized text aiming at the prompt word text based on a second auxiliary model;
and generating optimized text corresponding to the prompt word text based on the second auxiliary model and the second auxiliary prompt word.
16. A prompt word processing apparatus, comprising:
the first response unit is used for responding to the first instruction and loading target prompt words, and the target prompt words are used for guiding the target model to generate output information meeting target requirements;
the first processing unit is used for displaying a prompt word text of the target prompt word and a corresponding test result in a first debugging interface, wherein the content of the prompt word text comprises natural language description aiming at the target requirement, and the test result is used for representing whether output information generated by the target model based on the prompt word text meets the target requirement or not.
17. The apparatus of claim 16, the prompt word text comprising at least one input parameter that characterizes input information for the target model, the target requirement being achieved by processing the input information.
18. The apparatus of claim 17, further comprising:
the second response unit is used for responding to a second instruction and obtaining target input information corresponding to the input parameters; obtaining the test result based on the target input information and the prompt word text; the second instruction is used for indicating a target test case corresponding to the target prompt word or inputting a target character string.
19. The apparatus of claim 18, further comprising:
and the third response unit is used for responding to a third instruction and generating a test case corresponding to the target prompt word according to the target input information and the output information corresponding to the target input information.
20. The apparatus of claim 16, further comprising:
the second processing unit is used for displaying first output information generated by the target model based on the prompt word text in the first debugging interface; wherein the target requirement includes at least one sub-requirement; the first output information comprises at least one output parameter, and the output parameter represents a result of processing the input parameter in the prompt word text based on the sub-requirement.
21. The apparatus of claim 16, the first processing unit further comprising:
the first processing module is used for acquiring a target test case corresponding to the target prompt word;
and the second processing module is used for obtaining the test result based on the target model and the target test case.
22. The apparatus of claim 21, the second processing module comprising:
the first sub-processing module is used for obtaining first output information based on the target model and the target test case;
and the second sub-processing module is used for obtaining the test result according to the expected output information corresponding to the target test case and the first output information.
23. The apparatus of claim 22, the second sub-processing module being specifically configured to:
acquiring a first language model, wherein the first language model is used for judging consistency of at least two input texts based on semantics;
and processing the expected output information corresponding to the target test case and the first output information based on the first language model to obtain a tested result.
24. The apparatus of claim 21, the first processing module comprising:
the third sub-processing module is used for displaying a test case set aiming at the target prompt word, wherein the test case set comprises at least two alternative test cases, and the alternative test cases are used for verifying whether target output parameters in first output information generated based on the prompt word text have expected parameter values or not; and responding to a fourth instruction, and determining the target test case from the at least two alternative test cases.
25. The apparatus of claim 21, the first processing module comprising:
the fourth sub-processing module is used for acquiring a first auxiliary prompt word, and the first auxiliary prompt word is used for representing a rule for generating a test case matched with the target prompt word based on a first auxiliary model; and automatically generating the alternative test cases based on the first auxiliary model and the first auxiliary prompt word.
26. The apparatus of claim 21, the target test case comprising at least two sub-test cases; the second processing module includes:
the fifth sub-processing module is used for obtaining a first output result corresponding to each sub-test case based on the at least two sub-test cases;
the sixth sub-processing module is used for obtaining the test passing rate corresponding to the target prompt word according to the first output result corresponding to each sub-test case;
and the seventh sub-processing module is used for obtaining the test result according to the test passing rate.
27. The apparatus of claim 16, comprising at least one of:
the prompt word text at least comprises a first text segment and/or a second text segment, wherein the first text segment is used for representing the data format of the output information; the second text segment is used for representing logic for processing input information to meet the target requirement;
the prompt word text is presented in plain text or markup language format.
28. The apparatus of claim 27, further comprising:
the fourth response unit is used for responding to the fifth instruction, modifying the first text segment and/or the second text segment in the prompt word text and generating an optimized prompt word text; and displaying the prompt word text and the optimized prompt word text based on paragraph comparison in a second debugging interface.
29. The apparatus of claim 16, further comprising:
and the third processing unit is used for displaying the optimized text corresponding to the prompt word text, wherein the content of the optimized text comprises an optimization suggestion aiming at the prompt word text and/or a sentence optimization result of the prompt word text.
30. The apparatus of claim 29, the third processing unit comprising:
the third processing module is used for acquiring a second auxiliary prompt word, and the second auxiliary prompt word is used for representing a rule for generating an optimized text aiming at the prompt word text based on a second auxiliary model;
and the fourth processing module is used for generating optimized text corresponding to the prompt word text based on the second auxiliary model and the second auxiliary prompt word.
31. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-15.
32. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-15.
33. A computer program product comprising a computer program which, when executed by a processor, implements the steps of the method of any of claims 1-15.
CN202311340790.3A 2023-10-16 2023-10-16 Prompt word processing method and device, electronic equipment and storage medium Pending CN117666812A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311340790.3A CN117666812A (en) 2023-10-16 2023-10-16 Prompt word processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117666812A true CN117666812A (en) 2024-03-08

Family

ID=90065142

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311340790.3A Pending CN117666812A (en) 2023-10-16 2023-10-16 Prompt word processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117666812A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160055155A1 (en) * 2014-08-19 2016-02-25 International Business Machines Corporation Answering Superlative Questions with a Question and Answer System
CN113869042A (en) * 2021-09-18 2021-12-31 北京百度网讯科技有限公司 Text title generation method and device, electronic equipment and storage medium
CN113962315A (en) * 2021-10-28 2022-01-21 北京百度网讯科技有限公司 Model pre-training method, device, equipment, storage medium and program product
US20230315994A1 (en) * 2022-03-31 2023-10-05 Smart Information Flow Technologies, LLC Natural Language Processing for Addressing Bias
CN116738250A (en) * 2023-06-15 2023-09-12 广州虎牙科技有限公司 Prompt text expansion method, device, electronic equipment and storage medium
CN116860935A (en) * 2023-07-05 2023-10-10 康键信息技术(深圳)有限公司 Content management method, device, equipment and medium based on prompt word question-answer interaction

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHOUGUAN XIAO et al.: "Optimizing Continuous Prompts for Visual Relationship Detection by Affix-Tuning", IEEE, 4 July 2022 (2022-07-04) *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination