CN115132353A - Method, device and equipment for generating psychological question automatic response model - Google Patents

Method, device and equipment for generating psychological question automatic response model

Info

Publication number
CN115132353A
Authority
CN
China
Prior art keywords
language
sentence
model
reply
psychological
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210845973.XA
Other languages
Chinese (zh)
Inventor
唐蕊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202210845973.XA priority Critical patent/CN115132353A/en
Publication of CN115132353A publication Critical patent/CN115132353A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 - Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 - Querying
    • G06F 16/332 - Query formulation
    • G06F 16/3329 - Natural language query formulation or dialogue systems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 - Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 - Querying
    • G06F 16/3331 - Query processing
    • G06F 16/334 - Query execution
    • G06F 16/3344 - Query execution using natural language analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 - Handling natural language data
    • G06F 40/20 - Natural language analysis
    • G06F 40/205 - Parsing
    • G06F 40/211 - Syntactic parsing, e.g. based on context-free grammar [CFG] or unification grammars
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 20/00 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H 20/70 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mental therapies, e.g. psychological therapy or autogenous training

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • General Physics & Mathematics (AREA)
  • Public Health (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Primary Health Care (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Epidemiology (AREA)
  • Child & Adolescent Psychology (AREA)
  • Pathology (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Developmental Disabilities (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychiatry (AREA)
  • Psychology (AREA)
  • Social Psychology (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a method, an apparatus and a device for generating an automatic psychological question response model, relates to the technical field of psychological question response, and can solve the problem that the psychological questions of a help seeker cannot be answered in a targeted manner. The method comprises the following steps: acquiring a first language segment described by a help seeker and a second language segment in which a psychological consultant replies to the first language segment; training a basic GPT language model with the first language segment to obtain the second language segment, and generating a pre-trained GPT language model; and marking, in the second language segment, the marked reply sentences that solve the psychological problem of the help seeker, training the pre-trained GPT language model with the first identity information of the help seeker and the first language segment to obtain the marked reply sentences, and generating the automatic psychological question response model.

Description

Method, device and equipment for generating psychological question automatic response model
Technical Field
The present application relates to the field of psychological question answering technologies, and in particular, to a method, an apparatus, and a device for generating an automatic psychological question answering model.
Background
With the rapid development of internet technology and applications, it is becoming increasingly common for psychological consultants or psychological consulting organizations to provide psychological consulting services over a network. In one such service, a help seeker sends a description of his or her psychological problems to a psychological consultant over the network and the consultant replies to the description, but the available psychological consultants cannot meet the rapidly growing demand for psychological help.
Because the psychological problems described by many help seekers are similar, the prior art builds a psychological help-seeking question similarity model based on text-matching methods from natural language processing and helps the help seeker solve psychological problems by recommending replies to similar questions. However, the existing solution suffers from insufficient personalization of question responses, i.e. it cannot answer an individual psychological question in a targeted manner, and from a poor user experience, i.e. the help seeker only receives replies to questions similar to his or her own and lacks a targeted service experience.
Disclosure of Invention
In view of this, the present application provides a method, an apparatus, and a device for generating an automatic psychological problem response model, which relate to the technical field of psychological problem response and can solve the problem that a psychological problem of a help seeker cannot be responded to in a targeted manner.
According to an aspect of the present application, there is provided a method of generating an automatic psychological question response model, the method including:
acquiring a first language section described by a help seeker and a second language section replied by a psychological consultant to the first language section;
training a basic GPT language model by utilizing the first language segment to obtain the second language segment, and generating a pre-training GPT language model;
and marking a marked reply sentence for solving the psychological problem of the help seeker in the second language segment, and training the pre-trained GPT language model by utilizing the first identity information and the first language segment of the help seeker to obtain the marked reply sentence so as to generate an automatic psychological problem reply model.
Preferably, the method further comprises:
acquiring second identity information of a target help seeker and a third language segment narrated by the target help seeker;
and inputting the second identity information and the third language segment into the automatic psychological question answering model to obtain a target answering result.
Preferably, the inputting the second identity information and the third speech segment into the automatic psychological question response model to obtain a target response result includes:
inputting the second identity information and the third language segment into the psychological question automatic response model;
the psychological question automatic response model outputs a first sub-target response result for responding to the first target sentence according to the second identity information and the first target sentence of the third language segment;
the psychological question automatic response model outputs a second sub-target response result responding to the second target sentence according to the second identity information, the first target sentence, the first sub-target response result and the second target sentence of the third language segment;
and summarizing the sub-target response results corresponding to all the target sentences to obtain the target response result, until the responses to all the target sentences of the third language segment are finished.
Preferably, the training a basic GPT language model using the first corpus to obtain the second corpus includes:
training the basic GPT language model by utilizing a first sentence of the first language segment to obtain a first sub-reply sentence which replies to the first sentence in the second language segment;
training the basic GPT language model by using the first sentence, the first sub-reply sentence and a second sentence of the first language segment to obtain a second sub-reply sentence which replies to the second sentence in the second language segment;
and so on, until training on all sentences of the second language segment is finished.
Preferably, the training the pre-trained GPT language model using the first identity information of the help seeker and the first speech segment to obtain the tagged reply sentence comprises:
determining whether the first sub-reply sentence is the markup reply sentence;
if the first sub-reply sentence is the marked reply sentence, inputting the first identity information of the help seeker and the first sentence into the pre-training GPT language model to obtain the marked reply sentence;
if the first sub-reply sentence is not the tagged reply sentence, determining whether the second sub-reply sentence is the tagged reply sentence;
if the second sub-reply sentence is the marked reply sentence, inputting the first identity information, the first sentence, the first sub-reply sentence and the second sentence into the pre-training GPT language model to obtain the marked reply sentence;
until all of the markup reply sentences are obtained.
Preferably, after the obtaining of the target reply result, the method further comprises:
and the target help seeker evaluates the satisfaction degree of the target response result.
According to another aspect of the present application, there is provided an apparatus for generating an automatic psychological question response model, the apparatus including:
the first obtaining module is used for obtaining a first language section described by a help seeker and a second language section for a psychological consultant to reply to the first language section;
the training module is used for training a basic GPT language model by utilizing the first language fragment to obtain the second language fragment and generating a pre-training GPT language model;
and the generating module is used for marking a reply sentence for solving the psychological problem of the help seeker in the second language segment, training the pre-training GPT language model by utilizing the first identity information and the first language segment of the help seeker to obtain the reply sentence, and generating an automatic psychological problem reply model.
Preferably, the apparatus further comprises:
the second acquisition module is used for acquiring second identity information of the target help seeker and a third language segment narrated by the target help seeker;
and the reply module is used for inputting the second identity information and the third speech segment into the automatic psychological question reply model to obtain a target reply result.
According to still another aspect of the present application, there is provided a non-transitory readable storage medium having stored thereon a computer program which, when executed by a processor, implements the above-described method for generating an automatic answer model to psychological questions.
According to still another aspect of the present application, there is provided a computer apparatus including a non-volatile readable storage medium, a processor, and a computer program stored on the non-volatile readable storage medium and executable on the processor, the processor implementing the method for generating the above-described automatic answer to psychological problem model when executing the program.
By means of the above technical solution, the application discloses a method, an apparatus and a device for generating an automatic psychological question response model. A first language segment described by a help seeker and a second language segment in which a psychological consultant replies to the first language segment are acquired; a basic GPT language model is trained with the first language segment to obtain the second language segment, generating a pre-trained GPT language model; the marked reply sentences that solve the psychological problem of the help seeker are marked in the second language segment, and the pre-trained GPT language model is trained with the first identity information of the help seeker and the first language segment to obtain the marked reply sentences, generating the automatic psychological question response model. With this technical solution, replies to the help seeker's questions are generated automatically by the automatic psychological question response model, which alleviates the shortage of psychological consultant resources and frees up manpower. At the same time, because a large amount of data cannot all be labelled, the model is obtained by training on data in two stages: in the first stage, the pre-trained GPT language model is obtained by training on a large amount of unlabelled data; in the second stage, the data that solve the psychological problems of the help seeker are labelled, and the automatic psychological question response model is obtained by training on the labelled data. The psychological questions received can thus be answered in a targeted manner, which solves the problems of insufficient personalization and poor user experience and improves response efficiency.
The foregoing description is only an overview of the technical solutions of the present application, and the present application can be implemented according to the content of the description in order to make the technical means of the present application more clearly understood, and the following detailed description of the present application is given in order to make the above and other objects, features, and advantages of the present application more clearly understandable.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application to the disclosed embodiment. In the drawings:
fig. 1 is a schematic flowchart illustrating a method for generating an automatic psychological question response model according to an embodiment of the present application;
FIG. 2 is a flow chart illustrating another method for generating an automatic answer model of psychological questions according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an apparatus for generating an automatic psychological question response model according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of another apparatus for generating an automatic psychological problem response model according to an embodiment of the present application;
FIG. 5 is a schematic flow chart illustrating a process for generating a pre-trained GPT language model according to an embodiment of the present disclosure;
FIG. 6 is a flowchart illustrating an exemplary process for generating an automatic answer model of psychological questions according to an embodiment of the present disclosure;
Fig. 7 is a schematic flowchart illustrating a process for applying an automatic answer model to a psychological question according to an embodiment of the present application.
Detailed Description
The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
In view of the current problems, the embodiment of the present application provides a method for generating an automatic psychological problem response model, as shown in fig. 1, the method includes:
101. and acquiring a first language fragment described by the help seeker and a second language fragment for responding the first language fragment by the psychological consultant.
For this embodiment, the data may be obtained from question-and-answer forums, background question-and-answer data of psychological consulting platforms, hospital psychological consultation records, and the like, which are not limited herein; the first language segment and the second language segment relate to various psychological consultation topics. The dialogue between the help seeker and the psychological consultant may proceed as follows: the help seeker says a sentence and the psychological consultant answers it correspondingly; all the sentences described by the help seeker finally form the first language segment, and all the sentences answered by the psychological consultant form the second language segment.
For example, the first language segment and the second language segment may together be represented as A = (S1, R1, S2, R2, S3, R3, S4, R4, ..., Sn-1, Rn-1, Sn, Rn), where S denotes an utterance of the help seeker, R denotes an utterance of the psychological consultant, and Rn is the reply to Sn; S1, S2, ..., Sn form the help seeker's utterances and R1, R2, ..., Rn form the psychological consultant's utterances.
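As an illustration of this alternating structure, a dialogue may be held as a list of turns from which the two language segments are extracted. The sketch below is purely illustrative; the concrete sentences, the Turn class and the field names are invented for the example and are not taken from the patent.

# Illustrative sketch (Python): one consultation dialogue A = (S1, R1, ..., Sn, Rn)
# stored as alternating help-seeker / consultant turns.
from dataclasses import dataclass

@dataclass
class Turn:
    speaker: str  # "seeker" for S turns, "consultant" for R turns
    text: str

dialogue = [
    Turn("seeker", "I have not been sleeping well lately."),          # S1
    Turn("consultant", "How long has this been going on?"),           # R1
    Turn("seeker", "About two months, since I changed jobs."),        # S2
    Turn("consultant", "Job changes are a common source of stress."), # R2
]

first_segment = [t.text for t in dialogue if t.speaker == "seeker"]       # S1, ..., Sn
second_segment = [t.text for t in dialogue if t.speaker == "consultant"]  # R1, ..., Rn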
102. And training the basic GPT language model by utilizing the first language segment to obtain a second language segment, and generating a pre-training GPT language model.
The basic GPT (Generative Pre-trained Transformer) language model is obtained by training on large-scale Chinese dialogue text, and the pre-trained GPT language model is obtained by continuing to train on the basis of the basic GPT language model; the training of the basic GPT language model is unsupervised. Specifically, the basic GPT language model uses the decoder structure of the Transformer, with some changes to the Transformer decoder: the original decoder contains two multi-head attention blocks, while the basic GPT language model keeps only the masked multi-head attention.
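The masked multi-head attention mentioned above can be sketched as follows. The single-head NumPy code below is a minimal illustration of the causal-masking idea, with arbitrary toy dimensions assumed for the example; it is not the patent's implementation.

# Minimal sketch (Python/NumPy): causal ("masked") self-attention as used in a GPT-style decoder.
import numpy as np

def masked_self_attention(x, w_q, w_k, w_v):
    # x: (seq_len, d_model); w_q/w_k/w_v: (d_model, d_head). Returns (seq_len, d_head).
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])               # (seq_len, seq_len)
    # Causal mask: position i may only attend to positions <= i, so each token of a
    # reply is predicted from earlier context only.
    mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
    scores = np.where(mask, -1e9, scores)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # row-wise softmax
    return weights @ v

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 16))                              # 5 toy tokens, d_model = 16
w = [rng.normal(size=(16, 8)) for _ in range(3)]          # d_head = 8
print(masked_self_attention(x, *w).shape)                 # (5, 8)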
For this embodiment, as an implementation, the second language segment, i.e. the psychological consultant's replies, is used as the training target, so that the pre-trained model focuses on contexts related to psychological question consultation.
103. And marking the marked reply sentences for solving the psychological problems of the help seeker in the second language segment, training a pre-training GPT language model by utilizing the first identity information and the first language segment of the help seeker to obtain the marked reply sentences, and generating an automatic psychological problem reply model.
In this embodiment, the pre-trained GPT language model obtained by the training in step 102 uses the unlabelled second language segment: on the one hand, the amount of labelled data is too small for accurate model training; on the other hand, it is not practical to label a large amount of data.
For this embodiment, as an implementation, after the pre-trained GPT language model is obtained by training the basic GPT language model with the unlabelled second language segment as the target, the marked reply sentences that solve the psychological problems of the help seeker are marked in the second language segment, and the automatic psychological question response model is obtained by training the pre-trained GPT language model with the marked reply sentences as the target. This second stage is supervised and can address the insufficient personalization of question responses in a targeted manner.
The application discloses a method, an apparatus and a device for generating an automatic psychological question response model. The method first acquires a first language segment described by a help seeker and a second language segment in which a psychological consultant replies to the first language segment; then trains a basic GPT language model with the first language segment to obtain the second language segment, generating a pre-trained GPT language model; and finally marks, in the second language segment, the marked reply sentences that solve the psychological problem of the help seeker, and trains the pre-trained GPT language model with the first identity information of the help seeker and the first language segment to obtain the marked reply sentences, generating the automatic psychological question response model. With this technical solution, replies to the help seeker's questions are generated automatically by the automatic psychological question response model, which alleviates the shortage of psychological consultant resources and frees up manpower. At the same time, because a large amount of data cannot all be labelled, the model is obtained by training on data in two stages: in the first stage, the pre-trained GPT language model is obtained by training on a large amount of unlabelled data; in the second stage, the data that solve the psychological problems of the help seeker are labelled, and the automatic psychological question response model is obtained by training on the labelled data. The psychological questions received can thus be answered in a targeted manner, which solves the problems of insufficient personalization and poor user experience and improves response efficiency.
Further, as a refinement and an extension of the specific implementation of the above embodiment, in order to fully explain the specific implementation process in this embodiment, another method for generating an automatic answer model to a psychological question is provided, as shown in fig. 2, the method includes:
201. and acquiring a first language section described by the help seeker and a second language section for responding the first language section by the psychological consultant.
For this embodiment, the detailed implementation is the same as step 101 in the embodiment, and is not described herein again.
202. And training the basic GPT language model by utilizing the first language segment to obtain a second language segment, and generating a pre-training GPT language model.
For this embodiment, training the basic GPT language model using the first language segment to obtain the second language segment includes: training the basic GPT language model by using the first sentence of the first language segment to obtain a first sub-reply sentence which replies to the first sentence in the second language segment; training the basic GPT language model by using the first sentence, the first sub-reply sentence and the second sentence of the first language segment to obtain a second sub-reply sentence which replies to the second sentence in the second language segment; and so on, until training on all sentences of the second language segment is finished.
The first language fragment does not need to be manually divided into sentences, and the model can divide the sentences for the first language fragment as long as the first language fragment is input into the basic GPT language model.
Specifically, as shown in fig. 5, take a piece of training data D = (S1, R1, S2, R2, S3, R3, S4, R4, S5, R5) as an example, where S1 is the first sentence of the first language segment, R1 is the first sub-reply sentence in the second language segment, S2 is the second sentence of the first language segment, R2 is the second sub-reply sentence replying to the second sentence in the second language segment, and all the sentences of the second language segment are R1 to R5.
S1 is input into the basic GPT language model to obtain R1; S1, R1 and S2 are input into the basic GPT language model to obtain R2; S1, R1, S2, R2 and S3 are input into the basic GPT language model to obtain R3; S1, R1, S2, R2, S3, R3 and S4 are input into the basic GPT language model to obtain R4; and S1, R1, S2, R2, S3, R3, S4, R4 and S5 are input into the basic GPT language model to obtain R5. This piece of data trains the basic GPT language model once; the basic GPT language model is trained multiple times with multiple pieces of data in the same way, and the pre-trained GPT language model is then generated.
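The incremental procedure above can be sketched as the construction of (context, target) pairs for causal-language-model fine-tuning. The code below is a generic illustration under that assumption; the function name and the simple string concatenation are not the patent's actual code.

# Illustrative sketch (Python): unroll one dialogue D = ((S1, R1), ..., (S5, R5)) into
# training examples where the growing context predicts the next consultant reply R_k.
from typing import List, Tuple

def build_pretraining_examples(dialogue: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
    examples, context = [], []
    for s_k, r_k in dialogue:
        context.append(s_k)                        # ..., S_k
        examples.append(("".join(context), r_k))   # predict R_k from S1, R1, ..., S_k
        context.append(r_k)                        # ..., S_k, R_k for the next step
    return examples

d = [("S1", "R1"), ("S2", "R2"), ("S3", "R3"), ("S4", "R4"), ("S5", "R5")]
for ctx, target in build_pretraining_examples(d):
    print(ctx, "->", target)   # S1 -> R1, S1R1S2 -> R2, ..., S1R1S2R2S3R3S4R4S5 -> R5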
203. And marking the marked reply sentence for solving the psychological problem of the help seeker in the second language segment, determining whether the first sub reply sentence is the marked reply sentence, and if the first sub reply sentence is the marked reply sentence, inputting the first identity information and the first sentence of the help seeker into the pre-training GPT language model to obtain the marked reply sentence.
The first identity information of the help seeker includes identity information of each dimension of the help seeker, such as age, sex, occupation, education level, religious belief, and whether help is being sought for the first time, and may be denoted as I = (I1, I2, I3, I4, I5, ..., IM).
For the present embodiment, specifically, as shown in fig. 6, take a piece of training data D = (S1, R1, S2, R2, S3, R3, S4, R4, S5, R5) as an example. The sentences in the second language segment that solve the psychological problems of the help seeker are marked as marked reply sentences, giving D = (S1, R1, S2, R2, S3, R3*, S4, R4, S5, R5*), where S1 is the first sentence of the first language segment and R1 is the first sub-reply sentence in the second language segment replying to the first sentence. In this example the first sub-reply sentence is not a marked reply sentence, so the processing of step 204 of this embodiment is performed.
204. And if the first sub-reply sentence is not the marked reply sentence, determining whether the second sub-reply sentence is the marked reply sentence, and if the second sub-reply sentence is the marked reply sentence, inputting the first identity information, the first sentence, the first reply sentence and the second sentence into the pre-trained GPT language model to obtain the marked reply sentence until all marked reply sentences are obtained, so as to generate the automatic psychological question reply model.
For the present embodiment, S2 is the second sentence of the first language segment and R2 is the second sub-reply sentence in the second language segment replying to the second sentence. In the example illustrated in step 203, the second sub-reply sentence is not a marked reply sentence either, so it is further determined that the third sub-reply sentence R3* is a marked reply sentence. Therefore, the first identity information, the first sentence, the first sub-reply sentence, the second sentence, the second sub-reply sentence and the third sentence S3 of the first language segment are input into the pre-trained GPT language model to obtain the marked reply sentence R3*. It is then determined that the fourth sub-reply sentence R4 is not a marked reply sentence and that the fifth sub-reply sentence R5* is a marked reply sentence, so the first identity information, the first sentence, the first sub-reply sentence, the second sentence, the second sub-reply sentence, the third sentence, the third sub-reply sentence, the fourth sentence, the fourth sub-reply sentence and the fifth sentence S5 of the first language segment are input into the pre-trained GPT language model to obtain the fifth sub-reply sentence R5*. This piece of data trains the pre-trained GPT language model once; the pre-trained GPT language model is trained multiple times with multiple pieces of data in the same way, and the automatic psychological question response model is then generated.
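The second-stage data construction can be sketched in the same style. The code below assumes, purely for illustration, that each reply carries a flag marking whether it is a marked reply sentence and that the identity information I is prepended to the context; this is not the patent's implementation.

# Illustrative sketch (Python): build second-stage examples for
# D = (S1, R1, S2, R2, S3, R3*, S4, R4, S5, R5*), where only R3 and R5 are marked.
from typing import List, Tuple

def build_labeled_examples(identity: str,
                           dialogue: List[Tuple[str, str, bool]]) -> List[Tuple[str, str]]:
    examples, context = [], [identity]
    for s_k, r_k, is_marked in dialogue:
        context.append(s_k)
        if is_marked:
            examples.append(("".join(context), r_k))  # target only for marked replies
        context.append(r_k)
    return examples

identity = "[I]"  # stands for (I1, ..., IM): age, sex, occupation, education level, ...
d = [("S1", "R1", False), ("S2", "R2", False), ("S3", "R3*", True),
     ("S4", "R4", False), ("S5", "R5*", True)]
for ctx, target in build_labeled_examples(identity, d):
    print(ctx, "->", target)
# [I]S1R1S2R2S3            -> R3*
# [I]S1R1S2R2S3R3*S4R4S5   -> R5*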
205. And acquiring second identity information of the target help seeker and a third language segment narrated by the target help seeker, and inputting the second identity information and the third language segment into the psychological problem automatic response model to obtain a target response result.
The second identity information in this embodiment is the identity information of the target help seeker, and the first identity information in step 203 in this embodiment is the identity information of the help seeker used for training.
For this embodiment, as an implementation manner, inputting the second identity information and the third speech segment into the automatic psychological question response model to obtain the target response result, including: inputting the second identity information and the third language segment into a psychological question automatic answering model; the psychological question automatic answer model outputs a first sub-target answer result for answering the first target sentence according to the second identity information and the first target sentence of the third language segment; the psychological question automatic answer model outputs a second sub-target answer result for answering the second target sentence according to the second identity information, the first target sentence, the first sub-target answer result and the second target sentence of the third language segment; and summarizing the response results of the sub-targets corresponding to all the target sentences to obtain the target response result until the response of all the target sentences in the third language section is finished.
The third language segment does not need to be divided into sentences manually; it only needs to be input into the psychological question automatic response model, and the model divides it into sentences. As shown in fig. 7, the dialogue is (S1, R1, S2, R2, S3, R3), where S1 is the first target sentence of the third language segment, R1 is the first sub-target response result replying to the first target sentence, S2 is the second target sentence of the third language segment, R2 is the second sub-target response result replying to the second target sentence, S3 is the third target sentence of the third language segment, and R3 is the third sub-target response result replying to the third target sentence. All the target sentences of the third language segment are S1, S2 and S3, so the first sub-target response result, the second sub-target response result and the third sub-target response result are summarized to obtain the target response result.
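The inference procedure can be sketched as a loop over the target sentences. In the code below, generate_reply is a hypothetical stand-in for whatever decoding interface the trained psychological question automatic response model exposes; the wrapper itself is an illustration, not the patent's code.

# Illustrative sketch (Python): answer each target sentence of the third language segment
# in turn, carrying the identity information and the dialogue so far as context.
from typing import Callable, List

def answer_segment(identity: str,
                   target_sentences: List[str],
                   generate_reply: Callable[[str], str]) -> List[str]:
    context, results = identity, []
    for s_k in target_sentences:          # S1, S2, S3, ...
        context += s_k
        r_k = generate_reply(context)     # sub-target response result R_k
        results.append(r_k)
        context += r_k                    # R_k becomes context for S_{k+1}
    return results                        # summarized to form the target response result

replies = answer_segment("[I2]", ["S1", "S2", "S3"], lambda ctx: f"R({len(ctx)})")
print(replies)                            # three sub-target response results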
206. And the target help seeker evaluates the satisfaction degree of the target response result.
For this embodiment, the accuracy of the automatic psychological question response model can be verified through the target help seeker's satisfaction evaluation of the target response result; if the evaluation result does not reach a qualification threshold, more data are acquired to continue training the model, so that an automatic psychological question response model with higher accuracy is obtained.
The application discloses a method, an apparatus and a device for generating an automatic psychological question response model. The method first acquires a first language segment described by a help seeker and a second language segment in which a psychological consultant replies to the first language segment; then trains a basic GPT language model with the first language segment to obtain the second language segment, generating a pre-trained GPT language model; and finally marks, in the second language segment, the marked reply sentences that solve the psychological problem of the help seeker, and trains the pre-trained GPT language model with the first identity information of the help seeker and the first language segment to obtain the marked reply sentences, generating the automatic psychological question response model. With this technical solution, replies to the help seeker's questions are generated automatically by the automatic psychological question response model, which alleviates the shortage of psychological consultant resources and frees up manpower. At the same time, because a large amount of data cannot all be labelled, the model is obtained by training on data in two stages: in the first stage, the pre-trained GPT language model is obtained by training on a large amount of unlabelled data; in the second stage, the data that solve the psychological problems of the help seeker are labelled, and the automatic psychological question response model is obtained by training on the labelled data. The psychological questions received can thus be answered in a targeted manner, which solves the problems of insufficient personalization and poor user experience and improves response efficiency.
Further, as a specific implementation of the method shown in fig. 1 and fig. 2, an embodiment of the present application provides an apparatus for generating an automatic psychological question response model, as shown in fig. 3, the apparatus includes: a first acquisition module 31, a training module 32, and a generation module 33;
the first obtaining module 31 is configured to obtain a first language segment described by the help seeker and a second language segment in which the psychological consultant replies to the first language segment;
the training module 32 is configured to train the basic GPT language model using the first speech segment to obtain a second speech segment, and generate a pre-training GPT language model;
the generating module 33 may be configured to mark a reply sentence in the second speech segment for solving the psychological problem of the help seeker, train the pre-training GPT language model by using the first identity information of the help seeker and the first speech segment to obtain the reply sentence, and generate the automatic psychological problem reply model.
In a specific application scenario, an apparatus for generating an automatic psychological question response model, as shown in fig. 4, further includes: the second obtaining module 34 is specifically configured to obtain second identity information of the target help seeker and a third language segment narrated by the target help seeker.
In a specific application scenario, an apparatus for generating an automatic psychological question response model, as shown in fig. 4, further includes: the reply module 35 is specifically configured to input the second identity information and the third speech segment into the automatic psychological question reply model, so as to obtain a target reply result.
Accordingly, in order to input the second identity information and the third speech segment into the automatic psychological question response model to obtain the target response result, as shown in fig. 4, the response module 35 may specifically include: first output section 351, second output section 352, and summing section 353.
The first output unit 351 is configured to input the second identity information and the third language segment into the psychological question automatic response model, and the psychological question automatic response model outputs a first sub-target response result to the first target sentence according to the second identity information and the first target sentence of the third language segment;
a second output unit 352, configured to output, by the automatic psychological question reply model, a second sub-target reply result to the second target sentence according to the second identity information, the first target sentence, the first sub-target reply result, and the second target sentence in the third speech segment;
the summarizing unit 353 may be configured to summarize the sub-target response results of all the target sentences to obtain the target response result until all the target sentences in the third sentence segment are completely responded.
In a specific application scenario, in order to train the basic GPT language model using the first speech segment to obtain the second speech segment, as shown in fig. 4, the training module 32 may specifically include: a first training unit 321, a second training unit 322, a first termination unit 323;
a first training unit 321, configured to train a basic GPT language model using a first sentence in a first language segment to obtain a first sub-reply sentence in a second language segment that replies to the first sentence;
a second training unit 322, configured to train the basic GPT language model using the first sentence, the first sub-reply sentence, and the second sentence of the first speech segment, so as to obtain a second sub-reply sentence in the second speech segment that replies to the second sentence;
the first end unit 323 is configured to end training of all sentences of the second speech segment.
In a specific application scenario, in order to train the pre-trained GPT language model by using the first identity information of the help seeker and the first speech segment to obtain the tagged reply sentence, as shown in fig. 4, the generating module 33 may specifically include: a first determination unit 331, a first input unit 332, a second determination unit 333, a second input unit 334, a second end unit 335;
a first determination unit 331 operable to determine whether the first sub-reply sentence is a markup reply sentence;
a first input unit 332, configured to, if the first sub-reply sentence is a tag reply sentence, input the first identity information of the help seeker and the first sentence into the pre-trained GPT language model to obtain a tag reply sentence;
a second determining unit 333 operable to determine whether or not the second sub-reply sentence is a markup reply sentence if the first sub-reply sentence is not the markup reply sentence;
a second input unit 334, configured to, if the second sub-reply sentence is a tag reply sentence, input the first identity information, the first sentence, the first reply sentence, and the second sentence into the pre-trained GPT language model to obtain a tag reply sentence;
a second end unit 335 may be used until all the markup reply sentences are available.
In a specific application scenario, an apparatus for generating an automatic psychological question response model, as shown in fig. 4, further includes: the evaluation module 36 is specifically configured to evaluate the satisfaction degree of the target help seeker with the target response result.
It should be noted that other corresponding descriptions of the functional units related to the apparatus for generating an automatic psychological problem response model provided in this embodiment may refer to the corresponding descriptions in fig. 1 to fig. 2, and are not repeated herein.
Based on the method shown in fig. 1 to fig. 2, correspondingly, the present embodiment further provides a storage medium, which may be volatile or nonvolatile, and on which computer readable instructions are stored, and when the computer readable instructions are executed by a processor, the method for generating the automatic psychological question response model shown in fig. 1 to fig. 2 is implemented.
Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, or the like), and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device, or the like) to execute the method of the embodiments of the present application.
Based on the method shown in fig. 1 to fig. 2 and the virtual device embodiments shown in fig. 3 and fig. 4, in order to achieve the above object, the present embodiment further provides a computer device, where the computer device includes a storage medium and a processor; a storage medium for storing a computer program; a processor for executing a computer program to implement the method for generating the psychological problem automatic response model as described above with reference to fig. 1 to 2.
Optionally, the computer device may further include a user interface, a network interface, a camera, Radio Frequency (RF) circuitry, a sensor, audio circuitry, a WI-FI module, and so forth. The user interface may include a Display screen (Display), an input unit such as a keypad (Keyboard), etc., and the optional user interface may also include a USB interface, a card reader interface, etc. The network interface may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), etc.
It will be understood by those skilled in the art that the structure of a computer device provided in the present embodiment does not constitute a limitation of the physical device, and may include more or fewer components, or some components in combination, or a different arrangement of components.
The storage medium may further include an operating system and a network communication module. The operating system is a program that manages the hardware and software resources of the computer device described above, supporting the execution of information processing programs and other software and/or programs. The network communication module is used for realizing communication among components in the storage medium and communication with other hardware and software in the information processing entity device.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present application can be implemented by software plus a necessary general hardware platform, and can also be implemented by hardware.
By applying the technical solution of the present application, compared with the prior art, the application discloses a method, an apparatus and a device for generating an automatic psychological question response model. The method first acquires a first language segment described by a help seeker and a second language segment in which a psychological consultant replies to the first language segment; then trains a basic GPT language model with the first language segment to obtain the second language segment, generating a pre-trained GPT language model; and finally marks, in the second language segment, the marked reply sentences that solve the psychological problem of the help seeker, and trains the pre-trained GPT language model with the first identity information of the help seeker and the first language segment to obtain the marked reply sentences, generating the automatic psychological question response model. With this technical solution, replies to the help seeker's questions are generated automatically by the automatic psychological question response model, which alleviates the shortage of psychological consultant resources and frees up manpower. At the same time, because a large amount of data cannot all be labelled, the model is obtained by training on data in two stages: in the first stage, the pre-trained GPT language model is obtained by training on a large amount of unlabelled data; in the second stage, the data that solve the psychological problems of the help seeker are labelled, and the automatic psychological question response model is obtained by training on the labelled data. The psychological questions received can thus be answered in a targeted manner, which solves the problems of insufficient personalization and poor user experience and improves response efficiency.
Those skilled in the art will appreciate that the figures are merely schematic representations of one preferred implementation scenario and that the blocks or flow diagrams in the figures are not necessarily required to practice the present application. Those skilled in the art will appreciate that the modules in the devices in the implementation scenario may be distributed in the devices in the implementation scenario according to the description of the implementation scenario, or may be located in one or more devices different from the present implementation scenario with corresponding changes. The modules of the implementation scenario may be combined into one module, or may be further split into a plurality of sub-modules.
The above application serial numbers are for description purposes only and do not represent the superiority or inferiority of the implementation scenarios. The above disclosure is only a few specific implementation scenarios of the present application, but the present application is not limited thereto, and any variations that can be made by those skilled in the art are intended to fall within the scope of the present application.

Claims (10)

1. A method for generating an automatic answer model for psychological questions, comprising:
acquiring a first language section described by a help seeker and a second language section replied by a psychological consultant to the first language section;
training a basic GPT language model by utilizing the first language fragment to obtain a second language fragment, and generating a pre-training GPT language model;
and marking a marked reply sentence for solving the psychological problem of the help seeker in the second language segment, and training the pre-trained GPT language model by utilizing the first identity information and the first language segment of the help seeker to obtain the marked reply sentence so as to generate an automatic psychological problem reply model.
2. The method of claim 1, further comprising:
acquiring second identity information of a target help seeker and a third language segment narrated by the target help seeker;
and inputting the second identity information and the third language segment into the automatic psychological question answering model to obtain a target answering result.
3. The method according to claim 2, wherein said inputting said second identity information and said third speech segment into said automatic psychological question response model to obtain a target response result comprises:
inputting the second identity information and the third speech segment into the psychological question automatic response model;
the psychological question automatic response model outputs a first sub-target response result for responding to the first target sentence according to the second identity information and the first target sentence of the third language segment;
the psychological question automatic response model outputs a second sub-target response result responding to the second target sentence according to the second identity information, the first target sentence, the first sub-target response result and the second target sentence of the third language segment;
and summarizing the sub-target response results corresponding to all the target sentences to obtain the target response result, until the responses to all the target sentences of the third language segment are finished.
4. The method of claim 1, wherein said training a basic GPT language model using the first speech segment to obtain the second speech segment comprises:
training the basic GPT language model by utilizing the first sentence of the first language fragment to obtain a first sub-reply sentence which is in the second language fragment and replies to the first sentence;
training the basic GPT language model by using the first sentence, the first sub-reply sentence and a second sentence of the first speech section to obtain a second sub-reply sentence which is in the second speech section and replies to the second sentence;
until training is finished for all sentences of the second language segment.
5. The method of claim 4, wherein the training the pre-trained GPT language model using the first identity information of the help seeker and the first speech passage to obtain the tagged reply sentence, comprises:
determining whether the first sub-reply sentence is the markup reply sentence;
if the first sub-reply sentence is the marked reply sentence, inputting the first identity information of the help seeker and the first sentence into the pre-training GPT language model to obtain the marked reply sentence;
if the first sub-reply sentence is not the tagged reply sentence, determining whether the second sub-reply sentence is the tagged reply sentence;
if the second sub-reply sentence is the tagged reply sentence, inputting the first identity information, the first sentence, the first sub-reply sentence and the second sentence into the pre-training GPT language model to obtain the tagged reply sentence;
until all of the markup reply sentences are obtained.
6. The method according to claim 2, wherein after said obtaining a target response result, said method further comprises:
and the target help seeker evaluates the satisfaction degree of the target response result.
7. An apparatus for generating an automatic answer model to a psychological question, comprising:
the first obtaining module is used for obtaining a first language section described by a help seeker and a second language section for a psychological consultant to reply to the first language section;
the training module is used for training a basic GPT language model by utilizing the first language segment to obtain the second language segment and generating a pre-training GPT language model;
and the generating module is used for marking a reply sentence for solving the psychological problem of the help seeker in the second language segment, training the pre-training GPT language model by utilizing the first identity information and the first language segment of the help seeker to obtain the reply sentence, and generating an automatic psychological problem reply model.
8. The apparatus of claim 7, further comprising:
the second obtaining module is used for obtaining second identity information of the target help seeker and a third language segment narrated by the target help seeker;
and the reply module is used for inputting the second identity information and the third language segment into the psychological question automatic reply model to obtain a target reply result.
9. A storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method for generating an automatic answer model to a psychological question according to any one of claims 1 to 6.
10. A computer apparatus comprising a storage medium, a processor, and a computer program stored on the storage medium and executable on the processor, wherein the processor implements the method for generating the automatic psychological problem response model according to any one of claims 1 to 6 when executing the program.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210845973.XA CN115132353A (en) 2022-07-19 2022-07-19 Method, device and equipment for generating psychological question automatic response model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210845973.XA CN115132353A (en) 2022-07-19 2022-07-19 Method, device and equipment for generating psychological question automatic response model

Publications (1)

Publication Number Publication Date
CN115132353A true CN115132353A (en) 2022-09-30

Family

ID=83383607

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210845973.XA Pending CN115132353A (en) 2022-07-19 2022-07-19 Method, device and equipment for generating psychological question automatic response model

Country Status (1)

Country Link
CN (1) CN115132353A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116468298A (en) * 2023-06-12 2023-07-21 江西五十铃汽车有限公司 GPT network model-based automobile technology planning and decision-making method and system
CN116468298B (en) * 2023-06-12 2023-11-03 江西五十铃汽车有限公司 GPT network model-based automobile technology planning and decision-making method and system

Similar Documents

Publication Publication Date Title
Bibauw et al. Discussing with a computer to practice a foreign language: Research synthesis and conceptual framework of dialogue-based CALL
Ziegele et al. What creates interactivity in online news discussions? An exploratory analysis of discussion factors in user comments on news items
Beltman et al. Quietly sharing the load? The role of school psychologists in enabling teacher resilience
CN108121800B (en) Information generation method and device based on artificial intelligence
Osterman Experiences of Japanese university students’ willingness to speak English in class: A multiple case study
Warwick et al. The role of pupil voice as a trigger for teacher learning in Lesson Study professional groups
King Recognising adulthood? Young adults’ accomplishment of their age identities
Ferschke et al. Fostering discussion across communication media in massive open online courses
CN108268450B (en) Method and apparatus for generating information
Mawhinney et al. I just feel so guilty: The role of emotions in former urban teachers’ career paths
Chen et al. Teachers’ literal and inferential questions and children’s responses: A study of teacher–Child linguistic interactions during whole-group instruction in Hong Kong kindergarten classrooms
KR20210001419A (en) User device, system and method for providing interview consulting service
Thomassen The hidden battle that shaped the history of sociology: Arnold van Gennep contra Emile Durkheim
Tseng et al. The effects of MALL on L2 pronunciation learning: A meta-analysis
US9547995B1 (en) Dynamic instructional course
Hutter et al. Promoting inclusive and accessible design in usability testing: a teaching case with users who are deaf
CN115132353A (en) Method, device and equipment for generating psychological question automatic response model
Li et al. Technology-enhanced reflection and teacher development: A student teacher's journey
Roth Are competence frameworks fit for practice? Examining the validity of competence frameworks for CBT, psychodynamic, and humanistic therapies
De Wit et al. Can openness to ICT and scientific research predict the ICT skills and ICT use of bachelor's students?
Vincze et al. Ignorance-unmasking questions in the Royal–Sarkozy presidential debate: A resource to claim epistemic authority
Broza et al. Exploring a model to develop critical reflective thought among elementary school math preservice teachers
Baer et al. Computer assessment of simulated patient interviews (CASPI): Psychometric properties of a web-based system for the assessment of motivational interviewing skills
Schmidt Listening is essential: An experiential exercise on listening behaviors
Madalińska-Michalak Fostering quality education research: The role of the European Educational Research Association as a scientific association

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination