CN111259119B - Question recommending method and device - Google Patents


Info

Publication number
CN111259119B
CN111259119B (application CN201811458062.1A)
Authority
CN
China
Prior art keywords
candidate
training
candidate question
request
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811458062.1A
Other languages
Chinese (zh)
Other versions
CN111259119A (en)
Inventor
张姣姣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Didi Infinity Technology and Development Co Ltd
Original Assignee
Beijing Didi Infinity Technology and Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Didi Infinity Technology and Development Co Ltd filed Critical Beijing Didi Infinity Technology and Development Co Ltd
Priority to CN201811458062.1A priority Critical patent/CN111259119B/en
Publication of CN111259119A publication Critical patent/CN111259119A/en
Application granted granted Critical
Publication of CN111259119B publication Critical patent/CN111259119B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2155Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217Validation; Performance evaluation; Active pattern learning techniques
    • G06F18/2193Validation; Performance evaluation; Active pattern learning techniques based on specific statistical tests
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/01Customer relationship services
    • G06Q30/015Providing customer assistance, e.g. assisting a customer within a business location or via helpdesk
    • G06Q30/016After-sales

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Probability & Statistics with Applications (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Finance (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application provides a question recommendation method and device. The method includes: after detecting that a requesting terminal has initiated a session request, determining, based on feature information of the requesting terminal and a first prediction model shared by the different candidate questions, an acceptance probability for recommending each candidate question in a candidate question set to the requesting terminal; determining, based on the feature information of the requesting terminal and a second prediction model matched to each candidate question in the candidate question set, a prediction result of whether each candidate question is accepted by the requesting terminal; screening from the candidate question set, according to the prediction result corresponding to each candidate question, at least one target candidate question whose prediction result indicates acceptance by the requesting terminal; and selecting, from the at least one target candidate question according to the corresponding acceptance probabilities, the question recommended to the requesting terminal. Recommendations can therefore be personalized for each requesting terminal, better meeting the consultation needs of different terminals.

Description

Question recommending method and device
Technical Field
The present application relates to the field of internet technology, and in particular to a question recommendation method and apparatus.
Background
With the rapid development and popularization of the internet, internet applications of all kinds, such as online shopping applications and ride-hailing applications, are emerging constantly. Users may run into questions while using these applications and need consultation services, so internet applications are generally equipped with a consultation function.
When a user raises a question, the consultation system typically recommends candidate questions from which the user selects the one to consult. At present, recommended questions are statically configured, that is, the candidate questions a user can select are fixed in advance. Static configuration struggles to meet the consultation needs of different users: the question a user wants to ask may not be among the candidates, forcing the user to spend extra time browsing or listening through them, which lowers the efficiency of question consultation.
Disclosure of Invention
In view of this, an objective of the embodiments of the present application is to provide a question recommendation method and apparatus, so as to better satisfy the consultation needs of different users and improve question consultation efficiency.
In a first aspect, the present application provides a question recommendation method, including:
after detecting that a requesting terminal has initiated a session request, determining, based on feature information of the requesting terminal and a pre-trained first prediction model shared by the different candidate questions, an acceptance probability for recommending each candidate question in a candidate question set to the requesting terminal;
determining, based on the feature information of the requesting terminal and a pre-trained second prediction model matched to each candidate question in the candidate question set, a prediction result of whether each candidate question is accepted by the requesting terminal;
screening from the candidate question set, according to the prediction result corresponding to each candidate question, at least one target candidate question whose prediction result indicates acceptance by the requesting terminal;
and selecting, from the at least one target candidate question according to the corresponding acceptance probabilities, the question recommended to the requesting terminal.
In a possible implementation, the selecting, from the at least one target candidate question according to the corresponding acceptance probabilities, the question recommended to the requesting terminal includes:
recommending, among the at least one target candidate question, those whose acceptance probability exceeds a preset probability value to the requesting terminal.
In a possible implementation, the selecting, from the at least one target candidate question according to the corresponding acceptance probabilities, the question recommended to the requesting terminal includes:
ranking the at least one target candidate question in descending order of acceptance probability;
and taking the target candidate questions whose acceptance probabilities rank in the top k as the questions recommended to the requesting terminal, where k is a positive integer.
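The two selection strategies above (probability threshold, and top-k ranking) can be sketched in Python. This is an illustrative sketch, not the patent's implementation; the function name and parameters are assumptions:

```python
def select_recommendations(target_candidates, accept_probs, k=None, threshold=None):
    # Rank the target candidate questions by acceptance probability, descending.
    ranked = sorted(target_candidates, key=lambda q: accept_probs[q], reverse=True)
    if threshold is not None:
        # Strategy 1: keep every question above the preset probability value.
        return [q for q in ranked if accept_probs[q] > threshold]
    # Strategy 2: keep the questions ranked in the top k positions.
    return ranked[:k]
```

For example, with acceptance probabilities `{"a": 0.2, "b": 0.9, "c": 0.5}`, both `k=2` and `threshold=0.4` would select questions `b` and `c`.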
In a possible implementation, the determining, based on the feature information of the requesting terminal and the pre-trained first prediction model shared by the different candidate questions, the acceptance probability for recommending each candidate question in the candidate question set to the requesting terminal includes:
performing feature extraction on the feature information to obtain a feature vector;
and inputting the feature vector into the pre-trained first prediction model, which outputs the acceptance probability of recommending each candidate question in the candidate question set to the requesting terminal.
In a possible implementation, the determining, based on the feature information of the requesting terminal and the pre-trained second prediction model matched to each candidate question in the candidate question set, the prediction result of whether each candidate question is accepted by the requesting terminal includes:
performing feature extraction on the feature information to obtain a feature vector;
and inputting the feature vector into the pre-trained second prediction model matched to each candidate question in the candidate question set, which outputs the prediction result of whether that candidate question is accepted by the requesting terminal.
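Taken together, the two determination steps amount to a two-stage pipeline: the shared first model scores all candidate questions, and the per-question second models act as accept/reject filters. A minimal sketch under assumed interfaces (the `extract_features` helper, the callable models, and the dictionary output are hypothetical, not from the patent):

```python
def extract_features(feature_info):
    # Hypothetical extractor: flatten raw attributes into a numeric vector.
    return [float(v) for v in feature_info.values()]

def recommend(feature_info, first_model, second_models, candidate_set, k=3):
    x = extract_features(feature_info)
    # Stage 1: the shared first model scores every candidate question at once.
    probs = first_model(x)                              # {question: acceptance prob}
    # Stage 2: each question's matched second model predicts accepted (truthy) or not.
    targets = [q for q in candidate_set if second_models[q](x)]
    # Rank the surviving target questions by the stage-1 probability; keep the top k.
    targets.sort(key=lambda q: probs[q], reverse=True)
    return targets[:k]
```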
In a possible implementation, the questions recommended to the requesting terminal further include a preset prompt question, which asks the requesting terminal whether it wants a response to some other question.
In a possible implementation, before detecting that the requesting terminal initiates the session request, the method further includes:
counting, over a second historical time period, the total number of times each question was requested by different requesting terminals;
and taking the questions whose total counts satisfy a preset condition as candidate questions, forming the candidate question set.
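Building the candidate question set from historical request counts can be sketched as follows; the `min_count` cutoff stands in for the patent's unspecified "preset condition":

```python
from collections import Counter

def build_candidate_set(requested_questions, min_count=2):
    # requested_questions: every question requested by any terminal during the
    # second historical time period (one list entry per request).
    counts = Counter(requested_questions)
    # Keep only questions requested at least min_count times.
    return {q for q, n in counts.items() if n >= min_count}
```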
In a possible implementation, when the requesting terminal is a service provider terminal, the feature information includes at least one of the following:
profile information of the service provider;
order description information of the order most recently handled by the service provider;
profile information of the service requester of that most recent order;
order state information at the time the service provider initiates the session request;
the location and time at which the service provider initiates the session request;
and aggregate order information of the service provider over a first historical time period.
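For illustration only, such feature information might be collected as a structured record like the following; every field name and value here is hypothetical, not taken from the patent:

```python
# Hypothetical feature information gathered for a service provider terminal
# at the moment it opens a consultation session.
feature_info = {
    "provider_profile": {"rating": 4.8, "years_active": 3},
    "last_order": {"distance_km": 12.4, "fare": 35.0, "status": "completed"},
    "last_requester_profile": {"rating": 4.6},
    "order_state_at_request": "no_active_order",
    "location": (39.9, 116.4),
    "request_time": "2018-11-30T14:05:00",
    "first_period_order_stats": {"orders": 57, "cancel_rate": 0.05},
}
```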
In a possible implementation, the method further includes:
acquiring historical session record information over a third historical time period, the historical session record information including the historical feature information of each requesting terminal when it initiated a session request and the historical question it requested;
extracting a historical feature vector corresponding to each piece of historical feature information;
taking each extracted historical feature vector as a training sample to form a first sample training set, where each training sample corresponds to a question label and different question labels identify the historical questions corresponding to the different historical feature vectors;
and training the first prediction model on the first sample training set until the first prediction model is determined to be trained.
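The construction of the first sample training set from the history log can be sketched as follows (the record layout and the inline feature extractor are assumptions):

```python
def build_first_training_set(history_records):
    # history_records: (feature_info, asked_question) pairs from the third
    # historical time period.
    samples = []
    for feature_info, asked_question in history_records:
        vec = [float(v) for v in feature_info.values()]  # stand-in feature extractor
        samples.append((vec, asked_question))  # the question actually asked is the label
    return samples
```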
In a possible implementation, the training the first prediction model on the first sample training set until the first prediction model is determined to be trained includes:
inputting a preset number of training samples from the first sample training set into the first prediction model, which outputs, for each input training sample, a historical acceptance probability for recommending each candidate question in the candidate question set, and determining the candidate question with the highest historical acceptance probability for each training sample;
determining a first loss value for the current training round by comparing, for each training sample, the candidate question with the highest historical acceptance probability against that sample's question label;
and, when the first loss value is greater than a first set value, adjusting the model parameters of the first prediction model and running the next training round with the adjusted first prediction model; when the determined first loss value is less than or equal to the first set value, determining that training of the first prediction model is complete.
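The loop described above — predict, take the highest-probability question, compare against the labels, and keep adjusting parameters while the loss exceeds the first set value — can be sketched generically. The model and training-set interfaces are hypothetical, and a 0-1 error rate stands in for whatever loss function the implementation actually uses:

```python
def train_first_model(model, training_set, first_set_value, max_rounds=100):
    for _ in range(max_rounds):
        batch = training_set.next_batch()     # preset number of (vector, label) samples
        preds = []
        for x, _ in batch:
            probs = model.predict(x)          # {candidate question: acceptance prob}
            preds.append(max(probs, key=probs.get))  # highest-probability question
        labels = [label for _, label in batch]
        # 0-1 error rate as a stand-in for the patent's unspecified first loss value.
        loss = sum(p != y for p, y in zip(preds, labels)) / len(batch)
        if loss <= first_set_value:
            return model      # loss at or below the first set value: training complete
        model.adjust(loss)    # otherwise adjust parameters and run another round
    return model
```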
In a possible implementation, the method further includes:
generating, for each candidate question in the candidate question set, a second prediction model matched to that candidate question, and generating a second sample training set corresponding to each candidate question;
and training each candidate question's matched second prediction model on its corresponding second sample training set until that second prediction model is determined to be trained.
In a possible implementation, the generating the second sample training set corresponding to each candidate question includes performing the following for a first candidate question, the first candidate question being any candidate question in the candidate question set:
screening out, from the historical session record information, first historical feature information of first requesting terminals and second historical feature information of second requesting terminals, where a first requesting terminal is one whose historically requested question was the first candidate question, and a second requesting terminal is one whose historically requested question was not the first candidate question;
extracting a first historical feature vector from each piece of first historical feature information, and a second historical feature vector from each piece of second historical feature information;
taking each extracted first historical feature vector as a positive training sample to form a positive sample training set, and each extracted second historical feature vector as a negative training sample to form a negative sample training set;
and combining the positive sample training set and the negative sample training set into the second sample training set corresponding to the first candidate question;
where each positive training sample carries a positive label and each negative training sample carries a negative label, the positive label indicating that the question requested by the terminal was the first candidate question and the negative label indicating that it was not.
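Assembling one candidate question's second sample training set from the history log can be sketched as follows (the record layout and the inline feature extractor are assumed):

```python
def build_second_training_set(history_records, first_candidate):
    # history_records: (feature_info, asked_question) pairs from the history log.
    positives, negatives = [], []
    for feature_info, asked in history_records:
        vec = [float(v) for v in feature_info.values()]  # stand-in feature extractor
        if asked == first_candidate:
            positives.append((vec, 1))   # positive label: asked the first candidate
        else:
            negatives.append((vec, 0))   # negative label: asked something else
    return positives + negatives
```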
In a possible implementation, the training each candidate question's matched second prediction model on its corresponding second sample training set, until that second prediction model is determined to be trained, includes performing the following training process for the second prediction model matched to the first candidate question:
acquiring a first preset number of positive training samples and a second preset number of negative training samples from the second sample training set corresponding to the first candidate question;
inputting these positive and negative training samples into the second prediction model matched to the first candidate question, which outputs a classification result for each positive training sample and each negative training sample, the classification result indicating whether the question requested by the terminal is the first candidate question;
determining a second loss value for the current training round by comparing each positive training sample's classification result against the positive label and each negative training sample's classification result against the negative label;
and, when the second loss value is greater than a second set value, adjusting the model parameters of the second prediction model matched to the first candidate question and running the next training round with the adjusted model, until the determined second loss value is less than or equal to the second set value, at which point training of the second prediction model matched to the first candidate question is determined to be complete.
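As one concrete stand-in for a per-question second prediction model, a logistic-regression classifier can be trained on the positive and negative sample sets with the same stop condition (loop until the loss is at or below the second set value). This model family is an illustrative choice; the patent does not specify one:

```python
import math

def train_second_model(pos, neg, second_set_value=0.1, lr=0.5, rounds=500):
    # pos/neg: positive and negative feature vectors for one candidate question.
    data = [(x, 1.0) for x in pos] + [(x, 0.0) for x in neg]
    dim = len(data[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(rounds):
        loss, gw, gb = 0.0, [0.0] * dim, 0.0
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))          # predicted P(accepted)
            loss -= y * math.log(p + 1e-9) + (1 - y) * math.log(1 - p + 1e-9)
            for i in range(dim):
                gw[i] += (p - y) * x[i]
            gb += p - y
        if loss / len(data) <= second_set_value:
            break                                    # second loss value small enough
        for i in range(dim):                         # otherwise adjust parameters
            w[i] -= lr * gw[i] / len(data)
        b -= lr * gb / len(data)
    def predict(x):
        # Classification result: 1 if the terminal is predicted to ask this question.
        return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
    return predict
```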
In a second aspect, the present application provides a question recommendation apparatus, including:
a first determining module, configured to determine, after a requesting terminal initiates a session request, an acceptance probability for recommending each candidate question in a candidate question set to the requesting terminal, based on feature information of the requesting terminal and a pre-trained first prediction model shared by the different candidate questions;
a second determining module, configured to determine, based on the feature information of the requesting terminal and a pre-trained second prediction model matched to each candidate question in the candidate question set, a prediction result of whether each candidate question is accepted by the requesting terminal;
a first screening module, configured to screen from the candidate question set, according to the prediction result corresponding to each candidate question, at least one target candidate question whose prediction result indicates acceptance by the requesting terminal;
and a second screening module, configured to select, from the at least one target candidate question according to the corresponding acceptance probabilities, the question recommended to the requesting terminal.
In one possible design, when selecting, from the at least one target candidate question according to the corresponding acceptance probabilities, the question recommended to the requesting terminal, the second screening module is specifically configured to:
recommend, among the at least one target candidate question, those whose acceptance probability exceeds a preset probability value to the requesting terminal.
In one possible design, when selecting, from the at least one target candidate question according to the corresponding acceptance probabilities, the question recommended to the requesting terminal, the second screening module is specifically configured to:
rank the at least one target candidate question in descending order of acceptance probability;
and take the target candidate questions whose acceptance probabilities rank in the top k as the questions recommended to the requesting terminal, where k is a positive integer.
In one possible design, when determining, based on the feature information of the requesting terminal and the pre-trained first prediction model shared by the different candidate questions, the acceptance probability for recommending each candidate question in the candidate question set to the requesting terminal, the first determining module is specifically configured to:
perform feature extraction on the feature information to obtain a feature vector;
and input the feature vector into the pre-trained first prediction model, which outputs the acceptance probability of recommending each candidate question in the candidate question set to the requesting terminal.
In one possible design, when determining, based on the feature information of the requesting terminal and the pre-trained second prediction model matched to each candidate question in the candidate question set, the prediction result of whether each candidate question is accepted by the requesting terminal, the second determining module is specifically configured to:
perform feature extraction on the feature information to obtain a feature vector;
and input the feature vector into the pre-trained second prediction model matched to each candidate question in the candidate question set, which outputs the prediction result of whether that candidate question is accepted by the requesting terminal.
In one possible design, the questions recommended to the requesting terminal further include a preset prompt question, which asks the requesting terminal whether it wants a response to some other question.
In one possible design, before it is detected that the requesting terminal initiates the session request, the first determining module is further configured to:
count, over a second historical time period, the total number of times each question was requested by different requesting terminals;
and take the questions whose total counts satisfy a preset condition as candidate questions, forming the candidate question set.
In one possible design, when the requesting terminal is a service provider terminal, the feature information includes at least one of the following:
profile information of the service provider;
order description information of the order most recently handled by the service provider;
profile information of the service requester of that most recent order;
order state information at the time the service provider initiates the session request;
the location and time at which the service provider initiates the session request;
and aggregate order information of the service provider over a first historical time period.
In one possible design, the apparatus further includes a first model training module, configured to:
acquire historical session record information over a third historical time period, the historical session record information including the historical feature information of each requesting terminal when it initiated a session request and the historical question it requested;
extract a historical feature vector corresponding to each piece of historical feature information;
take each extracted historical feature vector as a training sample to form a first sample training set, where each training sample corresponds to a question label and different question labels identify the historical questions corresponding to the different historical feature vectors;
and train the first prediction model on the first sample training set until the first prediction model is determined to be trained.
In one possible design, when training the first prediction model on the first sample training set until the first prediction model is determined to be trained, the first model training module is specifically configured to:
input a preset number of training samples from the first sample training set into the first prediction model, which outputs, for each input training sample, a historical acceptance probability for recommending each candidate question in the candidate question set, and determine the candidate question with the highest historical acceptance probability for each training sample;
determine a first loss value for the current training round by comparing, for each training sample, the candidate question with the highest historical acceptance probability against that sample's question label;
and, when the first loss value is greater than a first set value, adjust the model parameters of the first prediction model and run the next training round with the adjusted first prediction model; when the determined first loss value is less than or equal to the first set value, determine that training of the first prediction model is complete.
In one possible design, the apparatus further includes a second model training module, configured to:
generate, for each candidate question in the candidate question set, a second prediction model matched to that candidate question, and generate a second sample training set corresponding to each candidate question;
and train each candidate question's matched second prediction model on its corresponding second sample training set until that second prediction model is determined to be trained.
In one possible design, when generating the second sample training set corresponding to each candidate question, the second model training module is specifically configured to perform the following for a first candidate question, the first candidate question being any candidate question in the candidate question set:
screen out, from the historical session record information, first historical feature information of first requesting terminals and second historical feature information of second requesting terminals, where a first requesting terminal is one whose historically requested question was the first candidate question, and a second requesting terminal is one whose historically requested question was not the first candidate question;
extract a first historical feature vector from each piece of first historical feature information, and a second historical feature vector from each piece of second historical feature information;
take each extracted first historical feature vector as a positive training sample to form a positive sample training set, and each extracted second historical feature vector as a negative training sample to form a negative sample training set;
and combine the positive sample training set and the negative sample training set into the second sample training set corresponding to the first candidate question;
where each positive training sample carries a positive label and each negative training sample carries a negative label, the positive label indicating that the question requested by the terminal was the first candidate question and the negative label indicating that it was not.
In one possible design, when training each candidate question's matched second prediction model on its corresponding second sample training set until that second prediction model is determined to be trained, the second model training module is specifically configured to perform the following training process for the second prediction model matched to the first candidate question:
acquire a first preset number of positive training samples and a second preset number of negative training samples from the second sample training set corresponding to the first candidate question;
input these positive and negative training samples into the second prediction model matched to the first candidate question, which outputs a classification result for each positive training sample and each negative training sample, the classification result indicating whether the question requested by the terminal is the first candidate question;
determine a second loss value for the current training round by comparing each positive training sample's classification result against the positive label and each negative training sample's classification result against the negative label;
and, when the second loss value is greater than a second set value, adjust the model parameters of the second prediction model matched to the first candidate question and run the next training round with the adjusted model, until the determined second loss value is less than or equal to the second set value, at which point training of the second prediction model matched to the first candidate question is determined to be complete.
For the functions of the above modules, reference may be made to the description of the first aspect; they are not repeated here.
In a third aspect, embodiments of the present application further provide an electronic device, including a processor, a memory and a bus. The memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the memory communicate over the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the question recommendation method described in the first aspect or any of its possible implementations.
In a fourth aspect, embodiments of the present application further provide a computer-readable storage medium on which a computer program is stored; the computer program, when executed by a processor, performs the steps of the question recommendation method described in the first aspect or any of its possible implementations.
In the embodiments of the present application, after a requesting terminal initiates a session request, the server may obtain the terminal's feature information and then predict, using the first prediction model shared by the different candidate questions and the second prediction models matched to each candidate question, both the acceptance probability of recommending each candidate question in the candidate question set and the prediction result of whether each candidate question is accepted by the requesting terminal. At least one target candidate question whose prediction result indicates acceptance can then be screened from the candidate question set, and the question finally recommended to the user is determined from the acceptance probabilities of those target candidate questions. Compared with a scheme of statically pre-configured candidate questions, this scheme uses each requesting terminal's feature information and the two kinds of prediction models to screen out the questions most likely to be accepted and recommend them to that terminal, performing targeted question recommendation, meeting the personalized consultation needs of different requesting terminals, and improving the efficiency of question consultation.
In order to make the above objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting the scope, and that other related drawings may be obtained from these drawings by a person skilled in the art without inventive effort.
FIG. 1 illustrates a block diagram of a service system 100 of some embodiments of the present application;
FIG. 2 shows a schematic diagram of exemplary hardware and software components of an electronic device 200 of some embodiments of the present application;
fig. 3 is a schematic flow chart of a problem recommending method according to an embodiment of the present application;
FIG. 4 shows an exemplary illustration of a DNN model provided by embodiments of the present application;
fig. 5 shows a flow chart of a problem recommendation method in a specific application scenario provided in the embodiment of the present application;
FIG. 6 illustrates a flow diagram for training a first predictive model provided by an embodiment of the present application;
FIG. 7 is a schematic flow chart of generating a second sample training set according to an embodiment of the present application;
FIG. 8 illustrates a flow diagram for training a second predictive model provided by an embodiment of the present application;
fig. 9 is a schematic structural diagram of a problem recommending apparatus according to an embodiment of the present application;
fig. 10 shows a schematic structural diagram of an electronic device 100 according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it should be understood that the accompanying drawings in the present application are only for the purpose of illustration and description, and are not intended to limit the protection scope of the present application. In addition, it should be understood that the schematic drawings are not drawn to scale. A flowchart, as used in this application, illustrates operations implemented according to some embodiments of the present application. It should be understood that the operations of the flow diagrams may be implemented out of order and that steps without logical context may be performed in reverse order or concurrently. Moreover, one or more other operations may be added to the flow diagrams and one or more operations may be removed from the flow diagrams as directed by those skilled in the art.
In addition, the described embodiments are only some, but not all, of the embodiments of the present application. The components of the embodiments of the present application, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, as provided in the accompanying drawings, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, are intended to be within the scope of the present application.
In order to enable those skilled in the art to use the present application, the following embodiments are presented in connection with a specific application scenario: "a user consults the service system about questions". It will be apparent to those having ordinary skill in the art that the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present application. Although the present application is primarily described in the context of a taxi service system, it should be understood that this is but one exemplary embodiment; the present application may be applied to a service system of any other traffic type. For example, the present application may be applied to different transportation system environments, including land, sea, or air, among others, or any combination thereof. The transportation means of the transportation system may include taxis, private cars, ride-sharing vehicles, buses, trains, bullet trains, high-speed rail, subways, ships, airplanes, spacecraft, hot air balloons, or unmanned vehicles, etc., or any combination thereof. The application may also include any service system capable of providing a consultation service, such as a system in an online shopping platform that provides consultation services to users, or a system in an online meal-ordering platform that provides consultation services to users. The modes of providing the consultation service in the present application include, but are not limited to, the following two: one is online consultation, i.e., consulting about problems online through a network, and the other is hotline consultation, i.e., consulting about problems by dialing a customer-service hotline. Applications of the systems or methods of the present application may include web pages, browser plug-ins, client terminals, customization systems, internal analysis systems, or artificial intelligence robots, etc., or any combination thereof.
It should be noted that the term "comprising" will be used in the embodiments of the present application to indicate the presence of the features stated hereinafter, but not to exclude the addition of other features.
The terms "passenger," "requestor," "service requestor," are used interchangeably herein to refer to a person, entity, or tool that may request or subscribe to a service. The terms "driver," "provider," "service provider," are used interchangeably herein to refer to a person, entity, or tool that can provide a service. The term "user" in this application may refer to a person, entity, or tool requesting, subscribing to, providing, or facilitating the provision of a service. In the embodiment of the present application, the user may be, for example, a passenger as a service requester, a driver as a service provider, or the like, or any combination thereof.
One aspect of the present application relates to a service system. When the system processes a consultation service, it can predict, according to the feature information of different request ends and a prediction model trained in advance through a deep learning algorithm, the problems each request end is likely to consult about, and recommend matched problems to each request end in a personalized manner based on the prediction results corresponding to each request end.
It is worth noting that, before the present application was proposed, existing consultation systems mostly adopted a static configuration mode to pre-configure candidate questions: when a request end consults about a problem, the consultation system recommends the pre-configured candidate questions to the request end. This recommendation mode is difficult to adapt to the consultation requirements of different users, and it easily happens that a user spends time checking or listening to the questions recommended by the consultation system yet still cannot find the question to be consulted, so that the efficiency of question consultation is low and the user experience is poor. In contrast, according to the question recommendation method provided by the present application, questions can be recommended to each request end in a personalized manner through deep learning according to the feature information of different request ends. This personalized recommendation mode can better meet the consultation requirements of users at different request ends, thereby reducing the waiting time of users during question consultation, improving the efficiency of question consultation, and further improving the user experience.
Fig. 1 is a block diagram of a service system 100 of some embodiments of the present application. For example, the service system 100 may be an online transport service platform for transport services such as taxi, ride-sharing, express car, carpool, bus, designated-driving, or shuttle services, or any combination thereof. The service system 100 may include one or more of a server 110, a network 120, a service requester terminal 130, a service provider terminal 140, and a database 150, and the server 110 may include a processor executing instruction operations.
In some embodiments, the server 110 may be a single server or a group of servers. The server farm may be centralized or distributed (e.g., server 110 may be a distributed system). In some embodiments, the server 110 may be local or remote to the terminal. For example, the server 110 may access information and/or data stored in the service requester terminal 130, the service provider terminal 140, or the database 150, or any combination thereof, via the network 120. As another example, the server 110 may be directly connected to at least one of the service requester terminal 130, the service provider terminal 140, and the database 150 to access stored information and/or data. In some embodiments, server 110 may be implemented on a cloud platform; for example only, the cloud platform may include a private cloud, public cloud, hybrid cloud, community cloud (community cloud), distributed cloud, inter-cloud (inter-cloud), multi-cloud (multi-cloud), and the like, or any combination thereof. In some embodiments, server 110 may be implemented on an electronic device 200 having one or more of the components shown in fig. 2 herein.
In some embodiments, the electronic device 200 may include a processor 220. The processor 220 may process information and/or data related to service requests (which in this application include session requests sent by a requesting end at the time of a problem consultation, problem consultation requests, and the like) to perform one or more of the functions described in this application. For example, the processor 220 may establish a session connection with the service requester terminal 130, or the like, based on a session request obtained from the service requester terminal 130. In some embodiments, the processor 220 may include one or more processing cores (e.g., a single-core processor or a multi-core processor). By way of example only, the processor 220 may include a central processing unit (Central Processing Unit, CPU), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), an application-specific instruction-set processor (Application Specific Instruction-set Processor, ASIP), a graphics processing unit (Graphics Processing Unit, GPU), a physics processing unit (Physics Processing Unit, PPU), a digital signal processor (Digital Signal Processor, DSP), a field programmable gate array (Field Programmable Gate Array, FPGA), a programmable logic device (Programmable Logic Device, PLD), a controller, a microcontroller unit, a reduced instruction set computer (Reduced Instruction Set Computing, RISC), a microprocessor, or the like, or any combination thereof.
Network 120 may be used for the exchange of information and/or data. In some embodiments, one or more components in the service system 100 (e.g., the server 110, the service requester terminal 130, the service provider terminal 140, and the database 150) may send information and/or data to other components. For example, the server 110 may obtain a service request from the service requester terminal 130 via the network 120. In some embodiments, the network 120 may be any type of wired or wireless network, or a combination thereof. By way of example only, the network 120 may include a wired network, a wireless network, a fiber optic network, a telecommunications network, an intranet, the internet, a local area network (Local Area Network, LAN), a wide area network (Wide Area Network, WAN), a wireless local area network (Wireless Local Area Networks, WLAN), a metropolitan area network (Metropolitan Area Network, MAN), a public switched telephone network (Public Switched Telephone Network, PSTN), a Bluetooth network, a ZigBee network, a near field communication (Near Field Communication, NFC) network, or the like, or any combination thereof. In some embodiments, the network 120 may include one or more network access points. For example, the network 120 may include wired or wireless network access points, such as base stations and/or network switching nodes, through which one or more components of the service system 100 may connect to the network 120 to exchange data and/or information.
In some embodiments, the user of the service requester terminal 130 may be the actual service demander or may be a person other than the actual service demander. For example, user a of service requester terminal 130 may use service requester terminal 130 to initiate a service request for service actual requester B (e.g., user a may call his own friend B), or receive service information or instructions from server 110, etc. In some embodiments, the user of the service provider terminal 140 may be the actual service provider or may be a person other than the actual service provider. For example, user C of service provider terminal 140 may use service provider terminal 140 to receive a service request for providing a service by service actual provider D (e.g., user C may pick up for driver D employed by himself), and/or information or instructions from server 110. In some embodiments, "service requester" and "service requester terminal" may be used interchangeably and "service provider" and "service provider terminal" may be used interchangeably.
In some embodiments, the service requester terminal 130 may include a mobile device, a tablet computer, a laptop computer, or a built-in device in a motor vehicle, or the like, or any combination thereof. In some embodiments, the mobile device may include a smart home device, a wearable device, a smart mobile device, a virtual reality device, or an augmented reality device, or the like, or any combination thereof. In some embodiments, the smart home device may include a smart lighting device, a control device for a smart appliance, a smart monitoring device, a smart television, a smart video camera, or an intercom, or the like, or any combination thereof. In some embodiments, the wearable device may include a smart bracelet, smart footwear, smart glasses, a smart helmet, a smart watch, smart clothing, a smart backpack, a smart accessory, etc., or any combination thereof. In some embodiments, the smart mobile device may include a smart phone, a personal digital assistant (Personal Digital Assistant, PDA), a gaming device, a navigation device, or a point of sale (POS) device, or the like, or any combination thereof. In some embodiments, the virtual reality device and/or the augmented reality device may include a virtual reality helmet, virtual reality glasses, a virtual reality patch, an augmented reality helmet, augmented reality glasses, an augmented reality patch, or the like, or any combination thereof. For example, the virtual reality device and/or the augmented reality device may include various virtual reality products, and the like. In some embodiments, the built-in devices in the motor vehicle may include an on-board computer, an on-board television, and the like. In some embodiments, the service requester terminal 130 may be a device having positioning technology for locating the location of the service requester and/or the service requester terminal.
In some embodiments, the service provider terminal 140 may be a similar or identical device to the service requester terminal 130. In some embodiments, the service provider terminal 140 may be a device with positioning technology for locating the location of the service provider and/or service provider terminal. In some embodiments, the service requester terminal 130 and/or the service provider terminal 140 may communicate with other positioning devices to determine the location of the service requester, the service requester terminal 130, the service provider, or the service provider terminal 140, or any combination thereof. In some embodiments, the service requester terminal 130 and/or the service provider terminal 140 may send the positioning information to the server 110.
Database 150 may store data and/or instructions. In some embodiments, database 150 may store data obtained from the service requester terminal 130 and/or the service provider terminal 140. In some embodiments, database 150 may store data and/or instructions for the exemplary methods described in this application. In some embodiments, database 150 may include mass storage, removable storage, volatile read-write memory, or read-only memory (Read-Only Memory, ROM), or the like, or any combination thereof. By way of example, mass storage may include magnetic disks, optical disks, solid state drives, and the like; removable memory may include flash drives, floppy disks, optical disks, memory cards, zip disks, magnetic tape, and the like; the volatile read-write memory may include random access memory (Random Access Memory, RAM); the RAM may include dynamic RAM (Dynamic Random Access Memory, DRAM), double data rate synchronous dynamic RAM (DDR SDRAM), static RAM (Static Random-Access Memory, SRAM), thyristor RAM (T-RAM), zero-capacitor RAM (Z-RAM), and the like. By way of example, ROM may include mask ROM (MROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), compact disk ROM (CD-ROM), digital versatile disk ROM, and the like. In some embodiments, database 150 may be implemented on a cloud platform. For example only, the cloud platform may include a private cloud, public cloud, hybrid cloud, community cloud, distributed cloud, inter-cloud, multi-cloud, or the like, or any combination thereof.
In some embodiments, database 150 may be connected to network 120 to communicate with one or more components in service system 100 (e.g., server 110, service requester terminal 130, service provider terminal 140, etc.). One or more components in the service system 100 may access data or instructions stored in the database 150 via the network 120. In some embodiments, database 150 may be directly connected to one or more components in service system 100 (e.g., server 110, service requester terminal 130, service provider terminal 140, etc.); alternatively, in some embodiments, database 150 may also be part of server 110.
In some embodiments, one or more components in the service system 100 (e.g., server 110, service requester terminal 130, service provider terminal 140, etc.) may have access to the database 150. In some embodiments, one or more components in service system 100 may read and/or modify information related to a service requester, a service provider, or the public, or any combination thereof, when certain conditions are met. For example, server 110 may read and/or modify information of one or more users after receiving a service request. As another example, the service provider terminal 140 may access information related to the service requester upon receiving a service request from the service requester terminal 130, but the service provider terminal 140 may not modify the related information of the service requester.
In some embodiments, the exchange of information of one or more components in the service system 100 may be accomplished by requesting a service. The object of the service request may be any product. In some embodiments, the product may be a tangible product or a non-physical product. The tangible product may include food, a pharmaceutical, merchandise, a chemical product, an appliance, a garment, an automobile, a house, a luxury item, or the like, or any combination thereof. The non-physical product may include a service product, a financial product, a knowledge product, an internet product, or the like, or any combination thereof. The internet product may include a standalone host product, a web product, a mobile internet product, a commercial host product, an embedded product, or the like, or any combination thereof. The internet product may be used in software, a program, a system, etc. of a mobile terminal, or any combination thereof. The mobile terminal may include a tablet computer, a notebook computer, a mobile phone, a personal digital assistant (Personal Digital Assistant, PDA), a smart watch, a point of sale (POS) device, a vehicle-mounted computer, a vehicle-mounted television, or a wearable device, or the like, or any combination thereof. For example, the internet product may be any software and/or application used in a computer or mobile phone. The software and/or applications may involve social networking, shopping, transportation, entertainment, learning, or investment, or the like, or any combination thereof. In some embodiments, the transportation-related software and/or applications may include travel software and/or applications, vehicle scheduling software and/or applications, mapping software and/or applications, and the like.
In the vehicle scheduling software and/or applications, the vehicle may include horses, carriages, human-powered vehicles (e.g., unicycles, bicycles, tricycles, etc.), automobiles (e.g., taxis, buses, private cars, etc.), trains, subways, watercraft, aircraft (e.g., airplanes, helicopters, space shuttles, rockets, hot air balloons, etc.), and the like, or any combination thereof.
Fig. 2 shows a schematic diagram of exemplary hardware and software components of an electronic device 200 of a server 110, a service requester terminal 130, a service provider terminal 140, which may implement the concepts of the present application, according to some embodiments of the present application. For example, the processor 220 may be used on the electronic device 200 and to perform the functions herein.
The electronic device 200 may be a general purpose computer or a special purpose computer; both may be used to implement the problem recommendation method of the present application. Although only one computer is shown, for convenience the functionality described herein may be implemented in a distributed fashion across multiple similar platforms to balance processing loads.
For example, the electronic device 200 may include a network port 210 connected to a network, one or more processors 220 for executing program instructions, a communication bus 230, and various forms of storage media 240, such as magnetic disk, ROM, or RAM, or any combination thereof. By way of example, the computer platform may also include program instructions stored in ROM, RAM, or other types of non-transitory storage media, or any combination thereof. The methods of the present application may be implemented in accordance with these program instructions. The electronic device 200 also includes an Input/Output (I/O) interface 250 between the computer and other Input/Output devices (e.g., keyboard, display screen).
For ease of illustration, only one processor is depicted in the electronic device 200. It should be noted, however, that the electronic device 200 in the present application may also include multiple processors, and thus steps performed by one processor described in the present application may also be performed jointly by multiple processors or separately. For example, if the processor of the electronic device 200 performs steps A and B, it should be understood that steps A and B may also be performed by two different processors together or performed separately in one processor. For example, the first processor performs step A, the second processor performs step B, or the first processor and the second processor together perform steps A and B.
In combination with the foregoing description of the service system and each electronic device in the service system, the following describes in detail the problem recommendation method provided in the present application with reference to specific embodiments.
Referring to fig. 3, a flowchart of a problem recommendation method provided in an embodiment of the present application is shown, where the problem recommendation method may be executed by a server in the service system shown in fig. 1, and a specific execution process includes the following steps:
step 301, after detecting that a request end initiates a session request, determining an accepted probability of recommending each candidate problem in a candidate problem set to the request end based on feature information of the request end and a first prediction model which is trained in advance and is common to different types of candidate problems.
Step 302, determining whether each candidate problem in the candidate problem set is a predicted result accepted by the request terminal based on the feature information of the request terminal and a pre-trained second prediction model matched with each candidate problem in the candidate problem set.
It should be noted that, the execution sequence of step 301 and step 302 may be different.
Step 303, screening a prediction result from the candidate problem set according to the prediction result corresponding to each candidate problem, wherein the prediction result represents at least one target candidate problem accepted by the requested end.
Step 304, selecting a question recommended to the requesting end from the at least one target candidate question according to the accepted probability corresponding to the at least one target candidate question.
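The screen-then-rank logic of steps 303 and 304 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name, the dictionary-based model outputs, and the sample questions are all hypothetical.

```python
# Hypothetical sketch of steps 303-304: combine the outputs of the two
# prediction models to pick the question(s) recommended to the request end.

def recommend_questions(accept_probs, accept_flags, top_k=1):
    """accept_probs: {question: accepted probability} from the shared first model.
    accept_flags: {question: bool} from the per-question second models.
    Returns up to top_k questions predicted to be accepted, ranked by
    accepted probability."""
    # Step 303: keep only candidates whose second-model prediction is "accepted".
    targets = [q for q, accepted in accept_flags.items() if accepted]
    # Step 304: rank the surviving candidates by first-model probability.
    targets.sort(key=lambda q: accept_probs[q], reverse=True)
    return targets[:top_k]

probs = {"fare dispute": 0.72, "lost item": 0.15, "account issue": 0.41}
flags = {"fare dispute": True, "lost item": False, "account issue": True}
print(recommend_questions(probs, flags, top_k=2))  # ['fare dispute', 'account issue']
```

Note that the two models play different roles here: the per-question second models act as a hard filter, while the shared first model only orders whatever survives the filter.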
In the embodiment of the application, the request end can be a service request end or a service provider end. Among these, service requesters and service providers are also distinguished in different application scenarios, for example, in a taxi service system, a service requester is for example a passenger, and a service provider is for example a driver. In an online shopping service system, a service requester is, for example, a buyer who purchases goods, and a service provider is, for example, a seller who sells goods. The present application is not limited in this regard.
In an embodiment of the present application, the server may obtain the feature information of the request end after detecting that the request end initiates the session request. The session request is used for requesting to establish a session with the server so as to conduct problem consultation. For example, the user of the request end may initiate the session request by triggering a control of the online consultation function in the request end, or may initiate the session request by dialing a customer-service hotline.
In an embodiment of the present application, when the request end is a service provider terminal, the feature information of the request end may include, for example, but not limited to, at least one of the following information:
(1) Personally descriptive information of the service provider.
In an example, when the service provider is a driver, the persona description information of the service provider may include, for example, one or more of the following: age, sex, the driver's registration time, usual departure time period, common departure place, driving duration within a preset time period, average income within a preset time period, historical complaint record, order payment record, and the like.
(2) Order description information of the last order processed by the service provider.
In one example, the order description information of the last order processed by the service provider may include, for example, one or more of the following: the amount of the driver's last order, the trip duration, the pickup duration, the payment status, whether the order has additional fees, whether the fees are abnormal, the start time and end time of the order, and the like.
(3) Character description information of a service requester of the last processed order.
In one example, the persona description information of the service requester of the last processed order may include, for example, one or more of the following: the passenger's age, sex, occupation, number of rides within a preset time period, common departure place and destination, usual ride time period, maximum ride cost, average ride cost, historical complaint record, bill payment record, and the like.
(4) Order status information at the time the service provider initiated the session request.
In an example, the order status information at the time the service provider initiated the session request may include, for example, one or more of the following: whether the driver is waiting for an order to be assigned, whether the driver has accepted an order, the time elapsed since the last order ended, the time at which the session request was initiated, whether this is the first order the driver has accepted that day, etc.
(5) The location and time at which the service provider initiated the session request.
In an example, the location and time at which the service provider initiated the session request may include, for example, the geographic location at which the driver placed a hotline phone call or an online consultation, and the corresponding point in time.
(6) The service provider aggregates the information for orders over a first historical period of time.
In one example, the order summary information of the service provider over the first historical period of time includes, for example, one or more of the following: the total number of orders accepted, the total duration of accepted orders, total revenue, actual revenue credited to the account, the amount of payments not received, the number of complaints, the distribution of complaints, and the set of complained-about problems, etc., within the first historical period.
The first history period may be understood as a preset period of time before the current time. The preset time period may be configured according to actual requirements, for example, may be one week or one month.
As can be seen from the feature information given in the above example, the feature information of the requesting end is classified into three types: static features, dynamic features, and statistical features. In a possible implementation manner, the static characteristics may be pre-stored in a database of the service system shown in fig. 1, the dynamic characteristics may be obtained by the server from a request end or other devices through a network in the service system shown in fig. 1, and the statistical characteristics may be obtained by the server based on data recorded in the database of the service system.
Of course, the feature information of the request end may also be the feature information of a service requester terminal; the content and acquisition mode of the feature information of the service requester terminal are based on the same technical concept as those of the feature information of the service provider terminal, and the details are not repeated here.
In this embodiment of the present application, when executing step 301 — determining, based on the feature information of the request end and a pre-trained first prediction model common to different types of candidate questions, the accepted probability of recommending each candidate question in the candidate question set to the request end — the server may first extract features from the feature information to obtain a feature vector. Because the feature information contains different types of data, it may be preprocessed for easy identification, with each type of data represented numerically; the feature information can thereby be converted into a multi-dimensional feature vector, where each dimension represents one type of data in the feature information. In one example, the driver's age included in the feature information may be converted into a numerical value in the range of 18 to 60, and the point in time when the session request was initiated may be represented as, for example, "2018-01-01 08:01:30".
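The preprocessing described above can be sketched as follows. This is an illustrative example only, not part of the embodiment: the field names (`driver_age`, `session_time`, `channel`) and the encoding choices are hypothetical.

```python
from datetime import datetime

def extract_feature_vector(feature_info: dict) -> list:
    """Convert mixed-type feature information into a numeric feature vector.

    Each dimension of the output encodes one type of data in the
    feature information, as described in the text above.
    """
    vec = []
    # Numeric fields (e.g. a driver age in the range 18-60) are used directly.
    vec.append(float(feature_info["driver_age"]))
    # A timestamp such as "2018-01-01 08:01:30" can be encoded numerically,
    # here as a Unix epoch value (one possible representation).
    t = datetime.strptime(feature_info["session_time"], "%Y-%m-%d %H:%M:%S")
    vec.append(t.timestamp())
    # Categorical fields (e.g. hotline call vs. online consultation) can be
    # mapped to integer codes.
    channel_codes = {"hotline": 0, "online": 1}
    vec.append(float(channel_codes[feature_info["channel"]]))
    return vec
```

In practice the vector would have many more dimensions (the text's later example assumes 200), one per item of static, dynamic, or statistical feature information.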
Further, the extracted feature vector may be input into a first pre-trained prediction model, and the accepted probability that each candidate problem in the candidate problem set is recommended to the requesting end is output. Here, the accepted probability may also be understood as a probability of whether the candidate problem is a problem that the requesting end wants to consult.
The first prediction model may be, for example, a deep neural network (Deep Neural Networks, DNN) model. Referring to fig. 4, an exemplary DNN model according to an embodiment of the present application includes an input layer (input layer), a hidden layer (hidden layer), and an output layer (output layer), where: the input layer, i.e., the first layer of the DNN model, may comprise a plurality of input nodes, e.g., 200 input nodes when the extracted feature vector comprises 200-dimensional features; the output layer, i.e., the last layer of the DNN model, comprises a number of output nodes that depends on the number of questions in the candidate question set, e.g., when the candidate question set contains 10 candidate questions, the output layer may comprise 10 output nodes; the hidden layers are located between the input layer and the output layer, and there may be multiple hidden layers, of which only one is shown in fig. 4 for simplicity. The more hidden layers there are, and the more nodes each hidden layer contains, the stronger the expressive capability of the first prediction model. The training process of the first prediction model will be described in detail below and is not repeated here.
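The forward pass of such a DNN can be sketched in plain Python as below. This is a minimal illustration, not the embodiment's implementation: the weights are random placeholders (an actual model would be trained as described later), the hidden-layer size of 32 is arbitrary, and a softmax output is assumed so the per-candidate scores behave like accepted probabilities.

```python
import math
import random

def dnn_forward(x, w1, b1, w2, b2):
    """One-hidden-layer DNN: input -> hidden (ReLU) -> output (softmax).

    Returns one accepted probability per candidate question.
    """
    hidden = [max(0.0, sum(wi * xi for wi, xi in zip(row, x)) + b)
              for row, b in zip(w1, b1)]
    logits = [sum(wi * hi for wi, hi in zip(row, hidden)) + b
              for row, b in zip(w2, b2)]
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Dimensions follow the example in the text: 200-dim features, 10 candidates.
random.seed(0)
n_in, n_hidden, n_out = 200, 32, 10
w1 = [[random.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_hidden)]
b1 = [0.0] * n_hidden
w2 = [[random.uniform(-0.1, 0.1) for _ in range(n_hidden)] for _ in range(n_out)]
b2 = [0.0] * n_out

probs = dnn_forward([0.5] * n_in, w1, b1, w2, b2)  # 10 accepted probabilities
```

The softmax output sums to 1 across the 10 candidate questions, which matches reading each output node as the probability of recommending that candidate.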
The candidate question set may be obtained based on previously recorded questions that each request end requested to be responded to during a session. In a possible implementation, the total number of times each question was requested by different request ends in a second historical time period may be counted, and the questions whose counted totals meet a preset condition are then taken as candidate questions to form the candidate question set. For example, each question whose total count exceeds a preset threshold is used as a candidate question; or the questions are arranged in descending order of their total counts and the questions ranked in the first M positions are used as candidate questions, where M is a positive integer.
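Both selection conditions above can be sketched as follows; the function name and parameters are illustrative, not from the embodiment.

```python
from collections import Counter

def build_candidate_set(question_log, threshold=None, top_m=None):
    """Form the candidate question set from the questions requested over
    the second historical time period.

    Either keep every question whose total count exceeds `threshold`, or
    keep the M most frequently requested questions (`top_m`).
    """
    counts = Counter(question_log)
    if threshold is not None:
        # Questions whose counted total exceeds the preset threshold.
        return [q for q, n in counts.items() if n > threshold]
    # Questions ranked in the first M positions by total count.
    return [q for q, _ in counts.most_common(top_m)]
```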
In the embodiment of the present application, in order to improve accuracy of problem recommendation prediction, feature information of a request terminal may also be respectively input into a pre-trained second prediction model matched with each candidate problem in the candidate problem set, so as to predict whether each candidate problem in the candidate problem set is accepted by the request terminal. Each candidate problem in the candidate problem set is matched with a second prediction model, and each second prediction model is used for predicting whether the problem requested to be responded by the request end is a matched candidate problem or not.
In one possible implementation, the feature vector extracted from the feature information may be input into a pre-trained second prediction model matched with each candidate problem, and a prediction result for indicating whether the problem requested to be responded by the request end is the candidate problem may be output from the second prediction model matched with each candidate problem.
Here, the second prediction model may, for example, employ a gradient boosting decision tree (Gradient Boosting Decision Tree, GBDT), which may be understood as an iterative decision tree algorithm consisting of a plurality of decision trees, with the classification results of all trees accumulated to obtain the final classification result. In this embodiment of the present application, the final classification result is a binary classification result, that is, whether the problem the request end asks to have responded to is the matched candidate problem. In this embodiment, since each candidate problem is matched with one second prediction model, before each second prediction model is put into use it may be trained based on the training sample set corresponding to its candidate problem; the training process will be described in detail below and is not repeated here.
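The accumulation step of a GBDT can be sketched as below. This is only an illustration of how per-tree scores are summed into a binary decision: the "trees" here are stand-in functions rather than fitted regression trees, and the feature names are hypothetical.

```python
import math

def gbdt_predict(trees, x, threshold=0.5):
    """Accumulate the scores of all trees, then map the sum through a
    sigmoid to a binary result: is the question the request end wants
    answered this second model's matched candidate question?
    """
    score = sum(tree(x) for tree in trees)
    prob = 1.0 / (1.0 + math.exp(-score))
    return prob >= threshold

# Stub "trees": in a real GBDT each would be a fitted regression tree
# produced by boosting; simple rules stand in to show the accumulation.
trees = [
    lambda x: 1.0 if x["complaints"] > 2 else -0.5,
    lambda x: 0.8 if x["channel"] == "hotline" else -0.2,
]
accepted = gbdt_predict(trees, {"complaints": 3, "channel": "hotline"})
```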
In this embodiment of the present application, after the accepted probability corresponding to each candidate problem and the prediction result corresponding to each candidate problem are obtained using the first prediction model and the second prediction models, at least one target candidate problem whose prediction result indicates acceptance by the request end may first be screened from the candidate problem set according to the prediction result corresponding to each candidate problem.
Further, for the at least one selected target candidate problem, a problem recommended to the requesting end can be selected from the at least one target candidate problem according to the accepted probability corresponding to the at least one target candidate problem.
In one possible implementation manner, the target candidate problem with the acceptance probability higher than the preset probability value in the at least one target candidate problem may be used as the problem recommended to the request end.
In another possible implementation manner, the at least one target candidate problem may be arranged in descending order of accepted probability, and the target candidate problems whose accepted probabilities rank in the first k positions are then used as the problems recommended to the request end, where k is a positive integer.
The problems recommended to the request end under these two implementations have two characteristics: first, the prediction result obtained through the second prediction model indicates that the request end will accept them; second, the accepted probability obtained through the first prediction model is higher than a preset probability value. Given these two characteristics, the problems recommended to the request end are more likely to be accepted and the prediction is more accurate.
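The combination of the two model outputs described above can be sketched in a single function; the names and the parallel-list input format are illustrative assumptions.

```python
def recommend(candidates, accepted_flags, probs, p_min=None, top_k=None):
    """Combine both models' outputs: keep only target candidates whose
    second-model prediction indicates acceptance, then select by the
    first-model accepted probability — either above a preset value
    (`p_min`) or the top-k by probability (`top_k`).
    """
    targets = [(q, p) for q, ok, p in zip(candidates, accepted_flags, probs) if ok]
    if p_min is not None:
        return [q for q, p in targets if p > p_min]
    targets.sort(key=lambda qp: qp[1], reverse=True)
    return [q for q, _ in targets[:top_k]]
```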
In addition, since the request end typically consults its most important problem first, the accuracy of the first problem recommended to the request end directly affects the efficiency of problem consultation at the request end and the user experience of the request end. In this embodiment of the present application, the candidate problem whose prediction result indicates acceptance by the request end and whose accepted probability is the highest may be used as the first problem recommended by the server, and the preset prompting problem may be appended after the recommended candidate problems.
The problem recommendation process of this embodiment of the present application is described below in detail with reference to fig. 5, using a specific application scenario.
Referring to fig. 5, assume an application scenario in which the request end is a terminal used by a driver, the driver initiates a session request to the server by making a hotline call to consult a question, and the candidate question set includes 10 candidate questions. The server may then perform the following steps:
first, a driver initiated session request is detected.
And secondly, acquiring characteristic information of a driver, and extracting a characteristic vector from the characteristic information.
And thirdly, inputting the feature vector into a first prediction model (namely a DNN model shown in fig. 5) which is common to different candidate questions, and outputting to obtain the accepted probabilities respectively corresponding to the 10 candidate questions.
And fourthly, inputting the feature vector into a second prediction model matched with each candidate problem in the candidate problem set, and outputting a prediction result of whether each candidate problem is accepted by a driver.
And fifthly, screening, from the candidate problem set according to the prediction result corresponding to each candidate problem, the target candidate problems whose prediction results indicate acceptance by the driver.
And sixthly, selecting, from the target candidate problems according to their corresponding accepted probabilities, the problems whose accepted probability is higher than a preset value and/or which rank in the first N positions as the problems recommended to the driver.
In addition, the preset prompting problem may also be used as a problem recommended to the driver and placed at the end of the recommended problems.
For example, the candidate questions top1 to top3, whose accepted probabilities are higher than the preset value and which rank in the first three positions, may be selected as the questions recommended to the driver. In addition, a preset prompting question may be set to ask the driver whether to consult candidate questions other than top1 to top3.
The following describes the training process of two types of prediction models proposed in the embodiments of the present application with reference to specific embodiments.
(I) First prediction model
In the embodiment of the present application, in order to train the first prediction model, a first sample training set for training the first prediction model first needs to be generated. In a possible implementation, historical session record information in a third historical time period may be acquired, including the historical feature information of each request end when initiating each session request and the historical problem each request end asked to have responded to. Then, the historical feature vector corresponding to each piece of historical feature information is extracted, and each extracted historical feature vector serves as a training sample, forming the first sample training set. Each training sample corresponds to one problem label, and different problem labels identify the historical problems corresponding to different historical feature vectors. In one example, the problem labels may be sequential numbers such as 1/2/3….
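The construction of the first sample training set can be sketched as follows; the record format (pairs of feature information and the consulted question) and function names are illustrative assumptions.

```python
def build_first_training_set(history_records, extract):
    """Build (feature_vector, question_label) training pairs from
    historical session records.

    `extract` converts feature information into a feature vector;
    labels are sequential question ids (1, 2, 3, ...), following the
    numbering example in the text.
    """
    label_of = {}
    samples = []
    for feature_info, question in history_records:
        if question not in label_of:
            label_of[question] = len(label_of) + 1  # assign next id
        samples.append((extract(feature_info), label_of[question]))
    return samples, label_of
```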
After the first sample training set is obtained, the first predictive model may be trained based on the first sample training set until it is determined that the training of the first predictive model is complete.
Referring to fig. 6, a flowchart of training a first prediction model according to an embodiment of the present application includes the following steps:
Step 601, inputting a preset number of training samples in a first sample training set into a first prediction model, and respectively outputting historical accepted probabilities of recommending each candidate problem in a candidate problem set to a request end according to each input training sample.
Step 602, determining a candidate problem with highest history acceptance probability corresponding to each training sample.
Step 603, determining a first loss value of the present training process by comparing the candidate problem with the highest history acceptance probability corresponding to each training sample and the problem label corresponding to each training sample.
In specific implementation, for each training sample, whether the candidate problem with the highest accepted probability corresponding to the training sample is consistent with the problem identified by the problem label corresponding to the training sample can be compared, if so, the prediction of the training sample is determined to be accurate, and if not, the prediction of the training sample is determined to be inaccurate. By traversing all training samples, a first loss value of the present training process can be calculated, and the first loss value can reflect the accuracy of the first prediction model prediction.
Step 604, determining whether the first loss value in the training process is greater than a first set value.
If the determination result is yes, step 605 is performed; if no, step 606 is performed.
Step 605, adjusting the model parameters of the first prediction model, and returning to step 601 to perform the next round of training with the adjusted first prediction model.
Step 606, determining that the first prediction model training is complete.
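The comparison in steps 602-603 can be sketched as a misclassification-rate loss: the candidate with the highest predicted accepted probability is compared against the question label. The function name is illustrative; labels are 1-based ids, matching the numbering example above. Training would repeat steps 601-605 while this loss stays above the first set value.

```python
def first_loss(batch_probs, labels):
    """Fraction of training samples whose highest-probability candidate
    question does not match the sample's question label (steps 602-603).
    """
    wrong = 0
    for probs, label in zip(batch_probs, labels):
        # Step 602: candidate with the highest historical accepted probability.
        predicted = max(range(len(probs)), key=probs.__getitem__) + 1
        # Step 603: compare against the question label.
        if predicted != label:
            wrong += 1
    return wrong / len(labels)
```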
(II) Second prediction model
In this embodiment of the present application, since each candidate problem in the candidate problem set is matched with a second prediction model, the second prediction model matched with each candidate problem may be trained separately. In order to train the second prediction models, a second sample training set corresponding to each candidate problem first needs to be generated; the second prediction model matched with each candidate problem is then trained based on the second sample training set corresponding to that candidate problem until its training is determined to be complete.
Referring to fig. 7, a flowchart of generating a second sample training set according to an embodiment of the present application is shown. For a first candidate problem, where the first candidate problem is any candidate problem in the candidate problem set, the following operations are performed:
Step 701, screening out first historical characteristic information of the first request end and second historical characteristic information of the second request end from the historical session record information.
The first request end denotes a request end whose requested historical problem is the first candidate problem, and the second request end denotes a request end whose requested historical problem is not the first candidate problem;
and extracting a first historical feature vector corresponding to each first historical feature information, and extracting a second historical feature vector corresponding to each second historical feature information.
Step 702, taking each extracted first historical feature vector as a positive training sample to form a positive sample training set, and taking each extracted second historical feature vector as a negative training sample to form a negative sample training set.
Each positive training sample corresponds to a positive label, each negative training sample corresponds to a negative label, the positive label indicates that the problem requested to be responded by the request end is a first candidate problem, and the negative label indicates that the problem requested to be responded by the request end is not the first candidate problem.
Step 703, forming the positive sample training set and the negative sample training set into a second sample training set corresponding to the first candidate problem.
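Steps 701-703 can be sketched as follows; the record format and function name are illustrative assumptions carried over from the earlier sketches.

```python
def build_second_training_set(history_records, candidate, extract):
    """For one candidate problem, split historical records into a positive
    sample set (the request end asked this candidate problem) and a
    negative sample set (the request end asked anything else).
    """
    positives, negatives = [], []
    for feature_info, question in history_records:
        vec = extract(feature_info)
        if question == candidate:
            positives.append((vec, 1))   # positive label: is this candidate
        else:
            negatives.append((vec, 0))   # negative label: is not this candidate
    # Together the two sets form the second sample training set (step 703).
    return positives, negatives
```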
The second prediction model matched with each candidate problem is then trained based on the second sample training set corresponding to that candidate problem. For the second prediction model matched with the first candidate problem, the following training process, shown in fig. 8, may be performed:
step 801, a first preset number of positive training samples and a second preset number of negative training samples are obtained from a second sample training set corresponding to the first candidate problem.
The first preset number and the second preset number may be the same or different; if different, the difference between them should be kept small so that the samples remain balanced.
Step 802, inputting a first preset number of positive training samples and a second preset number of negative training samples into a second prediction model matched with the first candidate problem, and outputting a classification result corresponding to each positive training sample and a classification result corresponding to each negative training sample.
The classification result output by the second prediction model indicates whether the problem requested to be responded by the request end is a first candidate problem.
Step 803, determining a second loss value of the present training process by comparing the classification result corresponding to each positive training sample with the positive label, and comparing the classification result corresponding to each negative training sample with the negative label.
In specific implementation, for each positive training sample, whether the classification result corresponding to the positive training sample is consistent with the result marked by the positive label corresponding to the positive training sample can be compared, if so, the prediction of the positive training sample is determined to be accurate, and if not, the prediction of the positive training sample is determined to be inaccurate. For each negative training sample, it may also be determined whether the prediction for each negative training sample is accurate with reference to the above-described process. By traversing all the positive training samples and the negative training samples, a second loss value of the present round of training process can be calculated, and the second loss value can reflect the accuracy of the second prediction model prediction.
Step 804, determining whether the second loss value in the training process is greater than a second set value.
If yes, go to step 805; if the determination is negative, step 806 is performed.
And step 805, adjusting model parameters of the second prediction model matched with the first candidate problem, and returning to step 801, and performing the next training process by using the adjusted second prediction model matched with the first candidate problem.
Step 806, determining that the training of the second prediction model for the first candidate problem match is complete.
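The balanced sampling of step 801 can be sketched as below; the function name, seed parameter, and batch format are illustrative assumptions.

```python
import random

def sample_balanced(positives, negatives, n_pos, n_neg, seed=0):
    """Step 801 sketch: draw a first preset number of positive training
    samples and a second preset number of negative training samples.
    The two numbers should stay close so the batch remains balanced.
    """
    rng = random.Random(seed)
    batch = (rng.sample(positives, min(n_pos, len(positives)))
             + rng.sample(negatives, min(n_neg, len(negatives))))
    rng.shuffle(batch)  # mix positives and negatives within the batch
    return batch
```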
In this embodiment of the present application, after a request end initiates a session request, the server may obtain the feature information of the request end, and then use the first prediction model common to different types of candidate questions and the second prediction model matched with each candidate question in the candidate question set to predict, respectively, the accepted probability of recommending each candidate question and whether each candidate question will be accepted by the request end. Further, at least one target candidate question whose prediction result indicates acceptance by the request end may be screened from the candidate question set, and the questions finally recommended to the user may be determined according to the accepted probabilities of the target candidate questions. Compared with a scheme using pre-configured candidate questions, this scheme screens the questions most likely to be accepted by each request end from the candidate question set based on the feature information of that request end and the two types of prediction models. Question recommendation is thus performed in a targeted manner, meeting the personalized consultation requirements of different request ends and improving the efficiency of question consultation.
Based on the same technical concept, the embodiment of the present application further provides a problem recommendation device corresponding to the problem recommendation method, and since the principle of solving the problem by the device in the embodiment of the present application is similar to that of the problem recommendation method in the embodiment of the present application, the implementation of the device may refer to the implementation of the method, and the repetition is omitted.
Referring to fig. 9, a schematic structural diagram of a problem recommending apparatus according to an embodiment of the present application is shown, where the apparatus 90 includes:
a first determining module 91, configured to determine, after detecting that a request end initiates a session request, an accepted probability of recommending each candidate problem in a candidate problem set to the request end based on feature information of the request end and a first prediction model that is trained in advance and is common to different types of candidate problems;
a second determining module 92, configured to determine, based on the feature information of the request terminal, a pre-trained second prediction model that matches each candidate problem in the candidate problem set, and whether each candidate problem in the candidate problem set is a prediction result accepted by the request terminal;
a first screening module 93, configured to screen, from the candidate problem set according to the prediction result corresponding to each candidate problem, at least one target candidate problem whose prediction result indicates acceptance by the request end;
and a second filtering module 94, configured to select, according to the probability of being accepted corresponding to the at least one target candidate problem, a problem recommended to the requesting end from the at least one target candidate problem.
In one possible design, the second filtering module 94 is specifically configured to, when selecting, according to the probability of being accepted corresponding to the at least one target candidate problem, a problem recommended to the requesting end from the at least one target candidate problem:
And using the target candidate questions, among the at least one target candidate question, whose accepted probability is higher than a preset probability value as the questions recommended to the request end.
In one possible design, the second filtering module 94 is specifically configured to, when selecting, according to the probability of being accepted corresponding to the at least one target candidate problem, a problem recommended to the requesting end from the at least one target candidate problem:
arranging the at least one target candidate problem in descending order of accepted probability;
and taking the target candidate questions with the accepted probability arranged in the first k bits in the at least one target candidate question as questions recommended to the request end, wherein k is a positive integer.
In one possible design, the first determining module 91 is specifically configured to, when determining, based on the feature information of the request terminal and a first prediction model that is common to different types of candidate questions trained in advance, a probability of acceptance of recommending each candidate question in the candidate question set to the request terminal:
extracting the characteristics of the characteristic information to obtain a characteristic vector;
and inputting the feature vector into a pre-trained first prediction model, and outputting the acceptance probability that each candidate problem in the candidate problem set is recommended to the request terminal.
In one possible design, the second determining module 92 is specifically configured to, when determining, based on the feature information of the requesting end, a pre-trained second prediction model that matches each candidate problem in the candidate problem set, whether each candidate problem in the candidate problem set is a prediction result accepted by the requesting end:
extracting the characteristics of the characteristic information to obtain a characteristic vector;
and inputting the feature vector extracted from the feature information into a pre-trained second prediction model matched with each candidate problem in the candidate problem set, and outputting a prediction result of whether each candidate problem in the candidate problem set is accepted by a request end.
In one possible design, the questions recommended to the requesting end further include a preset prompting question, where the preset prompting question is used to prompt the requesting end whether to request to respond to other questions.
In a possible design, the first determining module 91 is further configured, before detecting that the requesting end initiates the session request, to:
counting the total times of each problem requested to respond by different request ends in the second historical time period;
and taking the counted problems with the total times meeting the preset conditions as candidate problems to form the candidate problem set.
In one possible design, when the request end is a service provider terminal, the feature information includes at least one of the following information:
character description information of the service provider;
order description information of an order processed by the service provider last time;
character description information of a service requester of the last processed order;
the service provider initiates order state information when the session request;
the location and time when the service provider initiates the session request;
order summary information of the service provider over a first historical time period.
In one possible design, the apparatus further comprises:
the first model training module 95 is configured to obtain historical session record information in a third historical time period, where the historical session record information includes historical feature information of each request end when each request end initiates a session request, and a historical problem of each request end when each request end initiates a session request;
extracting a history feature vector corresponding to each history feature information;
taking each extracted historical feature vector as a training sample to form a first sample training set, wherein each training sample corresponds to a question label, and different question labels are used for identifying historical questions respectively corresponding to different historical feature vectors;
And training the first prediction model based on the first sample training set until the first prediction model training is determined to be completed.
In one possible design, the first model training module 95 is specifically configured to, when training the first prediction model based on the first sample training set until it is determined that the first prediction model training is completed:
inputting a preset number of training samples in the first sample training set into the first prediction model, respectively outputting a history accepted probability that each candidate problem in the candidate problem set is recommended to the request end according to each inputted training sample, and determining a candidate problem with the highest history accepted probability corresponding to each training sample;
determining a first loss value of the training process by comparing the candidate problem with the highest historical accepted probability corresponding to each training sample with the problem label corresponding to each training sample;
when the first loss value is larger than a first set value, the model parameters of the first prediction model are adjusted, the next round of training process is conducted by using the adjusted first prediction model, and when the determined first loss value is smaller than or equal to the first set value, the first prediction model training is determined to be completed.
In one possible design, the apparatus further comprises:
a second model training module 96, configured to generate, for each candidate problem in the candidate problem set, a second prediction model that matches each candidate problem, and generate a second sample training set corresponding to each candidate problem;
and training the second prediction model matched with each candidate problem based on the second sample training set corresponding to each candidate problem until the second prediction model matched with each candidate problem is determined to be trained.
In one possible design, the second model training module 96 is specifically configured to, when generating the second sample training set corresponding to each candidate problem:
for a first candidate problem, where the first candidate problem is any candidate problem in the candidate problem set, executing the following operations:
screening out first historical characteristic information of a first request end and second historical characteristic information of a second request end from the historical session record information; the first request end indicates that the historical problem of the request response is the request end of the first candidate problem, and the second request end indicates that the historical problem of the request response is not the request end of the first candidate problem;
Extracting a first historical feature vector corresponding to each first historical feature information, and extracting a second historical feature vector corresponding to each second historical feature information;
taking each extracted first historical feature vector as a positive training sample to form a positive sample training set, and taking each extracted second historical feature vector as a negative training sample to form a negative sample training set;
forming a second sample training set corresponding to the first candidate problem by the positive sample training set and the negative sample training set;
each positive training sample corresponds to a positive label, each negative training sample corresponds to a negative label, the positive label indicates that the problem requested to be responded by the request end is the first candidate problem, and the negative label indicates that the problem requested to be responded by the request end is not the first candidate problem.
In one possible design, the second model training module 96 is specifically configured to, when training the second prediction model matched with each candidate problem based on the second sample training set corresponding to each candidate problem until it is determined that the training of the second prediction model matched with each candidate problem is completed:
for the second prediction model matched with the first candidate problem, performing the following training process:
acquiring a first preset number of positive training samples and a second preset number of negative training samples from a second sample training set corresponding to the first candidate problem;
inputting the first preset number of positive training samples and the second preset number of negative training samples into a second prediction model matched with the first candidate problem, and outputting a classification result corresponding to each positive training sample and a classification result corresponding to each negative training sample; the classification result indicates whether the problem requested to be responded by the request end is the first candidate problem or not;
determining a second loss value of the training process by comparing the classification result corresponding to each positive training sample with the positive label and comparing the classification result corresponding to each negative training sample with the negative label;
when the second loss value is larger than a second set value, adjusting model parameters of the second prediction model matched with the first candidate problem, and performing the next training round by using the adjusted second prediction model matched with the first candidate problem, until the determined second loss value is smaller than or equal to the second set value, at which point the training of the second prediction model matched with the first candidate problem is determined to be completed.
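A minimal sketch of this loss-thresholded training loop follows. The model interface (`loss`, `adjust_parameters`), the random batch sampler, and the round cap are illustrative assumptions only; the embodiment does not fix a concrete model family, loss function, or optimizer.

```python
import random

def train_second_prediction_model(model, positive_set, negative_set,
                                  n_pos, n_neg, second_set_value,
                                  max_rounds=1000):
    """Repeat training rounds until the second loss value falls to or
    below the second set value, as described above."""
    for _ in range(max_rounds):
        # draw the first/second preset numbers of positive/negative samples
        batch = random.sample(positive_set, n_pos) + \
                random.sample(negative_set, n_neg)
        second_loss = model.loss(batch)      # compare outputs with labels
        if second_loss <= second_set_value:  # training is complete
            return model
        model.adjust_parameters(batch)       # otherwise adjust and repeat
    return model
```

The same loop is run once per candidate question, each against that question's own second sample training set.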
For the functions of the above modules, reference may be made to the description of the above method embodiments, which is not repeated here.
Based on the same technical concept, an embodiment of the present application further provides an electronic device. Referring to fig. 10, which is a schematic structural diagram of an electronic device 100 according to an embodiment of the present application, the electronic device 100 includes a processor 101, a memory 102, and a bus 103. The memory 102 is used for storing execution instructions and includes an internal memory 1021 and an external memory 1022. The internal memory 1021 temporarily stores operation data in the processor 101 and data exchanged with the external memory 1022, such as a hard disk; the processor 101 exchanges data with the external memory 1022 through the internal memory 1021. When the electronic device 100 is running, the processor 101 and the memory 102 communicate with each other through the bus 103, so that the processor 101 executes the following instructions:
after detecting that a request end initiates a session request, determining an accepted probability of recommending each candidate problem in a candidate problem set to the request end based on characteristic information of the request end and a first prediction model which is trained in advance and is common to different types of candidate problems;
determining a prediction result of whether each candidate problem in the candidate problem set is accepted by the request terminal or not based on the characteristic information of the request terminal and a pre-trained second prediction model matched with each candidate problem in the candidate problem set;
screening, from the candidate problem set according to the prediction result corresponding to each candidate problem, at least one target candidate problem whose prediction result represents acceptance by the request end;
and selecting a question recommended to the request end from the at least one target candidate question according to the accepted probability corresponding to the at least one target candidate question.
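The four instructions above amount to a two-stage selection: the per-question classifiers (the second prediction models) first gate which candidates count as targets, and the shared accepted probabilities (from the first prediction model) then rank those targets. A hedged sketch, with all names assumed for illustration:

```python
def recommend_questions(feature_vector, accepted_probs, second_models, k=3):
    """accepted_probs: {question: accepted probability} from the first model.
    second_models: {question: classifier returning True if the question is
    predicted to be accepted by the request end}."""
    # stage 1: keep only target candidate questions predicted to be accepted
    targets = [q for q, classifier in second_models.items()
               if classifier(feature_vector)]
    # stage 2: rank the targets by accepted probability and recommend top k
    targets.sort(key=lambda q: accepted_probs[q], reverse=True)
    return targets[:k]
```

The cutoff `k` corresponds to the "first k bits" selection of claim 3; claim 2's variant would instead filter `targets` by a preset probability threshold.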
The specific processing flow of the processor 101 may refer to the descriptions of the above method embodiments and is not repeated here.
Based on the same technical concept, an embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, performing the steps of the problem recommendation method described above.
Specifically, the storage medium may be a general-purpose storage medium, such as a removable disk or a hard disk. When the computer program on the storage medium is run, the above problem recommendation method can be executed, so that the consultation needs of users at different request ends are better met, the waiting time of users when consulting questions is reduced, and the efficiency of question consultation is improved.
Based on the same technical concept, the embodiments of the present application further provide a computer program product, which includes a computer readable storage medium storing program code, where instructions included in the program code may be used to execute the steps of the problem recommendation method, and specific implementation may be referred to the method embodiments, and will not be described herein.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and apparatus described above may refer to the corresponding processes in the method embodiments and are not detailed in this application. In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners. The apparatus embodiments described above are merely illustrative; for example, the division of the modules is merely a logical function division, and there may be other divisions in actual implementation: multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be through some communication interfaces, and the indirect coupling or communication connection between devices or modules may be in electrical, mechanical, or other forms.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present application, in essence, or the part thereof contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a USB flash disk, a removable hard disk, a ROM, a RAM, a magnetic disk, an optical disk, or the like.
The foregoing is merely a specific embodiment of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed in the present application, and such changes or substitutions shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (28)

1. A problem recommendation method, comprising:
after a request end is detected to initiate a session request, determining an accepted probability of recommending each candidate problem in a candidate problem set to the request end based on characteristic information of the request end and a pre-trained first prediction model common to different types of candidate problems, wherein the session request is triggered by a user corresponding to the request end and is used for establishing a session between the request end and a server for question consultation;
determining a prediction result of whether each candidate problem in the candidate problem set is accepted by the request terminal or not based on the characteristic information of the request terminal and a pre-trained second prediction model matched with each candidate problem in the candidate problem set;
screening, from the candidate problem set according to the prediction result corresponding to each candidate problem, at least one target candidate problem whose prediction result represents acceptance by the request end;
and selecting a question recommended to the request end from the at least one target candidate question according to the accepted probability corresponding to the at least one target candidate question.
2. The method of claim 1, wherein selecting a question recommended to the requesting end from the at least one target candidate question according to an accepted probability corresponding to the at least one target candidate question, comprises:
recommending, from the at least one target candidate question, the target candidate questions whose accepted probability is higher than a preset probability value to the request end.
3. The method of claim 1, wherein selecting a question recommended to the requesting end from the at least one target candidate question according to an accepted probability corresponding to the at least one target candidate question, comprises:
arranging the at least one target candidate question in descending order of accepted probability;
and taking the target candidate questions whose accepted probability ranks in the first k positions among the at least one target candidate question as the questions recommended to the request end, wherein k is a positive integer.
4. The method of claim 1, wherein the determining an accepted probability of recommending each candidate problem in the set of candidate problems to the requesting end based on the characteristic information of the requesting end and a first predictive model common to pre-trained different types of candidate problems comprises:
extracting the characteristics of the characteristic information to obtain a characteristic vector;
and inputting the feature vector into the pre-trained first prediction model, and outputting the accepted probability that each candidate problem in the candidate problem set is recommended to the request end.
5. The method of claim 1, wherein the determining a prediction result of whether each candidate problem in the candidate problem set is accepted by the request end based on the characteristic information of the request end and a pre-trained second prediction model matched with each candidate problem in the candidate problem set comprises:
extracting the characteristics of the characteristic information to obtain a characteristic vector;
and inputting the feature vector extracted from the characteristic information into the pre-trained second prediction model matched with each candidate problem in the candidate problem set, and outputting a prediction result of whether each candidate problem in the candidate problem set is accepted by the request end.
6. The method of claim 2, wherein the questions recommended to the request end further comprise a preset prompt question for prompting the request end whether it requests a response to other questions.
7. The method of claim 1, wherein prior to detecting that the requesting end initiates the session request, the method further comprises:
counting the total number of times each problem is requested to be responded to by different request ends within a second historical time period;
and taking the counted problems whose total number of times meets a preset condition as candidate problems to form the candidate problem set.
8. The method of claim 1, wherein when the requesting end is a service provider terminal, the characteristic information includes at least one of the following information:
character description information of the service provider;
order description information of an order processed by the service provider last time;
character description information of a service requester of the last processed order;
the service provider initiates order state information when the session request;
the location and time when the service provider initiates the session request;
the service provider aggregates information for orders over a first historical period of time.
9. The method according to claim 1, wherein the method further comprises:
acquiring historical session record information within a third historical time period, wherein the historical session record information comprises historical characteristic information of each request end when initiating a session request and the historical problem each request end requested to be responded to when initiating the session request;
extracting a history feature vector corresponding to each history feature information;
taking each extracted historical feature vector as a training sample to form a first sample training set, wherein each training sample corresponds to a question label, and different question labels are used for identifying the historical questions respectively corresponding to different historical feature vectors;
and training the first prediction model based on the first sample training set until the first prediction model training is determined to be completed.
10. The method of claim 9, wherein the training the first predictive model based on the first sample training set until it is determined that the first predictive model training is complete comprises:
inputting a preset number of training samples in the first sample training set into the first prediction model, respectively outputting a history accepted probability that each candidate problem in the candidate problem set is recommended to the request end according to each inputted training sample, and determining a candidate problem with the highest history accepted probability corresponding to each training sample;
determining a first loss value of the training process by comparing the candidate problem with the highest historical accepted probability corresponding to each training sample with the problem label corresponding to each training sample;
when the first loss value is larger than a first set value, the model parameters of the first prediction model are adjusted, and the next round of training is performed using the adjusted first prediction model; when the determined first loss value is smaller than or equal to the first set value, the first prediction model training is determined to be completed.
11. The method of claim 1, wherein the method further comprises:
generating a second prediction model matched with each candidate problem aiming at each candidate problem in the candidate problem set, and generating a second sample training set corresponding to each candidate problem;
and training the second prediction model matched with each candidate problem based on the second sample training set corresponding to each candidate problem until the second prediction model matched with each candidate problem is determined to be trained.
12. The method of claim 11, wherein generating a second training set of samples for each candidate problem comprises:
for a first candidate problem in a candidate sample set, the first candidate problem being any candidate problem in the candidate sample set, performing the following operations:
screening first historical characteristic information of a first request end and second historical characteristic information of a second request end from the historical session record information, wherein the first request end is a request end whose historical question requested for response is the first candidate problem, and the second request end is a request end whose historical question requested for response is not the first candidate problem;
extracting a first historical feature vector corresponding to each piece of first historical characteristic information, and extracting a second historical feature vector corresponding to each piece of second historical characteristic information;
taking each extracted first historical feature vector as a positive training sample to form a positive sample training set, and taking each extracted second historical feature vector as a negative training sample to form a negative sample training set;
forming, from the positive sample training set and the negative sample training set, a second sample training set corresponding to the first candidate problem;
wherein each positive training sample corresponds to a positive label and each negative training sample corresponds to a negative label, the positive label indicating that the question the request end requests to be responded to is the first candidate problem, and the negative label indicating that it is not the first candidate problem.
13. The method of claim 12, wherein training the second predictive model for each candidate problem match based on the second sample training set for each candidate problem until it is determined that the training of the second predictive model for each candidate problem match is complete, comprises:
for the second prediction model matched with the first candidate problem, performing the following training process:
acquiring a first preset number of positive training samples and a second preset number of negative training samples from a second sample training set corresponding to the first candidate problem;
inputting the first preset number of positive training samples and the second preset number of negative training samples into a second prediction model matched with the first candidate problem, and outputting a classification result corresponding to each positive training sample and a classification result corresponding to each negative training sample; the classification result indicates whether the problem requested to be responded by the request end is the first candidate problem or not;
determining a second loss value of the training process by comparing the classification result corresponding to each positive training sample with the positive label and comparing the classification result corresponding to each negative training sample with the negative label;
when the second loss value is larger than a second set value, adjusting model parameters of the second prediction model matched with the first candidate problem, and performing the next training round by using the adjusted second prediction model matched with the first candidate problem, until the determined second loss value is smaller than or equal to the second set value, at which point the training of the second prediction model matched with the first candidate problem is determined to be completed.
14. A question recommending apparatus, comprising:
the first determining module is used for, after detecting that a request end initiates a session request, determining the accepted probability of recommending each candidate problem in a candidate problem set to the request end based on characteristic information of the request end and a pre-trained first prediction model common to different types of candidate problems, wherein the session request is triggered by a user corresponding to the request end and is used for establishing a session between the request end and a server for question consultation;
the second determining module is used for determining a prediction result of whether each candidate problem in the candidate problem set is accepted by the request end based on the characteristic information of the request end and a pre-trained second prediction model matched with each candidate problem in the candidate problem set;
the first screening module is used for screening, from the candidate problem set according to the prediction result corresponding to each candidate problem, at least one target candidate problem whose prediction result represents acceptance by the request end;
and the second screening module is used for selecting the questions recommended to the request end from the at least one target candidate questions according to the accepted probability corresponding to the at least one target candidate question.
15. The apparatus of claim 14, wherein the second filtering module is configured to, when selecting a question recommended to the requesting end from the at least one target candidate question according to an accepted probability corresponding to the at least one target candidate question:
and recommending, from the at least one target candidate question, the target candidate questions whose accepted probability is higher than a preset probability value to the request end.
16. The apparatus of claim 14, wherein the second filtering module is configured to, when selecting a question recommended to the requesting end from the at least one target candidate question according to an accepted probability corresponding to the at least one target candidate question:
arranging the at least one target candidate question in descending order of accepted probability;
and taking the target candidate questions whose accepted probability ranks in the first k positions among the at least one target candidate question as the questions recommended to the request end, wherein k is a positive integer.
17. The apparatus of claim 14, wherein the first determining module, when determining an accepted probability of recommending each candidate problem in a set of candidate problems to the requesting end based on the characteristic information of the requesting end and a first predictive model common to pre-trained different types of candidate problems, is specifically configured to:
extracting the characteristics of the characteristic information to obtain a characteristic vector;
and inputting the feature vector into the pre-trained first prediction model, and outputting the accepted probability that each candidate problem in the candidate problem set is recommended to the request end.
18. The apparatus of claim 14, wherein the second determining module, when determining a prediction result of whether each candidate problem in the candidate problem set is accepted by the requesting end based on the feature information of the requesting end and a pre-trained second prediction model that matches each candidate problem in the candidate problem set, is specifically configured to:
extracting the characteristics of the characteristic information to obtain a characteristic vector;
and inputting the feature vector extracted from the characteristic information into the pre-trained second prediction model matched with each candidate problem in the candidate problem set, and outputting a prediction result of whether each candidate problem in the candidate problem set is accepted by the request end.
19. The apparatus of claim 15, wherein the questions recommended to the request end further comprise a preset prompt question for prompting the request end whether it requests a response to other questions.
20. The apparatus of claim 14, wherein the first determining module, prior to detecting that the requesting end initiates the session request, is further configured to:
counting the total number of times each problem is requested to be responded to by different request ends within a second historical time period;
and taking the counted problems whose total number of times meets a preset condition as candidate problems to form the candidate problem set.
21. The apparatus of claim 14, wherein when the requesting end is a service provider terminal, the characteristic information comprises at least one of:
character description information of the service provider;
order description information of an order processed by the service provider last time;
character description information of a service requester of the last processed order;
the service provider initiates order state information when the session request;
the location and time when the service provider initiates the session request;
the service provider aggregates information for orders over a first historical period of time.
22. The apparatus of claim 14, wherein the apparatus further comprises:
the first model training module is used for acquiring historical session record information within a third historical time period, wherein the historical session record information comprises historical characteristic information of each request end when initiating a session request and the historical problem each request end requested to be responded to when initiating the session request;
extracting a history feature vector corresponding to each history feature information;
taking each extracted historical feature vector as a training sample to form a first sample training set, wherein each training sample corresponds to a question label, and different question labels are used for identifying historical questions respectively corresponding to different historical feature vectors;
and training the first prediction model based on the first sample training set until the first prediction model training is determined to be completed.
23. The apparatus of claim 22, wherein the first model training module, when training the first predictive model based on the first sample training set until it is determined that the first predictive model training is complete, is specifically configured to:
inputting a preset number of training samples in the first sample training set into the first prediction model, respectively outputting a history accepted probability that each candidate problem in the candidate problem set is recommended to the request end according to each inputted training sample, and determining a candidate problem with the highest history accepted probability corresponding to each training sample;
determining a first loss value of the training process by comparing the candidate problem with the highest historical accepted probability corresponding to each training sample with the problem label corresponding to each training sample;
when the first loss value is larger than a first set value, the model parameters of the first prediction model are adjusted, and the next round of training is performed using the adjusted first prediction model; when the determined first loss value is smaller than or equal to the first set value, the first prediction model training is determined to be completed.
24. The apparatus of claim 14, wherein the apparatus further comprises:
the second model training module is used for generating a second prediction model matched with each candidate problem aiming at each candidate problem in the candidate problem set, and generating a second sample training set corresponding to each candidate problem;
and training the second prediction model matched with each candidate problem based on the second sample training set corresponding to each candidate problem until the second prediction model matched with each candidate problem is determined to be trained.
25. The apparatus of claim 24, wherein the second model training module, when generating the second sample training set for each candidate problem, is specifically configured to:
for a first candidate problem in a candidate sample set, the first candidate problem being any candidate problem in the candidate sample set, performing the following operations:
screening first historical characteristic information of a first request end and second historical characteristic information of a second request end from the historical session record information, wherein the first request end is a request end whose historical question requested for response is the first candidate problem, and the second request end is a request end whose historical question requested for response is not the first candidate problem;
extracting a first historical feature vector corresponding to each piece of first historical characteristic information, and extracting a second historical feature vector corresponding to each piece of second historical characteristic information;
taking each extracted first historical feature vector as a positive training sample to form a positive sample training set, and taking each extracted second historical feature vector as a negative training sample to form a negative sample training set;
forming, from the positive sample training set and the negative sample training set, a second sample training set corresponding to the first candidate problem;
wherein each positive training sample corresponds to a positive label and each negative training sample corresponds to a negative label, the positive label indicating that the question the request end requests to be responded to is the first candidate problem, and the negative label indicating that it is not the first candidate problem.
26. The apparatus of claim 25, wherein the second model training module, when training the second prediction model for each candidate problem match based on the second sample training set for each candidate problem until it is determined that the training of the second prediction model for each candidate problem match is complete, is specifically configured to:
for the second prediction model matched with the first candidate problem, performing the following training process:
acquiring a first preset number of positive training samples and a second preset number of negative training samples from a second sample training set corresponding to the first candidate problem;
inputting the first preset number of positive training samples and the second preset number of negative training samples into a second prediction model matched with the first candidate problem, and outputting a classification result corresponding to each positive training sample and a classification result corresponding to each negative training sample; the classification result indicates whether the problem requested to be responded by the request end is the first candidate problem or not;
determining a second loss value of the training process by comparing the classification result corresponding to each positive training sample with the positive label and comparing the classification result corresponding to each negative training sample with the negative label;
when the second loss value is larger than a second set value, adjusting model parameters of the second prediction model matched with the first candidate problem, and performing the next training round by using the adjusted second prediction model matched with the first candidate problem, until the determined second loss value is smaller than or equal to the second set value, at which point the training of the second prediction model matched with the first candidate problem is determined to be completed.
27. An electronic device, comprising: a processor, a storage medium and a bus, the storage medium storing machine-readable instructions executable by the processor, the processor and the storage medium communicating over the bus when the electronic device is running, the processor executing the machine-readable instructions to perform the steps of the problem recommendation method according to any one of claims 1 to 13 when executed.
28. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, performs the steps of the problem recommendation method according to any one of claims 1 to 13.
CN201811458062.1A 2018-11-30 2018-11-30 Question recommending method and device Active CN111259119B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811458062.1A CN111259119B (en) 2018-11-30 2018-11-30 Question recommending method and device


Publications (2)

Publication Number Publication Date
CN111259119A CN111259119A (en) 2020-06-09
CN111259119B true CN111259119B (en) 2023-05-26

Family

ID=70944816

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811458062.1A Active CN111259119B (en) 2018-11-30 2018-11-30 Question recommending method and device

Country Status (1)

Country Link
CN (1) CN111259119B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107688641B (en) * 2017-08-28 2021-12-28 江西博瑞彤芸科技有限公司 Question management method and system
CN112529602A (en) * 2020-12-23 2021-03-19 北京嘀嘀无限科技发展有限公司 Data processing method and device, readable storage medium and electronic equipment
CN112885175B (en) * 2021-01-15 2022-10-21 杭州安恒信息安全技术有限公司 Information security question generation method and device, electronic device and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103229223A (en) * 2010-09-28 2013-07-31 国际商业机器公司 Providing answers to questions using multiple models to score candidate answers
CN104965890A (en) * 2015-06-17 2015-10-07 深圳市腾讯计算机***有限公司 Advertisement recommendation method and apparatus
CN106682387A (en) * 2016-10-26 2017-05-17 百度国际科技(深圳)有限公司 Method and device used for outputting information
CN107451199A (en) * 2017-07-05 2017-12-08 阿里巴巴集团控股有限公司 Method for recommending problem and device, equipment
CN107463704A (en) * 2017-08-16 2017-12-12 北京百度网讯科技有限公司 Searching method and device based on artificial intelligence
CN107977411A (en) * 2017-11-21 2018-05-01 腾讯科技(成都)有限公司 Group recommending method, device, storage medium and server
WO2018184395A1 (en) * 2017-04-07 2018-10-11 Beijing Didi Infinity Technology And Development Co., Ltd. Systems and methods for activity recommendation



Similar Documents

Publication Publication Date Title
CN111353092B (en) Service pushing method, device, server and readable storage medium
US20200051193A1 (en) Systems and methods for allocating orders
CN111367575B (en) User behavior prediction method and device, electronic equipment and storage medium
CN112236787A (en) System and method for generating personalized destination recommendations
JP2019532372A (en) System and method for determining a driver's safety score
TWI724958B (en) Systems, methods, and computer readable media for online to offline service
CN111259119B (en) Question recommending method and device
CN111104585B (en) Question recommending method and device
CN111105120B (en) Work order processing method and device
CN109313742A (en) Determine the method and system for estimating arrival time
CN111105251A (en) Information pushing method and device
CN111433795A (en) System and method for determining estimated arrival time of online-to-offline service
CN111316308A (en) System and method for identifying wrong order requests
US20200104889A1 (en) Systems and methods for price estimation using machine learning techniques
CN111198989A (en) Method and device for determining travel recommendation data, storage medium and electronic equipment
CN110750709A (en) Service recommendation method and device
CN111489214B (en) Order allocation method, condition setting method, device and electronic equipment
CN111259229B (en) Question recommending method and device
CN111353093B (en) Problem recommendation method, device, server and readable storage medium
CN111274471B (en) Information pushing method, device, server and readable storage medium
CN111291253B (en) Model training method, consultation recommendation method and device and electronic equipment
CN111260423B (en) Order allocation method, order allocation device, electronic equipment and computer readable storage medium
CN111275062A (en) Model training method, device, server and computer readable storage medium
CN111127126A (en) Information feedback method and device and computer readable storage medium
CN111695919B (en) Evaluation data processing method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant