
CN117808545A - Attention mechanism-based multi-task ranking method, device, equipment and medium

Info

Publication number
CN117808545A
Authority
CN
China
Prior art keywords
feature
user
task
historical
current
Prior art date
Legal status
Pending
Application number
CN202311792845.4A
Other languages
Chinese (zh)
Inventor
刘文海
于敬
李健
丁佼
纪达麒
陈运文
Current Assignee
Daguan Data Suzhou Co ltd
Original Assignee
Daguan Data Suzhou Co ltd
Priority date
Filing date
Publication date
Application filed by Daguan Data Suzhou Co ltd
Priority to CN202311792845.4A
Publication of CN117808545A
Legal status: Pending

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a multi-task ranking method, a device, equipment and a medium based on an attention mechanism. Current user feature description information and the number of user scene tasks of a target user are acquired in real time; feature coding processing is performed on the current user basic attribute feature, the current user behavior sequence feature and the current candidate commodity feature respectively to obtain a current user basic attribute feature coding vector, a current user behavior sequence feature coding vector and a current candidate commodity feature coding vector; the current user basic attribute feature coding vector, the current user behavior sequence feature coding vector and the current candidate commodity feature coding vector are input into a pre-constructed multi-task weight determination model to obtain task weights matched with the number of user scene tasks; and the task weights are ranked respectively to obtain a multi-task ranking result, so that recommendation display for the user is realized according to the multi-task ranking result. The efficiency and accuracy of recommendations to the user are thereby improved.

Description

Attention mechanism-based multi-task ranking method, device, equipment and medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a method, an apparatus, a device, and a medium for multi-task ranking based on an attention mechanism.
Background
Among recommendation algorithms, the ranking algorithm is one of the most important. In the application scenario where a user selects commodities, the candidate commodities are scored according to the user's interests and the commodity features, and the candidate commodities with high scores are preferentially recommended to the user.
The inventors have found that the prior art has the following drawbacks in the process of implementing the present invention: in the actual optimization of business indexes, multiple optimization objectives are usually involved, for example click rate and conversion rate, click rate and reading duration, or order amount and order quantity. In the conventional single-task model, each task can only be modeled one by one, and the optimization of multiple indexes is then controlled by means such as cascading models, coarse ranking and fine ranking. However, a system built in this way has high complexity and a large number of model parameters, situations in which some indexes rise while others fall occur frequently, and the overall effect is poor.
Disclosure of Invention
The invention provides an attention mechanism-based multi-task ranking method, apparatus, device and medium, so as to improve the efficiency and accuracy of recommendations to users.
According to an aspect of the present invention, there is provided an attention mechanism-based multi-task ranking method, including:
acquiring current user characteristic description information of a target user in real time, and acquiring the number of user scene tasks corresponding to the target user;
the current user characteristic description information comprises a current user basic attribute characteristic, a current user behavior sequence characteristic and a current candidate commodity characteristic; the number of the user scene tasks is more than or equal to 2;
performing feature coding processing on the current user basic attribute feature, the current user behavior sequence feature and the current candidate commodity feature respectively to obtain a current user basic attribute feature coding vector, a current user behavior sequence feature coding vector and a current candidate commodity feature coding vector;
inputting the current user basic attribute feature code vector, the current user behavior sequence feature code vector and the current candidate commodity feature code vector into a pre-constructed multi-task weight determining model to obtain task weights matched with the task number of the user scene;
and respectively sequencing the task weights to obtain a multi-task sequencing result corresponding to the target user so as to realize recommendation display of the user according to the multi-task sequencing result.
According to another aspect of the present invention, there is provided an attention mechanism-based multitasking apparatus, comprising:
the current user characteristic description information and user scene task number acquisition module is used for acquiring current user characteristic description information of a target user in real time and acquiring the number of user scene tasks corresponding to the target user;
the current user characteristic description information comprises a current user basic attribute characteristic, a current user behavior sequence characteristic and a current candidate commodity characteristic; the number of the user scene tasks is more than or equal to 2;
the feature coding processing module is used for respectively carrying out feature coding processing on the current user basic attribute feature, the current user behavior sequence feature and the current candidate commodity feature to obtain a current user basic attribute feature coding vector, a current user behavior sequence feature coding vector and a current candidate commodity feature coding vector;
the task weight determining module is used for inputting the current user basic attribute feature code vector, the current user behavior sequence feature code vector and the current candidate commodity feature code vector into a pre-constructed multi-task weight determining model to obtain task weights matched with the task number of the user scene;
The multi-task sequencing result determining module is used for sequencing each task weight respectively to obtain a multi-task sequencing result corresponding to the target user so as to realize recommendation display of the user according to the multi-task sequencing result.
According to another aspect of the present invention, there is provided an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements a method for attention-based multitasking according to any of the embodiments of the present invention when executing the computer program.
According to another aspect of the present invention, there is provided a computer readable storage medium storing computer instructions for causing a processor to implement a method of attention-based multitasking in accordance with any of the embodiments of the present invention.
According to the technical scheme, the current user characteristic description information of the target user is obtained in real time, and the number of user scene tasks corresponding to the target user is obtained; respectively carrying out feature coding processing on the basic attribute feature of the current user, the behavior sequence feature of the current user and the feature of the current candidate commodity to obtain a basic attribute feature coding vector of the current user, a behavior sequence feature coding vector of the current user and a feature coding vector of the current candidate commodity; inputting the current user basic attribute feature coding vector, the current user behavior sequence feature coding vector and the current candidate commodity feature coding vector into a pre-constructed multi-task weight determining model to obtain task weights matched with the task number of a user scene; and respectively sequencing the task weights to obtain a multi-task sequencing result corresponding to the target user so as to realize recommendation display of the user according to the multi-task sequencing result. The method solves the problem that the efficiency is low because the multitasking is processed and each task can only be modeled one by one, and improves the efficiency and accuracy of recommending to users.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1a is a flowchart of an attention mechanism-based multi-task ranking method according to a first embodiment of the present invention;
FIG. 1b is a schematic diagram of a multi-task weight determination model with a user scene task number of 2 in a method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an attention mechanism-based multitasking device according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device according to a third embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, a technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
It should be noted that the terms "target," "current," and the like in the description and claims of the present invention and the above-described drawings are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
Fig. 1a is a flowchart of an attention-based multitasking method according to an embodiment of the present invention, where the method may be performed by an attention-based multitasking device, and the attention-based multitasking device may be implemented in hardware and/or software.
Accordingly, as shown in fig. 1a, the method comprises:
s110, acquiring current user characteristic description information of a target user in real time, and acquiring the number of user scene tasks corresponding to the target user.
The current user characteristic description information comprises a current user basic attribute characteristic, a current user behavior sequence characteristic and a current candidate commodity characteristic; and the number of the user scene tasks is more than or equal to 2.
The current user characteristic description information may be information describing various types of characteristics of the user.
Specifically, the current user basic attribute features may be features describing basic information of the user, and may include features such as user age, asset balance, and city to which the user belongs. The current user behavior sequence features comprise features of long-term interest evolution of the user and features of short-term interest evolution of the user. The current candidate commodity characteristics may include characteristics of commodity category, commodity price, commodity brand, commodity color, commodity applicable gender, commodity applicable season, commodity applicable scenario, and the like.
The number of user scene tasks may be a different number of tasks for different task scenarios. For example, website A may include 2 tasks, specifically click rate and conversion rate. Similarly, it may be determined that website B includes 4 tasks, specifically click rate, reading duration, order amount and order quantity.
It will be appreciated that, in general, the number of tasks in the user scenario is greater than or equal to 2, i.e. includes a plurality of tasks, and the optimization recommendation processing operation needs to be performed on the plurality of tasks.
And S120, respectively carrying out feature coding processing on the current user basic attribute feature, the current user behavior sequence feature and the current candidate commodity feature to obtain a current user basic attribute feature coding vector, a current user behavior sequence feature coding vector and a current candidate commodity feature coding vector.
In this embodiment, since both the current user basic attribute feature and the current candidate commodity feature have numerical features, discretization processing is performed on the numerical features, so that coding processing is performed on the discretized features, and corresponding current user basic attribute feature coding vectors and current candidate commodity feature coding vectors can be obtained.
Specifically, the encoding process herein may be one-hot encoding, which is not specifically limited herein.
Optionally, the current user basic attribute features include a current user basic attribute numerical feature and a current user basic attribute discrete feature; the current user behavior sequence features comprise current user behavior long-term sequence features and current user behavior short-term sequence features; the current candidate commodity characteristics comprise current candidate commodity numerical characteristics and current candidate commodity discrete characteristics; and respectively carrying out feature coding processing on the current user basic attribute feature and the current candidate commodity feature to obtain a current user basic attribute feature coding vector and a current candidate commodity feature coding vector, wherein the feature coding processing comprises the following steps: performing feature conversion on the current user basic attribute numerical type feature to obtain a current user basic attribute conversion discrete type feature; performing feature conversion on the current candidate commodity numerical type feature to obtain a current candidate commodity conversion discrete type feature; and carrying out feature coding processing on the current user basic attribute conversion discrete feature, the current user basic attribute discrete feature, the current candidate commodity conversion discrete feature and the current candidate commodity discrete feature to obtain a current user basic attribute feature coding vector and a current candidate commodity feature coding vector.
The current user basic attribute numerical feature may be a numerical user basic attribute feature, for example the user's age, asset balance, annual income and the like. The current user basic attribute discrete feature may be a discrete user basic attribute feature, for example the city to which the user belongs, the user's gender, the user's educational background and the like.
In this embodiment, the numerical user basic attribute features need to be converted into discrete user basic attribute features, for example by automatic binning; the conversion method is not specifically limited herein. Further, the converted discrete user basic attribute features and the current user basic attribute discrete features are encoded through one-hot encoding to obtain the current user basic attribute feature coding vector.
Similarly, the current candidate commodity feature can be subjected to numerical discretization, and then the encoding processing operation is performed through one-hot encoding, so that the current candidate commodity feature encoding vector is obtained.
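As an illustrative sketch only (not part of the patent itself), the discretization and one-hot encoding described above could be written as follows in Python; the bin edges, feature values and helper functions are assumptions for demonstration.

```python
import numpy as np

def discretize(values, bin_edges):
    # Map each numerical value to the index of the bin it falls into
    # (an automatic binning method could instead derive bin_edges from quantiles).
    return np.digitize(values, bin_edges)

def one_hot(indices, num_classes):
    # Encode each discrete index as a one-hot vector.
    encoded = np.zeros((len(indices), num_classes))
    encoded[np.arange(len(indices)), indices] = 1.0
    return encoded

# Hypothetical numerical user attribute: age, converted into 4 discrete age groups.
ages = np.array([19, 34, 52, 70])
age_bins = np.array([25, 45, 65])           # assumed bin edges
age_code = one_hot(discretize(ages, age_bins), num_classes=4)

# Hypothetical discrete commodity attribute: category id, one-hot encoded directly.
category_ids = np.array([2, 0, 3, 1])
category_code = one_hot(category_ids, num_classes=4)
```

In practice, the per-feature one-hot vectors would then be concatenated to form the current user basic attribute feature coding vector and the current candidate commodity feature coding vector.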
Accordingly, the current user behavior sequence features include the current user behavior long-term sequence feature and the current user behavior short-term sequence feature. Specifically, the current user behavior long-term sequence feature describes the user's behavior over a longer period, i.e. a data acquisition period longer than that corresponding to the current user behavior short-term sequence feature.
For example, the current user behavior long-term sequence feature may use the user's purchasing behavior over the last year, and the current user behavior short-term sequence feature may use the user's clicking, browsing, favoriting and purchasing behavior over the last two weeks.
S130, inputting the current user basic attribute feature coding vector, the current user behavior sequence feature coding vector and the current candidate commodity feature coding vector into a pre-constructed multi-task weight determining model to obtain task weights matched with the task number of the user scene.
Wherein the task weight may be the magnitude of the weight for different tasks. In this embodiment, the sorting process may be performed according to the weights of the tasks, so as to obtain a corresponding recommended sorting result.
In this embodiment, it is assumed that the number of user scenario tasks of the target user is 2, that is, 2 sets of task weights are output. Assume that the two tasks are click rate and purchase rate, and that 5 commodities are found to meet the requirements: the click rate of commodity 1 is 0.3 and its purchase rate is 0.1; the click rate of commodity 2 is 0.7 and its purchase rate is 0.5; the click rate of commodity 3 is 0.6 and its purchase rate is 0.2; the click rate of commodity 4 is 0.2 and its purchase rate is 0.1; the click rate of commodity 5 is 0.6 and its purchase rate is 0.5.
And S140, respectively sequencing the task weights to obtain a multi-task sequencing result corresponding to the target user so as to realize recommendation display of the user according to the multi-task sequencing result.
The multi-task sequencing result may be a result obtained by sequencing each task.
In the previous example, since the click rate and the purchase rate corresponding to the 5 commodities respectively are obtained, the results can be further ordered according to the weights of the click rate and the purchase rate, so that the multi-task ordering result is obtained. And after the multi-task sequencing result is obtained, the obtained multi-task sequencing result can be recommended and displayed to the user.
Optionally, the sorting processing is performed on each task weight to obtain a multi-task sorting result corresponding to the target user, including: sequencing the task weights respectively to obtain task weight sequencing results corresponding to the task weights respectively; judging whether at least one target task limiting weight threshold exists, if so, acquiring a preset target task limiting weight threshold, and judging whether the task weight of the target task in each task is larger than or equal to the target task limiting weight threshold; if yes, combining task weight sequencing results corresponding to the task weights meeting the requirements and task weight sequencing results corresponding to at least one residual task weight respectively to obtain a multi-task sequencing result corresponding to the target user.
Following the previous example, it is assumed that a target task limiting weight threshold is set for the click rate, with a value of 0.4. First, the click rate and the purchase rate corresponding to each commodity need to be ranked, so that the corresponding task weight ranking results can be obtained.
Further, there is a target task limiting weight threshold for the click rate, while the purchase rate has no such threshold, i.e. filtering needs to be performed on the click rate. The commodities whose click rate is greater than 0.4, namely commodity 2, commodity 3 and commodity 5, meet the requirement.
Accordingly, the click rate of commodity 2 is 0.7 and its purchase rate is 0.5; the click rate of commodity 3 is 0.6 and its purchase rate is 0.2; the click rate of commodity 5 is 0.6 and its purchase rate is 0.5. Assuming that the ranking weight of the click rate is larger than that of the purchase rate, the click rate has the higher priority. Since the click rate of commodity 2 is larger than those of commodity 3 and commodity 5, commodity 2 is ranked first. The click rates of commodity 3 and commodity 5 are the same, but the purchase rate of commodity 5 is larger than that of commodity 3, so commodity 5 is ranked second and commodity 3 third. Commodity 1 and commodity 4 do not meet the requirement and may not be displayed. In this way, the multi-task ranking result corresponding to the target user is obtained.
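A minimal sketch of the ranking logic in this example, assuming the 0.4 click-rate limiting threshold and the click-rate-first priority described above; the data layout and function name are illustrative, not taken from the patent.

```python
# (commodity id, click rate, purchase rate) for the 5 commodities in the example
items = [
    (1, 0.3, 0.1),
    (2, 0.7, 0.5),
    (3, 0.6, 0.2),
    (4, 0.2, 0.1),
    (5, 0.6, 0.5),
]

def rank_with_threshold(items, click_threshold=0.4):
    # Keep only commodities whose click rate meets the limiting weight threshold,
    # then rank by click rate first and purchase rate second (both descending).
    kept = [it for it in items if it[1] >= click_threshold]
    return sorted(kept, key=lambda it: (it[1], it[2]), reverse=True)

print(rank_with_threshold(items))
# -> [(2, 0.7, 0.5), (5, 0.6, 0.5), (3, 0.6, 0.2)], i.e. commodity 2, 5, 3
```

Without a limiting threshold, the same sort over all five commodities yields commodity 2, 5, 3, 1, 4, matching the case described further below.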
Optionally, after the determining whether there is at least one target task defined weight threshold, the method further includes: if not, combining the task weight sequencing results to obtain a multi-task sequencing result corresponding to the target user.
In this embodiment, it is assumed that there is no target task defining weight threshold, that is, it is only necessary to directly sort according to each task weight.
Here, it is likewise assumed that the ranking weight of the click rate is larger than that of the purchase rate, so the click rate has the higher priority. The multi-task ranking result can then be determined as commodity 2, commodity 5, commodity 3, commodity 1 and commodity 4.
Optionally, before the acquiring, in real time, the current user feature description information of the target user and the number of user scene tasks corresponding to the target user, the method further includes: acquiring historical user characteristic description information, and respectively acquiring the number of historical user scene tasks corresponding to each historical target user; the historical user characteristic description information comprises a historical user basic attribute characteristic, a historical user behavior sequence characteristic and a historical candidate commodity characteristic; the method comprises the steps of performing feature coding processing on historical user basic attribute features, historical user behavior sequence features and historical candidate commodity features, inputting obtained results into an initial multi-task weight determining model to perform model training, and performing model parameter optimization through a loss function to train and complete the multi-task weight determining model.
In this embodiment, the historical user feature description information and the number of historical user scenario tasks may be obtained, the initial multi-task weight determination model is trained on them, and the model parameters are then optimized according to the loss function, so that the multi-task weight determination model is obtained through training. The specific form of the loss function is not limited herein.
Optionally, the historical user behavior sequence features include a historical user behavior long-term sequence feature and a historical user behavior short-term sequence feature; the historical user behavior long-term sequence feature comprises at least one historical user behavior long-term sequence sub-feature; the historical user behavior short-term sequence feature comprises at least one historical user behavior short-term sequence sub-feature; and the step of respectively performing feature coding processing on the historical user basic attribute features, the historical user behavior sequence features and the historical candidate commodity features and inputting the obtained results into the initial multi-task weight determination model for model training comprises the following steps: generating state vectors for each historical user behavior long-term sequence sub-feature and each historical user behavior short-term sequence sub-feature through a GRU (Gated Recurrent Unit) network in the initial multi-task weight determination model, so as to obtain a historical long-term state vector and a historical short-term state vector respectively; acquiring the historical candidate commodity feature coding vector obtained through feature coding processing, and processing the historical candidate commodity feature coding vector together with each historical user behavior long-term sequence sub-feature and each historical user behavior short-term sequence sub-feature respectively through an activation network in the initial multi-task weight determination model, so as to obtain each historical long-term weight value and each historical short-term weight value; calculating a long-term self-attention mechanism total weight and a short-term self-attention mechanism total weight according to each historical long-term weight value, each historical short-term weight value, each historical user behavior long-term sequence sub-feature and each historical user behavior short-term sequence sub-feature; and splicing the historical user basic attribute feature coding vector and the historical candidate commodity feature coding vector obtained through feature coding with the historical long-term state vector, the historical short-term state vector, the long-term self-attention mechanism total weight and the short-term self-attention mechanism total weight, and inputting the spliced results into a plurality of expert network structures and a shared network structure in the initial multi-task weight determination model to train the model.
In this embodiment, fig. 1b is a schematic structural diagram of a multi-task weight determining model with a task number of 2 in a user scenario.
Specifically, after the historical user basic attribute features are coded, a dense user feature vector is generated through an embedding layer, so that training of the multi-task weight determination model can be performed better.
Similarly, the historical candidate commodity features (namely the commodity features in the figure) are coded, and a dense commodity feature vector is generated through an embedding layer. It can be understood that the commodities in the historical user behavior sequence features (behavior commodity i, where i takes values from 1 to n) share the embedding layer parameters with the historical candidate commodity features, and dense behavior feature vectors are generated after passing through the embedding layer (namely, in fig. 1b, the commodity features are passed through the embedding layer to obtain the commodity feature vector, and behavior commodity 1, behavior commodity 2, …, behavior commodity n are passed through the embedding layer to obtain the corresponding behavior feature vectors 1, 2, …, n).
Further, the behavior feature vector v_n first generates a state vector h_n through a GRU (Gated Recurrent Unit) network; h_n and the subsequent behavior feature vector v_(n+1) are then input together into the next GRU network to generate a state vector h_(n+1), and so on, until the last GRU network generates the final state vector h. Since the long-term and the short-term user behavior sequences are used respectively, two final state vectors are eventually generated.
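As a hedged illustration of this step (not the patent's actual implementation), a PyTorch-style sketch of running a GRU over the embedded behavior sequence to obtain the final state vector h could look as follows; the embedding size, hidden size and tensor shapes are assumptions.

```python
import torch
import torch.nn as nn

embed_dim, hidden_dim = 32, 64          # assumed sizes
gru = nn.GRU(input_size=embed_dim, hidden_size=hidden_dim, batch_first=True)

# Dense behavior feature vectors v_1 ... v_n from the embedding layer,
# here random stand-ins for one user with n = 10 behaviors: (batch, n, embed_dim).
behavior_vectors = torch.randn(1, 10, embed_dim)

# Each GRU step consumes the previous state and the next behavior vector;
# h_final corresponds to the state vector h produced by the last GRU step.
_, h_final = gru(behavior_vectors)
h_final = h_final.squeeze(0)            # (batch, hidden_dim)

# Running this once over the long-term sequence and once over the short-term
# sequence yields the historical long-term and short-term state vectors.
```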
Optionally, the calculating to obtain a total weight of the long-term self-attention mechanism and a total weight of the short-term self-attention mechanism according to each historical long-term weight value and each historical short-term weight value, each historical user behavior long-term sequence sub-feature and each historical user behavior short-term sequence sub-feature includes: performing corresponding multiplication processing operation according to each historical long-term weight value and each historical user behavior long-term sequence sub-feature to obtain a long-term self-attention mechanism total weight; and carrying out corresponding multiplication processing operation according to each historical short-term weight value and each historical user behavior short-term sequence sub-feature to obtain the total weight of the short-term self-attention mechanism.
Following the previous example, each behavior feature vector v_n and the commodity feature vector are input together into an activation network; the output layer of the activation network is a linear layer, which finally outputs a weight value (these weight values may include each historical long-term weight value and each historical short-term weight value) representing the correlation coefficient w_n between the behavior and the candidate commodity. The correlation coefficient w_n is then multiplied by the behavior feature vector to obtain an output vector corresponding to that behavior feature vector, and the self-attention mechanism total weight is obtained by summing the output vectors corresponding to all behavior feature vectors. Since the long-term and the short-term user behavior sequences are used respectively, two self-attention mechanism total weights (i.e., a long-term self-attention mechanism total weight and a short-term self-attention mechanism total weight) are eventually generated as well.
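A minimal sketch of the activation network described above, assuming it is a small MLP that takes the concatenation of a behavior feature vector and the commodity feature vector and ends in a linear output layer; the layer sizes and the exact input construction are assumptions.

```python
import torch
import torch.nn as nn

class ActivationNetwork(nn.Module):
    """Scores how relevant each behavior vector is to the candidate commodity."""
    def __init__(self, embed_dim, hidden_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * embed_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),   # linear output layer -> correlation coefficient w_n
        )

    def forward(self, behavior_vecs, commodity_vec):
        # behavior_vecs: (n, embed_dim); commodity_vec: (embed_dim,)
        item = commodity_vec.expand(behavior_vecs.size(0), -1)
        w = self.net(torch.cat([behavior_vecs, item], dim=-1))   # (n, 1) weight values
        weighted = w * behavior_vecs                              # multiply each behavior vector by its weight
        return weighted.sum(dim=0)                                # self-attention mechanism total weight (a vector)

# Usage sketch: one call per sequence (long-term and short-term), giving the two totals.
embed_dim = 32
att = ActivationNetwork(embed_dim)
long_total = att(torch.randn(10, embed_dim), torch.randn(embed_dim))
short_total = att(torch.randn(4, embed_dim), torch.randn(embed_dim))
```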
Correspondingly, the historical user basic attribute feature coding vector, the historical candidate commodity feature coding vector, the historical long-term state vector and the long-term self-attention mechanism total weight are spliced; and the historical user basic attribute feature coding vector, the historical candidate commodity feature coding vector, the historical short-term state vector and the short-term self-attention mechanism total weight are spliced, so as to obtain the 2 input feature vectors of the initial multi-task weight determination model.
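For illustration only (component sizes assumed), the splicing described above amounts to concatenating the four vectors; the short-term input feature vector is built the same way from the short-term components.

```python
import torch

# Assumed component vectors (batch of 8): user basic attribute feature coding vector,
# candidate commodity feature coding vector, historical long-term state vector,
# and long-term self-attention mechanism total weight.
user_code, item_code = torch.randn(8, 64), torch.randn(8, 64)
h_long, att_long = torch.randn(8, 64), torch.randn(8, 64)

# Long-term input feature vector of the multi-task weight determination model.
input_long = torch.cat([user_code, item_code, h_long, att_long], dim=-1)  # (8, 256)
```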
Specifically, three network structures (fully connected layers) are built on top of the input feature vectors. Two of them (the expert network structures) correspond to the two learning tasks respectively, and the other (the shared network structure) is used to extract the common knowledge between the two learning tasks. Further, the output of the shared network is fed into the two expert networks through two fully connected layers. The softmax layers of the two expert networks finally output two prediction results. Then, the loss functions of the two expert networks are weighted and summed to obtain the overall loss of the network, so as to achieve the effect of joint training, and the multi-task weight determination model is obtained through training.
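The following is a hedged sketch of the two-task structure described above: two expert towers, one shared tower whose output is fed into both through fully connected layers, softmax-based predictions, and a weighted sum of the two task losses as the overall loss. The hidden sizes, the per-task loss (cross-entropy here), the loss weights, and the way the shared output is combined with each expert tower (simple addition) are all assumptions, not details given by the patent.

```python
import torch
import torch.nn as nn

class MultiTaskWeightModel(nn.Module):
    def __init__(self, input_dim, hidden_dim=128, num_classes=2):
        super().__init__()
        # Two expert towers (one per learning task) and one shared tower.
        self.expert1 = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.expert2 = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.shared = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        # Fully connected layers feeding the shared output into each expert head.
        self.shared_to_1 = nn.Linear(hidden_dim, hidden_dim)
        self.shared_to_2 = nn.Linear(hidden_dim, hidden_dim)
        self.head1 = nn.Linear(hidden_dim, num_classes)  # softmax applied inside the loss
        self.head2 = nn.Linear(hidden_dim, num_classes)

    def forward(self, x_long, x_short):
        # x_long / x_short: the two spliced input feature vectors; routing one to each
        # expert tower (and the shared tower to both) is an assumption of this sketch.
        out1 = self.head1(self.expert1(x_long) + self.shared_to_1(self.shared(x_long)))
        out2 = self.head2(self.expert2(x_short) + self.shared_to_2(self.shared(x_short)))
        return out1, out2

# Joint training sketch: the overall loss is a weighted sum of the two task losses.
model = MultiTaskWeightModel(input_dim=256)
criterion = nn.CrossEntropyLoss()                      # assumed per-task loss
x_long, x_short = torch.randn(8, 256), torch.randn(8, 256)
y_click, y_buy = torch.randint(0, 2, (8,)), torch.randint(0, 2, (8,))
out1, out2 = model(x_long, x_short)
loss = 0.5 * criterion(out1, y_click) + 0.5 * criterion(out2, y_buy)  # assumed weights
loss.backward()
```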
It will be appreciated that if the number of user scenario tasks is 3, the model may include 3 expert network structures and one shared network structure, and output results for the 3 tasks.
According to the technical scheme, the current user characteristic description information of the target user is obtained in real time, and the number of user scene tasks corresponding to the target user is obtained; respectively carrying out feature coding processing on the basic attribute feature of the current user, the behavior sequence feature of the current user and the feature of the current candidate commodity to obtain a basic attribute feature coding vector of the current user, a behavior sequence feature coding vector of the current user and a feature coding vector of the current candidate commodity; inputting the current user basic attribute feature coding vector, the current user behavior sequence feature coding vector and the current candidate commodity feature coding vector into a pre-constructed multi-task weight determining model to obtain task weights matched with the task number of a user scene; and respectively sequencing the task weights to obtain a multi-task sequencing result corresponding to the target user so as to realize recommendation display of the user according to the multi-task sequencing result. The method solves the problem that the efficiency is low because the multitasking is processed and each task can only be modeled one by one, and improves the efficiency and accuracy of recommending to users.
Example two
Fig. 2 is a schematic structural diagram of a multitasking device based on an attention mechanism according to a second embodiment of the present invention. The attention mechanism-based multitasking device provided in the embodiment of the invention may be implemented by software and/or hardware, and may be configured in a terminal device or a server to implement an attention mechanism-based multitasking method in the embodiment of the invention. As shown in fig. 2, the apparatus includes: the current user feature description information and user scene task number acquisition module 210, the feature encoding processing module 220, the task weight determination module 230 and the multi-task ranking result determination module 240.
The current user feature description information and user scene task number acquisition module 210 is configured to acquire current user feature description information of a target user in real time, and acquire the number of user scene tasks corresponding to the target user;
the current user characteristic description information comprises a current user basic attribute characteristic, a current user behavior sequence characteristic and a current candidate commodity characteristic; the number of the user scene tasks is more than or equal to 2;
the feature encoding processing module 220 is configured to perform feature encoding processing on the current user basic attribute feature, the current user behavior sequence feature, and the current candidate commodity feature, to obtain a current user basic attribute feature encoding vector, a current user behavior sequence feature encoding vector, and a current candidate commodity feature encoding vector;
The task weight determining module 230 is configured to input the current user basic attribute feature encoding vector, the current user behavior sequence feature encoding vector, and the current candidate commodity feature encoding vector into a pre-constructed multi-task weight determining model, so as to obtain task weights that are matched with the task number of the user scene;
the multi-task sequencing result determining module 240 is configured to sequence each task weight respectively to obtain a multi-task sequencing result corresponding to the target user, so as to implement recommendation display for the user according to the multi-task sequencing result.
According to the technical scheme, the current user characteristic description information of the target user is obtained in real time, and the number of user scene tasks corresponding to the target user is obtained; respectively carrying out feature coding processing on the basic attribute feature of the current user, the behavior sequence feature of the current user and the feature of the current candidate commodity to obtain a basic attribute feature coding vector of the current user, a behavior sequence feature coding vector of the current user and a feature coding vector of the current candidate commodity; inputting the current user basic attribute feature coding vector, the current user behavior sequence feature coding vector and the current candidate commodity feature coding vector into a pre-constructed multi-task weight determining model to obtain task weights matched with the task number of a user scene; and respectively sequencing the task weights to obtain a multi-task sequencing result corresponding to the target user so as to realize recommendation display of the user according to the multi-task sequencing result. The method solves the problem that the efficiency is low because the multitasking is processed and each task can only be modeled one by one, and improves the efficiency and accuracy of recommending to users.
Based on the above embodiments, the determining module 240 of the multi-task ordering result may be specifically configured to: sequencing the task weights respectively to obtain task weight sequencing results corresponding to the task weights respectively; judging whether at least one target task limiting weight threshold exists, if so, acquiring a preset target task limiting weight threshold, and judging whether the task weight of the target task in each task is larger than or equal to the target task limiting weight threshold; if yes, combining task weight sequencing results corresponding to the task weights meeting the requirements and task weight sequencing results corresponding to at least one residual task weight respectively to obtain a multi-task sequencing result corresponding to the target user.
Based on the foregoing embodiments, the determining module 240 of the multi-task ordering result may be further specifically configured to: and after judging whether at least one target task limiting weight threshold exists, if the at least one target task limiting weight threshold does not exist, combining the task weight sequencing results to obtain a multi-task sequencing result corresponding to the target user.
On the basis of the above embodiments, the feature code processing module 220 may specifically be configured to: the current user basic attribute characteristics comprise current user basic attribute numerical characteristics and current user basic attribute discrete characteristics; the current user behavior sequence features comprise current user behavior long-term sequence features and current user behavior short-term sequence features; the current candidate commodity characteristics comprise current candidate commodity numerical characteristics and current candidate commodity discrete characteristics;
on the basis of the foregoing embodiments, the feature code processing module 220 may be specifically further configured to: performing feature conversion on the current user basic attribute numerical type feature to obtain a current user basic attribute conversion discrete type feature; performing feature conversion on the current candidate commodity numerical type feature to obtain a current candidate commodity conversion discrete type feature; and carrying out feature coding processing on the current user basic attribute conversion discrete feature, the current user basic attribute discrete feature, the current candidate commodity conversion discrete feature and the current candidate commodity discrete feature to obtain a current user basic attribute feature coding vector and a current candidate commodity feature coding vector.
Based on the above embodiments, the multi-task weight determination model training module may be specifically configured to: before the current user characteristic description information of a target user and the number of user scene tasks corresponding to the target user are obtained in real time, historical user characteristic description information is obtained, and the number of historical user scene tasks corresponding to each historical target user is respectively obtained; the historical user characteristic description information comprises a historical user basic attribute characteristic, a historical user behavior sequence characteristic and a historical candidate commodity characteristic; the method comprises the steps of performing feature coding processing on historical user basic attribute features, historical user behavior sequence features and historical candidate commodity features, inputting obtained results into an initial multi-task weight determining model to perform model training, and performing model parameter optimization through a loss function to train and complete the multi-task weight determining model.
On the basis of the above embodiments, the historical user behavior sequence features include a historical user behavior long-term sequence feature and a historical user behavior short-term sequence feature; the historical user behavior long-term sequence feature comprises at least one historical user behavior long-term sequence sub-feature; the historical user behavior short-term sequence feature includes at least one historical user behavior short-term sequence sub-feature.
Based on the above embodiments, the multi-task weight determination model training module may be further specifically configured to: generate state vectors for each historical user behavior long-term sequence sub-feature and each historical user behavior short-term sequence sub-feature through the GRU network in the initial multi-task weight determination model, so as to obtain a historical long-term state vector and a historical short-term state vector; acquire the historical candidate commodity feature coding vector obtained through feature coding processing, and process the historical candidate commodity feature coding vector together with each historical user behavior long-term sequence sub-feature and each historical user behavior short-term sequence sub-feature respectively through the activation network in the initial multi-task weight determination model, so as to obtain each historical long-term weight value and each historical short-term weight value; calculate a long-term self-attention mechanism total weight and a short-term self-attention mechanism total weight according to each historical long-term weight value, each historical short-term weight value, each historical user behavior long-term sequence sub-feature and each historical user behavior short-term sequence sub-feature; and splice the historical user basic attribute feature coding vector and the historical candidate commodity feature coding vector obtained through feature coding with the historical long-term state vector, the historical short-term state vector, the long-term self-attention mechanism total weight and the short-term self-attention mechanism total weight, and input the spliced results into the plurality of expert network structures and the shared network structure in the initial multi-task weight determination model to train the model.
Based on the above embodiments, the multi-task weight determination model training module may be further specifically configured to: performing corresponding multiplication processing operation according to each historical long-term weight value and each historical user behavior long-term sequence sub-feature to obtain a long-term self-attention mechanism total weight; and carrying out corresponding multiplication processing operation according to each historical short-term weight value and each historical user behavior short-term sequence sub-feature to obtain the total weight of the short-term self-attention mechanism.
The attention mechanism-based multitasking device provided by the embodiment of the invention can execute the attention mechanism-based multitasking method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
Example III
Fig. 3 shows a schematic diagram of the structure of an electronic device 10 that may be used to implement a third embodiment of the invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic equipment may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 3, the electronic device 10 includes at least one processor 11, and a memory, such as a Read Only Memory (ROM) 12, a Random Access Memory (RAM) 13, etc., communicatively connected to the at least one processor 11, in which the memory stores a computer program executable by the at least one processor, and the processor 11 may perform various appropriate actions and processes according to the computer program stored in the Read Only Memory (ROM) 12 or the computer program loaded from the storage unit 18 into the Random Access Memory (RAM) 13. In the RAM 13, various programs and data required for the operation of the electronic device 10 may also be stored. The processor 11, the ROM 12 and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to bus 14.
Various components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, etc.; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The processor 11 performs the various methods and processes described above, such as a multitasking method based on an attention mechanism.
In some embodiments, a method of attention mechanism based multitasking may be implemented as a computer program tangibly embodied on a computer readable storage medium, such as storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into RAM 13 and executed by processor 11, one or more steps of the attention-based mechanism-based multitasking method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform a method of attention-based multitasking by any other suitable means (e.g., by means of firmware).
The method comprises the following steps: acquiring current user characteristic description information of a target user in real time, and acquiring the number of user scene tasks corresponding to the target user; the current user characteristic description information comprises a current user basic attribute characteristic, a current user behavior sequence characteristic and a current candidate commodity characteristic; the number of the user scene tasks is more than or equal to 2; performing feature coding processing on the current user basic attribute feature, the current user behavior sequence feature and the current candidate commodity feature respectively to obtain a current user basic attribute feature coding vector, a current user behavior sequence feature coding vector and a current candidate commodity feature coding vector; inputting the current user basic attribute feature code vector, the current user behavior sequence feature code vector and the current candidate commodity feature code vector into a pre-constructed multi-task weight determining model to obtain task weights matched with the task number of the user scene; and respectively sequencing the task weights to obtain a multi-task sequencing result corresponding to the target user so as to realize recommendation display of the user according to the multi-task sequencing result.
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs, where the one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host, and is a host product in a cloud computing service system, so that the defects of high management difficulty and weak service expansibility in the traditional physical hosts and VPS service are overcome.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention are achieved, and the present invention is not limited herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.
Example IV
A fourth embodiment of the present invention also provides a computer-readable storage medium containing computer-readable instructions, which when executed by a computer processor, are configured to perform a method of attention-based multitasking, the method comprising: acquiring current user characteristic description information of a target user in real time, and acquiring the number of user scene tasks corresponding to the target user; the current user characteristic description information comprises a current user basic attribute characteristic, a current user behavior sequence characteristic and a current candidate commodity characteristic; the number of the user scene tasks is more than or equal to 2; performing feature coding processing on the current user basic attribute feature, the current user behavior sequence feature and the current candidate commodity feature respectively to obtain a current user basic attribute feature coding vector, a current user behavior sequence feature coding vector and a current candidate commodity feature coding vector; inputting the current user basic attribute feature code vector, the current user behavior sequence feature code vector and the current candidate commodity feature code vector into a pre-constructed multi-task weight determining model to obtain task weights matched with the task number of the user scene; and respectively sequencing the task weights to obtain a multi-task sequencing result corresponding to the target user so as to realize recommendation display of the user according to the multi-task sequencing result.
Of course, in the computer-readable storage medium provided by the embodiment of the present invention, the computer-executable instructions are not limited to the method operations described above, and may also perform related operations in the attention mechanism-based multi-task sequencing method according to any embodiment of the present invention.
From the above description of embodiments, it will be clear to a person skilled in the art that the present invention may be implemented by means of software plus necessary general-purpose hardware, and of course also by hardware alone, although in many cases the former is the preferred embodiment. Based on such understanding, the technical solution of the present invention, in essence or the part contributing to the prior art, may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH memory (FLASH), a hard disk or an optical disk of a computer, and which includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments of the present invention.
It should be noted that, in the above-mentioned embodiment of the attention mechanism-based multi-task sequencing apparatus, the units and modules included are divided only according to functional logic, but are not limited to the above division, as long as the corresponding functions can be implemented; in addition, the specific names of the functional units are only used to distinguish them from each other, and are not used to limit the protection scope of the present invention.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (10)

1. A method for attention mechanism-based multi-task sequencing, comprising:
acquiring current user characteristic description information of a target user in real time, and acquiring the number of user scene tasks corresponding to the target user;
the current user characteristic description information comprises a current user basic attribute feature, a current user behavior sequence feature and a current candidate commodity feature; the number of user scene tasks is greater than or equal to 2;
performing feature coding processing on the current user basic attribute feature, the current user behavior sequence feature and the current candidate commodity feature respectively to obtain a current user basic attribute feature coding vector, a current user behavior sequence feature coding vector and a current candidate commodity feature coding vector;
inputting the current user basic attribute feature coding vector, the current user behavior sequence feature coding vector and the current candidate commodity feature coding vector into a pre-constructed multi-task weight determination model to obtain task weights matching the number of user scene tasks;
and respectively sequencing the task weights to obtain a multi-task sequencing result corresponding to the target user, so as to realize recommendation display for the user according to the multi-task sequencing result.
2. The method of claim 1, wherein the respectively sequencing the task weights to obtain a multi-task sequencing result corresponding to the target user comprises:
sequencing the task weights respectively to obtain task weight sequencing results corresponding to the task weights respectively;
judging whether at least one target task limiting weight threshold exists; if so, acquiring the preset target task limiting weight threshold, and judging whether the task weight of the target task in each task is greater than or equal to the target task limiting weight threshold;
if yes, combining the task weight sequencing results corresponding to the task weights meeting the requirement and the task weight sequencing results respectively corresponding to at least one remaining task weight, to obtain a multi-task sequencing result corresponding to the target user.
3. The method of claim 2, further comprising, after the judging whether at least one target task limiting weight threshold exists:
if not, combining the task weight sequencing results to obtain a multi-task sequencing result corresponding to the target user.
4. The method according to claim 3, wherein the current user basic attribute features include a current user basic attribute numerical feature and a current user basic attribute discrete feature; the current user behavior sequence features comprise current user behavior long-term sequence features and current user behavior short-term sequence features; the current candidate commodity features comprise a current candidate commodity numerical feature and a current candidate commodity discrete feature;
and respectively carrying out feature coding processing on the current user basic attribute feature and the current candidate commodity feature to obtain a current user basic attribute feature coding vector and a current candidate commodity feature coding vector, wherein the feature coding processing comprises the following steps:
performing feature conversion on the current user basic attribute numerical feature to obtain a current user basic attribute conversion discrete feature;
performing feature conversion on the current candidate commodity numerical feature to obtain a current candidate commodity conversion discrete feature;
and carrying out feature coding processing on the current user basic attribute conversion discrete feature, the current user basic attribute discrete feature, the current candidate commodity conversion discrete feature and the current candidate commodity discrete feature to obtain a current user basic attribute feature coding vector and a current candidate commodity feature coding vector.
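As an informal illustration of the feature conversion and feature coding processing recited in claim 4, the sketch below buckets a numerical feature into a discrete feature and then embeds the discrete features into a feature coding vector. Bucketing plus embedding tables is one common realization and is assumed here for illustration only; the boundary values, vocabulary sizes and embedding dimensions are assumptions of this example, not features of the claim.

```python
import torch
import torch.nn as nn

# Illustrative bucket boundaries for one numerical feature (e.g. price);
# real boundaries would be chosen from the data, not prescribed by the claim.
PRICE_BOUNDARIES = torch.tensor([10.0, 50.0, 100.0, 500.0])

def numerical_to_discrete(values: torch.Tensor, boundaries: torch.Tensor) -> torch.Tensor:
    """Feature conversion: map a numerical feature to a discrete bucket id."""
    return torch.bucketize(values, boundaries)

class FeatureEncoder(nn.Module):
    """Feature coding: embed converted and native discrete features, then
    concatenate them into one feature coding vector."""
    def __init__(self, vocab_sizes, emb_dim: int = 16):
        super().__init__()
        self.tables = nn.ModuleList([nn.Embedding(v, emb_dim) for v in vocab_sizes])

    def forward(self, discrete_ids: torch.Tensor) -> torch.Tensor:
        # discrete_ids: (batch, num_fields); one embedding table per field.
        parts = [table(discrete_ids[:, i]) for i, table in enumerate(self.tables)]
        return torch.cat(parts, dim=-1)

# Example: one converted numerical field plus one native discrete field.
price = torch.tensor([12.0, 480.0])
ids = torch.stack([numerical_to_discrete(price, PRICE_BOUNDARIES),
                   torch.tensor([3, 7])], dim=1)
encoder = FeatureEncoder(vocab_sizes=[len(PRICE_BOUNDARIES) + 1, 10])
item_coding_vector = encoder(ids)   # shape: (2, 32)
```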
5. The method of claim 1, further comprising, prior to the acquiring current user characteristic description information of the target user in real time and acquiring the number of user scene tasks corresponding to the target user:
acquiring historical user characteristic description information, and respectively acquiring the number of historical user scene tasks corresponding to each historical target user;
the historical user characteristic description information comprises a historical user basic attribute characteristic, a historical user behavior sequence characteristic and a historical candidate commodity characteristic;
and respectively performing feature coding processing on the historical user basic attribute feature, the historical user behavior sequence feature and the historical candidate commodity feature, inputting the obtained results into an initial multi-task weight determination model for model training, and performing model parameter optimization through a loss function to obtain the trained multi-task weight determination model.
6. The method of claim 5, wherein the historical user behavior sequence features include historical user behavior long-term sequence features and historical user behavior short-term sequence features; the historical user behavior long-term sequence feature comprises at least one historical user behavior long-term sequence sub-feature; the historical user behavior short-term sequence feature comprises at least one historical user behavior short-term sequence sub-feature;
the respectively performing feature coding processing on the historical user basic attribute feature, the historical user behavior sequence feature and the historical candidate commodity feature, and inputting the obtained results into the initial multi-task weight determination model for model training, comprises:
carrying out state vector generation on each historical user behavior long-term sequence sub-feature and each historical user behavior short-term sequence sub-feature through a gated recurrent unit (GRU) network in the initial multi-task weight determination model to respectively obtain a historical long-term state vector and a historical short-term state vector;
acquiring a historical candidate commodity feature coding vector obtained through the feature coding processing, and processing the historical candidate commodity feature coding vector, each historical user behavior long-term sequence sub-feature and each historical user behavior short-term sequence sub-feature respectively through an activation network in the initial multi-task weight determination model to obtain each historical long-term weight value and each historical short-term weight value;
calculating to obtain a long-term self-attention mechanism total weight and a short-term self-attention mechanism total weight according to each historical long-term weight value, each historical short-term weight value, each historical user behavior long-term sequence sub-feature and each historical user behavior short-term sequence sub-feature;
and splicing the historical user basic attribute feature coding vector and the historical candidate commodity feature coding vector obtained through the feature coding processing with the historical long-term state vector, the historical short-term state vector, the long-term self-attention mechanism total weight and the short-term self-attention mechanism total weight, and performing model training through a plurality of expert network structures and a shared network structure in the initial multi-task weight determination model.
7. The method of claim 6, wherein said calculating a long-term self-attention mechanism total weight and a short-term self-attention mechanism total weight based on each of said historical long-term weight values and each of said historical short-term weight values, and each of said historical user behavior long-term sequence sub-features and each of said historical user behavior short-term sequence sub-features comprises:
performing a corresponding multiplication operation according to each historical long-term weight value and each historical user behavior long-term sequence sub-feature to obtain the long-term self-attention mechanism total weight;
and performing a corresponding multiplication operation according to each historical short-term weight value and each historical user behavior short-term sequence sub-feature to obtain the short-term self-attention mechanism total weight.
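As an informal illustration of the GRU-based state vector generation in claim 6 and the weight-value multiplication that yields the self-attention mechanism total weight in claim 7, the sketch below uses PyTorch for a single behavior sequence (the long-term and short-term sequences would each use one such module). The layer sizes, the two-layer activation network and the softmax normalization of the weight values are assumptions of this example and are not required by the claims.

```python
import torch
import torch.nn as nn

class SequenceAttentionPooling(nn.Module):
    """GRU state vector plus candidate-conditioned attention pooling over
    one behavior sequence of sub-features."""
    def __init__(self, feat_dim: int, item_dim: int, hidden_dim: int = 32):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        # Activation network: scores each sequence sub-feature against the
        # candidate commodity feature coding vector (architecture assumed).
        self.act = nn.Sequential(
            nn.Linear(feat_dim + item_dim, 32), nn.ReLU(), nn.Linear(32, 1)
        )

    def forward(self, seq: torch.Tensor, item_vec: torch.Tensor):
        # seq: (batch, steps, feat_dim); item_vec: (batch, item_dim)
        _, h_n = self.gru(seq)                      # historical state vector
        state_vec = h_n.squeeze(0)                  # (batch, hidden_dim)

        item_rep = item_vec.unsqueeze(1).expand(-1, seq.size(1), -1)
        scores = self.act(torch.cat([seq, item_rep], dim=-1))   # weight values
        weights = torch.softmax(scores, dim=1)      # normalization is an assumption

        # Multiply each weight value with its sub-feature and aggregate,
        # giving the self-attention mechanism total weight vector.
        total = (weights * seq).sum(dim=1)          # (batch, feat_dim)
        return state_vec, total

# Usage: the two outputs would then be spliced with the other feature
# coding vectors before entering the expert and shared network structures.
pool = SequenceAttentionPooling(feat_dim=16, item_dim=32)
state_vec, attn_total = pool(torch.randn(4, 20, 16), torch.randn(4, 32))
```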
8. An attention mechanism-based multi-task sequencing apparatus, comprising:
a current user characteristic description information and user scene task number acquisition module, which is used for acquiring current user characteristic description information of a target user in real time and acquiring the number of user scene tasks corresponding to the target user;
the current user characteristic description information comprises a current user basic attribute feature, a current user behavior sequence feature and a current candidate commodity feature; the number of user scene tasks is greater than or equal to 2;
the feature coding processing module is used for respectively carrying out feature coding processing on the current user basic attribute feature, the current user behavior sequence feature and the current candidate commodity feature to obtain a current user basic attribute feature coding vector, a current user behavior sequence feature coding vector and a current candidate commodity feature coding vector;
the task weight determining module is used for inputting the current user basic attribute feature coding vector, the current user behavior sequence feature coding vector and the current candidate commodity feature coding vector into a pre-constructed multi-task weight determination model to obtain task weights matching the number of user scene tasks;
the multi-task sequencing result determining module is used for sequencing each task weight respectively to obtain a multi-task sequencing result corresponding to the target user, so as to realize recommendation display for the user according to the multi-task sequencing result.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the attention mechanism-based multi-task sequencing method according to any one of claims 1-7 when executing the computer program.
10. A computer-readable storage medium storing computer instructions for causing a processor to implement the attention mechanism-based multi-task sequencing method according to any one of claims 1-7 when executed.
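Finally, as an informal illustration of the "plurality of expert network structures and shared network structure" through which claim 6 performs model training, the sketch below follows a common gated multi-expert layout in the spirit of MMoE over the spliced feature vector. The number of experts, layer widths, gating scheme and per-task tower heads are assumptions of this example, not limitations of the claims.

```python
import torch
import torch.nn as nn

class MultiTaskWeightModel(nn.Module):
    """Shared experts plus per-task gates over the spliced feature vector,
    producing one task weight per user scene task (MMoE-style sketch)."""
    def __init__(self, in_dim: int, num_tasks: int, num_experts: int = 4, hidden: int = 64):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU()) for _ in range(num_experts)]
        )
        # One gate per task mixes the shared experts for that task.
        self.gates = nn.ModuleList([nn.Linear(in_dim, num_experts) for _ in range(num_tasks)])
        self.towers = nn.ModuleList([nn.Linear(hidden, 1) for _ in range(num_tasks)])

    def forward(self, spliced: torch.Tensor) -> torch.Tensor:
        # spliced: concatenation of user, commodity, state and attention vectors.
        expert_out = torch.stack([e(spliced) for e in self.experts], dim=1)  # (B, E, H)
        task_weights = []
        for gate, tower in zip(self.gates, self.towers):
            mix = torch.softmax(gate(spliced), dim=-1).unsqueeze(-1)         # (B, E, 1)
            task_weights.append(tower((expert_out * mix).sum(dim=1)))        # (B, 1)
        return torch.cat(task_weights, dim=-1)                               # (B, num_tasks)

# Usage: one weight per task for each candidate; the per-task weights are
# then sequenced to form the multi-task sequencing result.
model = MultiTaskWeightModel(in_dim=128, num_tasks=2)
weights = model(torch.randn(8, 128))
```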
CN202311792845.4A 2023-12-25 2023-12-25 Attention mechanism-based multitasking method, device, equipment and medium Pending CN117808545A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311792845.4A CN117808545A (en) 2023-12-25 2023-12-25 Attention mechanism-based multitasking method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311792845.4A CN117808545A (en) 2023-12-25 2023-12-25 Attention mechanism-based multitasking method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN117808545A true CN117808545A (en) 2024-04-02

Family

ID=90421023

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311792845.4A Pending CN117808545A (en) 2023-12-25 2023-12-25 Attention mechanism-based multitasking method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN117808545A (en)

Similar Documents

Publication Publication Date Title
CN112561077B (en) Training method and device of multi-task model and electronic equipment
CN114240555A (en) Click rate prediction model training method and device and click rate prediction method and device
CN115202847A (en) Task scheduling method and device
CN116629620B (en) Risk level determining method and device, electronic equipment and storage medium
CN117333076A (en) Evaluation method, device, equipment and medium based on mixed expert model
CN115907926A (en) Commodity recommendation method and device, electronic equipment and storage medium
CN115293291B (en) Training method and device for sequencing model, sequencing method and device, electronic equipment and medium
CN115827994A (en) Data processing method, device, equipment and storage medium
CN114169418B (en) Label recommendation model training method and device and label acquisition method and device
CN113761379B (en) Commodity recommendation method and device, electronic equipment and medium
CN111768218A (en) Method and device for processing user interaction information
CN117808545A (en) Attention mechanism-based multitasking method, device, equipment and medium
CN114297511A (en) Financing recommendation method, device, system and storage medium
CN114037060A (en) Pre-training model generation method and device, electronic equipment and storage medium
CN113010782A (en) Demand amount acquisition method and device, electronic equipment and computer readable medium
CN113051472B (en) Modeling method, device, equipment and storage medium of click through rate estimation model
CN114037057B (en) Pre-training model generation method and device, electronic equipment and storage medium
CN114547417A (en) Media resource ordering method and electronic equipment
CN117743693A (en) Information recommendation method, device, equipment and storage medium
CN115719056A (en) Processing method, device, equipment, storage medium and product based on scoring template
CN115640461A (en) Product recommendation method and device, electronic equipment and storage medium thereof
CN117807287A (en) Label fusion method, device, electronic equipment and storage medium
CN117651167A (en) Resource recommendation method, device, equipment and storage medium
CN115170329A (en) Investment benefit evaluation method for science and technology project
CN114841747A (en) Attribute analysis system, training method, attribute analysis method, and attribute analysis device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination