CN110727797A - Label generation method and device, electronic equipment and computer readable medium - Google Patents

Label generation method and device, electronic equipment and computer readable medium Download PDF

Info

Publication number
CN110727797A
CN110727797A (application CN201910877832.4A)
Authority
CN
China
Prior art keywords
target
taste
document
distribution
label
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201910877832.4A
Other languages
Chinese (zh)
Inventor
马玉昆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sankuai Online Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd filed Critical Beijing Sankuai Online Technology Co Ltd
Priority to CN201910877832.4A
Publication of CN110727797A
Legal status: Withdrawn


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval of unstructured textual data
    • G06F 16/35 Clustering; Classification
    • G06F 16/33 Querying
    • G06F 16/3331 Query processing
    • G06F 16/334 Query execution
    • G06F 16/3344 Query execution using natural language analysis
    • G06F 16/38 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/383 Retrieval characterised by using metadata automatically derived from the content
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/95 Retrieval from the web
    • G06F 16/953 Querying, e.g. by the use of web search engines
    • G06F 16/9535 Search customisation based on user profiles and personalisation
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/088 Non-supervised learning, e.g. competitive learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Library & Information Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiment of the application discloses a label generation method and apparatus, an electronic device, and a computer readable medium. An embodiment of the method comprises: acquiring user behavior data of a plurality of users; extracting target words from the user behavior data of each user and acquiring the taste label of each target word, wherein the target words comprise names of shop dishes and/or names of merchants; aggregating the extracted target words into a document corresponding to each user, taking the taste labels as topics, and fitting a topic model based on the documents and the taste labels, wherein the topic model comprises the taste label distribution of each document; and determining the target taste label of the user corresponding to each document based on the taste label distribution of that document. This embodiment improves the accuracy of the taste labels.

Description

Label generation method and device, electronic equipment and computer readable medium
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a label generation method, a label generation device, electronic equipment and a computer readable medium.
Background
In user comment and dish ordering scenarios, the user taste label is one of the important means for accurately depicting a user's preferences. By accurately mining a user's taste labels, more relevant shop dishes or merchants can be recommended to the user, thereby improving the user's ordering experience.
In existing methods, the taste labels of shop dishes or merchants are generally obtained directly, and the taste labels of the shop dishes or merchants on which the user has generated behaviors (such as dish ordering, order placing, liking, collecting, and the like) are then used as the taste labels of the user. However, this approach tends to produce a large number of very similar taste labels for each user, so the resulting taste labels are not accurate enough.
Disclosure of Invention
The embodiment of the application provides a label generation method and apparatus, an electronic device, and a computer readable medium, which are used for making taste labels more targeted.
In a first aspect, an embodiment of the present application provides a tag generation method, where the method includes: acquiring user behavior data of a plurality of users; extracting target words from the user behavior data of each user and acquiring the taste label of each target word, wherein the target words comprise names of shop dishes and/or names of merchants; aggregating the extracted target words into a document corresponding to each user, taking the taste labels as topics, and fitting a topic model based on the documents and the taste labels, wherein the topic model comprises the taste label distribution of each document; and determining the target taste label of the user corresponding to each document based on the taste label distribution of that document.
In a second aspect, an embodiment of the present application provides a tag generation apparatus, including: an acquisition unit configured to acquire user behavior data of a plurality of users; a summarizing unit configured to extract target words from the user behavior data of each user and acquire the taste label of each target word, wherein the target words comprise names of shop dishes and/or names of merchants; a fitting unit configured to fit a topic model based on the documents and the taste labels, with the taste labels as topics, wherein the topic model comprises the taste label distribution of each document; and a determining unit configured to determine the target taste label of the user corresponding to each document based on the taste label distribution of that document.
In a third aspect, an embodiment of the present application provides a tag generation method, where the method includes: acquiring target user behavior data of a target user; extracting target words from the target user behavior data and aggregating the extracted target words into a target document, wherein the target words comprise names of shop dishes and/or names of merchants; determining the topic distribution of the target document based on the target word distribution of the target document and the target word distributions of the topics in a pre-fitted topic model, wherein the topics in the topic model are taste labels; and determining the target taste label of the target user based on the topic distribution of the target document.
In a fourth aspect, an embodiment of the present application provides a tag generation apparatus, where the apparatus includes: an acquisition unit configured to acquire target user behavior data of a target user; a summarizing unit configured to extract target words from the target user behavior data and aggregate the extracted target words into a target document, wherein the target words comprise names of shop dishes and/or names of merchants; a first determining unit configured to determine the topic distribution of the target document based on the target word distribution of the target document and the target word distributions of the topics in a pre-fitted topic model, wherein the topics in the topic model are taste labels; and a second determining unit configured to determine the target taste label of the target user based on the topic distribution of the target document.
In a fifth aspect, an embodiment of the present application provides an electronic device, including: one or more processors; a storage device having one or more programs stored thereon, which when executed by one or more processors, cause the one or more processors to implement the label generation method described in the above first or third aspect.
In a sixth aspect, the present application provides a computer-readable medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the tag generation method described in the first aspect or the third aspect.
According to the tag generation method and apparatus, the electronic device, and the computer readable medium, user behavior data of a plurality of users are obtained, target words are extracted from each user's behavior data, and the taste label of each target word is acquired. Here, the target words include shop dish names and/or merchant names. The extracted target words are then aggregated into a document corresponding to each user, the taste labels are taken as topics, and a topic model is fitted based on the documents and the taste labels, the topic model comprising the taste label distribution of each document. Finally, the target taste label of the user corresponding to each document is determined based on that document's taste label distribution. The taste label distribution of each document is thus fitted through the topic model. Because the topic model can fit the topics (namely the taste labels) of a document accurately and with good specificity, the taste labels determined for each user are more targeted.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is a flow diagram of one embodiment of a label generation method according to the present application;
FIG. 2 is a flow diagram of one embodiment of a label generation method according to the present application;
FIG. 3 is a flow diagram of one embodiment of a label generation method according to the present application;
FIG. 4 is a schematic block diagram of one embodiment of a label generation apparatus according to the present application;
FIG. 5 is a schematic block diagram of one embodiment of a label generation apparatus according to the present application;
FIG. 6 is a schematic block diagram of a computer system suitable for use in implementing an electronic device according to embodiments of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Referring to fig. 1, a flow 100 of one embodiment of a tag generation method according to the present application is shown. The label generation method comprises the following steps:
step 101, user behavior data of a plurality of users is obtained.
In this embodiment, an execution body of the tag generation method (e.g., an electronic device such as a server) may acquire user behavior data of a plurality of users. The user behavior data may be behavior data generated when a user uses a specific client (e.g., a client with functions such as food review and dish ordering).
Here, the user behavior data may be generated when the user performs a user behavior. The user behavior may include, but is not limited to, dish ordering behavior, order placing behavior, comment behavior, browsing behavior, collection behavior, forwarding behavior, like behavior, and the like. In practice, the user behavior data may include, but is not limited to, the names of the shop dishes, the names of the merchants, and the like involved in the user behavior. For example, a user places an order for shop dish B and shop dish C among the dishes operated by merchant A. In this case, the user behavior data corresponding to that user behavior may include the name of merchant A, the name of shop dish B, and the name of shop dish C.
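For illustration, a minimal sketch of how one such piece of user behavior data might be represented; the field names and values are assumptions made for the example above, not structures defined by this disclosure.

```python
# Hypothetical representation of one piece of user behavior data.
# Field names are illustrative assumptions, not defined by the disclosure.
order_event = {
    "user_id": "u_001",
    "behavior": "order_placing",   # e.g. dish ordering, order placing, comment, like ...
    "merchant_name": "Merchant A",
    "shop_dish_names": ["Shop dish B", "Shop dish C"],
    "timestamp": "2019-09-01T12:30:00",
}

# The target words extracted from this record are the merchant name
# plus the shop dish names involved in the behavior.
target_words = [order_event["merchant_name"]] + order_event["shop_dish_names"]
print(target_words)  # ['Merchant A', 'Shop dish B', 'Shop dish C']
```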
And 102, extracting target words from the user behavior data of each user respectively, and acquiring the taste labels of the target words.
In this embodiment, the execution body may extract target words from the user behavior data of each user and acquire the taste label of each target word. A target word may be the name of a shop dish and/or the name of a merchant.
In one scenario, the operator of a merchant may manually set taste labels for the merchant and the shop dishes it operates. In this case, the execution body may directly use the taste labels set by the operator as the taste labels of the corresponding target words.
As an example, a merchant A (e.g., named "XX Sichuan Restaurant") operates Sichuan cuisine, and its operator may set the taste label of merchant A to "spicy". Merchant A offers the shop dishes "Kung Pao chicken" and "brown sugar glutinous rice cake", and the operator may set the taste label of "Kung Pao chicken" to "spicy" and the taste label of "brown sugar glutinous rice cake" to "sweet". In this case, the taste labels of the target words "XX Sichuan Restaurant" and "Kung Pao chicken" obtained by the execution body are "spicy", and the taste label of the target word "brown sugar glutinous rice cake" is "sweet".
In another scenario, the execution body may query the taste of each target word from the Internet (e.g., a recipe website) and use the queried taste as its taste label.
Here, the taste label may be obtained in various ways, and is not limited to the described ways in the above scenarios.
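For illustration, a minimal sketch of the label lookup described in the two scenarios above; the mapping, the example entries, and the external fallback callable are hypothetical.

```python
# Hypothetical mapping from target words to operator-set taste labels.
OPERATOR_LABELS = {
    "XX Sichuan Restaurant": "spicy",
    "Kung Pao chicken": "spicy",
    "brown sugar glutinous rice cake": "sweet",
}

def lookup_taste_label(target_word, external_lookup=None):
    """Return the taste label of a target word.

    Prefer the operator-set label; otherwise fall back to an external
    source (e.g. a recipe website query), represented here by an
    injected callable because no concrete API is specified.
    """
    label = OPERATOR_LABELS.get(target_word)
    if label is None and external_lookup is not None:
        label = external_lookup(target_word)
    return label

print(lookup_taste_label("Kung Pao chicken"))  # 'spicy'
```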
Step 103, summarizing the extracted target words into documents corresponding to each user, fitting a topic model based on each document and each taste label by taking the taste label as a topic, wherein the topic model comprises the distribution of the taste label of each document.
In this embodiment, the execution body may first aggregate the extracted target words into a document corresponding to each user. Each user thus corresponds to one document, and each user's document is made up of the target words involved in that user's behaviors (such as dish ordering behaviors, order placing behaviors, and the like).
The execution body may then take the taste labels as topics. Here, all taste labels may be used as topics, or a subset of the taste labels may be selected as topics as needed. In practice, when only some of the taste labels are selected as topics, the number of topics may be preset empirically.
Finally, the execution body may fit a topic model based on the documents and the taste labels. The topic model may include the taste label distribution of each document. Since the taste labels are taken as topics in the embodiment of the application, the taste label distribution of each document is the topic distribution of that document. In practice, a topic model is a statistical model that clusters the latent semantic structures of a corpus in an unsupervised manner. Topic models are mainly used for semantic analysis problems in natural language processing (NLP), such as analyzing the topics of a document.
In this embodiment, the execution body may fit the topic model iteratively once the documents and taste labels (i.e., topics) have been set. When the topic model converges (for example, when the taste label distribution of each document obtained in the current iteration is the same as or close to that obtained in the previous iteration), or when the number of iterations reaches a preset value, the fitted topic model is obtained. At this point, the topic model includes the fitted taste label distribution of each document. During the iterations, parameters of the topic model may be adjusted, which may include, but are not limited to, the number of iterations, a variance threshold for the distributions, parameters of the distributions, and the like.
Here, the execution body may fit various types of topic models. Optionally, the topic model may be any one of the following: a Latent Dirichlet Allocation (LDA) model, a Latent Semantic Analysis (LSA) model, or a Probabilistic Latent Semantic Analysis (PLSA) model. It should be noted that the topic model is not limited to the above list, and other existing topic models may also be fitted.
Taking the LDA model as an example, LDA is a generative document-topic model, also described as a three-layer Bayesian probability model comprising a word layer, a topic layer, and a document layer. The LDA model uses the bag-of-words approach, which treats each document as a word frequency vector, thereby converting text information into numerical information that is easy to model. Each document represents a probability distribution over topics, and each topic represents a probability distribution over words. The execution body may first determine the word distribution of each document, and may then iteratively fit the topic distribution of each document and the word distribution corresponding to each topic based on the word distributions of the documents. When the model converges, the final topic distribution of each document is obtained. Both the topic distribution of a document and the word distribution corresponding to a topic are multinomial distributions.
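As a rough illustration of the fitting described above, the sketch below trains a plain unsupervised LDA model with gensim and sets the number of topics to the number of distinct taste labels. Note that the disclosed method additionally ties each topic to a specific taste label, which standard unsupervised LDA does not do, so this is only an approximation under that assumption; the example documents are hypothetical.

```python
from gensim import corpora, models

# Each user's document is the list of target words extracted from that
# user's behavior data (illustrative data).
user_documents = [
    ["XX Sichuan Restaurant", "Kung Pao chicken", "hot and sour shredded potatoes"],
    ["brown sugar glutinous rice cake", "mango pudding"],
]
taste_labels = ["spicy", "sour", "sweet"]  # taken as topics

dictionary = corpora.Dictionary(user_documents)
corpus = [dictionary.doc2bow(doc) for doc in user_documents]

# One topic per taste label; iterate for a preset number of passes,
# as the convergence criterion above suggests.
lda = models.LdaModel(
    corpus=corpus,
    id2word=dictionary,
    num_topics=len(taste_labels),
    passes=20,
    random_state=0,
)

# Topic distribution (i.e. taste label distribution) of each document.
for bow in corpus:
    print(lda.get_document_topics(bow, minimum_probability=0.0))
```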
And 104, determining the target taste label of the user corresponding to each document based on the taste label distribution of each document.
In this embodiment, the execution body may determine the target taste label of the user corresponding to each document based on the taste label distribution of each document. In practice, the target taste label of the user corresponding to each document can be determined in various ways.
In some optional implementations of this embodiment, a preset number (for example, 5) of taste labels may be selected in descending order of taste label probability, and the selected taste labels are determined as the target taste labels of the user corresponding to the document.
In some optional implementations of this embodiment, the taste labels whose probability is greater than a preset threshold may be selected, and the selected taste labels are determined as the target taste labels of the user.
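A minimal sketch of the two selection strategies just described (top-N by probability, or a probability threshold); the function and variable names are assumptions.

```python
def select_target_labels(label_probs, top_n=None, min_prob=None):
    """Pick target taste labels from a {label: probability} distribution.

    Either keep the top_n most probable labels, or keep every label whose
    probability exceeds min_prob (both strategies are described above).
    """
    ranked = sorted(label_probs.items(), key=lambda kv: kv[1], reverse=True)
    if top_n is not None:
        return [label for label, _ in ranked[:top_n]]
    if min_prob is not None:
        return [label for label, p in ranked if p > min_prob]
    return [label for label, _ in ranked]

# Example: fitted taste label distribution of one user's document.
print(select_target_labels({"spicy": 0.55, "sour": 0.30, "sweet": 0.15}, top_n=2))
# -> ['spicy', 'sour']
```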
In some optional implementations of this embodiment, after obtaining the target taste label of each user, the execution body may select a target user from the plurality of users. Here, one or more users may be selected randomly as target users, or a condition (for example, female, aged 20 to 30) may be set and users satisfying the condition may be selected as target users, which is not limited here. Information matching the target taste labels of the target user can then be queried, the information comprising shop dish information and merchant information. Finally, the information is pushed to the target user. As an example, if the target taste labels of target user A are "spicy" and "sour", shop dish information such as "hot and sour shredded potatoes", "hot and sour cabbage", "pickled fish", and the like may be pushed to target user A. Targeted information pushing is thereby achieved.
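A sketch of the matching-and-pushing step described above, assuming each candidate shop dish or merchant item carries its own taste labels; the item structure and example data are hypothetical.

```python
def match_items_for_user(target_labels, candidate_items):
    """Return shop dish / merchant items whose taste labels overlap the
    user's target taste labels (hypothetical item structure)."""
    wanted = set(target_labels)
    return [item for item in candidate_items if wanted & set(item["taste_labels"])]

candidates = [
    {"name": "hot and sour shredded potatoes", "taste_labels": ["spicy", "sour"]},
    {"name": "mango pudding", "taste_labels": ["sweet"]},
]
for item in match_items_for_user(["spicy", "sour"], candidates):
    print("push to target user:", item["name"])
```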
According to the method provided by this embodiment of the application, user behavior data of a plurality of users are obtained, target words are extracted from each user's behavior data, and the taste label of each target word is acquired. Here, the target words include shop dish names and/or merchant names. The extracted target words are then aggregated into a document corresponding to each user, the taste labels are taken as topics, and a topic model is fitted based on the documents and the taste labels, the topic model comprising the taste label distribution of each document. Finally, the target taste label of the user corresponding to each document is determined based on that document's taste label distribution. The taste label distribution of each document is thus fitted through the topic model. Because the topic model can fit the topics (namely the taste labels) of a document accurately and with good specificity, the taste labels determined for each user are more targeted.
With further reference to fig. 2, a flow 200 of yet another embodiment of a label generation method is shown. The label generation method comprises the following steps:
step 201, user behavior data of a plurality of users is obtained.
Step 201 of this embodiment can refer to step 101 of the embodiment shown in fig. 1, and is not described herein again.
Step 202, extracting target words from the user behavior data of each user respectively, and obtaining taste labels of each target word.
Step 202 of this embodiment can refer to step 102 of the embodiment shown in fig. 1, and is not described herein again.
Step 203, setting weights for the target words in the documents, and counting the distribution of the target words in the documents.
In this embodiment, the execution body may set a weight for each target word in each document. In practice, the weight may be set according to the user behavior category corresponding to each target word. For example, if a user's document includes a target word corresponding to an order placing behavior and a target word corresponding to a dish ordering behavior, the weight of the target word corresponding to the order placing behavior may be set to a first value, and the weight of the target word corresponding to the dish ordering behavior may be set to a second value smaller than the first value.
As an example, after a user clicks on dish A and dish B, a purchase order is placed for dish B only. In this case, the target words include the name of dish A and the name of dish B. Since the user purchased dish B but not dish A, the weight of the name of dish B may be set to 0.6 and the weight of the name of dish A may be set to 0.4.
In this embodiment, since each document is composed of target words, the occurrences of the target words in each document can be counted to obtain the target word distribution of that document. The target word distribution of each document may be expressed in the form of a matrix (i.e., a word frequency matrix). For example, if there are N documents in total and M distinct target words across those N documents, the frequency of occurrence of each target word in each document may be recorded in an N × M matrix, where each row is the target word distribution of one document.
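A minimal sketch of building the N × M word frequency matrix described above with plain Python and numpy; the example documents and the vocabulary ordering are assumptions.

```python
import numpy as np
from collections import Counter

user_documents = [
    ["Kung Pao chicken", "Kung Pao chicken", "XX Sichuan Restaurant"],
    ["brown sugar glutinous rice cake"],
]

# Vocabulary of the M distinct target words across all N documents.
vocab = sorted({w for doc in user_documents for w in doc})
word_index = {w: j for j, w in enumerate(vocab)}

# N x M word frequency matrix: row i is the target word distribution of document i.
freq = np.zeros((len(user_documents), len(vocab)), dtype=int)
for i, doc in enumerate(user_documents):
    for word, count in Counter(doc).items():
        freq[i, word_index[word]] = count
print(freq)
```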
In some optional implementations of this embodiment, the executing entity may determine the weight of each target word according to the following steps:
Firstly, the behavior category corresponding to each piece of user behavior data is determined. The behavior categories may include at least one of the following: dish ordering behavior, order placing behavior, comment behavior, browsing behavior, collection behavior, forwarding behavior, and like behavior.
Secondly, the weight of each piece of user behavior data is acquired based on the preset weight of each behavior category. Here, the specific value of the weight of each behavior category may be preset empirically. As an example, the weights of the dish ordering behavior, order placing behavior, comment behavior, browsing behavior, collection behavior, forwarding behavior, and like behavior may be a preset first value (e.g., 0.3), a preset second value (e.g., 0.2), a preset third value (e.g., 0.1), a preset fourth value (e.g., 0.1), a preset fifth value (e.g., 0.1), a preset sixth value (e.g., 0.1), and a preset seventh value (e.g., 0.1), respectively. For a given piece of user behavior data, the weight of its behavior category may be used directly as the weight of that user behavior data.
And thirdly, regarding each target word, taking the weight of the user behavior data to which the target word belongs as the weight of the target word.
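A minimal sketch of the category-based weighting in the three steps above; the category names, the record structure, and the example weight values are illustrative assumptions.

```python
# Preset per-category weights (example values from the description above).
CATEGORY_WEIGHTS = {
    "dish_ordering": 0.3,
    "order_placing": 0.2,
    "comment": 0.1,
    "browsing": 0.1,
    "collection": 0.1,
    "forwarding": 0.1,
    "like": 0.1,
}

def target_word_weight(behavior_record):
    """Weight of every target word extracted from a behavior record equals
    the preset weight of that record's behavior category."""
    return CATEGORY_WEIGHTS[behavior_record["behavior"]]

record = {"behavior": "order_placing", "target_words": ["Merchant A", "Shop dish B"]}
print({w: target_word_weight(record) for w in record["target_words"]})
```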
In some optional implementations of the embodiment, the obtained weight of each user behavior data may also be related to a generation time of each user behavior data. At this time, the execution subject may determine the weight of each target word by:
the first step is to determine the behavior category corresponding to each user behavior data and the generation time of each user behavior data.
And secondly, acquiring the weight of each piece of user behavior data based on the preset weight of its behavior category and the generation time of that user behavior data.
As an example, the acquired user behavior data of user A includes first user behavior data generated at time a (e.g., three months ago) and second user behavior data generated at time b (e.g., within the last week). The behavior category corresponding to the first user behavior data is an order placing behavior, and the behavior category corresponding to the second user behavior data is a collection behavior. The preset weight of the order placing behavior is 0.3, and the preset weight of the collection behavior is 0.1. Since user behavior data generated more recently better reflects the user's current preferences, the weight of the first user behavior data may be decreased (e.g., to 0.25) from the preset order placing weight (0.3), while the weight of the second user behavior data may be increased (e.g., to 0.15) from the preset collection weight (0.1).
The method of obtaining the weight of the user behavior data in conjunction with the generation time of the user behavior data may be preset as needed, and is not limited to the method in the above example.
And thirdly, regarding each target word, taking the weight of the user behavior data to which the target word belongs as the weight of the target word.
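One plausible sketch of combining the preset category weight with the generation time of the behavior data. The disclosure does not prescribe a formula, so the exponential decay used here is purely an assumption; it only down-weights older data, whereas the example above also boosts recent data, which could be achieved by renormalizing the weights afterwards.

```python
import math
from datetime import datetime, timedelta

CATEGORY_WEIGHTS = {"order_placing": 0.3, "collection": 0.1}

def time_adjusted_weight(category, generated_at, now, half_life_days=60.0):
    """Down-weight older behavior data relative to its preset category weight.

    Exponential decay with a half-life is an assumed scheme; the description
    only requires that more recent behavior data end up weighted higher.
    """
    age_days = (now - generated_at).days
    return CATEGORY_WEIGHTS[category] * math.pow(0.5, age_days / half_life_days)

now = datetime(2019, 9, 16)
print(time_adjusted_weight("order_placing", now - timedelta(days=90), now))  # ~0.106
print(time_adjusted_weight("collection", now - timedelta(days=5), now))      # ~0.094
```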
And step 204, fitting a theme model comprising the distribution of the target words of the taste labels and the distribution of the taste labels of the documents based on the distribution of the target words of the documents and the weight of the target words in the documents.
In this embodiment, the execution body may combine the target word distribution of each document with the weight of each target word in each document to fit a topic model that includes the target word distribution of each taste label and the taste label distribution of each document. Since the taste labels are taken as topics in the embodiment of the application, the taste label distribution of a document is its topic distribution, and the target word distribution of a taste label is the target word distribution of the corresponding topic.
Specifically, taking the fitting of an LDA model as an example, the execution body may perform the following fitting steps for each document:
in the first step, a preset first Dirichlet distribution (Dirichlet distribution) is sampled to generate a sampled taste tag distribution of the document. Wherein, the distribution of the sampling taste labels is a polynomial distribution. Here, the parameters of the first dirichlet distribution may be randomly set.
In the second step, the sampled taste label distribution is sampled to obtain a sampled taste label.
In the third step, a preset second Dirichlet distribution is sampled to generate a sampled target word distribution corresponding to the sampled taste label, where the sampled target word distribution is a multinomial distribution. Here, the parameters of the second Dirichlet distribution may be set randomly.
In practice, the Dirichlet distribution is a family of continuous multivariate probability distributions commonly used as a prior in Bayesian statistics. In Bayesian probability theory, if the posterior distribution belongs to the same family as the prior distribution, the prior and posterior are called conjugate distributions, and the prior is called the conjugate prior of the likelihood function. Since both the taste label distribution of a document and the target word distribution of a topic are multinomial distributions, and the Dirichlet distribution is conjugate to the multinomial distribution, the Dirichlet distribution can be used as the conjugate prior of these multinomial distributions (i.e., the conjugate prior of the taste label distribution and of the target word distribution).
In the fourth step, the sampled target word distribution is sampled according to the weight of each target word in the document to obtain a sampled target word. Here, the weight of each target word may be regarded as the probability of selecting that target word, so target words with larger weights are more likely to be sampled. The execution body may repeat the above sampling steps until generation of the document is complete.
In the fifth step, the obtained sampled target words are aggregated into a generated document corresponding to the original document.
After the fitting steps have been performed for each document, a generated document corresponding to each document is obtained. The execution body may then fit the target word distribution of each taste label and the taste label distribution of each document using the Expectation-Maximization (EM) algorithm, based on the target word distribution of each document and the sampled target word distribution of its corresponding generated document. In practice, the EM algorithm is an optimization algorithm that performs maximum likelihood estimation (MLE) iteratively. Because its iterative rules are easy to implement and it can handle hidden variables flexibly, the EM algorithm is widely used for handling missing data and for parameter estimation in many machine learning algorithms (e.g., fitting a topic model).
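A loose numpy sketch of the per-document generative sampling just described (Dirichlet prior, sampled taste label distribution, sampled label, sampled target word distribution, weighted word sampling). The Dirichlet parameters, the vocabulary, and the per-word weights are illustrative assumptions, and the fitting itself (EM or Gibbs sampling) is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

taste_labels = ["spicy", "sour", "sweet"]
vocab = ["Kung Pao chicken", "hot and sour shredded potatoes",
         "brown sugar glutinous rice cake"]

# Weights of the target words in this document (assumed values), used as
# sampling probabilities as the fourth step describes.
word_weights = np.array([0.5, 0.3, 0.2])

def generate_document(doc_length, alpha=None, beta=None):
    alpha = np.ones(len(taste_labels)) if alpha is None else alpha  # first Dirichlet prior
    beta = np.ones(len(vocab)) if beta is None else beta            # second Dirichlet prior

    # Step 1: sample the document's taste label (topic) distribution.
    label_dist = rng.dirichlet(alpha)
    generated = []
    for _ in range(doc_length):
        # Step 2: sample a taste label from that multinomial distribution.
        label = rng.choice(len(taste_labels), p=label_dist)
        # Step 3: sample a target word distribution for the sampled label
        # (done per word here purely for compactness of the sketch).
        word_dist = rng.dirichlet(beta)
        # Step 4: sample a target word, biased by the per-word weights.
        biased = word_dist * word_weights
        biased /= biased.sum()
        generated.append(vocab[rng.choice(len(vocab), p=biased)])
    # Step 5: the generated document corresponding to this document.
    return generated

print(generate_document(5))
```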
It should be noted that other ways of fitting the LDA model may also be used, and the conventional fitting method adopted is not limited here. For example, the LDA model may be fitted using an algorithm such as Gibbs sampling, which is not described in detail here.
Step 205, for each document, selecting a preset number of taste labels in descending order of taste label probability, and determining the selected taste labels as the target taste labels of the user corresponding to the document.
In this embodiment, for each document, a preset number (for example, 5) of taste labels are selected in descending order of taste label probability, and the selected taste labels are determined as the target taste labels of the user corresponding to the document.
As can be seen from fig. 2, compared with the embodiment corresponding to fig. 1, the flow 200 of the tag generation method in this embodiment adds the step of incorporating target word weights when fitting the topic model. Empirical values can therefore be taken into account during fitting, which improves the accuracy of the taste label distribution of each document.
With further reference to fig. 3, a flow 300 of yet another embodiment of a label generation method is shown. The process 300 of the tag generation method includes the following steps:
step 301, obtaining target user behavior data of a target user.
In this embodiment, the execution body of the tag generation method may obtain target user behavior data of a target user. The target user may be any user whose taste labels are currently to be detected.
In some optional implementation manners of this embodiment, a user (e.g., a newly registered user) without setting a taste label may be used as a target user, and target user behavior data of the target user may be obtained.
In some optional implementations of this embodiment, a user whose user attribute information has changed may be taken as the target user, and the target user behavior data of that user may be obtained. The user attribute information includes at least one of: city of residence, marital status, health status, and nature of work. As an example, when a user's city of residence changes from Beijing to Chengdu, the user's taste may change accordingly. In this case, that user can be taken as the target user and the user's taste labels detected, so as to obtain up-to-date taste labels that match the user's current state.
In some optional implementation manners of this embodiment, a user who has set a taste label and has a set time of the taste label earlier than a preset time (for example, a day three months ago) may be taken as a target user, and target user behavior data of the target user may be obtained. Therefore, when the set time length of the taste label of the user is long, the taste label of the user can be detected again, and the latest taste label meeting the current state of the user is obtained.
In some optional implementation manners of this embodiment, when a taste label prediction request for a target user is received, target user behavior data of the target user may be obtained. Therefore, the user can actively detect the taste label, so that the user can know own taste preference.
Step 302, extracting target words from the target user behavior data, and summarizing the extracted target words into a target document.
In this embodiment, the execution subject may extract target words from the target user behavior data, and aggregate the extracted target words into a target document. The target word may include a name of a shop dish and/or a name of a merchant.
It should be noted that step 302 of this embodiment can refer to step 102 of the embodiment shown in fig. 1, and is not described herein again.
Step 303, determining the distribution of the topics of the target document based on the distribution of the target words of the target document and the distribution of the target words of the topics in the pre-fitted topic model.
In this embodiment, a pre-fitted topic model may be stored in the execution body. The topics in the topic model are taste labels, so the topic distribution of the target document is its taste label distribution. The topic model may be fitted using the method of either of the embodiments corresponding to fig. 1 or fig. 2, and details are not repeated here.
In practice, the process of fitting the topic model is a process of fitting the topic distribution of each document and the target word distribution of each topic based on the target word distributions of a plurality of documents. Therefore, when the target word distribution of a certain document is known, the topic distribution of the document can be directly calculated through the fitted target word distribution of each topic. Therefore, in the embodiment, the topic distribution of the target document can be determined directly based on the target word distribution of the target document and the target word distribution of the topic in the topic model which is fitted in advance.
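A sketch of computing the topic distribution of a new target document from an already fitted model, reusing the hedged gensim-based approximation from the earlier sketch; the mapping from topic index to taste label name is an assumption, since standard LDA topics are not tied to labels.

```python
from gensim import corpora, models

# Re-create the earlier hedged sketch: a small fitted model whose topics
# stand in for taste labels (an assumption; standard LDA topics are unnamed).
train_docs = [
    ["XX Sichuan Restaurant", "Kung Pao chicken", "hot and sour shredded potatoes"],
    ["brown sugar glutinous rice cake", "mango pudding"],
]
dictionary = corpora.Dictionary(train_docs)
lda = models.LdaModel(
    corpus=[dictionary.doc2bow(d) for d in train_docs],
    id2word=dictionary, num_topics=3, passes=20, random_state=0,
)

# Target document of a new target user: its extracted target words only.
new_target_words = ["Kung Pao chicken", "XX Sichuan Restaurant"]
bow = dictionary.doc2bow(new_target_words)

# Topic distribution of the target document, computed directly from the
# already fitted per-topic word distributions (no refitting needed).
topic_dist = lda.get_document_topics(bow, minimum_probability=0.0)

# Hypothetical mapping from topic index to taste label.
topic_to_label = {0: "spicy", 1: "sour", 2: "sweet"}
print({topic_to_label[t]: round(float(p), 3) for t, p in topic_dist})
```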
With the fitted topic model, the taste labels of any user can be detected without manual analysis, which improves the efficiency of taste label detection.
Step 304, determining a target taste label of the target user based on the topic distribution of the target document.
Step 304 in this embodiment can refer to step 104 in the embodiment shown in fig. 1, and is not described herein again.
In some optional implementations of this embodiment, after determining the target taste labels of the target user, the execution body may further query information matching the target taste labels, the information comprising shop dish information and merchant information. The information can then be pushed to the target user, thereby achieving targeted information pushing.
In the method provided by this embodiment of the application, target user behavior data of a target user is obtained, target words are extracted from the target user behavior data, and the extracted target words are aggregated into a target document. Here, the target words include shop dish names and/or merchant names. The topic distribution of the target document is then determined based on the target word distribution of the target document and the target word distributions of the topics in a pre-fitted topic model; finally, the target taste label of the target user is determined based on the topic distribution of the target document. Fitting the topic model is the process of fitting the topic distribution of each document and the target word distribution of each topic based on the target word distributions of a plurality of documents. Therefore, once the target word distribution of a document is known, its topic distribution can be computed directly from the fitted target word distributions of the topics, which improves the efficiency of taste label detection. Meanwhile, because the topic model can fit the topics of a document accurately and with good specificity, the determined taste labels are more accurate and more targeted. In addition, because fine-grained features such as target words (including the names of the shop dishes and/or merchants involved in the user behaviors) are used when fitting the topic model, the accuracy of the taste labels is further improved.
With further reference to fig. 4, as an implementation of the method shown in the above-mentioned figures, the present application provides an embodiment of a tag generation apparatus, which corresponds to the embodiment of the method shown in fig. 1, and which can be applied to various electronic devices.
As shown in fig. 4, the label generating apparatus 400 according to this embodiment includes: an acquisition unit 401 configured to acquire user behavior data of a plurality of users; a summarizing unit 402, configured to extract target words from user behavior data of each user, and obtain taste labels of each target word, wherein the target words include names of shop dishes and/or names of merchants; a fitting unit 403 configured to fit a topic model based on each document and each taste label with the taste label as a topic, wherein the topic model includes a taste label distribution of each document; a determining unit 404 configured to determine a target taste label of a user corresponding to each document based on the taste label distribution of each document.
In some optional implementations of the present embodiment, the fitting unit 403 is further configured to: setting weights for all target words in all documents, and counting the distribution of the target words of all documents; and fitting a theme model comprising the target word distribution of the taste labels and the taste label distribution of the documents based on the target word distribution of the documents and the weight of each target word in each document.
In some optional implementations of the present embodiment, the fitting unit 403 is further configured to: determining a behavior category corresponding to each user behavior data; acquiring the weight of each user behavior data based on the preset weight of each behavior category; and for each target word, taking the weight of the user behavior data to which the target word belongs as the weight of the target word.
In some optional implementations of the embodiment, the obtained weight of each user behavior data is further related to a generation time of each user behavior data.
In some optional implementations of this embodiment, the fitting unit 403 is further configured to: for each document, perform the following fitting steps: sampling a preset first Dirichlet distribution to generate a sampled taste label distribution of the document, wherein the sampled taste label distribution is a multinomial distribution; sampling the sampled taste label distribution to obtain a sampled taste label; sampling a preset second Dirichlet distribution to generate a sampled target word distribution corresponding to the sampled taste label, wherein the sampled target word distribution is a multinomial distribution; sampling the sampled target word distribution according to the weight of each target word in the document to obtain sampled target words; and summarizing the obtained sampled target words into a generated document corresponding to the document; and fitting the target word distribution of each taste label and the taste label distribution of each document by using the expectation-maximization algorithm, based on the target word distribution of each document and the sampled target word distribution of the generated document corresponding to each document.
In some optional implementations of this embodiment, the determining unit 404 is further configured to: for each document, select a preset number of taste labels in descending order of taste label probability, and determine the selected taste labels as the target taste labels of the user corresponding to the document.
In some optional implementations of this embodiment, the apparatus further includes: a selecting unit configured to select a target user from the plurality of users; a query unit configured to query information matching the target taste labels of the target user, wherein the information comprises shop dish information and merchant information; and a pushing unit configured to push the information to the target user.
According to the apparatus provided by the above embodiment of the present application, user behavior data of a plurality of users are obtained, target words are extracted from each user's behavior data, and the taste label of each target word is acquired. Here, the target words include shop dish names and/or merchant names. The extracted target words are then aggregated into a document corresponding to each user, the taste labels are taken as topics, and a topic model is fitted based on the documents and the taste labels, the topic model comprising the taste label distribution of each document. Finally, the target taste label of the user corresponding to each document is determined based on that document's taste label distribution. The taste label distribution of each document is thus fitted through the topic model. Because the topic model can fit the topics (namely the taste labels) of a document accurately and with good specificity, the taste labels determined for each user are more targeted.
With further reference to fig. 5, as an implementation of the method shown in the above-mentioned figures, the present application provides an embodiment of a tag generation apparatus, which corresponds to the embodiment of the method shown in fig. 3, and which can be applied to various electronic devices.
As shown in fig. 5, the label generating apparatus 500 according to this embodiment includes: an obtaining unit 501 configured to obtain target user behavior data of a target user; a summarizing unit 502 configured to extract target words from the target user behavior data and aggregate the extracted target words into a target document, wherein the target words comprise names of shop dishes and/or names of merchants; a first determining unit 503 configured to determine the topic distribution of the target document based on the target word distribution of the target document and the target word distributions of the topics in a pre-fitted topic model, wherein the topics in the topic model are taste labels; and a second determining unit 504 configured to determine the target taste label of the target user based on the topic distribution of the target document.
In some optional implementations of this embodiment, the obtaining unit 501 is further configured to: and taking the user without the taste label as a target user, and acquiring the target user behavior data of the target user.
In some optional implementations of this embodiment, the obtaining unit 501 is further configured to: take a user who has a set taste label whose set time is earlier than a preset time as the target user, and obtain the target user behavior data of that target user.
In some optional implementations of this embodiment, the apparatus further includes: a query unit configured to query information matching the target taste label, wherein the information comprises shop dish information and merchant information; and a pushing unit configured to push the information to the target user.
According to the apparatus provided by the above embodiment of the present application, target user behavior data of a target user is obtained, target words are extracted from the target user behavior data, and the extracted target words are aggregated into a target document. Here, the target words include shop dish names and/or merchant names. The topic distribution of the target document is then determined based on the target word distribution of the target document and the target word distributions of the topics in a pre-fitted topic model; finally, the target taste label of the target user is determined based on the topic distribution of the target document. Fitting the topic model is the process of fitting the topic distribution of each document and the target word distribution of each taste label based on the target word distributions of a plurality of documents. Therefore, once the target word distribution of a document is known, its topic distribution can be computed directly from the fitted target word distributions of the taste labels, which improves the efficiency of taste label detection. Meanwhile, because the topic model can fit the topics of a document accurately and with good specificity, the determined taste labels are more accurate and more targeted. In addition, because fine-grained features such as target words (including the names of the shop dishes and/or merchants involved in the user behaviors) are used when fitting the topic model, the accuracy of the taste labels is further improved.
Referring now to FIG. 6, shown is a block diagram of a computer system 600 suitable for use in implementing the electronic device of an embodiment of the present application. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 6, the computer system 600 includes a central processing unit (CPU) 601 that can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage section 608 into a random access memory (RAM) 603. Various programs and data necessary for the operation of the system 600 are also stored in the RAM 603. The CPU 601, the ROM 602, and the RAM 603 are connected to one another via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse, and the like; an output section 607 including a display such as a liquid crystal display (LCD), a speaker, and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card, a modem, or the like. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as needed, so that a computer program read therefrom can be installed into the storage section 608 as needed.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. The computer program performs the above-described functions defined in the method of the present application when executed by a Central Processing Unit (CPU) 601. It should be noted that the computer readable medium described herein can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The units described may also be provided in a processor, where the names of the units do not in some cases constitute a limitation of the units themselves.
As another aspect, the present application also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: acquire user behavior data of a plurality of users; extract target words from the user behavior data of each user and acquire the taste label of each target word, wherein the target words comprise names of shop dishes and/or names of merchants; aggregate the extracted target words into a document corresponding to each user, take the taste labels as topics, and fit a topic model based on the documents and the taste labels, wherein the topic model comprises the taste label distribution of each document; and determine the target taste label of the user corresponding to each document based on the taste label distribution of that document.
Further, the one or more programs, when executed by the apparatus, may also cause the apparatus to: acquire target user behavior data of a target user; extract target words from the target user behavior data and aggregate the extracted target words into a target document, wherein the target words comprise names of shop dishes and/or names of merchants; determine the topic distribution of the target document based on the target word distribution of the target document and the target word distributions of the topics in a pre-fitted topic model, wherein the topics in the topic model are taste labels; and determine the target taste label of the target user based on the topic distribution of the target document.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (15)

1. A label generation method, the method comprising:
acquiring user behavior data of a plurality of users;
extracting target words from the user behavior data of each user, respectively, and acquiring taste labels of the target words, wherein the target words comprise shop names and/or merchant names;
aggregating the extracted target words into a document corresponding to each user, taking the taste labels as topics, and fitting a topic model based on each document and each taste label, wherein the topic model comprises a taste label distribution of each document;
and determining the target taste label of the user corresponding to each document based on the taste label distribution of each document.
2. The label generation method of claim 1, wherein fitting a topic model based on each document and each taste label comprises:
setting a weight for each target word in each document, and computing the target word distribution of each document;
and fitting a topic model comprising a target word distribution of each taste label and a taste label distribution of each document, based on the target word distribution of each document and the weight of each target word in each document.
3. The label generation method according to claim 2, wherein setting a weight for each target word in each document comprises:
determining a behavior category corresponding to each piece of user behavior data;
acquiring a weight of each piece of user behavior data based on a preset weight of each behavior category;
and for each target word, taking the weight of the piece of user behavior data to which the target word belongs as the weight of the target word.
4. The label generation method of claim 3, wherein the acquired weight of each piece of user behavior data is further related to the generation time of that piece of user behavior data.
5. The label generation method of claim 2, wherein fitting a topic model comprising a target word distribution of each taste label and a taste label distribution of each document, based on the target word distribution of each document and the weight of each target word in each document, comprises:
performing, for each document, the following fitting steps: sampling a preset first Dirichlet distribution to generate a sampled taste label distribution of the document, wherein the sampled taste label distribution is a multinomial distribution; sampling the sampled taste label distribution to obtain a sampled taste label; sampling a preset second Dirichlet distribution to generate a sampled target word distribution corresponding to the sampled taste label, wherein the sampled target word distribution is a multinomial distribution; sampling the sampled target word distribution according to the weight of each target word in the document to obtain sampled target words; and aggregating the obtained sampled target words into a generated document corresponding to the document;
and fitting the target word distribution of each taste label and the taste label distribution of each document by using an expectation-maximization algorithm, based on the target word distribution of each document and the sampled target word distribution of the generated document corresponding to each document.
6. The label generation method according to claim 1, wherein determining the target taste label of the user corresponding to each document based on the taste label distribution of each document comprises:
for each document, selecting a preset number of taste labels in descending order of taste label probability, and determining the selected taste labels as the target taste labels of the user corresponding to the document.
7. The label generation method according to claim 1, wherein after determining the target taste label of the user corresponding to each document based on the taste label distribution of each document, the method further comprises:
selecting a target user from the plurality of users;
querying for information matched with the target taste label of the target user, wherein the information comprises shop and dish information and merchant information;
and pushing the information to the target user.
8. A label generation apparatus, characterized in that the apparatus comprises:
an acquisition unit configured to acquire user behavior data of a plurality of users;
an aggregation unit configured to extract target words from the user behavior data of each user, respectively, and acquire taste labels of the target words, wherein the target words comprise shop names and/or merchant names;
a fitting unit configured to take the taste labels as topics and fit a topic model based on each document and each taste label, wherein the topic model comprises a taste label distribution of each document;
and a determining unit configured to determine the target taste label of the user corresponding to each document based on the taste label distribution of each document.
9. A label generation method, the method comprising:
acquiring target user behavior data of a target user;
extracting target words from the target user behavior data, and aggregating the extracted target words into a target document, wherein the target words comprise shop names and/or merchant names;
determining a topic distribution of the target document based on the target word distribution of the target document and the target word distribution of each topic in a pre-fitted topic model, wherein the topics in the topic model are taste labels;
and determining a target taste label of the target user based on the topic distribution of the target document.
10. The label generation method according to claim 9, wherein acquiring the target user behavior data of the target user comprises:
taking a user for whom no taste label has been set as the target user, and acquiring the target user behavior data of the target user.
11. The label generation method according to claim 9, wherein acquiring the target user behavior data of the target user comprises:
taking a user for whom a taste label has been set, and for whom the taste label was set earlier than a preset time, as the target user, and acquiring the target user behavior data of the target user.
12. The label generation method of claim 9, wherein after determining the target taste label of the target user, the method further comprises:
querying for information matched with the target taste label, wherein the information comprises shop and dish information and merchant information;
and pushing the information to the target user.
13. A label generation apparatus, characterized in that the apparatus comprises:
an acquisition unit configured to acquire target user behavior data of a target user;
an aggregation unit configured to extract target words from the target user behavior data and aggregate the extracted target words into a target document, wherein the target words comprise shop names and/or merchant names;
a first determining unit configured to determine a topic distribution of the target document based on the target word distribution of the target document and the target word distribution of each topic in a pre-fitted topic model, wherein the topics in the topic model are taste labels;
and a second determining unit configured to determine a target taste label of the target user based on the topic distribution of the target document.
14. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the label generation method of any one of claims 1 to 7 or the label generation method of any one of claims 9 to 12.
15. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the label generation method of any one of claims 1 to 7 or the label generation method of any one of claims 9 to 12.
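As a purely illustrative aid to the generative procedure recited in claim 5, the numpy sketch below walks through the per-document sampling steps only: a first Dirichlet draw for the taste label distribution, a multinomial draw of a taste label, a second Dirichlet draw for the target word distribution, and weighted sampling of target words. The hyperparameters, vocabulary, and per-word weights are invented for the example, and the expectation-maximization fitting step of claim 5 is not shown.

# Illustrative only: per-document sampling steps of claim 5 with invented
# hyperparameters, vocabulary, and per-word weights; EM fitting is omitted.
import numpy as np

rng = np.random.default_rng(0)

TASTE_LABELS = ["spicy", "sweet", "light"]                 # assumed taste labels (topics)
VOCAB = ["Hotpot House", "Noodle Bar", "Sweet Bakery", "Salad Stop"]
alpha = np.full(len(TASTE_LABELS), 0.5)                    # preset first Dirichlet distribution
beta = np.full(len(VOCAB), 0.1)                            # preset second Dirichlet distribution
word_weight = np.array([2.0, 1.0, 0.5, 0.5])               # weight of each target word in the document

# Sample the document's taste label distribution (a multinomial parameter
# vector), then sample one taste label from it.
label_dist = rng.dirichlet(alpha)
label = rng.choice(len(TASTE_LABELS), p=label_dist)

# Sample the target word distribution for that label, reweight it by the
# per-word weights, and sample the target words of the generated document.
word_dist = rng.dirichlet(beta)
weighted = word_dist * word_weight
weighted /= weighted.sum()
generated_document = [VOCAB[i] for i in rng.choice(len(VOCAB), size=3, p=weighted)]

print("sampled taste label:", TASTE_LABELS[label])
print("generated document:", generated_document)

The expectation-maximization step recited in the last limitation of claim 5 would then adjust the label and word distributions so that the generated documents match the observed documents; that optimization is outside the scope of this sketch.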
CN201910877832.4A 2019-09-17 2019-09-17 Label generation method and device, electronic equipment and computer readable medium Withdrawn CN110727797A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910877832.4A CN110727797A (en) 2019-09-17 2019-09-17 Label generation method and device, electronic equipment and computer readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910877832.4A CN110727797A (en) 2019-09-17 2019-09-17 Label generation method and device, electronic equipment and computer readable medium

Publications (1)

Publication Number Publication Date
CN110727797A true CN110727797A (en) 2020-01-24

Family

ID=69219108

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910877832.4A Withdrawn CN110727797A (en) 2019-09-17 2019-09-17 Label generation method and device, electronic equipment and computer readable medium

Country Status (1)

Country Link
CN (1) CN110727797A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102004774A (en) * 2010-11-16 2011-04-06 清华大学 Personalized user tag modeling and recommendation method based on unified probability model
CN103970863A (en) * 2014-05-08 2014-08-06 清华大学 Method and system for excavating interest of microblog users based on LDA theme model
US20180293507A1 (en) * 2017-04-06 2018-10-11 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and apparatus for extracting keywords based on artificial intelligence, device and readable medium
CN107193892A (en) * 2017-05-02 2017-09-22 东软集团股份有限公司 A kind of document subject matter determines method and device
WO2019153551A1 (en) * 2018-02-12 2019-08-15 平安科技(深圳)有限公司 Article classification method and apparatus, computer device and storage medium
CN108288229A (en) * 2018-03-02 2018-07-17 北京邮电大学 A kind of user's portrait construction method
CN109783615A (en) * 2019-01-25 2019-05-21 王小军 Based on word to user's portrait method and system of Di Li Cray process

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111325255A (en) * 2020-02-13 2020-06-23 拉扎斯网络科技(上海)有限公司 Specific crowd delineating method and device, electronic equipment and storage medium
CN111325255B (en) * 2020-02-13 2021-11-19 拉扎斯网络科技(上海)有限公司 Specific crowd delineating method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
US20210027146A1 (en) Method and apparatus for determining interest of user for information item
US11562012B2 (en) System and method for providing technology assisted data review with optimizing features
CN109145280B (en) Information pushing method and device
US20180181988A1 (en) Method and apparatus for pushing information
US10762288B2 (en) Adaptive modification of content presented in electronic forms
US11537845B2 (en) Neural networks for information extraction from transaction data
CN109299994B (en) Recommendation method, device, equipment and readable storage medium
CN109711887B (en) Generation method and device of mall recommendation list, electronic equipment and computer medium
CN107077486A (en) Affective Evaluation system and method
US20170032417A1 (en) Detecting and generating online behavior from a clickstream
CN109189935B (en) APP propagation analysis method and system based on knowledge graph
CN108932625B (en) User behavior data analysis method, device, medium and electronic equipment
CN111104590A (en) Information recommendation method, device, medium and electronic equipment
CN111429161B (en) Feature extraction method, feature extraction device, storage medium and electronic equipment
CN107729473B (en) Article recommendation method and device
CN114077661A (en) Information processing apparatus, information processing method, and computer readable medium
CN112800109A (en) Information mining method and system
CN112116426A (en) Method and device for pushing article information
CN111680213B (en) Information recommendation method, data processing method and device
CN111225009B (en) Method and device for generating information
CN111444424A (en) Information recommendation method and information recommendation system
CN108595580B (en) News recommendation method, device, server and storage medium
CN110727797A (en) Label generation method and device, electronic equipment and computer readable medium
US11755979B2 (en) Method and system for finding a solution to a provided problem using family tree based priors in Bayesian calculations in evolution based optimization
CN107357847B (en) Data processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20200124