CN114547429A - Data recommendation method and device, server and storage medium

Data recommendation method and device, server and storage medium

Info

Publication number
CN114547429A
Authority
CN
China
Prior art keywords
feature
data
sample
multimedia data
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011325270.1A
Other languages
Chinese (zh)
Inventor
肖严
赵惜墨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202011325270.1A priority Critical patent/CN114547429A/en
Publication of CN114547429A publication Critical patent/CN114547429A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90: Details of database functions independent of the retrieved data types
    • G06F16/95: Retrieval from the web
    • G06F16/953: Querying, e.g. by the use of web search engines
    • G06F16/9535: Search customisation based on user profiles and personalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The disclosure relates to a data recommendation method, a data recommendation device, a server and a storage medium, and belongs to the technical field of computers. The method includes: obtaining a target data feature corresponding to target multimedia data to be recommended; calling a feature conversion model to perform feature conversion processing on the target data feature to obtain a converted predicted operation feature; determining the matching degree between each user and the target multimedia data according to user features corresponding to a plurality of users and the predicted operation feature; and selecting at least one target user from the plurality of users according to the matching degrees corresponding to the plurality of users, and recommending the target multimedia data to the at least one target user. Because the predicted operation feature of the multimedia data represents the operations a user is likely to perform on the multimedia data, the users matching the multimedia data can be determined and the multimedia data can subsequently be recommended to those users, which enables accurate recommendation of the multimedia data, ensures the accuracy of data recommendation, and realizes cold start of the multimedia data.

Description

Data recommendation method and device, server and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a data recommendation method, an apparatus, a server, and a storage medium.
Background
With the development of computer technology and internet technology, multimedia data on the internet has become increasingly abundant. Because there is far too much multimedia data for a user to browse item by item, multimedia data that the user is likely to be interested in is generally recommended to the user.
In the related art, multimedia data whose data features match the user features of a target user is generally determined from the data features of a plurality of multimedia data to be recommended, and that multimedia data is then recommended to the user. Since only the data features of the multimedia data are considered, this approach cannot realize cold start of the multimedia data.
Disclosure of Invention
The disclosure provides a data recommendation method, a data recommendation device, a server and a storage medium, which improve the accuracy of data recommendation.
According to an aspect of an embodiment of the present disclosure, there is provided a data recommendation method, including:
acquiring target data characteristics corresponding to target multimedia data to be recommended;
calling a feature conversion model, and performing feature conversion processing on the target data features to obtain converted predicted operation features, wherein the predicted operation features are used for expressing predicted operations executed on the target multimedia data;
determining the matching degree of each user and the target multimedia data according to the user characteristics corresponding to the plurality of users and the predicted operation characteristics;
and selecting at least one target user from the plurality of users according to the matching degrees corresponding to the plurality of users, and recommending the target multimedia data to the at least one target user.
In some embodiments, the method further comprises:
calling the characteristic conversion model, and performing characteristic conversion processing on first sample data characteristics corresponding to first sample multimedia data to obtain converted first sample operation characteristics;
calling a feature discrimination model, and performing discrimination processing on the first sample operation feature to obtain a discrimination identifier of the first sample operation feature, wherein the discrimination identifier is used for indicating whether the first sample operation feature belongs to a data feature or an operation feature;
and training the feature conversion model according to the first sample operation feature and the discrimination identifier.
In some embodiments, the training the feature transformation model according to the first sample operation feature and the discrimination indicator includes:
determining a first loss value of the feature conversion model according to the first sample operation feature and the distinguishing identifier;
and training the feature conversion model according to the first loss value.
In some embodiments, after the training the feature transformation model according to the first sample operation feature and the discriminant identifier, the method further includes:
calling the characteristic conversion model, and performing characteristic conversion processing on second sample data characteristics corresponding to second sample multimedia data to obtain converted second sample operation characteristics;
calling the characteristic discrimination model, respectively discriminating the second sample data characteristic and the second sample operation characteristic, and determining discrimination identifiers of the second sample data characteristic and the second sample operation characteristic, wherein the discrimination identifiers are used for indicating whether the corresponding characteristics belong to data characteristics or operation characteristics;
and training the feature discrimination model according to the second sample data feature, the second sample operation feature, and the discrimination identifiers of the second sample data feature and the second sample operation feature.
In some embodiments, the training the feature discrimination model according to the second sample data feature, the second sample operation feature, and the discrimination identifiers of the second sample data feature and the second sample operation feature includes:
determining a second loss value of the feature discrimination model according to the second sample data feature, the second sample operation feature, and the discrimination identifiers of the second sample data feature and the second sample operation feature;
and training the characteristic discrimination model according to the second loss value.
In some embodiments, after the training the feature discrimination model according to the second sample data feature, the second sample operation feature, and the discrimination identifiers of the second sample data feature and the second sample operation feature, the method further includes:
calling the feature conversion model, and performing feature conversion processing on third sample data features corresponding to the third sample multimedia data to obtain converted third sample operation features;
calling the characteristic discrimination model, performing discrimination processing on the third sample operation characteristic, and determining a discrimination identifier of the third sample operation characteristic, wherein the discrimination identifier is used for indicating whether the third sample operation characteristic belongs to a data characteristic or an operation characteristic;
and training the feature conversion model according to the third sample operation feature and the discrimination identifier.
In some embodiments, the obtaining target data characteristics corresponding to target multimedia data to be recommended includes:
and calling a feature extraction model, and performing feature extraction on the target multimedia data to obtain target data features corresponding to the target multimedia data.
In some embodiments, the invoking the feature extraction model to perform feature extraction on the target multimedia data to obtain target data features corresponding to the target multimedia data includes:
calling the text feature extraction submodel, and performing feature extraction on the target multimedia data to obtain text features of the target multimedia data;
calling the image feature extraction submodel, and performing feature extraction on the target multimedia data to obtain the image features of the target multimedia data;
and splicing the text characteristic and the image characteristic to obtain the target data characteristic of the target multimedia data.
In some embodiments, the determining, according to the user characteristics corresponding to the multiple users and the predicted operation characteristics, a matching degree of each user with the target multimedia data includes:
and determining the matching degree of each user and the target multimedia data according to the user characteristics corresponding to the plurality of users, the target data characteristics and the predicted operation characteristics.
In some embodiments, the determining the matching degree of each user with the target multimedia data according to the user characteristics, the target data characteristics and the predicted operation characteristics corresponding to the plurality of users includes:
and for the user characteristics of any user, matching the user characteristics, the target data characteristics and the predicted operation characteristics to obtain the matching degree of the user and the target multimedia data.
In some embodiments, the selecting at least one target user from the multiple users according to the matching degrees corresponding to the multiple users and recommending the target multimedia data to the at least one target user includes:
determining recommendation parameters corresponding to the multiple users according to the matching degrees corresponding to the multiple users and the resource quantity corresponding to the target multimedia data;
and selecting at least one target user from the plurality of users according to the recommendation parameters corresponding to the plurality of users, and recommending the target multimedia data to the at least one target user.
According to still another aspect of the embodiments of the present disclosure, there is provided a data recommendation apparatus, the apparatus including:
the characteristic acquisition unit is configured to acquire target data characteristics corresponding to target multimedia data to be recommended;
a transformation processing unit configured to perform feature transformation processing on the target data feature to obtain a transformed prediction operation feature, where the prediction operation feature is used to represent a predicted operation performed on the target multimedia data;
the determining unit is configured to determine the matching degree of each user and the target multimedia data according to the user characteristics corresponding to the plurality of users and the predicted operation characteristics;
and the recommending unit is configured to select at least one target user from the multiple users according to the matching degrees corresponding to the multiple users and recommend the target multimedia data to the at least one target user.
In some embodiments, the apparatus further comprises:
the transformation processing unit is configured to call the feature transformation model, and perform feature transformation processing on a first sample data feature corresponding to a first sample multimedia data to obtain a transformed first sample operation feature;
a distinguishing processing unit, configured to invoke a feature distinguishing model, and perform distinguishing processing on the first sample operation feature to obtain a distinguishing identifier of the first sample operation feature, where the distinguishing identifier is used to indicate whether the first sample operation feature belongs to a data feature or an operation feature;
and the first training unit is configured to train the feature conversion model according to the first sample operation feature and the discrimination identifier.
In some embodiments, the first training unit comprises:
a first determining subunit, configured to determine a first loss value of the feature conversion model according to the first sample operation feature and the discrimination identifier;
a first training subunit configured to train the feature conversion model according to the first loss value.
In some embodiments, the apparatus further comprises:
the transformation processing unit is configured to call the feature transformation model, perform feature transformation processing on second sample data features corresponding to second sample multimedia data, and obtain transformed second sample operation features;
the distinguishing processing unit is further configured to call the feature distinguishing model, respectively distinguish the second sample data feature and the second sample operation feature, and determine distinguishing identifiers of the second sample data feature and the second sample operation feature, where the distinguishing identifiers are used to indicate whether corresponding features belong to data features or operation features;
and the second training unit is configured to train the feature discrimination model according to the second sample data feature, the second sample operation feature, and the discrimination identifiers of the second sample data feature and the second sample operation feature.
In some embodiments, the second training unit comprises:
a second determining subunit, configured to determine a second loss value of the feature discrimination model according to the second sample data feature, the second sample operation feature, and the discrimination identifiers of the second sample data feature and the second sample operation feature;
a second training subunit configured to train the feature discrimination model according to the second loss value.
In some embodiments, the apparatus further comprises:
the transformation processing unit is configured to call the feature transformation model, perform feature transformation processing on third sample data features corresponding to third sample multimedia data, and obtain transformed third sample operation features;
the distinguishing processing unit is further configured to call the feature distinguishing model, distinguish the third sample operation feature, and determine a distinguishing identifier of the third sample operation feature, where the distinguishing identifier is used to indicate whether the third sample operation feature belongs to a data feature or an operation feature;
and the third training unit is configured to train the feature conversion model according to the third sample operation feature and the discrimination identifier.
In some embodiments, the feature acquisition unit includes:
and the feature extraction subunit is configured to invoke a feature extraction model, perform feature extraction on the target multimedia data, and obtain target data features corresponding to the target multimedia data.
In some embodiments, the feature extraction model includes a text feature extraction submodel and an image feature extraction submodel, and the feature extraction subunit is configured to invoke the text feature extraction submodel to perform feature extraction on the target multimedia data to obtain text features of the target multimedia data; invoke the image feature extraction submodel to perform feature extraction on the target multimedia data to obtain image features of the target multimedia data; and splice the text features and the image features to obtain the target data feature of the target multimedia data.
In some embodiments, the determining unit includes:
and the third determining subunit is configured to determine the matching degree of each user and the target multimedia data according to the user characteristics corresponding to the plurality of users, the target data characteristics and the predicted operation characteristics.
In some embodiments, the third determining subunit is configured to, for a user feature of any user, perform matching processing on the user feature, the target data feature, and the predicted operation feature to obtain a matching degree between the user and the target multimedia data.
In some embodiments, the recommending unit is configured to determine the recommending parameters corresponding to the multiple users according to the matching degrees corresponding to the multiple users and the number of resources corresponding to the target multimedia data; and selecting at least one target user from the plurality of users according to the recommendation parameters corresponding to the plurality of users, and recommending the target multimedia data to the at least one target user.
According to still another aspect of the embodiments of the present disclosure, there is provided a server including:
one or more processors;
a volatile or non-volatile memory for storing instructions executable by the one or more processors;
wherein the one or more processors are configured to perform the data recommendation method of the first aspect.
According to yet another aspect of the embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium, wherein instructions of the storage medium, when executed by a processor of a server, enable the server to perform the data recommendation method of the above aspect.
According to yet another aspect of the embodiments of the present disclosure, there is provided a computer program product, wherein instructions of the computer program product, when executed by a processor of a server, enable the server to perform the data recommendation method of the above aspect.
According to the data recommendation method, device, server and storage medium, the predicted operation feature of the multimedia data represents the operations a user is likely to perform on the multimedia data. The users likely to operate on the multimedia data, that is, the users matching the multimedia data, are therefore determined according to the predicted operation feature, and the multimedia data is subsequently recommended to the matched users. This enables accurate recommendation of the multimedia data and ensures the accuracy of data recommendation regardless of whether operations actually performed on the multimedia data have been obtained; the multimedia data can be recommended even when no such operations are available, thereby realizing cold start of the multimedia data.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a schematic diagram illustrating one implementation environment in accordance with an example embodiment.
FIG. 2 is a flow chart illustrating a method of data recommendation, according to an example embodiment.
FIG. 3 is a flow chart illustrating a method of data recommendation, according to an example embodiment.
FIG. 4 is a flow diagram illustrating a method for obtaining data characteristics according to an example embodiment.
FIG. 5 is a flow diagram illustrating a method for obtaining operational characteristics corresponding to data characteristics, according to an example embodiment.
FIG. 6 is a flow chart illustrating a method of obtaining a degree of match according to an example embodiment.
FIG. 7 is a flow chart illustrating a method of recommending data for a user in accordance with an exemplary embodiment.
FIG. 8 is a flow chart illustrating a method of data recommendation, according to an example embodiment.
FIG. 9 illustrates a method for training a feature transformation model, according to an example embodiment.
FIG. 10 is a flow diagram illustrating a method of obtaining discriminative identification of features of an input according to an exemplary embodiment.
Fig. 11 is a block diagram illustrating a data recommendation device according to an example embodiment.
FIG. 12 is a block diagram illustrating a data recommendation device according to an example embodiment.
Fig. 13 is a block diagram illustrating a terminal according to an example embodiment.
FIG. 14 is a block diagram illustrating a server in accordance with an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the description of the above-described figures are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
As used in this disclosure, "at least one" includes one, two, or more than two; "a plurality" includes two or more than two; "each" refers to every one of a corresponding plurality; and "any" refers to any one of the plurality. For example, if a plurality of users includes 3 users, "each" refers to every one of the 3 users, and "any" refers to any one of the 3 users, which may be the first user, the second user, or the third user.
It should be noted that the user information (including but not limited to user device information, user personal information, etc.) referred to in the present disclosure is information authorized by the user or sufficiently authorized by each party.
FIG. 1 is a schematic diagram of an implementation environment provided according to an example embodiment. The implementation environment includes a terminal 101 and a server 102, which are connected through a network and can interact through the network connection.
In some embodiments, the terminal is a mobile phone, a tablet computer, a computer, or the like. In some embodiments, the server 102 is a single server, a server cluster composed of several servers, or a cloud computing service center.
The server 102 has a data processing function and a data recommendation function, and can process multimedia data and recommend the multimedia data to a user. The terminal 101 has a multimedia data display function, and can display the multimedia data recommended by the server 102 for the user to view.
In some embodiments, an application served by the server 102 is installed on the terminal 101, for example a shopping application or a video application. The terminal 101 can run the application and present multimedia data in the application to the user. The server 102 can recommend multimedia data for the user, and the recommended data is displayed in the application for the user to view.
The method provided by the embodiment of the disclosure can be applied to various scenes.
For example, in an advertisement recommendation scenario.
When a new advertisement is to be delivered, the operations performed by users on the advertisement cannot yet be obtained because the advertisement has not been delivered or has only been delivered for a short time; for example, the number of times users have viewed the multimedia data or the users' click-through rate on the multimedia data is unavailable. With the method provided by the embodiments of the disclosure, the advertisement is recommended to matched users based on the obtained predicted operation feature, which improves the accuracy of advertisement recommendation and thus realizes cold start of the advertisement.
Fig. 2 is a flowchart illustrating a data recommendation method according to an exemplary embodiment, referring to fig. 2, the method is applied in a server, and includes the following steps:
in step 201, the server obtains target data characteristics corresponding to target multimedia data to be recommended.
In step 202, the server calls the feature transformation model, and performs feature transformation processing on the target data features to obtain transformed predicted operation features, where the predicted operation features are used to represent predicted operations performed on the target multimedia data.
In step 203, the server determines the matching degree between each user and the target multimedia data according to the user characteristics and the predicted operation characteristics corresponding to the plurality of users.
In step 204, the server selects at least one target user from the multiple users according to the matching degrees corresponding to the multiple users, and recommends the target multimedia data to the at least one target user.
According to the method provided by the embodiments of the disclosure, the predicted operation feature of the multimedia data represents the operations a user is likely to perform on the multimedia data. The users likely to operate on the multimedia data, that is, the users matching the multimedia data, are determined according to the predicted operation feature, and the multimedia data is subsequently recommended to the matched users. This enables accurate recommendation of the multimedia data and ensures the accuracy of data recommendation regardless of whether operations actually performed on the multimedia data have been obtained; the multimedia data can be recommended even when no such operations are available, thereby realizing cold start of the multimedia data.
In some embodiments, the method further comprises:
calling a feature conversion model, and performing feature conversion processing on first sample data features corresponding to first sample multimedia data to obtain converted first sample operation features;
calling a characteristic distinguishing model, and distinguishing the first sample operation characteristic to obtain a distinguishing identifier of the first sample operation characteristic, wherein the distinguishing identifier is used for indicating whether the first sample operation characteristic belongs to a data characteristic or an operation characteristic;
and training the feature conversion model according to the first sample operation feature and the discrimination identifier.
In some embodiments, training the feature transformation model according to the first sample operation feature and the discriminant identifier includes:
determining a first loss value of the feature conversion model according to the first sample operation feature and the distinguishing identifier;
and training the feature conversion model according to the first loss value.
In some embodiments, after training the feature transformation model according to the first sample operation feature and the discriminant identifier, the method further includes:
calling a characteristic conversion model, and performing characteristic conversion processing on second sample data characteristics corresponding to second sample multimedia data to obtain converted second sample operation characteristics;
calling a characteristic discrimination model, performing discrimination processing on the second sample data characteristic and the second sample operation characteristic respectively, determining discrimination identifiers of the second sample data characteristic and the second sample operation characteristic, wherein the discrimination identifiers are used for indicating whether the corresponding characteristics belong to data characteristics or operation characteristics;
and training the feature discrimination model according to the second sample data feature, the second sample operation feature, and the discrimination identifiers of the second sample data feature and the second sample operation feature.
In some embodiments, training the feature discrimination model according to the second sample data feature, the second sample operation feature, and the discrimination identifiers of the second sample data feature and the second sample operation feature includes:
determining a second loss value of the feature discrimination model according to the second sample data feature, the second sample operation feature, and the discrimination identifiers of the second sample data feature and the second sample operation feature;
and training the characteristic discrimination model according to the second loss value.
In some embodiments, after the training of the feature discrimination model according to the second sample data feature, the second sample operation feature, and the discrimination identifiers of the second sample data feature and the second sample operation feature, the method further includes:
calling a feature conversion model, and performing feature transformation processing on third sample data features corresponding to third sample multimedia data to obtain transformed third sample operation features;
calling a characteristic discrimination model, performing discrimination processing on the third sample operation characteristic, and determining a discrimination identifier of the third sample operation characteristic, wherein the discrimination identifier is used for indicating whether the third sample operation characteristic belongs to a data characteristic or an operation characteristic;
and training the feature conversion model according to the third sample operation feature and the discrimination identifier.
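As described in the embodiments above, the feature conversion model and the feature discrimination model are trained alternately: the conversion model is trained so that the features it produces are judged to be operation features, and the discrimination model is trained to tell data features and operation features apart. The following is a minimal, illustrative PyTorch-style sketch of one plausible alternating (GAN-style) training step; the layer sizes, the binary cross-entropy loss, and the use of observed operation features from already-delivered multimedia data as positive examples are assumptions made for illustration and are not specified by this excerpt.

```python
import torch
import torch.nn as nn

dim = 7  # feature dimensionality; illustrative

# Feature conversion model: maps a sample data feature to a sample operation feature.
converter = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, dim))
# Feature discrimination model: outputs a discrimination identifier in (0, 1),
# interpreted here as the probability that the input is an operation feature.
discriminator = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

opt_c = torch.optim.Adam(converter.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

def train_step(sample_data_features, observed_operation_features):
    # First loss value: train the conversion model so that its converted output
    # is classified by the discriminator as an operation feature.
    converted = converter(sample_data_features)
    loss_c = bce(discriminator(converted), torch.ones(len(converted), 1))
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()

    # Second loss value: train the discrimination model to separate converted
    # features (label 0) from observed operation features (label 1).
    converted = converter(sample_data_features).detach()
    loss_d = (bce(discriminator(observed_operation_features),
                  torch.ones(len(observed_operation_features), 1))
              + bce(discriminator(converted), torch.zeros(len(converted), 1)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    return loss_c.item(), loss_d.item()
```

In this sketch the first loss value corresponds to loss_c and the second loss value to loss_d; the actual loss formulations and training data of the disclosure may differ.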
In some embodiments, obtaining target data characteristics corresponding to target multimedia data to be recommended includes:
and calling a feature extraction model, and performing feature extraction on the target multimedia data to obtain target data features corresponding to the target multimedia data.
In some embodiments, the feature extraction model includes a text feature extraction submodel and an image feature extraction submodel, the feature extraction model is called, the feature extraction is performed on the target multimedia data, and the target data feature corresponding to the target multimedia data is obtained, including:
calling a text feature extraction submodel, and performing feature extraction on the target multimedia data to obtain text features of the target multimedia data;
calling an image feature extraction sub-model, and performing feature extraction on the target multimedia data to obtain image features of the target multimedia data;
and splicing the text characteristic and the image characteristic to obtain the target data characteristic of the target multimedia data.
In some embodiments, determining a matching degree of each user with the target multimedia data according to user characteristics corresponding to a plurality of users and the predicted operation characteristics includes:
and determining the matching degree of each user and the target multimedia data according to the user characteristics, the target data characteristics and the prediction operation characteristics corresponding to the plurality of users.
In some embodiments, determining a matching degree of each user with the target multimedia data according to the user characteristics, the target data characteristics and the predicted operation characteristics corresponding to the plurality of users includes:
and for the user characteristics of any user, processing the user characteristics, the target data characteristics and the predicted operation characteristics to obtain the matching degree of the user and the target multimedia data.
In some embodiments, selecting at least one target user from the multiple users according to the matching degrees corresponding to the multiple users, and recommending the target multimedia data to the at least one target user, includes:
determining recommendation parameters corresponding to a plurality of users according to the matching degrees corresponding to the plurality of users and the resource quantity corresponding to the target multimedia data;
and selecting at least one target user from the multiple users according to the recommendation parameters corresponding to the multiple users, and recommending the target multimedia data to the at least one target user.
Fig. 3 is a flowchart illustrating a data recommendation method according to an exemplary embodiment, referring to fig. 3, the method is applied in a server, and includes the following steps:
in step 301, the server invokes a feature extraction model to perform feature extraction on the target multimedia data, so as to obtain target data features corresponding to the target multimedia data.
In the embodiment of the present disclosure, the target multimedia data is multimedia data to be recommended that has been delivered for only a short time or has not been delivered yet. In some embodiments, the target multimedia data is an advertisement, a video, a novel, or the like. Because the target multimedia data has been delivered for only a short time, few operations performed by users on it have been acquired, so it cannot be recommended accurately according to such operations; or, because the target multimedia data has not been delivered yet, operations performed by users on it, such as the number of times users view the multimedia data or the users' click-through rate on the multimedia data, cannot be acquired at all. Therefore, the operations users would perform on the target multimedia data are predicted to obtain the operation feature corresponding to the target multimedia data, so that the target multimedia data can subsequently be recommended accurately according to the operation feature, realizing cold start of the target multimedia data.
The feature extraction model is used for extracting the data features of multimedia data and representing the corresponding multimedia data by those data features. In some embodiments, the target data feature includes a data feature vector or a data feature matrix. The target data feature corresponding to the target multimedia data is extracted through the feature extraction model to ensure the accuracy of the target data feature, and the predicted operation feature of the target multimedia data is then obtained from the target data feature.
Because the server stores a plurality of multimedia data, the process of selecting the target multimedia data from the stored multimedia data includes the following two modes:
the first mode is as follows: and storing a plurality of multimedia data and the putting duration of each multimedia data in the database, and taking the multimedia data with the putting duration less than the reference duration as target multimedia data according to the putting durations of the plurality of multimedia data.
Wherein, the putting duration represents the duration of putting the multimedia data out. The reference time period is any time period, such as 10 hours, 2 days, etc. After any multimedia data is released, the user can view the multimedia data.
When the multimedia data is just released, the operations executed by the user on the multimedia data in the multimedia data releasing process are limited, so that the operations executed by the user on the multimedia data cannot be accurately obtained, and the accurate recommendation on the multimedia data cannot be realized according to the obtained operations, therefore, the multimedia data with releasing time less than reference time is taken as the target multimedia data.
The second mode is as follows: a plurality of multimedia data and the operation count corresponding to each multimedia data are stored in the database, and according to the operation counts corresponding to the plurality of multimedia data, multimedia data whose operation count is less than a reference count is taken as the target multimedia data.
After any multimedia data is delivered, each time any user performs one operation on it, its operation count is increased by 1. In some embodiments, the operation count is a click count.
When users have performed only a small number of operations on multimedia data, accurate recommendation based on those operations cannot be realized; therefore, multimedia data whose operation count is less than the reference count is taken as the target multimedia data.
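As an illustration of the two selection modes above, the following Python sketch filters stored multimedia data either by delivery duration or by operation count; the field names, the reference duration of 10 hours and the reference count of 100 are illustrative assumptions, not values specified by this disclosure.

```python
from dataclasses import dataclass

@dataclass
class MultimediaItem:
    item_id: str
    delivery_duration_hours: float  # how long the item has been delivered
    operation_count: int            # e.g. accumulated click count

def select_target_items(items, reference_duration=10.0, reference_count=100, mode="duration"):
    """Select multimedia data to treat as cold-start targets.

    mode == "duration": keep items delivered for less than the reference duration (mode one).
    mode == "count":    keep items whose operation count is below the reference count (mode two).
    """
    if mode == "duration":
        return [it for it in items if it.delivery_duration_hours < reference_duration]
    return [it for it in items if it.operation_count < reference_count]
```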
In some embodiments, the feature extraction model comprises a text feature extraction sub-model and an image feature extraction sub-model, and step 301 comprises the following steps 3011-3013:
3011. and calling the text feature extraction submodel, and performing feature extraction on the target multimedia data to obtain the text features of the target multimedia data.
In the embodiment of the present disclosure, the target multimedia data includes text and image, for example, the multimedia data is a video, and the video includes text and image. Respectively extracting the characteristics of the target multimedia data from the text dimension and the image dimension, and splicing the obtained text characteristics and the image characteristics to enrich the data contained in the target data characteristics corresponding to the target multimedia data, thereby improving the accuracy of the target data characteristics. The process of obtaining the target data features through the text feature extraction submodel and the image feature extraction submodel is shown in fig. 4.
The text feature extraction submodel is used for extracting text features from the multimedia data. In some embodiments, the text feature extraction submodel is BERT (Bidirectional Encoder Representations from Transformers) or another model. The text feature is used to represent the text contained in the target multimedia data, and in some embodiments, the text feature includes a text feature vector or a text feature matrix.
The text feature extraction submodel is a model used for extracting text features in the multimedia data, and the text features in the target multimedia data are extracted by the text feature extraction submodel, so that the accuracy of the text features is ensured.
3012. And calling the image feature extraction submodel to extract the features of the target multimedia data to obtain the image features of the target multimedia data.
The image feature extraction submodel is used for extracting image features from the target multimedia data. In some embodiments, the image feature extraction submodel is VGG-16 (a 16-layer Visual Geometry Group network) or another model. The image feature is used to represent the image contained in the target multimedia data, and in some embodiments, the image feature includes an image feature vector or an image feature matrix.
The image feature extraction submodel is a model used for extracting image features in the multimedia data, and the image features in the target multimedia data are extracted by using the image feature extraction submodel, so that the accuracy of the image features is ensured.
3013. And splicing the text characteristic and the image characteristic to obtain the target data characteristic of the target multimedia data.
Because the text features and the image features describe the target multimedia data from different dimensions, the text features and the image features are spliced together to serve as the target data features of the target multimedia data, data contained in the target data features are enriched, and the accuracy of the target data features is improved.
In some embodiments, the text feature is a text feature vector and the image feature is an image feature vector, then step 3013 includes: and splicing the text characteristic vector and the image characteristic vector to obtain a target data characteristic vector of the target multimedia data.
In some embodiments, the sum of the number of feature dimensions included in the text feature vector and the number of feature dimensions included in the image feature vector is equal to the number of feature dimensions included in the target data feature vector. For example, the text feature vector includes 4 feature dimensions, the text feature vector is (1, 1, 1, 1), the image feature vector includes 3 feature dimensions, the image feature vector is (2, 2, 2), and the obtained target data feature vector includes 7 feature dimensions, the target data feature vector is (1, 1, 1, 1, 2, 2, 2), or the target data feature vector is (2, 2, 2, 1, 1, 1, 1).
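The following Python sketch illustrates steps 3011 to 3013 with the dimensions of the example above (a 4-dimensional text feature vector and a 3-dimensional image feature vector concatenated into a 7-dimensional target data feature vector). The extraction functions are stand-ins for the text feature extraction submodel (for example BERT) and the image feature extraction submodel (for example VGG-16); their internals here are placeholders, not the models themselves.

```python
import numpy as np

def extract_text_feature(text: str, dim: int = 4) -> np.ndarray:
    # Placeholder for the text feature extraction submodel (e.g. a BERT encoder).
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(dim)

def extract_image_feature(image: np.ndarray, dim: int = 3) -> np.ndarray:
    # Placeholder for the image feature extraction submodel (e.g. a VGG-16 encoder).
    rng = np.random.default_rng(int(image.sum()) % (2**32))
    return rng.standard_normal(dim)

def extract_target_data_feature(text: str, image: np.ndarray) -> np.ndarray:
    text_feature = extract_text_feature(text)     # e.g. 4 feature dimensions
    image_feature = extract_image_feature(image)  # e.g. 3 feature dimensions
    # Concatenate so the target data feature has 4 + 3 = 7 feature dimensions.
    return np.concatenate([text_feature, image_feature])

# Usage: a 7-dimensional target data feature vector for one multimedia item.
feature = extract_target_data_feature("example ad title", np.zeros((224, 224, 3)))
assert feature.shape == (7,)
```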
It should be noted that, in the embodiment of the present disclosure, the target data feature corresponding to the target multimedia data is obtained through the feature extraction model, and in another embodiment, the server can obtain the target data feature corresponding to the target multimedia data to be recommended in other manners.
In step 302, the server invokes a feature transformation model to perform feature transformation processing on the target data features to obtain transformed predicted operation features.
The feature conversion model is used for obtaining the predicted operation feature of the multimedia data, and the predicted operation feature is used for representing the predicted operation performed on the target multimedia data. For example, the predicted operation feature represents the number of times users are predicted to view the multimedia data, or the click-through rate users are predicted to have on the multimedia data. In some embodiments, the form of the predicted operation feature includes, but is not limited to, an operation feature vector or an operation feature matrix.
And transforming the target data characteristics used for representing the target multimedia data through the characteristic transformation model to obtain transformed prediction characteristics so as to represent the predicted operation of the user on the target multimedia data, so that the user matched with the target multimedia data can be determined according to the predicted operation of the user on the target multimedia data, the accurate recommendation of the multimedia data is ensured, and the cold start of the multimedia data is realized.
In some embodiments, the target data feature is a data feature vector, the predicted operation feature is an operation feature vector, and the number of feature dimensions of the data feature vector is equal to the number of feature dimensions of the operation feature vector.
In some embodiments, the target data feature is a data feature vector, the feature conversion model includes at least two hidden layers, and step 302 includes: performing feature transformation processing on the data feature based on the first hidden layer of the at least two hidden layers, inputting the features output by the first hidden layer into the next hidden layer, performing feature transformation processing on the features output by the previous hidden layer based on the next hidden layer, and taking the features output by the last hidden layer of the at least two hidden layers as the predicted operation feature.
And performing feature transformation on the target data features through a plurality of hidden layers in the feature transformation model to predict the operation executed by the user on the target multimedia data, and obtaining the predicted operation features corresponding to the target data features, namely the predicted operation features corresponding to the target multimedia data. As shown in fig. 5, the predicted operation features corresponding to the target data features are obtained through two hidden layers in the feature transformation model.
In some embodiments, each hidden layer corresponds to a feature transformation matrix, and different hidden layers correspond to different feature transformation matrices. And performing characteristic transformation on the target data characteristics through the characteristic transformation matrixes corresponding to the plurality of hidden layers to obtain the predicted operation characteristics of the target multimedia data.
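A minimal sketch of such a feature conversion model with two hidden layers, as in FIG. 5, is shown below; the hidden width, the tanh nonlinearity and the random initialization are illustrative assumptions, and the output is kept at the same number of feature dimensions as the input data feature vector.

```python
import numpy as np

class FeatureConversionModel:
    """Two hidden layers; each layer applies its own feature transformation matrix."""

    def __init__(self, dim: int, hidden_dim: int = 64, seed: int = 0):
        rng = np.random.default_rng(seed)
        # One transformation matrix per hidden layer; different layers use different matrices.
        self.w1 = rng.standard_normal((dim, hidden_dim)) * 0.1
        self.w2 = rng.standard_normal((hidden_dim, dim)) * 0.1

    def transform(self, data_feature: np.ndarray) -> np.ndarray:
        hidden = np.tanh(data_feature @ self.w1)  # first hidden layer
        predicted_operation = hidden @ self.w2    # last hidden layer output
        # The predicted operation feature has the same number of dimensions as the data feature.
        return predicted_operation

model = FeatureConversionModel(dim=7)
predicted_operation_feature = model.transform(np.ones(7))
assert predicted_operation_feature.shape == (7,)
```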
It should be noted that, in the embodiment of the present disclosure, obtaining the predicted operation feature corresponding to the target multimedia data through the feature conversion model is only taken as an example; in other embodiments, the feature conversion processing can be performed on the target data feature in other manners to obtain the converted predicted operation feature, without invoking the feature conversion model as in step 302.
In step 303, the server determines the matching degree between each user and the target multimedia data according to the user characteristics, the target data characteristics and the predicted operation characteristics corresponding to the plurality of users.
The matching degree is used for representing the preference degree of the user to the multimedia data, the higher the matching degree is, the higher the preference degree of the user to the multimedia data is, and the lower the matching degree is, the lower the preference degree of the user to the multimedia data is. The user features are used to describe the user, and in some embodiments, the user features comprise a user feature vector or a user feature matrix. In some embodiments, the user characteristics of the user are obtained by performing characteristic extraction on the user information. The user information includes gender, age, residence, occupation, hobby, and the like.
Through the user characteristics, the target data characteristics and the prediction operation characteristics, the preference degree of each user to the target multimedia data can be determined, namely the matching degree of each user and the target multimedia data is determined, so that the target multimedia data can be recommended to the matched users according to the matching degree of each user and the target multimedia data.
In some embodiments, this step 303 comprises: and for the user characteristics of any user, matching the user characteristics, the target data characteristics and the prediction operation characteristics to obtain the matching degree of the user and the target multimedia data.
The target data characteristics and the prediction operation characteristics describe the target multimedia data from different dimensions, and the user characteristics describe the user, so that the matching degree among the user characteristics, the target data characteristics and the prediction operation characteristics is determined by matching the user characteristics, the target data characteristics and the prediction operation characteristics, and the matching degree between the user and the target multimedia data is obtained.
In some embodiments, a feature matching model is called, and the user features, the target data features and the predicted operation features are matched to obtain the matching degree of the user and the target multimedia data. The characteristic matching model is used for obtaining the matching degree of the user and the multimedia data. After the user characteristics, the target data characteristics corresponding to the multimedia data and the predicted operation characteristics are obtained, the matching degree of the user and the multimedia data can be obtained through the characteristic matching model. As shown in fig. 6, the user characteristics, the target data characteristics, and the predicted operation characteristics are input into the characteristic matching model, the characteristics are matched through the hidden layer in the characteristic matching model, and the matching degree between the user and the target multimedia data is output through the output layer.
In some embodiments, the user feature is a user feature vector, the target data feature is a data feature vector, and the predicted operation feature is an operation feature vector, each with the same number of feature dimensions. The feature values of the same dimension of the user feature vector, the data feature vector and the operation feature vector are fused to obtain a fusion feature vector, and the fusion feature values of the plurality of feature dimensions of the fusion feature vector are aggregated to obtain the matching degree between the user and the target multimedia data.
In some embodiments, when fusing the feature values of the same dimension of the user feature vector, the data feature vector and the operation feature vector, the sum of the feature values of that dimension is used as the fusion feature value of the corresponding dimension; or, the feature values of that dimension are combined by weighted averaging to obtain the fusion feature value of the corresponding dimension.
In some embodiments, when aggregating the fusion feature values of the plurality of feature dimensions of the fusion feature vector, the fusion feature values of the plurality of feature dimensions are combined by weighted averaging to obtain the matching degree; alternatively, the sum of the fusion feature values of the plurality of feature dimensions is used as the matching degree.
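The following sketch illustrates one way to compute the matching degree as described above: the three feature vectors are fused dimension by dimension (here by a weighted average; a plain sum is also permitted), and the fusion feature values are then aggregated (here by a mean; a sum is also permitted). The weights are illustrative assumptions.

```python
import numpy as np

def matching_degree(user_feature: np.ndarray,
                    data_feature: np.ndarray,
                    operation_feature: np.ndarray,
                    weights=(1.0, 1.0, 1.0)) -> float:
    """Fuse the three vectors dimension by dimension, then aggregate into one score."""
    stacked = np.stack([user_feature, data_feature, operation_feature])  # shape (3, dim)
    w = np.asarray(weights)[:, None]
    fused = (stacked * w).sum(axis=0) / w.sum()  # fusion feature vector, one value per dimension
    return float(fused.mean())                   # matching degree of the user and the item
```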
In some embodiments, the target multimedia data corresponds to a specified user feature, indicating that the target multimedia data is to be recommended to users who meet the specified user feature, and before step 303, the method includes: the server selects, according to the user features corresponding to a plurality of reference users and the specified user feature, reference users whose user features are consistent with the specified user feature as the users to whom the target multimedia data is to be recommended.
For example, if the specified user feature is "female", female users are selected from the plurality of reference users as the users to whom the target multimedia data is to be recommended; or, if the specified user feature is "resident in city XX", users resident in city XX are selected from the plurality of reference users as the users to whom the target multimedia data is to be recommended.
In step 304, the server selects at least one target user from the multiple users according to the matching degrees corresponding to the multiple users, and recommends the target multimedia data to the at least one target user.
And recommending the target multimedia data to the target user by selecting the user matched with the target multimedia data from the plurality of users as the target user according to the matching degree of each user and the multimedia data.
In some embodiments, in the matching degrees corresponding to the multiple users, the matching degree corresponding to the target user is greater than the matching degrees corresponding to other users in the multiple users.
In some embodiments, this step 304 includes: according to the matching degrees corresponding to the multiple users and the resource quantity corresponding to the target multimedia data, determining recommendation parameters corresponding to the multiple users, selecting at least one target user from the multiple users according to the recommendation parameters corresponding to the multiple users, and recommending the target multimedia data to the at least one target user.
The resource quantity is the quantity of resources that can be obtained by recommending the target multimedia data. The recommendation parameter represents the likelihood of recommending the target multimedia data to a user: the higher the recommendation parameter, the more likely the target multimedia data is to be recommended to the corresponding user, and the lower the recommendation parameter, the less likely it is. Selecting the target users from the plurality of users according to the recommendation parameters ensures that, while the target multimedia data is recommended accurately, a large quantity of resources can be obtained.
In some embodiments, for any user, the product of the user matching degree and the number of resources corresponding to the target multimedia data is used as the recommendation parameter corresponding to the user.
In some embodiments, the recommendation parameters corresponding to the multiple users are arranged in a descending order, and at least one target user is selected. And the recommendation parameter of the target user is larger than the recommendation parameters of other users in the plurality of users.
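A short sketch of step 304 as described above: each user's recommendation parameter is the product of that user's matching degree and the resource quantity of the target multimedia data, the parameters are sorted in descending order, and the top users are selected as target users. The cutoff k is an illustrative assumption.

```python
def select_target_users(match_degrees: dict, resource_quantity: float, k: int = 10):
    """match_degrees maps user_id -> matching degree with the target multimedia data."""
    recommendation_params = {uid: degree * resource_quantity
                             for uid, degree in match_degrees.items()}
    ranked = sorted(recommendation_params, key=recommendation_params.get, reverse=True)
    return ranked[:k]  # at least one target user to whom the data is recommended

# Usage
target_users = select_target_users({"u1": 0.9, "u2": 0.4, "u3": 0.7},
                                   resource_quantity=2.0, k=2)  # -> ["u1", "u3"]
```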
It should be noted that, in the embodiment of the present disclosure, the target multimedia data is recommended to the target user according to the data characteristics, the operation characteristics, and the user characteristics, but in another embodiment, the target multimedia data is recommended directly according to the operation characteristics without executing step 304.
When multimedia data is recommended, if the multimedia data has been released for a long time, the operations performed on it by users can be acquired, for example the number of times users have viewed it or its click-through rate, and when the multimedia data is recommended to other users, matched users can be determined from these acquired operations. If, however, the multimedia data has not yet been released, or has only been released for a short time, the operations performed on it by users cannot be acquired, or only few such operations exist. The multimedia data then cannot be recommended accurately on the basis of user operations and therefore is not recommended to users, which in turn means that user operations on it remain scarce; a vicious circle forms in which the multimedia data can never be recommended accurately and its cold start cannot be achieved. Therefore, in order to realize the cold start of multimedia data, the method provided by the present disclosure predicts the operations users would perform on the multimedia data when the actually performed operations cannot be acquired, determines the matched users according to the predicted operations, and recommends the multimedia data to the matched users, thereby realizing accurate recommendation of the multimedia data, ensuring the accuracy of data recommendation, and realizing the cold start of the multimedia data.
According to the method provided by the embodiment of the present disclosure, the predicted operation feature of the multimedia data represents the operations a user may perform on the multimedia data, so the users likely to operate on the multimedia data, that is, the users matched with it, can be determined from the predicted operation feature, and the multimedia data is subsequently recommended to those matched users. Accurate recommendation of the multimedia data is thus realized: whether or not actually performed operations on the multimedia data can be acquired, the accuracy of data recommendation is ensured, the multimedia data can be recommended even when no performed operations are available, and the cold start of the multimedia data is realized.
In addition, the multimedia data is recommended according to the matching degree between each user and the multimedia data, which further improves the accuracy of data recommendation.
Based on the foregoing embodiments, the present disclosure provides a method for recommending data to a user, as shown in fig. 7, applied to a server, where the method includes:
In step 701, according to the target user characteristics corresponding to the target user and the designated user characteristics corresponding to the plurality of multimedia data, the server selects, from the plurality of multimedia data, the multimedia data whose designated user characteristics conform to the target user characteristics as the first reference multimedia data to be recommended.
This step is similar to the scheme in which the server selects the user to be recommended for the target multimedia data from the plurality of reference users in step 303, and is not described herein again.
In step 702, the server selects a plurality of second reference multimedia data according to the number of resources corresponding to the selected plurality of first reference multimedia data.
The number of resources corresponding to the second reference multimedia data is greater than the number of resources corresponding to other reference multimedia data in the plurality of first reference multimedia data.
In the process of selecting the second reference multimedia data, the server may call a lightweight model to select, according to the resource quantities corresponding to the plurality of first reference multimedia data, the second reference multimedia data with large resource quantities from the plurality of first reference multimedia data.
In step 703, the server invokes a feature extraction model to perform feature extraction on each second reference multimedia data, respectively, to obtain a target data feature corresponding to each second reference multimedia data.
This step is similar to step 301 described above and will not be described herein again.
In step 704, the server invokes the feature transformation model to perform feature transformation on the target data feature corresponding to each second reference multimedia data, so as to obtain a predicted operation feature corresponding to each second reference multimedia data.
This step is similar to step 302 described above and will not be described further herein.
In step 705, for any second reference multimedia data, the server determines the matching degree between the target user and the second reference multimedia data according to the target user characteristics, the target data characteristics corresponding to the second reference multimedia data, and the predicted operation characteristics.
This step is similar to step 303 described above and will not be described herein again.
In step 706, the server determines a recommended parameter of each second reference multimedia resource according to the matching degree and the resource quantity corresponding to each second reference multimedia data.
This step is similar to the scheme for obtaining the recommended parameters corresponding to the multiple users in step 304, and is not described herein again.
In step 707, the server selects at least one target multimedia resource from the plurality of second reference multimedia resources according to the recommendation parameters of the plurality of second reference multimedia resources, and recommends the target multimedia resource to the target user.
The recommendation parameter corresponding to the target multimedia resource is larger than the recommendation parameters of the other multimedia resources among the plurality of second reference multimedia resources. By selecting the target multimedia resource to be recommended from the plurality of second reference multimedia resources according to the recommendation parameter, a matched target multimedia resource is recommended to the user while a large quantity of resources is obtained.
In the embodiment of the present disclosure, a corresponding quantity of resources can be obtained after multimedia data is accurately recommended to a user; for example, a certain benefit can be obtained after an advertisement is accurately recommended to a user. Therefore, when recommending multimedia data to the target user and the operations performed on the candidate multimedia data cannot be acquired, in order to realize the cold start of the multimedia data and obtain a large quantity of resources, the multimedia data with large resource quantities is selected and the operations users would perform on each of them are predicted. The multimedia resource matched with the target user can then be determined and recommended to the target user, which ensures the accuracy of multimedia data recommendation, allows a large quantity of resources to be obtained, and also realizes the cold start of the multimedia resource.
According to the method provided by the embodiment of the present disclosure, the predicted operation feature of the multimedia data represents the operations a user may perform on the multimedia data, so the users likely to operate on the multimedia data, that is, the users matched with it, can be determined from the predicted operation feature, and the multimedia data is subsequently recommended to those matched users. Accurate recommendation of the multimedia data is thus realized: whether or not actually performed operations on the multimedia data can be acquired, the accuracy of data recommendation is ensured, the multimedia data can be recommended even when no performed operations are available, and the cold start of the multimedia data is realized.
Based on the steps in the foregoing embodiment, an embodiment of the present disclosure provides a data recommendation process, as shown in fig. 8, where the recommendation process includes:
1. Through the user characteristics of the target user, a plurality of first reference multimedia data corresponding to the target user are targeted.
2. By recalling from the plurality of first reference multimedia data, a plurality of second reference multimedia data with large resource quantities are determined.
3. Each second reference multimedia data is recalled through the neural network model, and the recommendation parameter of each second reference multimedia data is determined.
4. Bidding is performed on the plurality of second reference multimedia data according to their recommendation parameters, the target multimedia data is selected, and the target multimedia data is recommended to the target user. A minimal code sketch of this end-to-end process is given after the list.
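The following is a compressed sketch of the fig. 7 / fig. 8 flow under stated assumptions: the feature extraction, feature conversion, and matching models are treated as plain callables, and all dictionary field names (`profile`, `designated`, `resources`, `data`, `features`) are illustrative rather than taken from the disclosure.

```python
def recommend_to_user(target_user, candidates, extract_features, convert_features,
                      match_model, recall_size=100, k=1):
    """Compressed sketch of the fig. 7 / fig. 8 flow: orient, recall by resource
    quantity, score with the models, then bid on the recommendation parameter."""
    # 1. orientation: keep candidates whose designated user characteristics fit the user
    oriented = [c for c in candidates
                if all(target_user["profile"].get(key) == value
                       for key, value in c["designated"].items())]
    # 2. recall: keep the candidates carrying the largest resource quantities
    recalled = sorted(oriented, key=lambda c: c["resources"], reverse=True)[:recall_size]
    # 3. scoring: predict operation features, then the matching degree per candidate
    scored = []
    for c in recalled:
        data_feature = extract_features(c["data"])          # feature extraction model
        operation_feature = convert_features(data_feature)  # feature conversion model
        degree = match_model(target_user["features"], data_feature, operation_feature)
        scored.append((degree * c["resources"], c))         # recommendation parameter
    # 4. bidding: recommend the candidates with the highest recommendation parameters
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c for _, c in scored[:k]]
```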
On the basis of the embodiments shown in fig. 3 and fig. 7, before the feature transformation model is called, the feature transformation model needs to be trained, and the training process is described in detail in the following embodiments.
Fig. 9 shows a training method of a feature transformation model provided in an embodiment of the present disclosure. Referring to fig. 9, the method is applied to a server and includes:
In step 901, the server invokes the feature transformation model to perform feature transformation processing on the first sample data feature corresponding to the first sample multimedia data, so as to obtain a transformed first sample operation feature.
In the embodiment of the present disclosure, the feature conversion model and the feature discrimination model are trained adversarially, that is, the two models are alternately and iteratively trained, so as to obtain an accurate feature conversion model. For example, the feature conversion model is trained first, the feature discrimination model is then trained based on the trained feature conversion model, the feature conversion model is trained again through the trained feature discrimination model, and this process is repeated so that the feature conversion model and the feature discrimination model are trained in turn, improving the accuracy of the feature conversion model.
The first sample multimedia data is any multimedia data. Acquiring the first sample data feature corresponding to the first sample multimedia data is similar to step 301 above; step 901 itself is similar to step 302 above, and is not described again here.
In step 902, the server invokes a feature discrimination model to perform discrimination processing on the first sample operation feature, so as to obtain a discrimination identifier of the first sample operation feature.
The feature discrimination model is used for discriminating whether an input feature is a data feature or an operation feature. The discrimination identifier indicates whether the first sample operation feature belongs to the data features or the operation features. In some embodiments, the discrimination identifier is a probability, representing the probability that the input feature is a data feature, or representing the probability that the input feature is an operation feature. For example, the value range of the probability is (0, 1); if the probability approaches 0, the discriminated feature is an operation feature, and if the probability approaches 1, the discriminated feature is a data feature; alternatively, if the probability approaches 0, the discriminated feature is a data feature, and if the probability approaches 1, the discriminated feature is an operation feature. The process of obtaining the discrimination identifier of an input feature through the feature discrimination model is shown in fig. 10.
The first sample operation feature is discriminated through the feature discrimination model to determine whether the first sample operation feature generated by the feature conversion model belongs to the data features or the operation features, so that whether the first sample operation feature generated by the feature conversion model is accurate can subsequently be determined according to the discrimination identifier.
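A PyTorch sketch of such a feature discrimination model; the layer sizes and the two-layer architecture are illustrative assumptions, the only property taken from the text being that the output is a probability in (0, 1) used as the discrimination identifier:

```python
import torch.nn as nn

class FeatureDiscriminator(nn.Module):
    """Sketch of the feature discrimination model: maps an input feature vector to a
    probability in (0, 1) used as the discrimination identifier."""

    def __init__(self, feature_dim=128, hidden_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
            nn.Sigmoid(),   # probability that the input is a data feature (or operation feature)
        )

    def forward(self, feature):
        return self.net(feature)
```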
In step 903, the server trains the feature transformation model according to the first sample operation feature and the discrimination identifier.
Through the first sample operation feature and the discrimination identifier, it is determined whether the first sample operation feature generated by the feature conversion model is accurate, so that the feature conversion model can subsequently be adjusted according to the discrimination identifier, improving the accuracy of the feature conversion model.
In some embodiments, this step 903 includes: determining a first loss value of the feature conversion model according to the first sample operation feature and the discrimination identifier, and training the feature conversion model according to the first loss value.
The first loss value represents the inaccuracy of the current feature conversion model. The feature conversion model is trained through the first loss value so as to reduce its loss value, thereby improving the accuracy of the feature conversion model.
In some embodiments, the first sample data feature, the first sample operation feature, and the first loss value satisfy the following relationship:
$$\mathcal{L}_{G} = \mathbb{E}_{z \sim P_{z}(z)}\big[\log\big(1 - D(G(z))\big)\big]$$

wherein $\mathcal{L}_{G}$ represents the first loss value; $G(z)$ represents the first sample operation feature obtained by converting the first sample data feature $z$; $D(G(z))$ represents the discrimination identifier of the first sample operation feature; $z \sim P_{z}(z)$ represents the distribution of the first sample data features $z$ of the plurality of first sample multimedia data; and $\mathbb{E}[\cdot]$ represents the average of the loss values over the plurality of first sample multimedia data.
It should be noted that, in the embodiment of the present disclosure, only one training pass of the feature transformation model is described as an example; in another embodiment, the feature transformation model is trained multiple times through a plurality of first sample multimedia data.
In some embodiments, the feature transformation model is iteratively trained through a plurality of first sample multimedia data, and the training of the feature transformation model is stopped in response to a loss value obtained in a current training turn being smaller than a reference loss value; or stopping training the feature conversion model in response to the training round of the feature conversion model reaching the reference round.
Training the feature conversion model once with one piece of first sample multimedia data constitutes one training round. The reference loss value is any preset value, for example 0.3 or 0.4. The reference round is any preset number of rounds, for example 10 or 15.
After the feature conversion model has been trained multiple times with the plurality of first sample multimedia data, reaching the condition for stopping its training indicates that the current feature discrimination model can no longer identify whether a sample operation feature generated by the feature conversion model belongs to the operation features or the data features. The feature discrimination model therefore needs to be trained next, so that the feature conversion model can subsequently be trained further against the trained feature discrimination model.
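The training loop with these two stopping conditions might look as follows, reusing the first_loss sketch above; the optimizer, the batch source, and the threshold values are illustrative assumptions:

```python
def train_conversion_model(converter, discriminator, optimizer, sample_batches,
                           reference_loss=0.3, reference_rounds=10):
    """Iterate the feature conversion model over batches of first sample data features
    and stop once the loss drops below the reference loss value or the reference number
    of rounds is reached."""
    for round_idx, data_features in enumerate(sample_batches, start=1):
        operation_features = converter(data_features)            # G(z)
        loss = first_loss(discriminator, operation_features)     # see first_loss above
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if loss.item() < reference_loss or round_idx >= reference_rounds:
            break
```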
In step 904, the server invokes a feature transformation model to perform feature transformation on the second sample data features corresponding to the second sample multimedia data, so as to obtain transformed second sample operation features.
Wherein the second sample multimedia data is different from the first sample multimedia data. This step 904 is similar to the step 901 described above and will not be described further herein.
In step 905, the server invokes the feature discrimination model to perform discrimination processing on the second sample data feature and the second sample operation feature respectively, so as to determine the discrimination identifiers of the second sample data feature and the second sample operation feature.
Wherein the discrimination indication is used to indicate whether the corresponding feature belongs to a data feature or an operational feature.
Since the feature discrimination model is used for discriminating whether an input feature belongs to the data features or the operation features, in order to ensure its accuracy, the feature discrimination model needs to perform discrimination processing on both sample data features and sample operation features when it is trained, so that the trained feature discrimination model can distinguish data features from operation features.
Calling the feature discrimination model to perform discrimination processing on the second sample data feature and the second sample operation feature respectively is similar to step 902 above, and is not described again here.
In step 906, the server trains the feature discrimination model according to the second sample data feature, the second sample operation feature, and the discrimination identifiers of the second sample data feature and the second sample operation feature.
Through the second sample data feature, the second sample operation feature, and their discrimination identifiers, it is determined whether the feature discrimination model discriminates features accurately, so that the feature discrimination model can be adjusted to enhance its discrimination performance and improve its accuracy.
In some embodiments, this step 906 includes: taking the second sample data feature as a positive sample, and training the feature discrimination model according to the second sample data feature, the second sample operation feature, and the discrimination identifiers of the second sample data feature and the second sample operation feature.
In some embodiments, this step 906 includes: determining a second loss value of the feature discrimination model according to the second sample data feature, the second sample operation feature, and the discrimination identifiers of the second sample data feature and the second sample operation feature, and training the feature discrimination model according to the second loss value.
The second loss value represents the inaccuracy of the current feature discrimination model. The feature discrimination model is trained through the second loss value so as to reduce its loss value and enhance its discrimination performance, thereby improving the accuracy of the feature discrimination model.
In some embodiments, the second sample data feature, the second sample operation feature, and the second loss value satisfy the following relationship:
$$\mathcal{L}_{D} = \mathbb{E}_{x \sim P_{data}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim P_{z}(z)}\big[\log\big(1 - D(G(z))\big)\big]$$

wherein $\mathcal{L}_{D}$ represents the second loss value; $x \sim P_{data}(x)$ represents the distribution of the second sample data features $x$ of the plurality of second sample multimedia data; $z \sim P_{z}(z)$ represents the distribution of the second sample data features $z$ of the plurality of second sample multimedia data; $D(x)$ represents the discrimination identifier of the second sample data feature; $G(z)$ represents the second sample operation feature; $D(G(z))$ represents the discrimination identifier of the second sample operation feature; and $\mathbb{E}[\cdot]$ represents the average of the loss values over the plurality of second sample multimedia data.
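A matching PyTorch sketch of this second loss value, under the same assumptions as the first_loss sketch above; the sign convention (returning the negated objective so it can be minimized) and the epsilon clamp are implementation choices, not part of the disclosure:

```python
import torch

def second_loss(discriminator, second_sample_data_features, second_sample_operation_features,
                eps=1e-7):
    """Second loss value of the feature discrimination model: real second sample data
    features x act as positive samples, converted second sample operation features G(z)
    as negatives. Returned negated so that minimizing it maximizes
    E[log D(x)] + E[log(1 - D(G(z)))]."""
    d_real = discriminator(second_sample_data_features)          # D(x)
    d_fake = discriminator(second_sample_operation_features)     # D(G(z))
    objective = (torch.log(d_real.clamp(min=eps)).mean()
                 + torch.log((1.0 - d_fake).clamp(min=eps)).mean())
    return -objective
```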
It should be noted that, in the embodiment of the present disclosure, only one training pass of the feature discrimination model is described as an example; in another embodiment, the feature discrimination model is trained multiple times through a plurality of second sample multimedia data.
In some embodiments, the feature discrimination model is iteratively trained through a plurality of second sample multimedia data, and the training of the feature discrimination model is stopped in response to a loss value obtained in a current training round being smaller than a reference loss value; or stopping training the feature discrimination model in response to the training round of the feature discrimination model reaching the reference round.
Training the feature discrimination model once with one piece of second sample multimedia data constitutes one training round. After the feature discrimination model has been trained multiple times with the plurality of second sample multimedia data, reaching the condition for stopping its training indicates that the current feature discrimination model can accurately distinguish the sample data features from the sample operation features generated by the current feature conversion model, so the feature conversion model can subsequently be trained further through the currently obtained feature discrimination model.
In step 907, the server invokes a feature transformation model to perform feature transformation processing on third sample data features corresponding to the third sample multimedia data, so as to obtain transformed third sample operation features.
The third sample multimedia data is different from the first sample multimedia data and the second sample multimedia data.
Step 907 is similar to step 901 described above and will not be described again here.
In step 908, the server invokes the feature discrimination model to perform discrimination processing on the third sample operation feature, and determines a discrimination identifier of the third sample operation feature.
Wherein the discrimination indicator is used to indicate whether the third sample operating characteristic belongs to the data characteristic or the operating characteristic. Step 908 is similar to step 902 described above and will not be described further herein.
In step 909, the server trains the feature transformation model according to the third sample operation features and the discrimination identifier.
This step 909 is similar to the step 903 described above and will not be described again.
It should be noted that the embodiment of the present disclosure is described by taking as an example training the feature conversion model first, then the feature discrimination model, and finally the feature conversion model again. In another embodiment, after step 909, the feature discrimination model is trained again based on the trained feature conversion model, the feature conversion model is then trained based on the trained feature discrimination model, and this process is repeated so that the feature discrimination model and the feature conversion model are alternately and iteratively trained; in response to the feature conversion model and the feature discrimination model reaching a balanced state, the alternate iterative training of the two models is stopped.
In some embodiments, after the feature conversion model and the feature discrimination model have gone through multiple alternate iterative training rounds, the loss value of the feature conversion model converges when it is trained again and the loss value of the feature discrimination model converges when it is trained again, which indicates that the current feature conversion model and feature discrimination model have reached a balanced state; therefore, the alternate iterative training of the two models is stopped.
In the process of training the feature transformation model, the feature transformation model serves as the generator and the feature discrimination model serves as the discriminator, and the two together form a GAN (Generative Adversarial Network). Training this adversarial network yields an accurate feature transformation model, so that the obtained feature transformation model can accurately predict the operations to be performed on the multimedia data; accurate recommendation of the multimedia data can then be ensured according to the predicted operations, realizing the cold start of the multimedia data.
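A compressed sketch of this adversarial training, reusing the first_loss and second_loss sketches above. It alternates once per batch rather than in blocks of training rounds as described in the disclosure, and the convergence test used as a proxy for the balanced state is an illustrative heuristic:

```python
def adversarial_training(converter, discriminator, g_opt, d_opt, batches, tolerance=1e-3):
    """Alternately train the feature conversion model (generator) and the feature
    discrimination model (discriminator); stop once both losses stop changing."""
    prev_g = prev_d = float("inf")
    for x_features, z_features in batches:       # two disjoint sample batches per round
        # discriminator step: real data features x vs. converted operation features G(z)
        d_loss = second_loss(discriminator, x_features, converter(z_features).detach())
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()
        # generator step: train the conversion model against the updated discriminator
        g_loss = first_loss(discriminator, converter(z_features))
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()
        # approximate balanced state: neither loss is still moving
        if abs(prev_g - g_loss.item()) < tolerance and abs(prev_d - d_loss.item()) < tolerance:
            break
        prev_g, prev_d = g_loss.item(), d_loss.item()
    return converter, discriminator
```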
According to the method provided by the embodiment of the present disclosure, the feature conversion model and the feature discrimination model are trained adversarially, that is, alternately and iteratively, so that the two models reach a balanced state and an accurate feature conversion model is obtained.
Fig. 11 is a block diagram illustrating a data recommendation device according to an example embodiment. Referring to fig. 11, the apparatus includes:
a feature obtaining unit 1101 configured to obtain a target data feature corresponding to target multimedia data to be recommended;
a transformation processing unit 1102 configured to invoke the feature transformation model, perform feature transformation processing on the target data feature, and obtain a transformed predicted operation feature, where the predicted operation feature is used to represent a predicted operation performed on the target multimedia data;
a determining unit 1103 configured to determine, according to user characteristics corresponding to a plurality of users and the predicted operation characteristics, a matching degree of each user with the target multimedia data;
a recommending unit 1104 configured to select at least one target user from the multiple users according to the matching degrees corresponding to the multiple users, and recommend the target multimedia data to the at least one target user.
In some embodiments, referring to fig. 12, the apparatus further comprises:
a transformation processing unit 1102, configured to invoke a feature transformation model, perform feature transformation processing on a first sample data feature corresponding to the first sample multimedia data, to obtain a transformed first sample operation feature;
a distinguishing processing unit 1105 configured to invoke a feature distinguishing model, and perform distinguishing processing on the first sample operation feature to obtain a distinguishing identifier of the first sample operation feature, where the distinguishing identifier is used to indicate whether the first sample operation feature belongs to a data feature or an operation feature;
a first training unit 1106, configured to train the feature transformation model according to the first sample operation feature and the discrimination identifier.
In some embodiments, referring to fig. 12, the first training unit 1106 includes:
a first determining subunit 1161 configured to determine a first loss value of the feature transformation model according to the first sample operation feature and the discrimination identifier;
a first training subunit 1162 configured to train the feature conversion model according to the first loss value.
In some embodiments, referring to fig. 12, the apparatus further comprises:
the transformation processing unit 1102 is configured to invoke a feature transformation model, perform feature transformation processing on second sample data features corresponding to second sample multimedia data, and obtain transformed second sample operation features;
the distinguishing processing unit 1105 is further configured to invoke a feature distinguishing model, respectively distinguish the second sample data feature and the second sample operation feature, and determine distinguishing identifiers of the second sample data feature and the second sample operation feature, where the distinguishing identifiers are used to indicate whether the corresponding features belong to data features or operation features;
the second training unit 1107 is configured to train the feature discrimination model according to the second sample data feature, the second sample operation feature, and the discrimination identifiers of the second sample data feature and the second sample operation feature.
In some embodiments, referring to fig. 12, second training unit 1107 comprises:
a second determining subunit 1171, configured to determine a second loss value of the feature discrimination model according to the second sample data feature, the second sample operation feature, and the discrimination identifiers of the second sample data feature and the second sample operation feature;
a second training subunit 1172 configured to train the feature discrimination model according to the second loss value.
In some embodiments, referring to fig. 12, the apparatus further comprises:
the transformation processing unit 1102 is configured to invoke a feature transformation model, perform feature transformation processing on third sample data features corresponding to the third sample multimedia data, and obtain transformed third sample operation features;
the distinguishing processing unit 1105 is further configured to invoke a feature discrimination model, perform discrimination processing on the third sample operation feature, and determine a discrimination identifier of the third sample operation feature, where the discrimination identifier is used to indicate whether the third sample operation feature belongs to a data feature or an operation feature;
and a third training unit 1108 configured to train the feature transformation model according to the third sample operation feature and the discrimination identifier.
In some embodiments, referring to fig. 12, the feature acquisition unit 1101 includes:
and the feature extraction subunit 1111 is configured to invoke a feature extraction model, perform feature extraction on the target multimedia data, and obtain target data features corresponding to the target multimedia data.
In some embodiments, the feature extraction model includes a text feature extraction sub-model and an image feature extraction sub-model, and the feature extraction sub-unit 1111 is configured to invoke the text feature extraction sub-model to perform feature extraction on the target multimedia data, so as to obtain text features of the target multimedia data; calling an image feature extraction sub-model, and performing feature extraction on the target multimedia data to obtain image features of the target multimedia data; and splicing the text characteristic and the image characteristic to obtain the target data characteristic of the target multimedia data.
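A brief sketch of this two-branch extraction, assuming the text and image feature extraction sub-models are callables returning batched feature tensors; the function and argument names are illustrative:

```python
import torch

def extract_target_data_feature(text_submodel, image_submodel, text_input, image_input):
    """Sketch of the two-branch feature extraction: the text sub-model and the image
    sub-model each produce a feature tensor, and the two are concatenated (spliced)
    into the target data feature."""
    text_feature = text_submodel(text_input)        # e.g. shape (batch, d_text)
    image_feature = image_submodel(image_input)     # e.g. shape (batch, d_image)
    return torch.cat([text_feature, image_feature], dim=-1)   # (batch, d_text + d_image)
```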
In some embodiments, referring to fig. 12, the determining unit 1103 includes:
the third determining subunit 1131 is configured to determine a matching degree between each user and the target multimedia data according to the user characteristics, the target data characteristics, and the predicted operation characteristics corresponding to the multiple users.
In some embodiments, the third determining subunit 1131 is configured to, for the user characteristics of any user, perform matching processing on the user characteristics, the target data characteristics, and the predicted operation characteristics to obtain a matching degree between the user and the target multimedia data.
In some embodiments, the recommending unit 1104 is configured to determine recommendation parameters corresponding to a plurality of users according to the matching degrees corresponding to the plurality of users and the number of resources corresponding to the target multimedia data; and selecting at least one target user from the multiple users according to the recommendation parameters corresponding to the multiple users, and recommending the target multimedia data to the at least one target user.
With regard to the apparatus in the above-described embodiment, the specific manner in which each unit performs the operation has been described in detail in the embodiment related to the method, and will not be described in detail here.
Fig. 13 is a block diagram illustrating a structure of a terminal 1300 according to an example embodiment. The terminal 1300 may be a portable mobile terminal such as: a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. Terminal 1300 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and the like.
Terminal 1300 includes: a processor 1301 and a memory 1302.
Processor 1301 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like. The processor 1301 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1301 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also referred to as a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1301 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing content that the display screen needs to display. In some embodiments, processor 1301 may further include an AI (Artificial Intelligence) processor for processing computational operations related to machine learning.
Memory 1302 may include one or more computer-readable storage media, which may be non-transitory. The memory 1302 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1302 is used to store at least one program code for execution by processor 1301 to implement the data recommendation methods provided by method embodiments of the present disclosure.
In some embodiments, terminal 1300 may further optionally include: a peripheral interface 1303 and at least one peripheral. Processor 1301, memory 1302, and peripheral interface 1303 may be connected by a bus or signal line. Each peripheral device may be connected to the peripheral device interface 1303 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1304, display screen 1305, camera assembly 1306, audio circuitry 1307, positioning assembly 1308, and power supply 1309.
Peripheral interface 1303 may be used to connect at least one peripheral associated with I/O (Input/Output) to processor 1301 and memory 1302. In some embodiments, processor 1301, memory 1302, and peripheral interface 1303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1301, the memory 1302, and the peripheral device interface 1303 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 1304 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 1304 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 1304 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. In some embodiments, the radio frequency circuitry 1304 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 1304 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1304 may also include NFC (Near Field Communication) related circuits, which are not limited by this disclosure.
The display screen 1305 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1305 is a touch display screen, the display screen 1305 also has the ability to capture touch signals on or over the surface of the display screen 1305. The touch signal may be input to the processor 1301 as a control signal for processing. At this point, the display 1305 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, display 1305 may be one, disposed on the front panel of terminal 1300; in other embodiments, display 1305 may be at least two, either on different surfaces of terminal 1300 or in a folded design; in other embodiments, display 1305 may be a flexible display disposed on a curved surface or on a folded surface of terminal 1300. Even further, the display 1305 may be arranged in a non-rectangular irregular figure, i.e., a shaped screen. The Display 1305 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or the like.
The camera assembly 1306 is used to capture images or video. In some embodiments, camera assembly 1306 includes a front camera and a rear camera. The front camera is arranged on the front panel of the terminal, and the rear camera is arranged on the back of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1306 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 1307 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1301 for processing, or inputting the electric signals to the radio frequency circuit 1304 for realizing voice communication. The microphones may be provided in a plurality, respectively, at different portions of the terminal 1300 for the purpose of stereo sound collection or noise reduction. The microphone may also be an array microphone or an omni-directional acquisition microphone. The speaker is used to convert electrical signals from the processor 1301 or the radio frequency circuitry 1304 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 1307 may also include a headphone jack.
The positioning component 1308 is used for positioning the current geographic location of the terminal 1300 to implement navigation or LBS (Location Based Service). The positioning component 1308 may be a positioning component based on the Global Positioning System (GPS) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
Power supply 1309 is used to provide power to various components in terminal 1300. The power source 1309 may be alternating current, direct current, disposable or rechargeable. When the power source 1309 comprises a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 1300 also includes one or more sensors 1310. The one or more sensors 1310 include, but are not limited to: acceleration sensor 1311, gyro sensor 1312, pressure sensor 1313, fingerprint sensor 1314, optical sensor 1315, and proximity sensor 1316.
The acceleration sensor 1311 can detect the magnitude of acceleration on three coordinate axes of the coordinate system established with the terminal 1300. For example, the acceleration sensor 1311 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 1301 may control the display screen 1305 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1311. The acceleration sensor 1311 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1312 may detect the body direction and the rotation angle of the terminal 1300, and the gyro sensor 1312 may cooperate with the acceleration sensor 1311 to acquire a 3D motion of the user with respect to the terminal 1300. Processor 1301, based on the data collected by gyroscope sensor 1312, may perform the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensor 1313 may be located on the side frame of terminal 1300 and/or underneath display 1305. When the pressure sensor 1313 is disposed on the side frame of the terminal 1300, a user's holding signal to the terminal 1300 may be detected, and the processor 1301 performs left-right hand recognition or shortcut operation according to the holding signal acquired by the pressure sensor 1313. When the pressure sensor 1313 is disposed at a lower layer of the display screen 1305, the processor 1301 controls an operability control on the UI interface according to a pressure operation of the user on the display screen 1305. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1314 is used for collecting the fingerprint of the user, and the processor 1301 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 1314, or the fingerprint sensor 1314 identifies the identity of the user according to the collected fingerprint. When the identity of the user is identified as a trusted identity, the processor 1301 authorizes the user to perform relevant sensitive operations, including unlocking a screen, viewing encrypted information, downloading software, paying, changing settings, and the like. The fingerprint sensor 1314 may be disposed on the front, back, or side of the terminal 1300. When a physical button or vendor Logo is provided on the terminal 1300, the fingerprint sensor 1314 may be integrated with the physical button or vendor Logo.
The optical sensor 1315 is used to collect the ambient light intensity. In one embodiment, the processor 1301 may control the display brightness of the display screen 1305 according to the ambient light intensity collected by the optical sensor 1315. Specifically, when the ambient light intensity is high, the display brightness of the display screen 1305 is increased; when the ambient light intensity is low, the display brightness of the display screen 1305 is reduced. In another embodiment, the processor 1301 can also dynamically adjust the shooting parameters of the camera assembly 1306 according to the ambient light intensity collected by the optical sensor 1315.
A proximity sensor 1316, also known as a distance sensor, is disposed on a front panel of terminal 1300. The proximity sensor 1316 is used to gather the distance between the user and the front face of the terminal 1300. In one embodiment, when the proximity sensor 1316 detects that the distance between the user and the front face of the terminal 1300 gradually decreases, the processor 1301 controls the display 1305 to switch from the bright screen state to the dark screen state; when the proximity sensor 1316 detects that the distance between the user and the front face of the terminal 1300 gradually increases, the processor 1301 controls the display 1305 to switch from the dark screen state to the bright screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 13 is not intended to be limiting with respect to terminal 1300 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be employed.
Fig. 14 is a schematic structural diagram of a server according to an exemplary embodiment. The server 1400 may vary greatly in configuration or performance, and may include one or more processors (CPUs) 1401 and one or more memories 1402, where the memory 1402 stores at least one program code, and the at least one program code is loaded and executed by the processor 1401 to implement the methods provided by the above method embodiments. Of course, the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for input and output, and the server may further include other components for implementing the functions of the device, which are not described here.
The server 1400 may be used for executing the steps executed by the server in the data recommendation method.
In an exemplary embodiment, a non-transitory computer-readable storage medium is also provided; when the instructions in the storage medium are executed by a processor of a server, the server is enabled to perform the steps performed by the server in the above data recommendation method. In some embodiments, the storage medium may be a non-transitory computer-readable storage medium, for example, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, there is also provided a server, characterized in that the server includes:
one or more processors;
volatile or non-volatile memory for storing one or more processor-executable instructions;
wherein the one or more processors are configured to execute the steps executed by the server in the data recommendation method.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, and when executed by a processor of a server, the instructions in the storage medium enable the server to perform the steps performed by the server in the data recommendation method.
In an exemplary embodiment, a computer program product is also provided, wherein instructions of the computer program product, when executed by a processor of a server, enable the server to perform the steps performed by the server in the data recommendation method.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements that have been described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A method for recommending data, the method comprising:
acquiring target data characteristics corresponding to target multimedia data to be recommended;
calling a feature conversion model, and performing feature conversion processing on the target data features to obtain converted predicted operation features, wherein the predicted operation features are used for expressing predicted operations executed on the target multimedia data;
determining the matching degree of each user and the target multimedia data according to the user characteristics corresponding to the plurality of users and the predicted operation characteristics;
and selecting at least one target user from the plurality of users according to the matching degrees corresponding to the plurality of users, and recommending the target multimedia data to the at least one target user.
2. The method of claim 1, further comprising:
calling the feature conversion model, and performing feature conversion processing on a first sample data feature corresponding to first sample multimedia data to obtain a converted first sample operation feature;
calling a feature discrimination model, and performing discrimination processing on the first sample operation feature to obtain a discrimination identifier of the first sample operation feature, wherein the discrimination identifier is used for indicating whether the first sample operation feature belongs to a data feature or an operation feature;
and training the feature conversion model according to the first sample operation feature and the discrimination identifier.
3. The method of claim 2, wherein the training the feature conversion model according to the first sample operation feature and the discrimination identifier comprises:
determining a first loss value of the feature conversion model according to the first sample operation feature and the discrimination identifier;
and training the feature conversion model according to the first loss value.
4. The method of claim 2, wherein after the training of the feature conversion model according to the first sample operation feature and the discrimination identifier, the method further comprises:
calling the feature conversion model, and performing feature conversion processing on a second sample data feature corresponding to second sample multimedia data to obtain a converted second sample operation feature;
calling the feature discrimination model, performing discrimination processing on the second sample data feature and the second sample operation feature respectively, and determining discrimination identifiers of the second sample data feature and the second sample operation feature, wherein the discrimination identifiers are used for indicating whether the corresponding features belong to data features or operation features;
and training the feature discrimination model according to the second sample data feature, the second sample operation feature, and the discrimination identifiers of the second sample data feature and the second sample operation feature.
5. The method of claim 4, wherein the training the feature discrimination model according to the second sample data feature, the second sample operation feature, and the discrimination identifiers of the second sample data feature and the second sample operation feature comprises:
determining a second loss value of the feature discrimination model according to the second sample data feature, the second sample operation feature, and the discrimination identifiers of the second sample data feature and the second sample operation feature;
and training the feature discrimination model according to the second loss value.
6. The method of claim 4, wherein after the training of the feature discrimination model according to the second sample data feature, the second sample operation feature, and the discrimination identifiers of the second sample data feature and the second sample operation feature, the method further comprises:
calling the feature conversion model, and performing feature conversion processing on a third sample data feature corresponding to third sample multimedia data to obtain a converted third sample operation feature;
calling the feature discrimination model, performing discrimination processing on the third sample operation feature, and determining a discrimination identifier of the third sample operation feature, wherein the discrimination identifier is used for indicating whether the third sample operation feature belongs to a data feature or an operation feature;
and training the feature conversion model according to the third sample operation feature and the discrimination identifier.
7. The method according to claim 1, wherein the obtaining of the target data feature corresponding to the target multimedia data to be recommended comprises:
and calling a feature extraction model, and performing feature extraction on the target multimedia data to obtain target data features corresponding to the target multimedia data.
8. A data recommendation apparatus, characterized in that the apparatus comprises:
the characteristic acquisition unit is configured to acquire target data characteristics corresponding to target multimedia data to be recommended;
the transformation processing unit is configured to call a feature transformation model, perform feature transformation processing on the target data features, and obtain transformed prediction operation features, wherein the prediction operation features are used for representing predicted operations performed on the target multimedia data;
the determining unit is configured to determine the matching degree of each user and the target multimedia data according to the user characteristics corresponding to the plurality of users and the predicted operation characteristics;
and the recommending unit is configured to select at least one target user from the multiple users according to the matching degrees corresponding to the multiple users and recommend the target multimedia data to the at least one target user.
9. A server, characterized in that the server comprises:
one or more processors;
a volatile or non-volatile memory for storing instructions executable by the one or more processors;
wherein the one or more processors are configured to perform the data recommendation method of any one of claims 1-7.
10. A non-transitory computer-readable storage medium, wherein instructions in the storage medium, when executed by a processor of a server, enable the server to perform a data recommendation method as recited in any one of claims 1-7.
CN202011325270.1A 2020-11-23 2020-11-23 Data recommendation method and device, server and storage medium Pending CN114547429A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011325270.1A CN114547429A (en) 2020-11-23 2020-11-23 Data recommendation method and device, server and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011325270.1A CN114547429A (en) 2020-11-23 2020-11-23 Data recommendation method and device, server and storage medium

Publications (1)

Publication Number Publication Date
CN114547429A true CN114547429A (en) 2022-05-27

Family

ID=81659125

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011325270.1A Pending CN114547429A (en) 2020-11-23 2020-11-23 Data recommendation method and device, server and storage medium

Country Status (1)

Country Link
CN (1) CN114547429A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115203577A (en) * 2022-09-14 2022-10-18 Beijing Dajia Internet Information Technology Co., Ltd. Object recommendation method, and training method and device of object recommendation model
CN117932089A (en) * 2024-03-25 2024-04-26 Nanjing University of Chinese Medicine Knowledge graph-based data analysis method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109388739A (en) * 2017-08-03 2019-02-26 合信息技术(北京)有限公司 The recommended method and device of multimedia resource
CN109740068A (en) * 2019-01-29 2019-05-10 Tencent Technology (Beijing) Co., Ltd. Media data recommendation method, device and storage medium
CN110704727A (en) * 2019-08-30 2020-01-17 China Ping An Life Insurance Co., Ltd. Information pushing method and device and computer equipment
CN111523682A (en) * 2020-07-03 2020-08-11 Alipay (Hangzhou) Information Technology Co., Ltd. Method and device for training interactive prediction model and predicting interactive object
CN111931062A (en) * 2020-08-28 2020-11-13 Tencent Technology (Shenzhen) Co., Ltd. Training method and related device of information recommendation model

Similar Documents

Publication Publication Date Title
CN110222789B (en) Image recognition method and storage medium
CN110865754B (en) Information display method and device and terminal
CN111104980B (en) Method, device, equipment and storage medium for determining classification result
CN110163066B (en) Multimedia data recommendation method, device and storage medium
CN108320756B (en) Method and device for detecting whether audio is pure music audio
CN110933468A (en) Playing method, playing device, electronic equipment and medium
CN111897996A (en) Topic label recommendation method, device, equipment and storage medium
CN111083516A (en) Live broadcast processing method and device
CN111739517A (en) Speech recognition method, speech recognition device, computer equipment and medium
CN111738365B (en) Image classification model training method and device, computer equipment and storage medium
CN112084811A (en) Identity information determining method and device and storage medium
CN113918767A (en) Video clip positioning method, device, equipment and storage medium
US20230095250A1 (en) Method for recommending multimedia resource and electronic device
CN113613028A (en) Live broadcast data processing method, device, terminal, server and storage medium
CN111613213A (en) Method, device, equipment and storage medium for audio classification
CN114547429A (en) Data recommendation method and device, server and storage medium
CN110837557A (en) Abstract generation method, device, equipment and medium
CN114691860A (en) Training method and device of text classification model, electronic equipment and storage medium
CN109829067B (en) Audio data processing method and device, electronic equipment and storage medium
CN111563201A (en) Content pushing method, device, server and storage medium
CN111641853B (en) Multimedia resource loading method and device, computer equipment and storage medium
CN114385854A (en) Resource recommendation method and device, electronic equipment and storage medium
CN115905374A (en) Application function display method and device, terminal and storage medium
CN113377976A (en) Resource searching method and device, computer equipment and storage medium
CN112132472A (en) Resource management method and device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination