CN117493688A - Teaching information recommendation method and device based on user portrait and electronic equipment - Google Patents


Info

Publication number
CN117493688A
Authority
CN
China
Prior art keywords
user
information
user identification
browsing
identification result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311555105.9A
Other languages
Chinese (zh)
Inventor
龚旭东
高敏
叶展召
董振胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Vany Technology Co ltd
Original Assignee
Shanghai Vany Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Vany Technology Co ltd
Priority to CN202311555105.9A
Publication of CN117493688A
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9535Search customisation based on user profiles and personalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0499Feedforward networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/20Education
    • G06Q50/205Education administration or guidance
    • G06Q50/2057Career enhancement or continuing education service

Abstract

The embodiments of the present disclosure disclose a teaching information recommendation method and device based on user portraits, and an electronic device. One embodiment of the method includes the following steps: inputting each user portrait in a user portrait set into a user identification model to generate a user identification result, thereby obtaining a user identification result set; classifying the user identification results in the user identification result set to obtain a user identification result group set; for each user identification result group in the user identification result group set, collecting browsing data information of each user corresponding to the user identification result group for a target teaching page; inputting the teaching page real-time browsing data and the teaching page historical browsing data sequence into a student user browsing intention recognition model to obtain student user browsing intention recognition information; and pushing associated teaching information to each user according to the student user browsing intention recognition information. According to this embodiment, teaching information meeting students' needs can be pushed to them according to their user portraits.

Description

Teaching information recommendation method and device based on user portrait and electronic equipment
Technical Field
The embodiment of the disclosure relates to the technical field of computers, in particular to a teaching information recommendation method and device based on user portraits and electronic equipment.
Background
With the increasing number of college students, pushing teaching information to different types of students has become an urgent problem. At present, teaching information (such as elective course information or extension and enrichment information) is generally pushed to students in the following manner: students select teaching information on their own, and related knowledge is then pushed to them.
However, the above manner generally has the following technical problems:
first, it is difficult for students to select appropriate teaching information, so the browsing time of student users is wasted;
second, when teaching information is pushed to students, the students' actual portrait features (e.g., academic and subject information) are not considered, so the students are not classified for pushing and the pushed information is inaccurate, which wastes pushing resources;
third, when learning knowledge information is pushed to students, it cannot be pushed according to the students' actual preferences, which also wastes pushing resources.
The above information disclosed in this background section is only for enhancement of understanding of the background of the inventive concept and, therefore, may contain information that does not constitute prior art already known in this country to a person of ordinary skill in the art.
Disclosure of Invention
This summary is provided to introduce concepts in a simplified form that are further described below in the detailed description. It is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose a user portrait based teaching information recommendation method, apparatus, electronic device, and computer readable medium to solve one or more of the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a user portrait based teaching information recommendation method, the method including: acquiring a preset user portrait training sample set; performing model training on an initial user identification model according to the user portrait training sample set to obtain a user identification model; acquiring a user portrait of each user to be recommended to obtain a user portrait set; inputting each user portrait in the user portrait set into the user identification model to generate a user identification result, thereby obtaining a user identification result set, wherein each user identification result in the user identification result set represents the subject information of a user; classifying the user identification results in the user identification result set to obtain a user identification result group set; and for each user identification result group in the user identification result group set, executing the following processing steps: collecting browsing data information of each user corresponding to the user identification result group for a target teaching page, wherein the browsing data information includes: teaching page real-time browsing data and a corresponding teaching page historical browsing data sequence; inputting the teaching page real-time browsing data and the teaching page historical browsing data sequence into a pre-trained student user browsing intention recognition model to obtain student user browsing intention recognition information; and pushing each piece of teaching information associated with the target teaching page to each user according to the student user browsing intention recognition information.
In a second aspect, some embodiments of the present disclosure provide a user portrait based teaching information recommendation apparatus, the apparatus including: a first acquisition unit configured to acquire a preset user portrait training sample set; a training unit configured to perform model training on an initial user identification model according to the user portrait training sample set to obtain a user identification model; a second acquisition unit configured to acquire a user portrait of each user to be recommended to obtain a user portrait set; an input unit configured to input each user portrait in the user portrait set into the user identification model to generate a user identification result, thereby obtaining a user identification result set, wherein each user identification result in the user identification result set represents the subject information of a user; a classification unit configured to classify the user identification results in the user identification result set to obtain a user identification result group set; and a pushing unit configured to execute, for each user identification result group in the user identification result group set, the following processing steps: collecting browsing data information of each user corresponding to the user identification result group for a target teaching page, wherein the browsing data information includes: teaching page real-time browsing data and a corresponding teaching page historical browsing data sequence; inputting the teaching page real-time browsing data and the teaching page historical browsing data sequence into a pre-trained student user browsing intention recognition model to obtain student user browsing intention recognition information; and pushing each piece of teaching information associated with the target teaching page to each user according to the student user browsing intention recognition information.
In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; and a storage device having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement the method described in any of the implementations of the first aspect above.
In a fourth aspect, some embodiments of the present disclosure provide a computer readable medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method described in any of the implementations of the first aspect above.
The above embodiments of the present disclosure have the following advantageous effects: with the user portrait based teaching information recommendation method of some embodiments of the present disclosure, teaching information meeting students' needs can be pushed to them according to their user portraits, so that the browsing time of student users is not wasted. Specifically, the reason the browsing time of student users is wasted is that it is difficult for students to select appropriate teaching information. Based on this, the user portrait based teaching information recommendation method of some embodiments of the present disclosure first acquires a preset user portrait training sample set. Second, model training is performed on an initial user identification model according to the user portrait training sample set to obtain a user identification model. Then, a user portrait of each user to be recommended is acquired to obtain a user portrait set. Next, each user portrait in the user portrait set is input into the user identification model to generate a user identification result, thereby obtaining a user identification result set, wherein each user identification result in the user identification result set represents the subject information of a user. This facilitates the classification of student users, so that teaching information meeting their needs can be pushed to them according to their user portraits. Then, the user identification results in the user identification result set are classified to obtain a user identification result group set. Finally, for each user identification result group in the user identification result group set, the following processing steps are executed: collecting browsing data information of each user corresponding to the user identification result group for a target teaching page, wherein the browsing data information includes teaching page real-time browsing data and a corresponding teaching page historical browsing data sequence; inputting the teaching page real-time browsing data and the teaching page historical browsing data sequence into a pre-trained student user browsing intention recognition model to obtain student user browsing intention recognition information; and pushing each piece of teaching information associated with the target teaching page to each user according to the student user browsing intention recognition information. In this way, a student's intention can be identified from the student's click data on a teaching page, which shortens the time needed to identify the intention and makes it convenient to push related knowledge information according to the student's intention.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
FIG. 1 is a flow chart of some embodiments of a user portrait based teaching information recommendation method according to the present disclosure;
FIG. 2 is a schematic structural diagram of some embodiments of a user portrait based teaching information recommendation apparatus according to the present disclosure;
fig. 3 is a schematic structural diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings. Embodiments of the present disclosure and features of embodiments may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "one", "a plurality" and "a plurality" in this disclosure are intended to be illustrative rather than limiting, and those of ordinary skill in the art will appreciate that "one or more" is intended to be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
FIG. 1 shows a flow 100 of some embodiments of a user portrait based teaching information recommendation method according to the present disclosure. The user portrait based teaching information recommendation method includes the following steps:
Step 101, acquiring a preset user portrait training sample set.
In some embodiments, an executing body of the user portrait based teaching information recommendation method (for example, a computing device) may acquire a preset user portrait training sample set from a terminal device through a wired or wireless connection. A user portrait training sample may represent the user portrait of a student user. For example, a user portrait training sample may include: student name, grade, academic record, subject, books of interest, and the like.
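As a purely illustrative aside (not part of the patent text), such a training sample could be held in a simple record; every field name below is an assumption chosen for readability.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class UserPortraitSample:
    """Hypothetical layout of one user portrait training sample."""
    student_name: str
    grade: str               # e.g. "sophomore"
    academic_record: str     # overall academic standing
    subject: str             # major / discipline
    books_of_interest: List[str] = field(default_factory=list)

# Example sample, purely illustrative
sample = UserPortraitSample(
    student_name="student_001",
    grade="sophomore",
    academic_record="good",
    subject="computer science",
    books_of_interest=["Introduction to Algorithms"],
)
```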
Step 102, performing model training on the initial user identification model according to the user portrait training sample set to obtain a user identification model.
In some embodiments, the executing entity may perform model training on the initial user identification model according to the user portrait training sample set to obtain a user identification model. Here, the initial user identification model may be a user identification model that has not yet been trained. The user identification model may be a neural network model that generates user identification results. A user identification result may represent the user's academic record, discipline, and teaching information of possible interest.
In practice, the execution subject may perform model training on the initial user identification model by the following steps to obtain a user identification model:
First, determining feature feedback information corresponding to each training sample feature in a training sample feature set according to the user portrait training sample set. Each training sample feature in the training sample feature set may be a feature that subsequently affects the generation of a user identification result. For example, the training sample feature set may include: an information acquisition feature, a learning duration feature, and a user credit feature. The information acquisition feature may represent the dimension (a certain field, a certain discipline) of each piece of acquired teaching information. The learning duration feature may represent the learning duration of each piece of teaching information. The user credit feature may represent the user's credits for each subject. The feature feedback information may characterize the monotonicity (i.e., the variation relationship) between the feature variable corresponding to the user portrait training sample and the corresponding target variable. The feature feedback information may include: positive feedback information and negative feedback information. The positive feedback information may characterize positive monotonicity between the feature variable corresponding to the user portrait training sample and the target variable corresponding to the user identification result (i.e., as the value of the feature variable corresponding to the sample feature increases, the value of the corresponding target variable increases). The negative feedback information may characterize negative monotonicity between the feature variable corresponding to the sample feature and the target variable corresponding to the user identification result (i.e., as the value of the feature variable corresponding to the sample feature increases, the value of the corresponding target variable decreases). In practice, first, feature value interval information corresponding to the user portrait training sample set may be determined based on the user portrait training sample set; the feature value interval information may represent the value range of the sample feature over the user portrait training sample set. Then, the feature value interval information is segmented to obtain an interval segment information set; the number of interval segments included in the interval segment information set may be preset, for example, 10. Next, the user identification result corresponding to each interval segment in the interval segment information set is determined to obtain a user identification result set. Finally, the feature feedback information corresponding to the training sample feature is determined according to this user identification result set. For example, the interval segment information set may first be sorted in order of decreasing value to obtain an interval segment information sequence; the user identification result set is then sorted according to the correspondence between interval segments and user identification results to obtain a user identification result sequence; finally, the feature feedback information corresponding to the sample feature is determined according to the magnitude of each interval segment in the interval segment information sequence and the magnitude of each user identification result in the user identification result sequence.
Second, based on the user portrait training sample set, the following training steps are performed:
And a first sub-step of inputting, into a first initial fully connected network included in the initial user identification model, the subset of the user portrait training sample feature information set whose corresponding feature feedback information is positive feedback information, to obtain a first output result. The user portrait training sample feature information set corresponds to a target user portrait training sample in the user portrait training sample set. The first initial fully connected network may be a first fully connected network that has not yet been trained. The first fully connected network may include at least one fully connected layer; for example, at least one fully connected layer connected in parallel. The first output result may represent the sample feature semantic information corresponding to the positive-feedback subset of user portrait training sample feature information, and may take the form of a matrix. The target user portrait training sample may be a user portrait training sample randomly drawn from the user portrait training sample set. The user portrait training sample feature information set may be the feature information included in the target user portrait training sample that corresponds to the training sample feature set.
And a second sub-step of inputting, into a second initial fully connected network included in the initial user identification model, the subset of the user portrait training sample feature information set whose corresponding feature feedback information is negative feedback information, to obtain a second output result. The second initial fully connected network may be a second fully connected network that has not yet been trained. The second fully connected network may include at least one fully connected layer; for example, at least one fully connected layer connected in parallel. The second output result may represent the sample feature semantic information corresponding to the negative-feedback subset of user portrait training sample feature information, and may take the form of a matrix.
And a third sub-step of inputting the first output result and the second output result into a third initial fully connected network included in the initial user identification model to obtain a third output result. The third initial fully connected network may be a third fully connected network that has not yet been trained. The third fully connected network may include at least one fully connected layer; for example, at least one fully connected layer connected in parallel.
And a fourth sub-step of generating a loss value corresponding to the third output result based on a preset loss function. The preset loss function may be a cross-entropy loss function. The third output result may be input to the preset loss function to obtain the loss value.
And a fifth sub-step of determining the initial user identification model as a trained user identification model in response to determining that the loss value satisfies the preset loss condition. The preset loss condition may be that the loss value is less than or equal to a preset threshold value.
Optionally, in response to determining that the loss value does not meet the preset loss condition, the model parameters of the initial user identification model are updated, and the target user portrait training sample is removed from the user portrait training sample set to obtain an updated user portrait training sample set as the user portrait training sample set, and the training step is executed again.
In some embodiments, the executing entity may, in response to determining that the loss value does not meet the preset loss condition, update the model parameters of the initial user identification model, remove the target user portrait training sample from the user portrait training sample set to obtain an updated user portrait training sample set as the user portrait training sample set, and execute the training step again.
The above related content serves as an inventive point of the present disclosure and solves the second technical problem mentioned in the background, namely that "pushing resources are wasted." The factors that waste pushing resources are often as follows: when teaching information is pushed to students, the students' actual portrait features (e.g., academic and subject information) are not considered, so the students are not classified for pushing and the pushed information is inaccurate. If these factors are addressed, the waste of pushing resources can be reduced. To achieve this effect, first, the subset of the user portrait training sample feature information set whose corresponding feature feedback information is positive feedback information is input into the first initial fully connected network included in the initial user identification model to obtain a first output result, where the user portrait training sample feature information set corresponds to a target user portrait training sample in the user portrait training sample set. Then, the subset of the user portrait training sample feature information set whose corresponding feature feedback information is negative feedback information is input into the second initial fully connected network included in the initial user identification model to obtain a second output result. Next, the first output result and the second output result are input into the third initial fully connected network included in the initial user identification model to obtain a third output result, and a loss value corresponding to the third output result is generated based on the preset loss function. Finally, in response to determining that the loss value meets the preset loss condition, the initial user identification model is determined as the trained user identification model. In this way, user portraits can be identified and classified by the trained user identification model, student users of the same type are conveniently grouped together, classified pushing is realized, the accuracy of the pushed information is improved, and the waste of pushing resources is reduced.
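The training procedure of step 102 — splitting sample features by positive or negative feedback, passing each split through its own fully connected network, merging the two outputs in a third fully connected network, and checking a cross-entropy loss against a threshold — can be pictured with the PyTorch-style sketch below. It is an assumption-laden illustration rather than the patent's implementation: the rank-style monotonicity test, the network sizes, and all names (feature_feedback, UserIdentificationModel, train_step) are invented for readability.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def feature_feedback(feature_values, target_values, num_bins=10):
    """Assumed monotonicity check: segment the feature's value range into
    num_bins intervals, average the target in each interval, and call the
    feature 'positive' if the interval averages tend to increase."""
    order = torch.argsort(feature_values)
    bins = torch.chunk(target_values[order], num_bins)
    bin_means = torch.stack([b.float().mean() for b in bins if b.numel() > 0])
    trend = torch.diff(bin_means).mean()
    return "positive" if trend >= 0 else "negative"

class UserIdentificationModel(nn.Module):
    """Three fully connected sub-networks, as in the described training step."""
    def __init__(self, pos_dim, neg_dim, hidden_dim, num_classes):
        super().__init__()
        self.fc_positive = nn.Sequential(nn.Linear(pos_dim, hidden_dim), nn.ReLU())
        self.fc_negative = nn.Sequential(nn.Linear(neg_dim, hidden_dim), nn.ReLU())
        self.fc_merge = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, pos_features, neg_features):
        first_out = self.fc_positive(pos_features)    # first output result
        second_out = self.fc_negative(neg_features)   # second output result
        merged = torch.cat([first_out, second_out], dim=-1)
        return self.fc_merge(merged)                  # third output result

def train_step(model, optimizer, pos_x, neg_x, labels, loss_threshold=0.1):
    """One pass of the sketched training loop; returns True when the assumed
    preset loss condition (loss <= threshold) is satisfied."""
    logits = model(pos_x, neg_x)
    loss = F.cross_entropy(logits, labels)            # preset loss function
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item() <= loss_threshold
```

In this sketch the parameters are updated on every pass and training would stop once the loss condition is met, mirroring the optional retraining step described above.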
Step 103, acquiring the user portrait of each user to be recommended to obtain a user portrait set.
In some embodiments, the executing body may acquire the user portrait of each user to be recommended from the terminal device through a wired or wireless connection to obtain a user portrait set. A user here refers to a student user. The user portrait may include: student name, grade, academic record, subject, books of interest, and the like.
Step 104, inputting each user portrait in the user portrait set into the user identification model to generate a user identification result, thereby obtaining a user identification result set.
In some embodiments, the executing entity may input each user portrait in the user portrait set into the user identification model to generate a user identification result, thereby obtaining a user identification result set. Each user identification result in the user identification result set represents the subject information of a user, and may also represent teaching information of interest to the student user.
Step 105, classifying each user identification result in the user identification result set to obtain a user identification result group set.
In some embodiments, the executing body may classify the user identification results in the user identification result set to obtain a user identification result group set. That is, user identification results that are identical may be classified into one group, thereby obtaining the user identification result group set.
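As a minimal illustration of this grouping step (an assumption, not code from the patent), identical identification results can be bucketed with a dictionary keyed by the result:

```python
from collections import defaultdict

def group_identification_results(results):
    """Group (user_id, identification_result) pairs so that users with the
    same identification result (e.g. the same subject) end up together."""
    groups = defaultdict(list)
    for user_id, result in results:
        groups[result].append(user_id)
    return list(groups.values())   # the user identification result group set

# Purely illustrative input: user ids paired with a subject-like result
results = [("u1", "mathematics"), ("u2", "physics"), ("u3", "mathematics")]
print(group_identification_results(results))  # [['u1', 'u3'], ['u2']]
```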
Step 106, for each user identification result group in the user identification result group set, executing the following processing steps:
step 1061, collecting browsing data information of each user corresponding to the target teaching page of the user identification result set.
In some embodiments, the executing body may collect, through a wired or wireless connection, the browsing data information of each user corresponding to the user identification result group for the target teaching page. The browsing data information includes: teaching page real-time browsing data and a corresponding teaching page historical browsing data sequence. The teaching page real-time browsing data may be the real-time browsing data generated when a student browses the target teaching page. Specifically, the real-time browsing data may include: a real-time buried point (tracking point) identifier, a real-time teaching page identifier, a real-time knowledge class identifier, and a real-time search word identifier. The real-time knowledge class identifier may represent the subject-section classification of the knowledge. The teaching page historical browsing data sequence may be the historical browsing data sequence generated by the student within a preset period before browsing the target teaching page. For example, the teaching page historical browsing data sequence may be the student's browsing data within 12 hours before browsing the target teaching page. For example, the browsing data may include: buried point identifiers of the student's click behaviors, teaching page identifiers, knowledge class identifiers, search word identifiers, and browsing time feature data.
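To make the collected data layout concrete, a hypothetical sketch of the browsing data records described above is given below; every field name is an assumption introduced only for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BrowsingRecord:
    """One browsing data record for a teaching page (real-time or historical)."""
    buried_point_id: str      # tracking-point id of the click behavior
    teaching_page_id: str
    knowledge_class_id: str   # subject-section classification of the knowledge
    search_word_id: str
    browse_timestamp: float   # browsing time feature (assumed on both kinds of record)

@dataclass
class BrowsingDataInfo:
    """Browsing data information collected for one user and one target page."""
    realtime: BrowsingRecord                                      # real-time browsing data
    history: List[BrowsingRecord] = field(default_factory=list)   # e.g. last 12 hours
```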
Optionally, a teaching page real-time browsing data set and a student browsing intention information set are acquired.
In some embodiments, the executing entity may obtain a teaching page real-time browsing data set and a student browsing intention information set. Each item of teaching page real-time browsing data in the set includes: the teaching page real-time browsing data itself, a corresponding teaching page historical browsing data sequence, and a corresponding student user attribute information set. Each item of teaching page real-time browsing data corresponds to one piece of student browsing intention information.
Optionally, teaching page real-time browsing data is selected from the teaching page real-time browsing data set as target teaching page real-time browsing data, and the following training steps are executed:
Firstly, inputting the teaching page real-time browsing data and the teaching page historical browsing data sequence included in the target teaching page real-time browsing data into an initial word embedding network included in an initial student user browsing intention recognition model to obtain target teaching page real-time browsing word embedding information and a target teaching page historical browsing word embedding information sequence. The initial student user browsing intention recognition model may be a student user browsing intention recognition model that has not yet been trained. The initial word embedding network may be a word embedding network that has not yet been trained.
And secondly, inputting the target teaching page historical browsing word embedding information sequence into a high-dimensional browsing feature extraction network included in the initial student user browsing intention recognition model to obtain a target teaching page high-dimensional browsing behavior feature information sequence. The high-dimensional browsing feature extraction network included in the initial student user browsing intention recognition model may be a high-dimensional browsing feature extraction network that has not yet been trained.
And thirdly, inputting the target teaching page high-dimensional browsing behavior feature information sequence into an initial teaching page historical browsing overall feature extraction network included in the initial student user browsing intention recognition model to generate target teaching page historical browsing overall feature information. The initial teaching page historical browsing overall feature extraction network may be a teaching page historical browsing overall feature extraction network that has not yet been trained.
And fourthly, inputting the target teaching page historical browsing overall feature information and the target teaching page real-time browsing word embedding information into an initial feature information cross processing network included in the initial student user browsing intention recognition model to obtain target teaching page feature intersection information. The initial feature information cross processing network may be a feature information cross processing network that has not yet been trained.
And fifthly, inputting the student user attribute information set included in the target teaching page real-time browsing data into the initial word embedding network to obtain a target student user attribute feature information set.
And sixthly, performing information fusion processing on the target teaching page feature intersection information and the target student user attribute feature information set to obtain target fusion feature information.
And seventhly, inputting the target fusion feature information into an initial intention recognition information output layer included in the initial student user browsing intention recognition model to output initial student user browsing intention recognition information. The initial intention recognition information output layer may be an intention recognition information output layer that has not yet been trained.
And eighthly, generating intention loss information according to the student browsing intention information corresponding to the target teaching page real-time browsing data and the initial student user browsing intention recognition information. For example, a cross-entropy loss function may be used to determine the intention loss information between the student browsing intention information corresponding to the target teaching page real-time browsing data and the initial student user browsing intention recognition information.
And ninthly, determining the initial student user browsing intention recognition model as the trained student user browsing intention recognition model in response to determining that the intention loss information meets a preset condition. The preset condition may be that the value represented by the intention loss information is less than or equal to a preset threshold value.
The above related content serves as an inventive point of the present disclosure and solves the third technical problem mentioned in the background, namely that pushing resources are wasted. The factor that wastes pushing resources is often as follows: when learning knowledge information is pushed to students, it cannot be pushed according to the students' actual preferences. If this factor is addressed, the waste of pushing resources can be reduced. To achieve this effect, firstly, the teaching page real-time browsing data and the teaching page historical browsing data sequence included in the target teaching page real-time browsing data are input into the initial word embedding network included in the initial student user browsing intention recognition model to obtain target teaching page real-time browsing word embedding information and a target teaching page historical browsing word embedding information sequence. Secondly, the target teaching page historical browsing word embedding information sequence is input into the high-dimensional browsing feature extraction network included in the initial student user browsing intention recognition model to obtain a target teaching page high-dimensional browsing behavior feature information sequence. Then, the target teaching page high-dimensional browsing behavior feature information sequence is input into the initial teaching page historical browsing overall feature extraction network included in the initial student user browsing intention recognition model to generate target teaching page historical browsing overall feature information. Next, the target teaching page historical browsing overall feature information and the target teaching page real-time browsing word embedding information are input into the initial feature information cross processing network included in the initial student user browsing intention recognition model to obtain target teaching page feature intersection information. Then, the student user attribute information set included in the target teaching page real-time browsing data is input into the initial word embedding network to obtain a target student user attribute feature information set. Then, information fusion processing is performed on the target teaching page feature intersection information and the target student user attribute feature information set to obtain target fusion feature information, and the target fusion feature information is input into the initial intention recognition information output layer included in the initial student user browsing intention recognition model to output initial student user browsing intention recognition information. Finally, intention loss information is generated according to the student browsing intention information corresponding to the target teaching page real-time browsing data and the initial student user browsing intention recognition information, and in response to determining that the intention loss information meets the preset condition, the initial student user browsing intention recognition model is determined as the trained student user browsing intention recognition model.
In this way, the trained word embedding network, high-dimensional browsing feature extraction network, teaching page historical browsing overall feature extraction network, and feature information cross processing network included in the student user browsing intention recognition model can be used to accurately generate student user browsing intention recognition information. Therefore, associated knowledge information can be accurately pushed to students according to their learning intentions, avoiding the waste of pushing resources.
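A compact sketch of the training loop described in the first through ninth steps above is given below. It assumes an intention model with the call signature sketched later under step 1062, and the threshold, sampling strategy, and variable names are illustrative assumptions rather than the patent's implementation.

```python
import random
import torch.nn.functional as F

def train_intention_model(model, optimizer, samples, loss_threshold=0.1, max_rounds=1000):
    """samples: list of (realtime_ids, history_ids, history_ages, attribute_ids,
    intention_label) tuples built from the teaching page real-time browsing data
    set and the student browsing intention information set (labels are class
    indices of shape (batch,))."""
    for _ in range(max_rounds):
        # select one target teaching page real-time browsing data item
        realtime_ids, history_ids, history_ages, attribute_ids, label = random.choice(samples)
        logits = model(realtime_ids, history_ids, history_ages, attribute_ids)
        loss = F.cross_entropy(logits, label)      # intention loss via cross-entropy
        if loss.item() <= loss_threshold:          # assumed preset condition: loss small enough
            break                                  # model regarded as trained
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return model
```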
It should be noted that the teaching page real-time browsing data and the teaching page historical browsing data may include click operation behaviors, page-turning operation behaviors, and page browsing duration data.
Step 1062, inputting the real-time browsing data of the teaching page and the historical browsing data sequence of the teaching page into a pre-trained student user browsing intention recognition model to obtain student user browsing intention recognition information.
In some embodiments, the executing body may input the teaching page real-time browsing data and the teaching page historical browsing data sequence into a pre-trained student user browsing intention recognition model to obtain student user browsing intention recognition information. The student user browsing intention recognition model includes: a word embedding network, a high-dimensional browsing feature extraction network, a teaching page historical browsing overall feature extraction network, and a feature information cross processing network. The student user browsing intention recognition model may be a model that generates student user browsing intention recognition information. For example, the student user browsing intention recognition information may be the intention information of a student browsing a teaching page, such as the intention to learn the knowledge point corresponding to the teaching page. The word embedding network may be a model that performs word embedding processing on the browsing data; for example, it may be an Embedding layer. The teaching page real-time browsing word embedding information may be the original vector representation of the teaching page real-time browsing data. The items of teaching page historical browsing word embedding information in the teaching page historical browsing word embedding information sequence correspond one-to-one to the items of student historical browsing data in the student historical browsing data sequence. The teaching page real-time browsing data and the corresponding student historical browsing data sequence relate to the same clicked teaching page. Each item of teaching page historical browsing word embedding information in the sequence may be the original vector of one item of student historical browsing data. The student historical browsing data sequence may be a data sequence obtained after data normalization according to a time window corresponding to a preset period.
In practice, the execution subject may input the teaching page real-time browsing data and the teaching page historical browsing data sequence into a pre-trained student user browsing intention recognition model to obtain student user browsing intention recognition information through the following steps:
Firstly, inputting the teaching page real-time browsing data and the teaching page historical browsing data sequence into the word embedding network to obtain teaching page real-time browsing word embedding information and a teaching page historical browsing word embedding information sequence.
And secondly, inputting the teaching page historical browsing word embedding information sequence into the high-dimensional browsing feature extraction network to obtain a teaching page high-dimensional browsing behavior feature information sequence. The high-dimensional browsing feature extraction network may be a model that generates teaching page high-dimensional browsing behavior feature information. The teaching page high-dimensional browsing behavior feature information may be a higher-order vector representation of the browsing behavior corresponding to the student historical browsing data. The items of teaching page high-dimensional browsing behavior feature information in the sequence correspond one-to-one to the items of teaching page historical browsing word embedding information in the teaching page historical browsing word embedding information sequence. The high-dimensional browsing feature extraction network may be a multi-layer, serially connected recurrent neural network; for example, it may be a self-attention network.
And thirdly, inputting the teaching page high-dimensional browsing behavior feature information sequence into the teaching page historical browsing overall feature extraction network to generate teaching page historical browsing overall feature information. The teaching page historical browsing overall feature extraction network may be a network that generates teaching page historical browsing overall feature information. The teaching page historical browsing overall feature information may represent an overall representation vector of the student browsing behaviors corresponding to the student historical browsing data sequence. For example, the teaching page historical browsing overall feature extraction network may be a time-decay-based attention mechanism network (an attention unit with time decay).
And fourthly, inputting the teaching page historical browsing overall feature information and the teaching page real-time browsing word embedding information into the feature information cross processing network to obtain teaching page feature intersection information. The feature information cross processing network may be a model that performs information cross processing on feature information. The teaching page feature intersection information may include the cross feature information between the teaching page historical browsing overall feature information and the teaching page real-time browsing word embedding information. For example, the feature information cross processing network may be a multi-layer, serially connected convolutional neural network.
In practice, the fourth step may include:
And a first sub-step of performing feature information cross-multiplication processing on the teaching page historical browsing overall feature information and the teaching page real-time browsing word embedding information by using the feature information cross processing network to obtain cross feature information. For example, the feature information cross processing network may perform vector cross-multiplication on the vector corresponding to the teaching page historical browsing overall feature information and the vector corresponding to the teaching page real-time browsing word embedding information to generate a cross vector as the cross feature information.
And a second sub-step of performing feature information subtraction processing on the teaching page historical browsing overall feature information and the teaching page real-time browsing word embedding information by using the feature information cross processing network to obtain subtracted feature information. For example, the feature information cross processing network may perform vector subtraction on the vector corresponding to the teaching page historical browsing overall feature information and the vector corresponding to the teaching page real-time browsing word embedding information to generate a subtraction vector as the subtracted feature information.
And a third sub-step of performing feature fusion on the cross feature information and the subtracted feature information to generate the teaching page feature intersection information. Vector splicing may be performed on the vector corresponding to the cross feature information and the vector corresponding to the subtracted feature information to obtain a spliced vector as the teaching page feature intersection information.
Fifthly, generating student user browsing intention recognition information according to the teaching page feature intersection information. For example, the teaching page feature intersection information may be decoded through a decoding network included in the student user browsing intention recognition model to obtain the student user browsing intention recognition information.
In practice, the fifth step may include:
and a first sub-step of acquiring a student user attribute information set corresponding to the real-time browsing data of the teaching page. For example, student user attributes may include: student gender, student age, student grade, and student class.
And a second sub-step of inputting each student user attribute information in the student user attribute information set into the word embedding network to generate student user attribute characteristic information and obtain a student user attribute characteristic information set. Wherein the student user attribute feature information may characterize feature information of student user attributes. For example, student user attribute feature information may be in the form of a vector.
And a third sub-step, carrying out information fusion on the student user attribute characteristic information set and the teaching page characteristic intersection information to obtain fusion characteristic information.
And a fourth sub-step of inputting the fused characteristic information into an intention recognition information output layer included in the student user browsing intention recognition model to output student user browsing intention recognition information.
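The inference pipeline of the first through fifth steps above can be pictured with the following PyTorch-style sketch. It is a hedged illustration only: the embedding sizes, the use of nn.MultiheadAttention for the high-dimensional browsing feature extraction network, the exponential-style time-decay weighting for the historical browsing overall feature, and all class and argument names are assumptions, not the patent's implementation.

```python
import torch
import torch.nn as nn

class BrowsingIntentionModel(nn.Module):
    """Sketch of the student user browsing intention recognition model:
    word embedding -> high-dimensional browsing feature extraction ->
    historical browsing overall feature extraction -> feature cross processing
    -> fusion with user attribute features -> intention output layer."""

    def __init__(self, vocab_size, embed_dim, num_intentions, num_heads=4):
        super().__init__()
        # embed_dim is assumed divisible by num_heads
        self.embedding = nn.Embedding(vocab_size, embed_dim)            # word embedding network
        self.self_attention = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.output_layer = nn.Linear(3 * embed_dim, num_intentions)    # intention output layer

    def forward(self, realtime_ids, history_ids, history_ages, attribute_ids):
        # realtime_ids: (batch, f) token ids of the real-time browsing data fields
        # history_ids:  (batch, seq, f) token ids of the historical browsing data sequence
        # history_ages: (batch, seq) float elapsed time of each historical record
        # attribute_ids: (batch, a) token ids of student user attributes
        realtime_emb = self.embedding(realtime_ids).mean(dim=1)         # real-time browsing word embedding info
        history_emb = self.embedding(history_ids).mean(dim=2)           # historical browsing word embedding sequence

        # high-dimensional browsing behavior features via self-attention
        high_dim, _ = self.self_attention(history_emb, history_emb, history_emb)

        # historical browsing overall feature: attention weights decayed with elapsed time
        decay = torch.softmax(-history_ages, dim=1).unsqueeze(-1)       # assumed time-decay weighting
        overall = (high_dim * decay).sum(dim=1)

        # feature cross processing: element-wise product and difference, then splicing
        crossed = overall * realtime_emb
        subtracted = overall - realtime_emb
        page_cross = torch.cat([crossed, subtracted], dim=-1)           # teaching page feature intersection info

        # fuse with student user attribute features and output intention logits
        attr_feat = self.embedding(attribute_ids).mean(dim=1)
        fused = torch.cat([page_cross, attr_feat], dim=-1)
        return self.output_layer(fused)
```

A cross-entropy loss over labeled student browsing intention information, as in the training steps described under step 1061, would complete the sketch.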
Step 1063, pushing each piece of teaching information associated with the target teaching page to each user according to the student user browsing intention recognition information.
In some embodiments, the executing entity may push each piece of teaching information associated with the target teaching page to each user according to the student user browsing intention recognition information. The student user browsing intention recognition information may represent the learning intention of the student user browsing the target teaching page; for example, it may indicate that the student user is interested in a certain type of teaching information. The teaching information associated with the target teaching page is pushed to the user terminal of each user; that is, the subject type of each piece of teaching information is the same as the subject knowledge type corresponding to the target teaching page.
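As a final illustrative sketch (the field names and the push_to_terminal helper are hypothetical), the pushing of step 1063 can be viewed as filtering candidate teaching information by the recognized intention and the target page's subject type:

```python
def push_associated_teaching_info(intention_info, candidates, users, push_to_terminal):
    """Push to each user the teaching information whose subject type matches
    the recognized browsing intention for the target teaching page."""
    selected = [c for c in candidates
                if c["subject_type"] == intention_info["interested_subject_type"]]
    for user in users:
        for item in selected:
            push_to_terminal(user["terminal_id"], item)   # deliver to the user terminal
```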
With further reference to FIG. 2, as an implementation of the method illustrated in the above figures, the present disclosure provides some embodiments of a user portrait based teaching information recommendation apparatus, which correspond to the method embodiments illustrated in FIG. 1, and the apparatus may be applied to various electronic devices.
As shown in fig. 2, the user portrait based teaching information recommendation apparatus 200 of some embodiments includes: a first acquisition unit 201, a training unit 202, a second acquisition unit 203, an input unit 204, a classification unit 205, and a pushing unit 206. The first acquisition unit 201 is configured to acquire a preset user portrait training sample set; the training unit 202 is configured to perform model training on the initial user identification model according to the user portrait training sample set to obtain a user identification model; the second acquisition unit 203 is configured to acquire the user portrait of each user to be recommended to obtain a user portrait set; the input unit 204 is configured to input each user portrait in the user portrait set into the user identification model to generate a user identification result, thereby obtaining a user identification result set, wherein each user identification result in the user identification result set represents the subject information of a user; the classification unit 205 is configured to classify the user identification results in the user identification result set to obtain a user identification result group set; and the pushing unit 206 is configured to execute, for each user identification result group in the user identification result group set, the following processing steps: collecting browsing data information of each user corresponding to the user identification result group for a target teaching page, wherein the browsing data information includes: teaching page real-time browsing data and a corresponding teaching page historical browsing data sequence; inputting the teaching page real-time browsing data and the teaching page historical browsing data sequence into a pre-trained student user browsing intention recognition model to obtain student user browsing intention recognition information; and pushing each piece of teaching information associated with the target teaching page to each user according to the student user browsing intention recognition information.
It will be appreciated that the units described in the user portrait based teaching information recommendation apparatus 200 correspond to the steps of the method described with reference to FIG. 1. Thus, the operations, features, and advantages described above with respect to the method are equally applicable to the user portrait based teaching information recommendation apparatus 200 and the units contained therein, and are not described here again.
Referring now to fig. 3, a schematic diagram of an electronic device (e.g., computing device) 300 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic devices in some embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), car terminals (e.g., car navigation terminals), and the like, as well as stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 3 is merely an example and should not impose any limitations on the functionality and scope of use of embodiments of the present disclosure.
As shown in fig. 3, the electronic device 300 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 301 that may perform various suitable actions and processes in accordance with a program stored in a Read Only Memory (ROM) 302 or a program loaded from a storage means 308 into a Random Access Memory (RAM) 303. In the RAM303, various programs and task data required for the operation of the electronic device 300 are also stored. The processing device 301, the ROM302, and the RAM303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to bus 304.
In general, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; a storage means 308 including, for example, magnetic tape, hard disk, etc.; and a communication means 309. The communication means 309 may allow the electronic device 300 to communicate with other devices wirelessly or by wire to exchange data. While fig. 3 shows an electronic device 300 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer means may alternatively be implemented or provided. Each block shown in fig. 3 may represent one device or a plurality of devices as needed.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer readable medium, the computer program containing program code for performing the method shown in the flowchart. In such embodiments, the computer program may be downloaded and installed from a network via the communication means 309, or installed from the storage means 308, or installed from the ROM 302. When the computer program is executed by the processing means 301, the above-described functions defined in the methods of some embodiments of the present disclosure are performed.
It should be noted that the computer readable medium described in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the present disclosure, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer readable program code is carried. Such a propagated signal may take any of a variety of forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium other than a computer readable storage medium that can send, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device, or may exist alone without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire a preset user portrait training sample set; perform model training on an initial user identification model according to the user portrait training sample set to obtain a user identification model; acquire a user portrait of each user to be recommended to obtain a user portrait set; input each user portrait in the user portrait set into the user identification model to generate a user identification result, thereby obtaining a user identification result set, wherein each user identification result in the user identification result set represents the subject information of a user; classify the user identification results in the user identification result set to obtain a user identification result group set; and, for each user identification result group in the user identification result group set, execute the following processing steps: collecting browsing data information of each user corresponding to the user identification result group for a target teaching page, wherein the browsing data information comprises teaching page real-time browsing data and a corresponding teaching page historical browsing data sequence; inputting the teaching page real-time browsing data and the teaching page historical browsing data sequence into a pre-trained student user browsing intention recognition model to obtain student user browsing intention recognition information; and pushing the teaching information associated with the target teaching page to each user according to the student user browsing intention recognition information.
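Purely as a non-limiting sketch, the browsing intention recognition step could be prototyped with a small feed-forward network. The use of PyTorch, the feature dimensions, the flattening of the historical browsing data sequence, and the number of intention classes are assumptions made for illustration only and are not taken from this disclosure.

```python
import torch
import torch.nn as nn

class BrowsingIntentionNet(nn.Module):
    """Toy stand-in for a student user browsing intention recognition model."""

    def __init__(self, realtime_dim=8, history_dim=8, history_len=5, num_intentions=4):
        super().__init__()
        # Encode the historical browsing data sequence by flattening it (an assumption;
        # a recurrent or attention-based encoder would also fit the description).
        self.history_encoder = nn.Sequential(
            nn.Linear(history_dim * history_len, 32), nn.ReLU())
        self.realtime_encoder = nn.Sequential(nn.Linear(realtime_dim, 32), nn.ReLU())
        self.head = nn.Linear(64, num_intentions)

    def forward(self, realtime, history):
        # realtime: (batch, realtime_dim); history: (batch, history_len, history_dim)
        h = self.history_encoder(history.flatten(start_dim=1))
        r = self.realtime_encoder(realtime)
        return self.head(torch.cat([r, h], dim=1))  # browsing intention logits

# Usage sketch: intention logits for one user's real-time and historical browsing data.
model = BrowsingIntentionNet()
logits = model(torch.randn(1, 8), torch.randn(1, 5, 8))
```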
Computer program code for carrying out operations of some embodiments of the present disclosure may be written in one or more programming languages or combinations thereof, including object oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by means of software or by means of hardware. The described units may also be provided in a processor, which may, for example, be described as: a processor comprising a first acquisition unit, a training unit, a second acquisition unit, an input unit, a classification unit, and a pushing unit. The names of these units do not, in some cases, limit the units themselves; for example, the first acquisition unit may also be described as "a unit that acquires a preset user portrait training sample set".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
The foregoing description is merely a description of preferred embodiments of the present disclosure and of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the invention referred to in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, but also encompasses other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.

Claims (6)

1. A teaching information recommendation method based on user portraits comprises the following steps:
acquiring a preset user portrait training sample set;
according to the user portrait training sample set, carrying out model training on an initial user identification model to obtain a user identification model;
acquiring a user portrait of each user to be recommended to obtain a user portrait set;
inputting each user portrait in the user portrait set into the user identification model to generate a user identification result, and obtaining a user identification result set, wherein the user identification result in the user identification result set represents the subject information of the user;
classifying the user identification results in the user identification result set to obtain a user identification result group set;
for each user identification result group in the user identification result group set, executing the following processing steps:
collecting browsing data information of each user corresponding to the user identification result group for a target teaching page, wherein the browsing data information comprises: teaching page real-time browsing data and a corresponding teaching page historical browsing data sequence;
inputting the teaching page real-time browsing data and the teaching page historical browsing data sequence into a pre-trained student user browsing intention recognition model to obtain student user browsing intention recognition information;
and pushing the teaching information associated with the target teaching page to each user according to the student user browsing intention recognition information.
2. The method of claim 1, wherein performing model training on the initial user identification model according to the user portrait training sample set to obtain the user identification model comprises:
determining feature feedback information corresponding to each training sample feature in a training sample feature set according to the user portrait training sample set;
based on the user portrait training sample set, the following training step is performed:
inputting a user portrait training sample feature information subset, of which the corresponding feature feedback information in a user portrait training sample feature information set is positive feedback information, into a first initial fully connected network included in the initial user identification model to obtain a first output result, wherein the user portrait training sample feature information set corresponds to a target user portrait training sample in the user portrait training sample set;
inputting a user portrait training sample feature information subset, of which the corresponding feature feedback information in the user portrait training sample feature information set is negative feedback information, into a second initial fully connected network included in the initial user identification model to obtain a second output result;
inputting the first output result and the second output result into a third initial fully connected network included in the initial user identification model to obtain a third output result;
generating a loss value corresponding to the third output result based on a preset loss function;
and in response to determining that the loss value meets a preset loss condition, determining the initial user identification model as a trained user identification model.
3. The method of claim 2, wherein the method further comprises:
and in response to determining that the loss value does not meet the preset loss condition, updating model parameters of the initial user identification model, removing the target user portrait training sample from the user portrait training sample set to obtain an updated user portrait training sample set as the user portrait training sample set, and executing the training step again.
4. A user portrayal-based teaching information recommendation device comprising:
the first acquisition unit is configured to acquire a preset user portrait training sample set;
the training unit is configured to perform model training on the initial user identification model according to the user portrait training sample set to obtain a user identification model;
the second acquisition unit is configured to acquire a user portrait of each user to be recommended to obtain a user portrait set;
an input unit configured to input each user portrait in the user portrait set into the user identification model to generate a user identification result, thereby obtaining a user identification result set, wherein the user identification result in the user identification result set represents the subject information of the user;
the classification unit is configured to classify the user identification results in the user identification result set to obtain a user identification result group set;
a pushing unit configured to perform, for each user identification result group in the user identification result group set, the following processing steps: collecting browsing data information of each user corresponding to the user identification result group for a target teaching page, wherein the browsing data information comprises: teaching page real-time browsing data and a corresponding teaching page historical browsing data sequence; inputting the teaching page real-time browsing data and the teaching page historical browsing data sequence into a pre-trained student user browsing intention recognition model to obtain student user browsing intention recognition information; and pushing the teaching information associated with the target teaching page to each user according to the student user browsing intention recognition information.
5. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-3.
6. A computer readable medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the method of any of claims 1-3.
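For readers who prefer code to claim language, the training procedure recited in claims 2 and 3 can be sketched as follows. The layer widths, the concatenation of the first and second output results before the third fully connected network, the cross-entropy loss, the Adam optimizer, and the numeric loss threshold are all illustrative assumptions; the claims themselves only require some preset loss function and preset loss condition.

```python
import torch
import torch.nn as nn

class InitialUserIdentificationModel(nn.Module):
    """Illustrative sketch of the three fully connected networks of claim 2 (dimensions assumed)."""

    def __init__(self, pos_dim=16, neg_dim=16, hidden=32, num_subjects=10):
        super().__init__()
        self.fc_positive = nn.Sequential(nn.Linear(pos_dim, hidden), nn.ReLU())  # first FC network
        self.fc_negative = nn.Sequential(nn.Linear(neg_dim, hidden), nn.ReLU())  # second FC network
        self.fc_fusion = nn.Linear(2 * hidden, num_subjects)                     # third FC network

    def forward(self, pos_features, neg_features):
        first_out = self.fc_positive(pos_features)    # from the positive-feedback feature subset
        second_out = self.fc_negative(neg_features)   # from the negative-feedback feature subset
        return self.fc_fusion(torch.cat([first_out, second_out], dim=1))  # third output result

def train(model, samples, loss_threshold=0.1, lr=1e-3):
    """Sketch of the loop in claims 2-3: stop when the loss meets the preset condition,
    otherwise update parameters, drop the current sample, and run the training step again."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()  # stand-in for the preset loss function
    remaining = list(samples)        # user portrait training sample set
    while remaining:
        pos, neg, label = remaining[0]                  # target user portrait training sample
        loss = loss_fn(model(pos, neg), label)
        if loss.item() <= loss_threshold:               # preset loss condition met
            return model
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        remaining.pop(0)                                # updated training sample set
    return model

# Usage sketch with a single synthetic sample (class label 3 out of 10 assumed subjects).
model = InitialUserIdentificationModel()
train(model, [(torch.randn(1, 16), torch.randn(1, 16), torch.tensor([3]))])
```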
CN202311555105.9A 2023-11-20 2023-11-20 Teaching information recommendation method and device based on user portrait and electronic equipment Pending CN117493688A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311555105.9A CN117493688A (en) 2023-11-20 2023-11-20 Teaching information recommendation method and device based on user portrait and electronic equipment

Publications (1)

Publication Number Publication Date
CN117493688A true CN117493688A (en) 2024-02-02

Family

ID=89674357

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311555105.9A Pending CN117493688A (en) 2023-11-20 2023-11-20 Teaching information recommendation method and device based on user portrait and electronic equipment

Country Status (1)

Country Link
CN (1) CN117493688A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110704510A (en) * 2019-10-12 2020-01-17 中森云链(成都)科技有限责任公司 User portrait combined question recommendation method and system
WO2021184174A1 (en) * 2020-03-17 2021-09-23 深圳市欢太科技有限公司 Information push method and apparatus, and electronic device and storage medium
CN113807926A (en) * 2021-09-26 2021-12-17 北京沃东天骏信息技术有限公司 Recommendation information generation method and device, electronic equipment and computer readable medium
CN114663198A (en) * 2022-04-25 2022-06-24 未鲲(上海)科技服务有限公司 Product recommendation method, device and equipment based on user portrait and storage medium
US20220309405A1 (en) * 2020-10-14 2022-09-29 Ennew Digital Technology Co., Ltd Combined-learning-based internet of things data service method and apparatus, device and medium

Similar Documents

Publication Publication Date Title
CN110633423B (en) Target account identification method, device, equipment and storage medium
CN113505206B (en) Information processing method and device based on natural language reasoning and electronic equipment
CN114385780B (en) Program interface information recommendation method and device, electronic equipment and readable medium
CN112149604A (en) Training method of video feature extraction model, video recommendation method and device
CN116128055A (en) Map construction method, map construction device, electronic equipment and computer readable medium
CN116894188A (en) Service tag set updating method and device, medium and electronic equipment
CN115270717A (en) Method, device, equipment and medium for detecting vertical position
CN113033707B (en) Video classification method and device, readable medium and electronic equipment
CN111459780B (en) User identification method and device, readable medium and electronic equipment
CN113220922B (en) Image searching method and device and electronic equipment
CN117493688A (en) Teaching information recommendation method and device based on user portrait and electronic equipment
CN113592607A (en) Product recommendation method and device, storage medium and electronic equipment
CN111581455A (en) Text generation model generation method and device and electronic equipment
CN111368204A (en) Content pushing method and device, electronic equipment and computer readable medium
CN113177174B (en) Feature construction method, content display method and related device
CN116629984B (en) Product information recommendation method, device, equipment and medium based on embedded model
CN116974684B (en) Map page layout method, map page layout device, electronic equipment and computer readable medium
CN116503849B (en) Abnormal address identification method, device, electronic equipment and computer readable medium
CN116827894B (en) Method, device, equipment and medium for sending comment information of broadcasting play user
CN117743555B (en) Reply decision information transmission method, device, equipment and computer readable medium
CN112860999B (en) Information recommendation method, device, equipment and storage medium
CN113283115B (en) Image model generation method and device and electronic equipment
CN116049529A (en) Content recommendation method, device, medium and electronic equipment
CN117557822A (en) Image classification method, apparatus, electronic device, and computer-readable medium
CN117076920A (en) Model training method, information generating method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination