CN113742522B - Video recall method, device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113742522B
CN113742522B (application number CN202010478591.9A)
Authority
CN
China
Prior art keywords
video
sample
equipment
target
feature vector
Prior art date
Legal status
Active
Application number
CN202010478591.9A
Other languages
Chinese (zh)
Other versions
CN113742522A
Inventor
陈昕
江鹏
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202010478591.9A
Publication of CN113742522A
Application granted
Publication of CN113742522B
Legal status: Active
Anticipated expiration


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 — Information retrieval of video data
    • G06F 16/73 — Querying
    • G06F 16/735 — Filtering based on additional data, e.g. user or group profiles
    • G06F 16/78 — Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/7867 — Retrieval using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
    • G06F 18/00 — Pattern recognition
    • G06F 18/20 — Analysing
    • G06F 18/22 — Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Library & Information Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The disclosure relates to a video recall method, an apparatus, an electronic device, and a storage medium. The method includes the following steps: in response to a video acquisition request sent by a target device, detecting whether the target device has authorized acquisition of application information on the target device, where the platform account of the target device is an account whose video interaction behavior on the current platform does not meet a preset condition; in the case that the target device has authorized the acquisition of the application information, acquiring target application information of the applications installed in the target device; and screening, at least based on the target application information, a video set matching the platform account from a video library, where the screened video set is used to determine the videos sent to the target device.

Description

Video recall method, device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of video recall, and in particular, to a video recall method, apparatus, electronic device, and storage medium.
Background
In video playing applications, recommending videos that a user is interested in to the target device in a targeted way can effectively improve the play rate of the recommended videos. Before recommending videos to the target device, a video set is first determined from the video library through video recall; videos are then screened from this set and sent to the target device.
In the related art, recall modes such as behavior recall, visual recall, and semantic recall generally require historical data, such as the viewing records of the platform account logged in on the target device, to complete video recall. When such historical data does not exist or cannot be acquired, for example when the target device uses the video playing application for the first time or has not logged into a platform account, it is difficult to learn the viewing preferences of the user corresponding to the target device, so the accuracy of the recalled videos is often poor and the user retention rate is low. In addition, semantic recall is realized based on metadata information that is highly semantically related to the videos to be recalled, which places high requirements on the video information in the video library; as a result, the recall logic is complex and, because of the large amount of computation, the recall efficiency is low.
Disclosure of Invention
The present disclosure provides a video recall method, apparatus, electronic device, and storage medium to solve at least the above technical problems in the related art. The technical scheme of the present disclosure is as follows:
According to a first aspect of the embodiments of the present disclosure, a video recall method is provided, including:
in response to a video acquisition request sent by a target device, detecting whether the target device has authorized acquisition of application information on the target device, where the platform account of the target device is an account whose video interaction behavior on the current platform does not meet a preset condition;
in the case that the target device has authorized the acquisition of the application information on the target device, acquiring target application information of the applications installed in the target device; and
screening, at least based on the target application information, a video set matching the platform account from a video library, where the video set is used to determine the videos sent to the target device.
Optionally, the screening, at least based on the target application information, of the video set matching the platform account from the video library includes:
acquiring a device identifier of the target device;
determining a device feature of the target device based on the device identifier and the target application information; and
searching the video library for the N videos with the highest similarity to the device feature to obtain the video set, where N is a positive integer.
Optionally, the determining the device feature of the target device based on the device identifier and the target application information includes:
determining an identification feature vector corresponding to the device identifier and an application feature vector corresponding to the target application information; and
calculating a device feature vector based on the identification feature vector and the application feature vector, where the device feature vector is used to characterize the device feature of the target device.
Optionally, the searching the video library for the N videos with the highest similarity to the device feature includes:
calculating the similarity between the device feature vector corresponding to the target device and each video feature vector imported into a similarity index, where the video feature vectors correspond to the videos in the video library; and
determining the videos whose similarity is greater than a preset similarity threshold, or whose similarity ranks in the top N, as the N videos with the highest similarity to the device feature.
Optionally, an account of the same type as the platform account is an account whose video interaction behavior on the current platform does not meet the preset condition, where the preset condition includes: the number of video interaction behaviors is greater than a preset number.
Optionally, the screening, at least based on the target application information, of the video set matching the platform account from the video library includes:
inputting at least the target application information into a pre-trained video recall model, where the video recall model is used to screen the video set matching the platform account from the video library;
the video recall model is trained with training samples including positive samples and negative samples, where a positive sample is generated based on basic information of an account of the same type as the platform account and information of a video on which video interaction behavior occurred, and a negative sample is generated based on basic information of an account of the same type as the platform account and information of a video sampled from the video library.
Optionally, the training process of the video recall model includes:
acquiring the training samples, where a training sample includes sample application information corresponding to the applications installed in a sample device and a video identifier corresponding to a sample video, and the video identifier is labeled with the actual hit result of the sample device on the sample video;
inputting the training sample into the video recall model, calculating the feature vectors corresponding to the training sample through the video recall model, and outputting a predicted hit result of the sample device on the sample video; and
adjusting the model parameters of the video recall model based on the difference between the predicted hit result and the actual hit result.
Optionally, the acquiring the training samples includes:
acquiring video identifiers of sample videos that were recommended to the sample devices corresponding to accounts of the same type as the platform account and on which the corresponding accounts performed video interaction behavior, and combining each such video identifier with the sample application information of the sample device where the corresponding account is located, to form positive samples labeled with a "hit" actual hit result; and
acquiring video identifiers of videos sampled from the video library, and combining them with the sample application information of each sample device, to form negative samples labeled with a "miss" actual hit result.
Optionally, the calculating, through the video recall model, of the feature vectors corresponding to the training sample and the outputting of the predicted hit result of the sample device on the sample video include:
determining a device feature vector corresponding to the sample device based on the sample application information, where the device feature vector is used to characterize the device feature of the sample device;
calculating a sample video feature vector corresponding to the sample video based on the video identifier corresponding to the sample video; and
converting the similarity between the device feature vector and the sample video feature vector into the predicted hit result of the sample device on the sample video, where the similarity is used to characterize the degree of similarity between the device feature vector and the sample video feature vector.
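The optional embodiment above can be sketched in code. The following is an illustrative sketch, not the patent's actual implementation: the app-information tokens and the video identifier are each mapped to a feature vector, their dot-product similarity is computed, and the similarity is converted into a predicted hit probability via a sigmoid. All function names, the hashed-embedding scheme, and the dimension are assumptions made for the example.

```python
import math

DIM = 8  # assumed embedding dimension, for illustration only


def embed(tokens, dim=DIM):
    """Map a list of string tokens to a dense vector via a toy hashed embedding."""
    vec = [0.0] * dim
    for t in tokens:
        h = hash(t) % 10_000  # non-negative bucket for this token
        for i in range(dim):
            # Deterministic (within one run) pseudo-random component per dimension.
            vec[i] += math.sin(h * (i + 1))
    return vec


def predict_hit(app_info_tokens, video_id):
    """Convert device/video feature-vector similarity into a predicted hit probability."""
    device_vec = embed(app_info_tokens)          # device feature vector from app info
    video_vec = embed([video_id])                # video feature vector from video identifier
    similarity = sum(d * v for d, v in zip(device_vec, video_vec))
    return 1.0 / (1.0 + math.exp(-similarity))   # sigmoid squashes similarity into (0, 1)


p = predict_hit(["com.example.game", "com.example.music"], "video_42")
```

During training, the difference between such a predicted hit probability and the labeled actual hit result (1 for a positive sample, 0 for a negative sample) would drive the parameter updates.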
According to a second aspect of the embodiments of the present disclosure, a video recall apparatus is provided, including:
an authorization detection module configured to, in response to a video acquisition request sent by a target device, detect whether the target device has authorized acquisition of application information on the target device, where the platform account of the target device is an account whose video interaction behavior on the current platform does not meet a preset condition;
an application information acquisition module configured to acquire target application information of the applications installed in the target device, in the case that the target device has authorized the acquisition of the application information on the target device; and
a video screening module configured to screen, at least based on the target application information, a video set matching the platform account from a video library, where the video set is used to determine the videos sent to the target device.
Optionally, the video screening module includes:
a device identifier acquisition unit configured to acquire a device identifier of the target device;
a device feature determination unit configured to determine a device feature of the target device based on the device identifier and the target application information; and
a video search unit configured to search the video library for the N videos with the highest similarity to the device feature to obtain the video set, where N is a positive integer.
Optionally, the device feature determination unit is further configured to:
determine an identification feature vector corresponding to the device identifier and an application feature vector corresponding to the target application information; and
calculate a device feature vector based on the identification feature vector and the application feature vector, where the device feature vector is used to characterize the device feature of the target device.
Optionally, the video search unit is further configured to:
calculate the similarity between the device feature vector corresponding to the target device and each video feature vector imported into a similarity index, where the video feature vectors correspond to the videos in the video library; and
determine the videos whose similarity is greater than a preset similarity threshold, or whose similarity ranks in the top N, as the N videos with the highest similarity to the device feature.
Optionally, an account of the same type as the platform account is an account whose video interaction behavior on the current platform does not meet the preset condition, where the preset condition includes: the number of video interaction behaviors is greater than a preset number.
Optionally, the video screening module further includes:
a model input unit configured to input at least the target application information into a pre-trained video recall model, where the video recall model is used to screen the video set matching the platform account from the video library;
the video recall model is trained with training samples including positive samples and negative samples, where a positive sample is generated based on basic information of an account of the same type as the platform account and information of a video on which video interaction behavior occurred, and a negative sample is generated based on basic information of an account of the same type as the platform account and information of a video sampled from the video library.
Optionally, the training process of the video recall model includes:
acquiring the training samples, where a training sample includes sample application information corresponding to the applications installed in a sample device and a video identifier corresponding to a sample video, and the video identifier is labeled with the actual hit result of the sample device on the sample video;
inputting the training sample into the video recall model, calculating the feature vectors corresponding to the training sample through the video recall model, and outputting a predicted hit result of the sample device on the sample video; and
adjusting the model parameters of the video recall model based on the difference between the predicted hit result and the actual hit result.
Optionally, the acquiring the training samples includes:
acquiring video identifiers of sample videos that were recommended to the sample devices corresponding to accounts of the same type as the platform account and on which the corresponding accounts performed video interaction behavior, and combining each such video identifier with the sample application information of the sample device where the corresponding account is located, to form positive samples labeled with a "hit" actual hit result; and
acquiring video identifiers of videos sampled from the video library, and combining them with the sample application information of each sample device, to form negative samples labeled with a "miss" actual hit result.
Optionally, the calculating, through the video recall model, of the feature vectors corresponding to the training sample and the outputting of the predicted hit result of the sample device on the sample video include:
determining a device feature vector corresponding to the sample device based on the sample application information, where the device feature vector is used to characterize the device feature of the sample device;
calculating a sample video feature vector corresponding to the sample video based on the video identifier corresponding to the sample video; and
converting the similarity between the device feature vector and the sample video feature vector into the predicted hit result of the sample device on the sample video, where the similarity is used to characterize the degree of similarity between the device feature vector and the sample video feature vector.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device, including:
A processor;
A memory for storing instructions executable by the processor;
Wherein the processor is configured to execute the instructions to implement a video recall method as described in any of the embodiments above.
According to a fourth aspect of the embodiments of the present disclosure, a storage medium is provided; when the instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the video recall method described in any of the embodiments above.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
According to the embodiments of the present disclosure, on the one hand, for the target device, the application information of the applications installed on it is used as the basis for judging the viewing preferences of the corresponding user, and the historical data of the platform account logged in on the target device does not need to be acquired. Therefore, even if the target device is using the video playing application for the first time or has not logged into a platform account, so that such historical data does not exist or cannot be acquired, recalled videos matching the viewing preferences of the corresponding user can still be accurately determined, which improves the accuracy of the recalled videos and the retention rate of new users. On the other hand, detailed metadata information about the videos in the video library is not needed, so the recall logic is simple, the amount of computation when performing video recall with the recall model is small, and the recall efficiency is improved to a certain extent.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
FIG. 1 is a flow chart of a video recall method shown in accordance with an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a training process for a video recall model, shown in accordance with an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a video recall process shown in accordance with an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of another video recall process shown in accordance with an embodiment of the present disclosure;
FIG. 5 is a block diagram of an embodiment of a video recall device, shown in accordance with an embodiment of the present disclosure;
Fig. 6 is a block diagram of an electronic device shown in accordance with an embodiment of the present disclosure.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used to distinguish between similar objects and do not necessarily describe a particular sequence or chronological order. It is to be understood that the data so used may be interchanged where appropriate, so that the embodiments of the disclosure described herein can be practiced in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
It should also be noted that the target device involved in the present disclosure may include, but is not limited to, electronic devices such as mobile phones, tablet computers, wearable devices, and personal computers. The target device can invoke a video playing application to play the videos recommended by the corresponding server, where the video playing application may be an application installed on the terminal or a web application integrated in a browser. The videos recommended by the server, or any videos involved in the present disclosure, may be short videos, such as video clips and short scene plays, or long videos, such as movies and television shows; the present disclosure does not limit this.
At least one embodiment of the present disclosure provides a video recall method applied to a server. As illustrated in Fig. 1, the video recall method may include:
Step 102, in response to a video acquisition request sent by a target device, detecting whether the target device authorizes acquisition of application information on the target device, wherein a platform account of the target device is an account of which video interaction behavior occurring on a current platform does not meet preset conditions.
It should be noted that the video acquisition request involved in the present disclosure is not a request for a specific video. For example, a target device may send a video acquisition request to the server when the user has just opened the video playing application or refreshes the video feed; such a request does not specify to the server which video is to be acquired. Therefore, after receiving the video acquisition request sent by the target device, the server performs recall processing on the videos in the video library for the target device, and the recalled videos in the resulting video set are used for recommendation to the target device for viewing by the corresponding user. At the same time, it should be ensured as far as possible that the determined recalled videos match the viewing preferences of that user.
In an embodiment, the preset condition may be that the number of video interaction behaviors is greater than a preset number. For example, the number of video interaction behaviors may be zero, meaning that the platform account of the target device has not yet produced any video interaction behavior, which corresponds to the situation where the target device is using the video playing application for the first time or has not yet logged into a platform account. When the target device is using the video playing application for the first time, no historical data for it is recorded on the server. When the target device has not logged into a platform account, even if the server holds historical data generated by a platform account previously logged in on the target device, it is difficult for the server to identify the target device, so the stored historical data cannot be associated with it. In both of these cold-start scenarios, the video interaction behavior of the platform account of the target device is insufficient to reflect the video viewing preferences of the corresponding user.
In step 104, in the case that the target device has authorized the acquisition of the application information on the target device, target application information of the applications installed in the target device is acquired.
In an embodiment, the authorization information carried in the video acquisition request may be parsed, and whether the target device has authorized the acquisition of its application information may be determined from that authorization information. Alternatively, an authorization inquiry request may be sent to the target device, and authorization determined from the authorization response message it returns. Of course, an application information acquisition request may also be sent directly to the target device, with authorization determined from the returned response message: if the response message carries the target application information, the target device has authorized the acquisition of its application information. The present disclosure does not limit the specific manner in which it is determined whether the target device has authorized the acquisition of its own application information.
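The three authorization-detection strategies just described can be sketched as follows. This is a hypothetical illustration; the request and response field names (`authorized`, `app_info`) and the `query_device` callback are assumptions, not part of the patent.

```python
def is_authorized(request, query_device=None):
    """Return True if the target device has authorized app-info collection.

    Strategy 1: the video acquisition request itself carries an authorization flag.
    Strategy 2: send a separate authorization inquiry to the device.
    Strategy 3: request the app info directly; a non-empty payload implies consent.
    """
    # Strategy 1: authorization info carried in the video acquisition request.
    if "authorized" in request:
        return bool(request["authorized"])
    if query_device is None:
        return False
    # Strategy 2: send an authorization inquiry request to the target device.
    reply = query_device("authorization_inquiry")
    if reply is not None:
        return bool(reply.get("authorized"))
    # Strategy 3: request the app info directly; presence implies authorization.
    reply = query_device("get_app_info")
    return bool(reply and reply.get("app_info"))


# Example: a request that already carries the authorization flag.
ok = is_authorized({"authorized": True, "device_id": "dev-1"})
```

A real server would of course receive these fields over its own RPC or HTTP protocol; the dictionary interface here only illustrates the fallback order.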
In an embodiment, to avoid possible public-opinion risks, the target device may present an application information collection permission request to the user when the video playing application is installed or used, and collect the relevant application information of the installed applications on the device only after the user confirms the authorization.
In an embodiment, when the server is authorized to acquire the application information on the target device, it acquires the corresponding target application information for the applications installed in the target device. The target application information may include information such as the application identifiers and/or application names of the installed applications. The target application information may be obtained in various ways. For example, the target application information carried by the video acquisition request may be extracted from the request, in which case it corresponds to the applications installed when the target device sent the request; or, after receiving the video acquisition request, the server may send a target application information acquisition instruction to the target device and receive the target application information it returns; or, pre-stored target application information corresponding to a previous video acquisition request from the target device may be used directly as the current target application information. Generally, the closer the applications described by the target application information are to the applications actually installed in the target device when the video acquisition request is sent, the better the videos in the video set screened from the video library will match the viewing preferences of the user of the target device.
Accordingly, after the target application information of the target device is acquired, it may be stored in a local cache or other storage space of the server. When the next video acquisition request from the target device is received and the corresponding target application information is difficult to acquire in real time, the stored target application information can then serve as the target application information for that request and participate in video recall, avoiding a recall failure caused by the inability to acquire the target application information in real time.
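The caching behaviour described above amounts to a simple fallback. The following is a minimal sketch under assumed names (`app_info_cache`, `fetch_fresh`); a production server would use a real cache service rather than an in-memory dictionary.

```python
app_info_cache = {}  # device_id -> last known target application information


def get_target_app_info(device_id, fetch_fresh):
    """Prefer freshly reported app info; otherwise reuse the cached copy."""
    info = fetch_fresh(device_id)
    if info is not None:
        app_info_cache[device_id] = info  # store for future requests
        return info
    # Real-time acquisition failed: reuse what the previous request reported.
    return app_info_cache.get(device_id)


# First request carries fresh app info; the second fails to fetch it in real time.
first = get_target_app_info("dev-1", lambda d: ["com.example.news"])
second = get_target_app_info("dev-1", lambda d: None)
```

Here `second` falls back to the cached list stored during the first request, so recall can still proceed.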
In step 106, a video set matching the platform account is selected from a video library based at least on the target application information, wherein the video set is used to determine video to send to the target device.
The process of screening the video set matching the platform account from the video library is the process of performing recall processing on the videos in the video library, and the recalled videos screened from the video library form the video set.
In an embodiment, the device identifier of the target device may first be acquired; the device feature of the target device is then determined based on the acquired device identifier and the target application information; and the N videos with the highest similarity to the device feature are then searched from the video library to construct the video set, where N is a positive integer. By combining the device identifier of the target device with the target application information to determine the device feature, the device feature is guaranteed to correspond uniquely to the target device and thus reflect the installed applications on the target device more accurately, which in turn ensures that the subsequent process obtains a video set that better matches the viewing preferences of the corresponding user.
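The top-N search step can be sketched as follows. This is an illustration, not the patent's implementation: a brute-force cosine-similarity scan over the library is shown for clarity, whereas a real system would typically query an approximate nearest-neighbour similarity index. All names and the toy library are assumptions.

```python
import heapq
import math


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0


def recall_top_n(device_vec, video_library, n):
    """Return the IDs of the n videos whose feature vectors are most similar."""
    scored = ((cosine(device_vec, vec), vid) for vid, vec in video_library.items())
    return [vid for _, vid in heapq.nlargest(n, scored)]


# Toy video library: video id -> video feature vector.
library = {
    "v1": [1.0, 0.0],
    "v2": [0.9, 0.1],
    "v3": [0.0, 1.0],
}
top2 = recall_top_n([1.0, 0.0], library, 2)
```

With the device vector pointing along the first axis, `v1` and `v2` score highest and form the recalled set; thresholding the scores instead of taking the top N would implement the other variant described in the claims.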
In an embodiment, the device identifier of the target device may be a device serial number, a MAC (Media Access Control) address, a factory number, or any other identifier that can uniquely represent the device information of the target device. The device identifier may be obtained in various manners: for example, the device identifier carried in the video acquisition request may be extracted; or, after receiving the video acquisition request, a device identifier acquisition instruction may be sent to the target device and the device identifier returned by the target device received; or, pre-stored device identification information may be queried for the device identifier corresponding to the video acquisition request. Similarly, after the device identifier of the target device is obtained, it can be stored and used directly when the next video acquisition request sent by the target device is received, avoiding repeated on-the-fly acquisition.
In an embodiment, an identification feature vector corresponding to the device identifier and an application feature vector corresponding to the target application program information may be generated respectively, and a device feature vector may then be calculated based on the identification feature vector and the application feature vector, where the device feature vector is used to characterize the device features of the target device. It will be appreciated that the identification feature vector and the application feature vector should be generated according to consistent feature vector rules, for example having the same dimension, with corresponding bits of the two vectors carrying the same feature meaning, so that the subsequently calculated device feature vector has an explicit feature meaning. By calculating the device feature vector of the target device from both the identification feature vector and the application feature vector, the calculated device feature vector can accurately represent the device features of the target device, further improving the accuracy of video recall for the target device. Alternatively, the application feature vector corresponding to the target application program information may be directly taken as the device feature vector of the target device; the subsequent processing is the same as above and is not repeated here.
In an embodiment, the similarity between the device feature vector corresponding to the target device and the video feature vectors already imported into a similarity index may be calculated first, where the video feature vectors correspond to the videos in the video library; the videos whose similarity is greater than a preset similarity threshold, or whose similarity ranks within a preset number from the top, are then determined as the N videos most similar to the device features (i.e., the N videos most similar to the target device). Because the similarity index can process the videos in the video library in advance, before the similarity between each video and the device feature vector is calculated, the timeliness of similarity calculation is ensured and the speed of video recall is improved. In another embodiment, the dot product, cosine similarity, or PC coefficient between the device feature vector and the video feature vectors of the videos in the video library may be calculated directly, and the videos whose feature vector similarity is greater than a preset similarity threshold, or ranks within a preset number from the top, are determined as the N videos most similar to the device features.
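As an illustrative sketch (not a required implementation of the patented method), the top-N selection described above can be expressed in Python using dot product as the similarity measure; the function name and data layout are assumptions:

```python
def top_n_similar(device_vec, video_vecs, n):
    """Rank library videos by dot-product similarity to the device
    feature vector and keep the N highest-scoring video identifiers.
    video_vecs maps a video identifier to its feature vector."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    ranked = sorted(video_vecs, key=lambda vid: dot(device_vec, video_vecs[vid]),
                    reverse=True)
    return ranked[:n]
```

An equivalent threshold-based variant would keep every video whose score exceeds a preset similarity threshold instead of a fixed N.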
In the above embodiment, all or part of the video feature vectors in the video library may be imported into the similarity index; the similarity between the target device feature vector and the imported candidate video feature vectors is then calculated in the index, and the candidate videos whose similarity is greater than a preset similarity threshold, or ranks within a preset number from the top, are determined as the videos to be recalled. The partial set of videos may be the videos in the library recommended to devices of the same model as the target device, the videos recommended to devices located in the same area as the IP (Internet Protocol) address of the target device, or all videos under a certain video tag. The similarity index may be a Faiss similarity index, a FLANN (Fast Library for Approximate Nearest Neighbors) index, etc. By first performing a preliminary screening of the massive videos in the video library with the similarity index, and then performing recall processing only on the partial set of videos (the videos to be recalled) whose feature vectors are more similar to the target device feature vector, recall processing of every video in the library in response to each video acquisition request is avoided; this effectively reduces the number of videos subjected to recall processing, avoids invalid recall processing of weakly related videos, and improves recall efficiency. It may be appreciated that, in the above embodiment, the similarity index used for determining the videos to be recalled and the similarity index used for performing recall calculation on them may be the same index or different indexes, which the present disclosure does not limit.
In an embodiment, the videos in the video library may be screened to construct a video set by a pre-trained model, for example, at least the target application information may be input into the pre-trained model for screening the video set matching the platform account from the video library, the model being trained by a training sample comprising a positive sample and a negative sample, wherein the positive sample is generated based on the basic information of the account of the same type as the platform account and the information of the video having undergone the video interaction, and the negative sample is generated based on the basic information of the account of the same type as the platform account and the information of the video sampled from the video library. The recall video is screened from the video library by utilizing the pre-trained model, so that the processing steps in video recall are simplified to reduce the workload of video recall, the accuracy of video recall can be ensured, and the screened recall video is more in accordance with the video watching preference of the user corresponding to the target equipment.
Further, in an embodiment, the training process of the video recall model may include: acquiring a training sample, wherein the training sample comprises sample application program information corresponding to an installed application program in sample equipment and a video identifier corresponding to a sample video, and the video identifier corresponding to the sample video is marked with an actual hit result of the sample equipment on the sample video; inputting the training sample into a video recall model, calculating a feature vector corresponding to the training sample through the video recall model, and outputting a predicted hit result of sample equipment on the sample video; and adjusting model parameters of the video recall model based on the difference between the output predicted hit result and the actual hit result. And calculating a predicted hit result of the sample equipment on the sample video by using the sample application program information and the sample video characteristics, so that the predicted hit result is ensured to be related to an application program installed in the sample equipment, and therefore, the video recall model adjusted based on the predicted hit result and the actual hit result is more in line with the actual hit condition of the sample video, and the accuracy of the video recall model is further improved.
Training the video recall model requires training samples, which may include positive and negative samples. In one embodiment, the acquisition of training samples may proceed as follows: acquire the video identifier of a sample video that was recommended to a sample device corresponding to an account of the same type as the platform account and that underwent video interaction with that account; this video identifier and the sample application program information of the sample device form a positive sample whose actual hit result is a hit. Then acquire the video identifiers of videos sampled from the video library and combine them with the sample application program information of each sample device to form negative samples whose actual hit result is a miss.
The annotated actual hit result of the sample device on the sample video indicates whether the sample video was played by the sample device: if the sample video was played by the sample device, the actual hit result annotated on the video identifier corresponding to the sample video is "played"; otherwise, if the sample video was not played by the sample device, the annotated actual hit result is "not played". Therefore, the actual hit result annotated on a positive sample is "played", and the actual hit result annotated on a negative sample is "not played". Because the training samples include video identifiers corresponding to sample videos under various conditions (recommended to the sample device, not recommended to the sample device, and so on), the accuracy of the video recall model trained with these samples can be ensured.
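A minimal sketch of this positive/negative sample construction, with hypothetical names and a fixed random seed for reproducibility:

```python
import random

def build_training_samples(app_info, interacted, library, neg_per_pos=1, seed=0):
    """Positives: videos the sample account actually interacted with (label 1).
    Negatives: videos randomly sampled from the rest of the library (label 0)."""
    rng = random.Random(seed)
    positives = [(app_info, vid, 1) for vid in interacted]
    candidates = [v for v in library if v not in set(interacted)]
    negatives = [(app_info, rng.choice(candidates), 0)
                 for _ in range(len(positives) * neg_per_pos)]
    return positives + negatives
```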
In an embodiment, the sample videos may include a specific sample video recommended to the sample device, where the specific sample video is a sample video recommended to the sample device in response to the first video acquisition request issued by the sample device. Training the video recall model with sample videos recommended in response to a device's first video acquisition request ensures that the trained model is suitable for the first video acquisition request sent by the target device, so that recall processing can still be performed accurately on the videos in the video library in cold-start scenarios, such as when the target device uses the video playing application for the first time, or when no platform account is logged in and no historical data exists or can be acquired.
In an embodiment, the video identifier corresponding to the sample video may be a video ID of the sample video; the video related information of the sample video can be uniquely represented by a combination of a video uploading person and video uploading time. The sample device identifier included in the training sample may be a device identifier of the sample device, a MAC address, or factory number, etc. that can uniquely represent the sample device, and the sample device identifier corresponding to the video acquisition request may be queried in the pre-stored sample device identifier information.
In an embodiment, the sample application information may include information such as an application identification and/or an application name of an installed application in the sample device. The sample application information may be obtained in association with the sample device identification. Generally, the closer the application program corresponding to the sample application program information is to the application program installed in the sample device when the video acquisition request is sent, the more accurate the trained video recall model is.
In an embodiment, a device feature vector corresponding to the sample device may be determined based on the sample application program information and the sample device identifier of the sample device, the device feature vector being used to characterize the device features of the sample device; a sample video feature vector corresponding to the sample video may be calculated based on the video identifier corresponding to the sample video; and the similarity between the device feature vector and the sample video feature vector, which characterizes how similar the two vectors are, may be converted into a predicted hit result of the sample device on the sample video. Through this recall processing, sample videos matching the sample device can be screened from the video library to obtain the predicted hit result of the sample device on the sample video, making it convenient to adjust the parameters of the video recall model according to the deviation between the predicted hit result and the actual hit result, thereby realizing training of the video recall model. Alternatively, the sample application feature vector corresponding to the sample application program information may be directly taken as the sample device feature vector of the sample device; the subsequent processing is the same as above and is not repeated here.
In one embodiment, the actual hit result may be marked as 1 (played) or 0 (not played), and the predicted hit result may be a probability value within [0, 1]. When the probability value corresponding to the predicted hit result lies in (0, 1), that is, when the predicted hit result is not equal to the actual hit result, the model parameters of the video recall model are adjusted according to the difference between the predicted hit result and the actual hit result. The adjusted model parameters may include the dimensions of the feature vectors, which may be preset by the server before training begins and adjusted along with the output results during training. In addition, the feature vectors involved in training the video recall model and in using it for video recall may be embedding feature vectors.
According to the embodiment of the disclosure, on one hand, the related information of the installed application program of the target device is used as the basis of the user preference of the target device, after the target application program information corresponding to the installed application program in the target device is obtained, even if the target device uses the video playing application for the first time or does not exist or cannot obtain historical data when the user account is not logged in, and the like, the video set formed by recall videos meeting the viewing preference of the user corresponding to the target device can still be accurately determined. On the other hand, after the video identification of the video to be recalled is acquired, the video recall process can be completed without acquiring detailed metadata information, corresponding recall logic is simple, and the operation amount is small when the video recall is performed by using a trained recall model, so that the recall processing efficiency is ensured to a certain extent.
The process of training the video recall model by the server according to the technical scheme of the present disclosure is described in detail below with reference to a schematic diagram of a training process of the video recall model shown in fig. 2. In the training process of the video recall model described in the present disclosure, multiple training samples may be used to train the model, and the embodiment shown in fig. 2 illustrates the training process of the recall model by taking the processing process of any training sample as an example. The training process of the video recall model may include the steps of:
step 202, obtaining application feature vectors corresponding to installed application programs in the sample equipment.
In this embodiment, training samples are acquired before the video recall model is trained. For any training sample, the sample device may be determined first and then the sample video corresponding to it; or the sample video may be determined first and then the sample device corresponding to it (as shown in step 208); or the sample device and the sample video may be acquired simultaneously from the training sample data set. The acquisition order of the sample device and the sample video is not limited in this disclosure.
In one embodiment, the sample video includes a sample video recommended to a sample device in response to a video acquisition request first issued by the sample device. The actual hit result of the marked sample device on the sample video is used for indicating whether the sample video is played by the sample device: if the sample video is played by the sample equipment, the actual hit result of the marked video identification corresponding to the sample video is played; otherwise, if the sample video is not played by the sample device, the actual hit result of the marked video identifier corresponding to the sample video is not played. Therefore, the actual hit result of the marked positive sample in the training sample is played, and the actual hit result of the marked negative sample in the training sample is not played.
In an embodiment, the process of determining the sample device and the sample video is a process of obtaining a sample device identifier and a sample video identifier, for example, the sample device identifier may be a device identifier, a MAC address, or a factory number of the sample device, which may uniquely represent device information of the sample device; the sample video identifier may be video related information capable of uniquely representing the sample video, such as a video ID of the sample video or a combination of a video uploading user and a video uploading time.
After the sample device identifier is obtained, a sample device identifier feature vector corresponding to the sample device identifier can be calculated, and the specific process of calculation can be seen from the disclosure in the related art. It should be understood that, when the feature vector dimension is used as the adjustment parameter of the video recall model, the dimensions of the feature vectors involved in the processing for different training samples may be different, but the dimensions of the feature vectors involved in the processing for the same training sample in the embodiment may be the same.
Step 204, calculating the optimal application feature vector by using each application feature vector.
For any sample application program installed in the sample device, each bit of the corresponding sample application feature vector may have a different preset meaning; for example, 10 bits of the feature vector may be preset to correspond to 10 dimensions such as "shopping," "social," "game," "video," "news," "sports," "entertainment," "encyclopedia," "medical," and "scientific research," which may also be preset to other meanings, and the disclosure is not limited thereto. Then, for a social-type application A installed in the sample device, its corresponding sample application feature vector may be E_A = [0.5, 6, 0.3, 0.4, 0.2, 0, 0.1, 0.5, 0.8, 0], and for a shopping-type application B installed in the sample device, its corresponding sample application feature vector may be E_B = [5, 0.2, 0, 0.5, 0, 0, 0, 0.1, 0, 0]. For any sample application program installed in the sample device, the specific values of the bits of the corresponding sample application feature vector can be preset according to the nature and content of the application program, and the disclosure is not limited to this.
After the sample application feature vectors corresponding to the applications installed in the sample device are obtained, the bit-by-bit maximum of these vectors can be taken using a max pooling algorithm to obtain the optimal application feature vector corresponding to the sample device. For example, for the sample application feature vectors E_A and E_B corresponding to application A and application B respectively, the optimal application feature vector obtained by max pooling is E_best = [5, 6, 0.3, 0.5, 0.2, 0, 0.1, 0.5, 0.8, 0]. The specific calculation of the max pooling algorithm is disclosed in the related art and is not limited by the present disclosure.
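The bit-by-bit max pooling above is straightforward to reproduce; a sketch in Python using the E_A and E_B vectors from the example:

```python
def max_pool(vectors):
    # Bit-by-bit (element-wise) max pooling over application feature vectors.
    return [max(bits) for bits in zip(*vectors)]

E_A = [0.5, 6, 0.3, 0.4, 0.2, 0, 0.1, 0.5, 0.8, 0]
E_B = [5, 0.2, 0, 0.5, 0, 0, 0, 0.1, 0, 0]
E_best = max_pool([E_A, E_B])
# E_best is [5, 6, 0.3, 0.5, 0.2, 0, 0.1, 0.5, 0.8, 0], matching the example
```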
In step 206, a sample device feature vector corresponding to the sample device is determined.
In an embodiment, the optimal application feature vector calculated in step 204 may be directly used as the sample device feature vector corresponding to the sample device. In this case, the sample device feature vector reflects purely the user viewing preferences corresponding to the applications installed in the sample device. Moreover, since the sample device identification feature vector is not used, only the sample application program information corresponding to the sample device needs to be acquired when collecting the training sample, without acquiring the sample device identifier.
In one embodiment, the sample device identification feature vector and the optimal application feature vector described above may be used to calculate the sample device feature vector: for example, the sample device identification feature vector may be added to the optimal application feature vector bit by bit, or the two may be multiplied bit by bit, and other algorithms may of course also be used. The calculated sample device feature vector reflects both the user viewing preferences corresponding to the applications installed in the sample device and the related information of the sample device itself. Since the device identifier corresponds one-to-one with the device, training samples can be distinguished at the device level even when the same applications are installed on different sample devices, yielding a more accurate video recall model.
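The bit-by-bit combinations mentioned above can be sketched as follows (which combination to use is a design choice; both assume the two vectors share the same dimension, and the function names are illustrative):

```python
def combine_by_addition(id_vec, app_vec):
    # Bit-by-bit (element-wise) addition of the device identification
    # feature vector and the optimal application feature vector.
    return [i + a for i, a in zip(id_vec, app_vec)]

def combine_by_multiplication(id_vec, app_vec):
    # Bit-by-bit (element-wise) multiplication, an alternative combination.
    return [i * a for i, a in zip(id_vec, app_vec)]
```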
In this embodiment, in the case of calculating the sample device feature vector using the sample device identifier feature vector and the above-mentioned optimal application feature vector, as an exemplary embodiment, the sample device identifier feature vector may be calculated based on the sample device identifier, and then the optimal application feature vector may be calculated based on the sample application information; alternatively, as another exemplary embodiment, the optimal application feature vector may be calculated based on the sample application information, and then the sample device identification feature vector may be calculated based on the sample device identification. In other words, the "calculate sample device identifier feature vector" and the "calculate optimal application feature vector" do not have a necessary sequence, and can be adjusted according to actual situations.
Step 208, calculating a sample video feature vector corresponding to the sample video.
Similarly, there is no necessary sequence between steps 202-206 and 208, and the sequence may be adjusted according to the actual situation.
After determining the sample video identifier corresponding to the sample video in the training sample, calculating the sample video feature vector by using the sample video identifier, similar to the process of calculating the sample device identifier feature vector based on the sample device identifier, the specific calculation process of the sample video feature vector can be referred to the disclosure in the related art, which is not limited in this disclosure.
At step 210, a similarity between the sample device feature vector and the sample video feature vector is calculated.
In one embodiment, the dot product between the sample device feature vector and the sample video feature vector may be calculated and taken as the similarity of the two, as shown in formula (1):

r_{dot} = A \cdot B = \sum_{i=1}^{n} A_i B_i    (1)
where the feature vectors A and B in formula (1) are the sample device feature vector and the sample video feature vector respectively, and A_i and B_i are the element values of the i-th bit of the sample device feature vector and the sample video feature vector respectively. In this way, the dot product of the sample device feature vector and the sample video feature vector is used as a measurement of the similarity between the two feature vectors.
In an embodiment, the cosine similarity between the sample device feature vector and the sample video feature vector may be calculated, and the similarity between the two feature vectors measured by their cosine similarity, as shown in formula (2):

r_{cos} = \frac{\sum_{i=1}^{n} A_i B_i}{\sqrt{\sum_{i=1}^{n} A_i^2} \cdot \sqrt{\sum_{i=1}^{n} B_i^2}}    (2)
where the feature vectors A and B in formula (2) are the sample device feature vector and the sample video feature vector respectively, and A_i and B_i are the element values of the i-th bit of each. This takes the angle between the vectors into account: the ratio of the inner product of the two feature vectors (corresponding elements multiplied and summed) to the product of their moduli is taken as the result. Of course, the respective mean value may be subtracted from each A_i and B_i before the calculation, so as to adjust the similarity between A and B. Because cosine similarity distinguishes the difference between the sample device feature vector and the sample video feature vector by vector direction, the viewing preference of the user corresponding to the sample device can be evaluated accurately.
In another embodiment, the PC coefficient (Pearson Correlation Coefficient) between the sample device feature vector and the sample video feature vector may be calculated, and the similarity between the two feature vectors measured by this coefficient, as shown in formula (3):

r_{PC} = \frac{\sum_{i=1}^{n} (A_i - \bar{A})(B_i - \bar{B})}{\sqrt{\sum_{i=1}^{n} (A_i - \bar{A})^2} \cdot \sqrt{\sum_{i=1}^{n} (B_i - \bar{B})^2}}    (3)
where the feature vectors A and B in formula (3) are the sample device feature vector and the sample video feature vector respectively, A_i and B_i are the element values of the i-th bit of each, and \bar{A} and \bar{B} are their respective mean values. Of course, other correlation calculation algorithms may be selected according to the actual situation, which the present disclosure does not limit.
Through the above calculation, the sample device feature vector and the sample video feature vector are converted into a scalar value (dot product, cosine similarity, PC coefficient, etc.); the larger the value, the stronger the similarity between the sample device feature vector and the sample video feature vector.
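The three similarity measures of formulas (1) to (3) can be sketched in plain Python as follows (illustrative only; any numerical library would compute the same values):

```python
import math

def dot_product(a, b):
    # Formula (1): sum of element-wise products.
    return sum(x * y for x, y in zip(a, b))

def cosine_similarity(a, b):
    # Formula (2): inner product divided by the product of the moduli.
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot_product(a, b) / (norm_a * norm_b)

def pearson_coefficient(a, b):
    # Formula (3): cosine similarity of the mean-centered vectors.
    mean_a, mean_b = sum(a) / len(a), sum(b) / len(b)
    da = [x - mean_a for x in a]
    db = [y - mean_b for y in b]
    return cosine_similarity(da, db)
```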
In step 212, the predicted hits of the sample device to the sample video are output and the model parameters are adjusted.
When the dot product r_dot, cosine similarity r_cos, or PC coefficient r_PC between the sample device feature vector and the sample video feature vector calculated by formula (1), (2), or (3) is taken as the similarity between the two feature vectors, this scalar similarity value can be converted into the predicted hit result of the sample device on the sample video using the Sigmoid function, as shown in formula (4):

S(r) = \frac{1}{1 + e^{-r}}    (4)
where r in formula (4) is the scalar value r_dot, r_cos, or r_PC calculated in step 210, and S(r) is the predicted hit result. From the nature of the Sigmoid function, S(r) ∈ (0, 1): the larger S(r) is, the more likely the sample device is to play the sample video if the sample video is recommended to it; the smaller S(r) is, the less likely the sample device is to play the sample video if it is recommended.
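The Sigmoid conversion of formula (4) in Python, assuming the similarity r has already been computed (the function name is illustrative):

```python
import math

def predicted_hit(r):
    # Formula (4): squash a scalar similarity into a probability in (0, 1).
    return 1.0 / (1.0 + math.exp(-r))
```

A higher similarity maps to a probability closer to 1, i.e. a higher predicted chance that the sample device plays the sample video.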
Of course, other corresponding algorithms disclosed in the related art may be used to convert the dot product r_dot, cosine similarity r_cos, or PC coefficient r_PC into the predicted hit result of the sample device on the sample video; the specific conversion process is not repeated here.
Because the sample video identification of the training sample is already marked with the actual hit of the sample device to the sample video: the actual hit result corresponding to the positive sample is 1, and the actual hit result corresponding to the negative sample is 0, so that after the predicted hit result S (r) is calculated, the model parameters of the video recall model can be appropriately adjusted based on the difference between the predicted hit result and the actual hit result.
In one embodiment, the dimensions of the feature vectors or other parameters of the model may be adjusted. For example, after the model outputs a predicted hit result corresponding to a certain training sample, the model parameters of the video recall model may be adjusted according to the difference between the predicted hit result and the actual hit result corresponding to the training sample; and after the model continuously outputs a plurality of predicted hit results corresponding to a plurality of training samples, the model parameters of the video recall model can be adjusted according to average values of differences between the predicted hit results and the corresponding actual hit results.
After parameter adjustment, the video recall model's prediction accuracy on subsequent training samples gradually improves, that is, the difference between the predicted hit results output by the model and the actual hit results gradually decreases. Training stops when the difference between the predicted and actual hit results is smaller than a preset threshold for one, or a consecutive preset number of, training samples, finally yielding a trained video recall model for performing video recall.
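The stopping criterion described above (a run of small differences between predicted and actual hit results) might be sketched as follows; the function name and the error-list layout are assumptions:

```python
def should_stop(prediction_errors, threshold, run_length):
    """Stop training once the last `run_length` absolute differences between
    predicted and actual hit results all fall below `threshold`."""
    if len(prediction_errors) < run_length:
        return False
    return all(e < threshold for e in prediction_errors[-run_length:])
```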
After the video recall model is trained, the model may be used to recall videos for the target device. The video recall process performed by the server according to the technical scheme of the present disclosure is described in detail below, in conjunction with the schematic diagram of the video recall process shown in fig. 3. The corresponding video recall process may include the following steps:
Step 302, an application feature vector corresponding to each installed application program in the target device is obtained.
The video acquisition request received by the server may be the first video acquisition request sent by the target device, where "first" means the first video acquisition request sent by the target device to the server corresponding to the video playing application after the target device installs that application, or a video acquisition request sent to that server while the video playing application is not logged into any user account, etc. Accordingly, the sample application program information included in the training samples used to train the video recall model may correspond to the applications installed when the sample device first issued a video acquisition request. Because the training samples used in model training contain sample application program information corresponding to the first video acquisition request sent by the sample device, the video recall model can perform video recall in response to the first video acquisition request sent by the target device, thereby facilitating video recall for the target device in cold-start scenarios, such as when the target device uses the video playing application for the first time, or when no user account is logged in and no historical data exists or can be acquired.
After receiving a video acquisition request sent by any target device, the server acquires the application feature vectors corresponding to the applications installed in that target device. In an embodiment, after receiving the video acquisition request, target application information corresponding to each application installed in the target device is first acquired, and then the application feature vector corresponding to each application is calculated. The target application information may be acquired in various manners: the target application information carried by the video acquisition request may be extracted from the request, in which case it corresponds to the applications installed when the target device sent the request; or, after receiving the video acquisition request, the server may send an application information acquisition instruction to the target device and receive the target application information returned by the target device; or, pre-stored target application information corresponding to a previous video acquisition request of the target device may be used directly as the current target application information. Generally, the closer the applications described by the target application information are to the applications actually installed in the target device when the video acquisition request is sent, the more accurate the predicted hit result for the target video.
In an embodiment, the target application information may include related information such as an application identifier and/or an application name of each installed application in the target device. As an exemplary embodiment, the target application information may include the related information corresponding to all applications installed in the target device. Performing video recall processing based on the related information of all applications makes the influencing factors of the recall result more comprehensive, ensuring that recall works normally for the broad population of users.
As another exemplary embodiment, the target application information includes the related information corresponding to applications having a strong correlation with the video playing application installed in the target device. For example, a correlation coefficient between the video playing application and each application possibly installed in the target device may be calculated in advance, and information such as the names of applications whose correlation coefficient is greater than a preset threshold may be preconfigured in the target device as configuration information of the video playing application. When the target device needs to request a video, it may then include in the video acquisition request the related information of those installed applications matching the preconfigured application names, and send the request to the server.
Because the server finishes screening the large number of applications in advance, only the related information of applications strongly correlated with the video playing application is used as the recall basis. This avoids invalid recall processing over applications weakly related to the video playing application, such as low-level operating system applications, reduces the computational workload of calculating the optimal application feature vector, and improves video recall efficiency.
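A minimal sketch of this pre-screening, assuming the strongly correlated application names have already been configured in the target device (all names below are hypothetical):

```python
# Hypothetical sketch: keep only installed applications that were pre-screened
# (by correlation coefficient) as strongly related to the video playing app.
def relevant_installed_apps(installed_apps, strongly_correlated):
    """Filter the device's installed-app list against the preconfigured
    list of strongly correlated application names."""
    allowed = set(strongly_correlated)
    return [app for app in installed_apps if app in allowed]
```

Only the filtered list would then be carried in the video acquisition request, reducing both the request size and the number of feature vectors to compute.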
After the target application information corresponding to the target device is obtained, a target application feature vector corresponding to each target application, such as APPi embedding (i > 1), may be calculated based on the target application information. For the specific calculation process, please refer to the foregoing description and the disclosure in the related art, which the present disclosure does not limit.
And step 304, calculating the optimal application feature vector by utilizing each application feature vector.
After the target application feature vectors corresponding to the installed applications in the target device are calculated, the position-wise (elementwise) maximum of the target application feature vectors may be taken through a max pooling algorithm, yielding the optimal application feature vector corresponding to the target device, such as APP embedding.
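Assuming the per-application feature vectors are fixed-length numeric vectors, this position-wise max pooling step can be sketched as follows (function and variable names are illustrative, not from the disclosure):

```python
# Hypothetical sketch of position-wise max pooling over per-app embeddings.
import numpy as np

def best_app_vector(app_embeddings):
    """Take the elementwise maximum across all installed-app feature
    vectors, producing one optimal application feature vector."""
    return np.max(np.stack(app_embeddings), axis=0)

app1 = np.array([0.2, 0.7, 0.1, 0.5])   # APP1 embedding (toy 4-dim example)
app2 = np.array([0.6, 0.3, 0.4, 0.2])   # APP2 embedding
app_emb = best_app_vector([app1, app2])  # -> [0.6, 0.7, 0.4, 0.5]
```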
Step 306, determining a target device feature vector corresponding to the target device.
In an embodiment, the optimal application feature vector calculated in step 304 may be used directly as the target device feature vector corresponding to the target device. In this case, the target device feature vector reflects only the user viewing preference corresponding to the applications installed in the target device. Since the target device identification feature vector corresponding to the target device identification is not used, only the target application information needs to be acquired, without acquiring the target device identification, thereby reducing the data transmission amount between the target device and the server to some extent.
In one embodiment, the target device identification feature vector and the optimal application feature vector may be utilized to calculate the target device feature vector. As an exemplary embodiment, the target device identification feature vector may be added elementwise to the optimal application feature vector; for example, the target device identification feature vector did embedding may be added elementwise to the APP embedding described above to obtain the target device feature vector, such as device embedding. As another exemplary embodiment, the target device identification feature vector may be multiplied elementwise with the optimal application feature vector, and of course other algorithms may also be used to calculate the target device feature vector. A target device feature vector calculated in this way reflects both the user viewing preference corresponding to the applications installed in the target device and the related information of the target device itself, placing target devices in one-to-one correspondence with their installed applications. Even if the same applications are installed on different target devices, the training targets can be distinguished at the device level, yielding a video recall model with more accurate prediction.
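A hedged sketch of the two combination modes described above, elementwise addition and elementwise multiplication (the function name and `mode` parameter are assumptions introduced for illustration):

```python
# Hypothetical sketch: combine the device-identification embedding with the
# optimal application embedding; the two vectors must share one shape.
import numpy as np

def device_feature_vector(did_embedding, app_embedding, mode="add"):
    """Return device embedding = did embedding (+ or *) APP embedding."""
    if mode == "add":
        return did_embedding + app_embedding   # elementwise addition
    if mode == "mul":
        return did_embedding * app_embedding   # elementwise multiplication
    raise ValueError(f"unknown mode: {mode}")
```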
In this embodiment, in the case of calculating the target device feature vector by using the target device identification feature vector and the optimal application feature vector, "calculating the target device identification feature vector" and "calculating the optimal application feature vector" have no required order, and the order may be adjusted according to the actual situation.
Step 308, importing the alternative videos in the video library into a similarity index.
If the video library contains a massive number of videos that might be recalled, directly performing video recall processing on all videos in the video library may make the recall processing inefficient. Therefore, all videos in the video library may first be preliminarily screened using a similarity index, to determine the videos to be recalled.
In an embodiment, the candidate videos in the video library may be determined first, and then the candidate video feature vectors corresponding to the candidate videos are imported into the similarity index. The candidate videos may be videos in the video library that are recommended to devices of the same model as the target device, or videos in the video library that are recommended to devices whose IP addresses lie within the same preset region as the target device. The similarity index may be a Faiss similarity index or a FLANN index, etc.
Step 310, determining the video to be recalled through the similarity index, and calculating the feature vector of the video to be recalled.
For the candidate video feature vectors imported into the index, the similarity between each candidate video feature vector and the target device feature vector corresponding to the target device can be calculated, and candidate videos whose similarity is greater than a preset similarity threshold, or whose similarity ranks in the top preset number, are determined to be the videos to be recalled. For the specific process of determining videos to be recalled using a similarity index, reference may be made to the disclosure in the related art, which the present disclosure does not limit.
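The import-and-search flow of steps 308-310 might be sketched with a brute-force inner-product index standing in for Faiss or FLANN (function names are hypothetical; a production system would use the real index library rather than this linear scan):

```python
# Hypothetical sketch of a similarity index: import candidate video vectors,
# then retrieve the top-k candidates by inner product with the device vector.
import numpy as np

def build_index(candidate_vectors):
    """'Import' candidate video feature vectors into an in-memory matrix;
    a production system might use faiss.IndexFlatIP or a FLANN index."""
    return np.stack(candidate_vectors)

def search_index(index, device_vector, k):
    """Return the indices of the k candidates most similar to the
    target device feature vector, scored by inner product."""
    scores = index @ device_vector            # one score per candidate video
    return np.argsort(-scores)[:k].tolist()   # top-k, highest score first
```

The videos at the returned indices would then be the videos to be recalled.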
Through this preliminary screening by the similarity index, a subset of videos strongly correlated with the target device is screened out of a video library containing a large number of videos. This greatly reduces the number of videos to be recalled, effectively reducing the amount of computation in the video recall processing and improving video recall efficiency.
For each determined video to be recalled, the corresponding to-be-recalled video feature vector can be calculated, such as pid embedding. In fact, for a given candidate video, once it is determined to be a video to be recalled, "candidate video" and "video to be recalled" are merely different names for the same video at different stages. Because the candidate video feature vector corresponding to the candidate video has already been imported into the similarity index, after the candidate video is determined to be a video to be recalled, its candidate video feature vector can be used directly as the to-be-recalled video feature vector, thereby avoiding repeated calculation.
In this embodiment, as an exemplary embodiment, steps 302-306 described above may be performed first to determine the target device feature vector; or, as another exemplary embodiment, steps 308-310 described above may be performed first to determine the to-be-recalled video feature vectors. In other words, "determining the target device feature vector" and "determining the to-be-recalled video feature vector" have no required order, and the order may be adjusted according to the actual situation.
Step 312, the similarity between the target device feature vector and the video feature vector to be recalled is calculated.
In an embodiment, the target device feature vector and the video feature vector to be recalled may be input into a similarity index, and the similarity between the target device feature vector and the video feature vector to be recalled may be determined according to the output result of the similarity index.
In one embodiment, a dot product r_dot between the target device feature vector and the target video feature vector may be calculated: see equation (1) above in step 210. In this case, the dot product of the target device feature vector and the target video feature vector serves as the measurement value of the similarity between the two feature vectors.
In another embodiment, a cosine similarity r_cosine between the target device feature vector and the target video feature vector may be calculated: see equation (2) above in step 210. Here the angle between the target device feature vector and the target video feature vector is considered, and the ratio of the inner product of the two feature vectors (elementwise multiplication and summation) to the product of their moduli is taken as the calculation result. Of course, in the above calculation, the corresponding mean value may first be subtracted from each of the values A_i and B_i, so as to adjust the cosine similarity. Because cosine similarity distinguishes the difference between the target device feature vector and the target video feature vector by vector direction, the viewing preference of the user corresponding to the target device can be evaluated accurately.
In yet another embodiment, a Pearson correlation coefficient r_PC between the target device feature vector and the target video feature vector may be calculated: see equation (3) above in step 210. Of course, other correlation calculation algorithms may be selected according to the actual situation, which the present disclosure does not limit.
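The three similarity measures of these embodiments, dot product, cosine similarity, and Pearson coefficient, can be sketched as follows (the function names mirror the r notation above but are otherwise assumptions; the equation numbers refer to step 210):

```python
# Hypothetical sketch of the three similarity measures between the
# target device feature vector a and the target video feature vector b.
import numpy as np

def r_dot(a, b):
    """Dot-product similarity (equation (1))."""
    return float(np.dot(a, b))

def r_cosine(a, b):
    """Cosine similarity (equation (2)): inner product divided by the
    product of the two vectors' moduli."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def r_pearson(a, b):
    """Pearson coefficient (equation (3)): cosine similarity of the
    mean-centered vectors."""
    return r_cosine(a - a.mean(), b - b.mean())
```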
Through the above calculation, the similarity between the target device feature vector and the target video feature vector is converted into a numerical value in scalar form.
And step 314, determining a video set according to the similarity calculation result.
After the similarity between the target device feature vector and the target video feature vector is converted into a scalar value, if r = r_dot, the scalar value can be converted into a predicted hit result S(r) of the target device on the target video by using a Sigmoid function: see equation (3) above in step 212. From the nature of the Sigmoid function, S(r) ∈ (0, 1): the larger S(r), the more likely the target device is to play the target video if the target video is recommended to it, namely, the more likely the target video is to be played on the target device; the smaller S(r), the less likely the target device is to play the target video if it is recommended, i.e., the less likely the target video is to be played on the target device.
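A minimal sketch of the Sigmoid conversion described above (the function name is an assumption):

```python
# Hypothetical sketch: map the dot-product similarity r to a predicted
# hit probability S(r) in (0, 1) via the Sigmoid function.
import math

def predicted_hit(r):
    """S(r) = 1 / (1 + e^(-r)); larger r means the target video is more
    likely to be played on the target device."""
    return 1.0 / (1.0 + math.exp(-r))
```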
If r = r_cosine or r = r_PC, other corresponding algorithms disclosed in the related art may be used to convert the cosine similarity r_cosine or the Pearson coefficient r_PC into a predicted hit result of the target device on the target video; the specific conversion process is not repeated here.
In an embodiment, after the predicted hit results corresponding to all videos to be recalled are output, the N videos whose S(r) among all predicted hit results is greater than a preset probability threshold S_0 may be determined as the recall videos.
In another embodiment, the video acquisition request sent by the target device may include a number N1 of recall videos, or a number N2 of recall videos may be preset in the server. As an exemplary embodiment, after outputting the predicted hit results corresponding to all videos to be recalled, the server may sort these predicted hit results, and determine the videos to be recalled corresponding to the top N1 or N2 values of S(r) as the recall videos. In this way, the output recall videos satisfy the number requested by the target device, and it is ensured that they are some or all of the videos with the largest predicted hit results among the videos to be recalled.
As another exemplary embodiment, after calculating the predicted hit result corresponding to any video to be recalled, the server may sort all currently calculated predicted hit results; when the number of videos whose S(r) among the current predicted hit results is greater than the preset probability threshold S_0 reaches N1 or N2, recall processing on the remaining videos to be recalled is stopped, and those N1 or N2 videos are determined as the recall videos. This avoids, to a certain extent, the invalid processing that might be caused by performing recall processing on all videos to be recalled, and further improves recall efficiency.
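An illustrative sketch of this early-stopping selection (names are hypothetical; `scored_videos` stands for a stream of (video identifier, S(r)) pairs produced as recall processing proceeds):

```python
# Hypothetical sketch: collect videos whose predicted hit S(r) exceeds the
# preset threshold s0, stopping early once n qualifying videos are found.
def select_recall_videos(scored_videos, s0, n):
    """Return up to n video identifiers with S(r) > s0, in scan order."""
    recalled = []
    for video_id, s in scored_videos:
        if s > s0:
            recalled.append(video_id)
            if len(recalled) == n:   # stop recall processing early
                break
    return recalled
```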
The recall videos output through the above video recall process together form a video set. The recall videos in the video set may be recommended directly to the target device for playback when necessary. Alternatively, they may be sent as videos to be recommended to a sorting system, which further screens them and recommends the screened videos to the target device for playback when necessary. In fact, processing such as sorting the predicted hit results of the target videos output by the video recall model in the above exemplary embodiments may also be completed by the sorting system; the video recall system then only needs to send the output predicted hit results of the target videos to the sorting system.
In the embodiment shown in fig. 3, the target device feature vector calculated from the target device identification feature vector and the optimal application feature vector corresponding to the applications participates in the similarity calculation. In fact, the optimal application feature vector (which may itself be regarded as the target device feature vector of the target device) may also be used directly in the similarity calculation. Another video recall process, illustrated in fig. 4, is described below. The video recall process may include:
Step 402, obtaining application feature vectors corresponding to each installed application program in the target device.
Step 404, calculating an optimal application feature vector by using each application feature vector.
Step 406, importing the alternative videos in the video library into a similarity index.
In step 408, the video to be recalled is determined through the similarity index, and the feature vector of the video to be recalled is calculated.
Step 406 is an optional step, and in fact, all or part of the videos in the video library may be directly determined as the video to be recalled to participate in the processing of the subsequent steps.
The detailed processing of steps 402-408 does not differ substantially from that of steps 302-304 and steps 308-310; reference may therefore be made to the descriptions of steps 302-304 and steps 308-310, which are not repeated here.
In step 410, the similarity between the best application feature vector and the video feature vector to be recalled is calculated.
For a given video to be recalled, this step directly calculates the similarity between the feature vector of the video to be recalled and the optimal application feature vector of the target device. This similarity characterizes the similarity between the target device and the video to be recalled, namely, the degree of match between the video to be recalled and the viewing preference of the user corresponding to the target device.
Step 412, determining the video set according to the similarity calculation result.
The detailed processing of step 412 does not differ substantially from that of step 314; reference may be made to the description of step 314, which is not repeated here.
This completes the recall process of selecting the video set from the video library.
Corresponding to the embodiments of the video recall method described above, the present disclosure also proposes embodiments of a video recall device.
Fig. 5 is a block diagram of a video recall device according to an embodiment of the present disclosure. The video recall device shown in this embodiment may be applied to a server running a video recommendation application, where the server may be a physical server with an independent host, a virtual server borne by a host cluster, or a cloud server. The terminal device includes, but is not limited to, electronic devices such as mobile phones, tablet computers, wearable devices, and personal computers. The video playing application may be an application installed in a terminal or a web application integrated in a browser; through the video playing application, a user can receive videos recommended by the server, where a video may be a short video, such as a video clip or a short scene play, or a long video, such as a movie or a television show.
As shown in fig. 5, the video recall device may include:
An authorization detection module 501 configured to respond to a video acquisition request sent by a target device, and detect whether the target device is authorized to acquire application information on the target device, where a platform account of the target device is an account whose video interaction behavior occurring on a current platform does not meet a preset condition;
An application information acquisition module 502 configured to acquire target application information of an installed application in the target device, in a case where the target device authorizes acquisition of the application information on the target device;
A video screening module 503 is configured to screen a video set matching the platform account from a video library based at least on the target application information, wherein the video set is used to determine the video sent to the target device.
Optionally, the video filtering module 503 includes:
A device identifier obtaining unit 503A configured to obtain a device identifier of the target device;
A device feature determination unit 503B configured to determine a device feature of the target device based on the device identification and the target application information;
And a video searching unit 503C configured to search N videos with highest similarity to the device features from the video library, so as to obtain the video set, where N is a positive integer.
Optionally, the device feature determining unit 503B is further configured to:
Determining an identification feature vector corresponding to the equipment identification and an application feature vector corresponding to the target application program information;
and calculating an equipment characteristic vector based on the identification characteristic vector and the application characteristic vector, wherein the equipment characteristic vector is used for representing the equipment characteristics of the target equipment.
Optionally, the video search unit 503C is further configured to:
Calculating the similarity between the device feature vector corresponding to the target device and the video feature vector imported in the similarity index, wherein the video feature vector corresponds to the video in the video library;
And determining videos whose similarity is greater than a preset similarity threshold, or whose similarity ranks in the top preset number N, as the N videos with the highest similarity to the device features.
Optionally, the account of the same type as the platform account is an account in which the video interaction behavior occurring on the current platform does not meet a preset condition, where the preset condition includes: the number of the video interaction behaviors is larger than a preset number.
Optionally, the video filtering module 503 further includes:
A model input unit 503D configured to input at least the target application information into a pre-trained video recall model, where the video recall model is used to screen a video set from a video library that matches the platform account;
The video recall model is trained through a training sample comprising a positive sample and a negative sample, wherein the positive sample is generated based on basic information of accounts of the same type as the platform account and information of videos with video interaction behaviors, and the negative sample is generated based on basic information of accounts of the same type as the platform account and information of videos sampled from the video library.
Optionally, the training process of the video recall model includes:
Acquiring the training sample, wherein the training sample comprises sample application program information corresponding to an installed application program in sample equipment and a video identifier corresponding to a sample video, and the video identifier corresponding to the sample video is marked with an actual hit result of the sample equipment on the sample video;
inputting the training sample into a video recall model, calculating a feature vector corresponding to the training sample through the video recall model, and outputting a predicted hit result of the sample equipment on the sample video;
Model parameters of the video recall model are adjusted based on a difference between the predicted hit result and the actual hit result.
Optionally, the acquiring the training sample includes:
Acquiring a video identifier corresponding to a sample video that was recommended to a sample device corresponding to an account of the same type as the platform account and on which that account had video interaction behavior, and combining it with the sample application information corresponding to the sample device where that account resides, to form a positive sample whose actual hit result is a hit;
And acquiring a video identifier corresponding to a video sampled from the video library, and combining it with the sample application information corresponding to each sample device, to form a negative sample whose actual hit result is a miss.
Optionally, the calculating, by the video recall model, the feature vector corresponding to the training sample and outputting a predicted hit result of the sample device on the sample video includes:
determining an equipment characteristic vector corresponding to the sample equipment based on the sample application program information, wherein the equipment characteristic vector is used for representing equipment characteristics of the target equipment;
calculating a sample video feature vector corresponding to the sample video based on the video identifier corresponding to the sample video;
And converting the similarity between the equipment feature vector and the sample video feature vector into a predicted hit result of the sample equipment on the sample video, wherein the similarity is used for representing the similarity degree between the sample equipment feature vector and the sample video feature vector.
The embodiment of the disclosure also proposes an electronic device, including:
A processor;
A memory for storing the processor-executable instructions;
Wherein the processor is configured to execute the instructions to implement a video recall method as described in any of the embodiments above.
Embodiments of the present disclosure also provide a storage medium that, when executed by a processor of an electronic device, enables the electronic device to perform the video recall method of any of the above embodiments.
Embodiments of the present disclosure also provide a computer program product configured to perform the video recall method of any of the above embodiments.
Fig. 6 is a schematic block diagram of an electronic device shown in accordance with an embodiment of the present disclosure. For example, the electronic device 600 may be a server, a computer, an industrial personal computer, a mobile phone, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 6, an electronic device 600 may include one or more of the following components: a processing component 602, a memory 604, a power component 606, a multimedia component 608, an audio component 610, an input/output (I/O) interface 612, a sensor component 614, and a communication component 616.
The processing component 602 generally controls overall operation of the electronic device 600, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 602 may include one or more processors 620 to execute instructions to perform all or part of the steps of the video recall method described above. Further, the processing component 602 can include one or more modules that facilitate interaction between the processing component 602 and other components. For example, the processing component 602 may include a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
The memory 604 is configured to store various types of data to support operations at the electronic device 600. Examples of such data include instructions for any application or method operating on the electronic device 600, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 604 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 606 provides power to the various components of the electronic device 600. The power supply components 606 can include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 600.
The multimedia component 608 includes a screen providing an output interface between the electronic device 600 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 608 includes a front camera and/or a rear camera. When the electronic device 600 is in an operational mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 610 is configured to output and/or input audio signals. For example, the audio component 610 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 600 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 604 or transmitted via the communication component 616. In some embodiments, audio component 610 further includes a speaker for outputting audio signals.
The I/O interface 612 provides an interface between the processing component 602 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 614 includes one or more sensors for providing status assessment of various aspects of the electronic device 600. For example, the sensor assembly 614 may detect an on/off state of the electronic device 600, a relative positioning of the components, such as a display and keypad of the electronic device 600, the sensor assembly 614 may also detect a change in position of the electronic device 600 or a component of the electronic device 600, the presence or absence of a user's contact with the electronic device 600, an orientation or acceleration/deceleration of the electronic device 600, and a change in temperature of the electronic device 600. The sensor assembly 614 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact. The sensor assembly 614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 614 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 616 is configured to facilitate wired or wireless communication between the electronic device 600 and other devices. The electronic device 600 may access a wireless network based on a communication standard, such as WiFi, an operator network (e.g., 2G, 3G, 4G, or 5G), or a combination thereof. In one exemplary embodiment, the communication component 616 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 616 further includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an embodiment of the present disclosure, the electronic device 600 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for performing the video recall method described above.
In an embodiment of the present disclosure, a non-transitory computer-readable storage medium is also provided, such as the memory 604, comprising instructions executable by the processor 620 of the electronic device 600 to perform the video recall method described above. For example, the non-transitory computer-readable storage medium may be a ROM, a random-access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following its general principles and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises that element.
The foregoing has described in detail the method and apparatus provided by the embodiments of the present disclosure. Specific examples have been used herein to explain the principles and embodiments of the present disclosure; the above examples are provided only to facilitate understanding of the method of the present disclosure and its core ideas. Meanwhile, one of ordinary skill in the art may, in light of the ideas of the present disclosure, make changes to the detailed description and the scope of application. In view of the above, the content of this specification should not be construed as limiting the present disclosure.

Claims (20)

1. A video recall method, comprising:
in response to a video acquisition request sent by a target device, detecting whether the target device authorizes acquisition of application information on the target device, wherein a platform account of the target device is an account whose video interaction behaviors on a current platform do not meet a preset condition;
acquiring target application information of an installed application on the target device when the target device authorizes acquisition of the application information on the target device, wherein the target application information comprises an application identifier and/or an application name of the installed application; and
screening, based at least on the target application information, a video set matched with the platform account from a video library, wherein the video set is used for determining videos sent to the target device, and the videos in the video set comprise videos in the video library recommended to devices of the same model as the target device and/or videos recommended to devices located in the same region as the IP address of the target device.
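By way of illustration only, the flow of claim 1 might be sketched as below. Every name here (`recall_videos`, the dictionary fields, the `"authorized"` flag) is a hypothetical assumption, not the patented implementation, and the model-based screening of the later claims is reduced to a plain filter.

```python
# Hypothetical sketch of the claim-1 flow; names and fields are invented.
def recall_videos(target_device, video_library):
    """Return a candidate video set for a target device whose platform
    account has too little interaction history for behavior-based recall."""
    if not target_device.get("authorized", False):
        return []  # acquisition of app info was not authorized
    # Target application information (app identifiers/names) would feed a
    # recall model in a fuller system; it is unused by this plain filter.
    _app_info = target_device.get("installed_apps", [])
    # Keep videos recommended to devices of the same model and/or to
    # devices in the same IP region as the target device.
    return [
        video for video in video_library
        if video.get("model") == target_device.get("model")
        or video.get("ip_region") == target_device.get("ip_region")
    ]
```

The filter is deliberately naive; claims 2 to 9 replace it with a learned similarity search.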
2. The method of claim 1, wherein screening the video set matched with the platform account from the video library based at least on the target application information comprises:
acquiring a device identifier of the target device;
determining a device feature of the target device based on the device identifier and the target application information; and
searching the video library for N videos having the highest similarity to the device feature to obtain the video set, wherein N is a positive integer.
3. The method of claim 2, wherein determining the device feature of the target device based on the device identifier and the target application information comprises:
determining an identification feature vector corresponding to the device identifier and an application feature vector corresponding to the target application information; and
calculating a device feature vector based on the identification feature vector and the application feature vector, wherein the device feature vector is used for representing the device feature of the target device.
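Claim 3's combination of an identification feature vector and an application feature vector could, for example, be realized by mean-pooling per-application embeddings and concatenating the result with the device-identifier embedding. This is one plausible scheme, not the patented one; the embedding tables and function name are invented for illustration.

```python
import numpy as np

# Illustrative only: the embedding tables and the pool-then-concatenate
# scheme are assumptions, not taken from the patent text.
def device_feature_vector(device_id, app_ids, id_table, app_table):
    """Combine an identification feature vector and an application feature
    vector into one device feature vector."""
    id_vec = id_table[device_id]                      # identification feature vector
    app_vecs = np.stack([app_table[a] for a in app_ids])
    app_vec = app_vecs.mean(axis=0)                   # pooled application feature vector
    return np.concatenate([id_vec, app_vec])          # device feature vector
```

Any aggregation that yields a fixed-size vector (sum pooling, attention, an MLP) would serve the same role.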
4. The method of claim 2, wherein searching the video library for the N videos having the highest similarity to the device feature comprises:
calculating similarities between a device feature vector corresponding to the target device and video feature vectors imported into a similarity index, wherein the video feature vectors correspond to the videos in the video library; and
determining videos whose similarity is greater than a preset similarity threshold, or whose similarity ranks in the top N, as the N videos having the highest similarity to the device feature.
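A minimal in-memory version of the top-N similarity search of claim 4 (cosine similarity, with the optional threshold variant) might look like the following. A production recall system would typically use an approximate-nearest-neighbor index, which the claims do not specify; all names here are assumptions.

```python
import numpy as np

# Minimal top-N lookup over an in-memory "similarity index".
def top_n_videos(device_vec, video_vecs, video_ids, n, threshold=None):
    """Return IDs of the N videos whose feature vectors are most similar
    (cosine similarity) to the device feature vector."""
    a = device_vec / np.linalg.norm(device_vec)
    b = video_vecs / np.linalg.norm(video_vecs, axis=1, keepdims=True)
    sims = b @ a                              # cosine similarity per video
    order = np.argsort(-sims)                 # indices, descending similarity
    if threshold is not None:                 # claim 4's alternative cutoff
        order = [i for i in order if sims[i] > threshold]
    return [video_ids[i] for i in order[:n]]
```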
5. The method of claim 1, wherein the preset condition comprises: the number of video interaction behaviors is greater than a preset number.
6. The method of claim 1, wherein screening the video set matched with the platform account from the video library based at least on the target application information comprises:
inputting at least the target application information into a pre-trained video recall model, wherein the video recall model is used for screening the video set matched with the platform account from the video library;
wherein the video recall model is trained with training samples comprising positive samples and negative samples, the positive samples are generated based on basic information of accounts of the same type as the platform account and information of videos on which video interaction behaviors occurred, the negative samples are generated based on the basic information of the accounts of the same type as the platform account and information of videos sampled from the video library, and the accounts of the same type as the platform account are accounts whose video interaction behaviors on the current platform do not meet the preset condition.
7. The method of claim 6, wherein the training process of the video recall model comprises:
acquiring a training sample, wherein the training sample comprises sample application information corresponding to an installed application on a sample device and a video identifier corresponding to a sample video, and the video identifier corresponding to the sample video is labeled with an actual hit result of the sample device on the sample video;
inputting the training sample into the video recall model, calculating, by the video recall model, a feature vector corresponding to the training sample, and outputting a predicted hit result of the sample device on the sample video; and
adjusting model parameters of the video recall model based on a difference between the predicted hit result and the actual hit result.
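The adjust-on-difference step of claim 7 can be illustrated with a toy gradient update, assuming (purely for this sketch, the claims do not fix an architecture) that the model scores a device/video pair by a dot product passed through a sigmoid and is trained with the binary cross-entropy gradient.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy update: only the video-side vector is treated as a model parameter.
def train_step(device_vec, video_vec, actual_hit, lr=0.1):
    """Adjust parameters based on the difference between the predicted
    and the actual hit result (binary cross-entropy gradient)."""
    predicted = sigmoid(device_vec @ video_vec)   # predicted hit result
    error = predicted - actual_hit                # the "difference" of claim 7
    video_vec = video_vec - lr * error * device_vec
    return video_vec, predicted
```

Repeated steps drive the predicted hit result toward the labeled actual hit result.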
8. The method of claim 6, wherein acquiring the training sample comprises:
acquiring a video identifier of a sample video that was recommended to a sample device corresponding to an account of the same type as the platform account and on which the corresponding account performed video interaction behavior, and combining the video identifier with sample application information corresponding to the sample device where the corresponding account is located to form a positive sample whose actual hit result is a hit; and
acquiring video identifiers corresponding to videos sampled from the video library, and combining the video identifiers with the sample application information corresponding to each sample device to form negative samples whose actual hit result is a miss.
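The positive/negative sample construction of claim 8 might be sketched as below. The log format and field names are assumptions; a real pipeline would usually also filter out random negatives that collide with a device's positives, which this sketch omits.

```python
import random

# Hypothetical sample builder; the (device_apps, watched_ids) log layout
# is invented for illustration.
def build_training_samples(interaction_log, video_library, neg_per_device, seed=0):
    """Positives: (apps, video id, 1) for videos the account interacted
    with; negatives: (apps, video id, 0) for videos sampled at random."""
    rng = random.Random(seed)
    samples = []
    for device_apps, watched_ids in interaction_log:
        for vid in watched_ids:
            samples.append((device_apps, vid, 1))        # actual hit result: hit
        for vid in rng.sample(video_library, neg_per_device):
            samples.append((device_apps, vid, 0))        # actual hit result: miss
    return samples
```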
9. The method of claim 6, wherein calculating, by the video recall model, the feature vector corresponding to the training sample and outputting the predicted hit result of the sample device on the sample video comprises:
determining a device feature vector corresponding to the sample device based on the sample application information, wherein the device feature vector is used for representing a device feature of the sample device;
calculating a sample video feature vector corresponding to the sample video based on the video identifier corresponding to the sample video; and
converting a similarity between the device feature vector and the sample video feature vector into the predicted hit result of the sample device on the sample video, wherein the similarity is used for representing a degree of similarity between the device feature vector and the sample video feature vector.
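One way (an assumption, not mandated by claim 9) to convert the device/video similarity into a predicted hit result is to pass a scaled cosine similarity through a sigmoid, yielding a hit probability; the `temperature` scale is an invented tuning knob.

```python
import numpy as np

# Hypothetical similarity-to-hit conversion for a two-tower recall model.
def predicted_hit(device_vec, video_vec, temperature=5.0):
    """Map cosine similarity in [-1, 1] to a hit probability in (0, 1)."""
    cos = (device_vec @ video_vec) / (
        np.linalg.norm(device_vec) * np.linalg.norm(video_vec))
    return 1.0 / (1.0 + np.exp(-temperature * cos))   # sigmoid of scaled cosine
```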
10. A video recall apparatus, comprising:
an authorization detection module configured to detect, in response to a video acquisition request sent by a target device, whether the target device authorizes acquisition of application information on the target device, wherein a platform account of the target device is an account whose video interaction behaviors on a current platform do not meet a preset condition;
an application information acquisition module configured to acquire target application information of an installed application on the target device when the target device authorizes acquisition of the application information on the target device, wherein the target application information comprises an application identifier and/or an application name of the installed application; and
a video screening module configured to screen, based at least on the target application information, a video set matched with the platform account from a video library, wherein the video set is used for determining videos sent to the target device, and the videos in the video set comprise videos in the video library recommended to devices of the same model as the target device and/or videos recommended to devices located in the same region as the IP address of the target device.
11. The apparatus of claim 10, wherein the video screening module comprises:
a device identifier acquisition unit configured to acquire a device identifier of the target device;
a device feature determination unit configured to determine a device feature of the target device based on the device identifier and the target application information; and
a video search unit configured to search the video library for N videos having the highest similarity to the device feature to obtain the video set, wherein N is a positive integer.
12. The apparatus of claim 11, wherein the device feature determination unit is further configured to:
determine an identification feature vector corresponding to the device identifier and an application feature vector corresponding to the target application information; and
calculate a device feature vector based on the identification feature vector and the application feature vector, wherein the device feature vector is used for representing the device feature of the target device.
13. The apparatus of claim 11, wherein the video search unit is further configured to:
calculate similarities between a device feature vector corresponding to the target device and video feature vectors imported into a similarity index, wherein the video feature vectors correspond to the videos in the video library; and
determine videos whose similarity is greater than a preset similarity threshold, or whose similarity ranks in the top N, as the N videos having the highest similarity to the device feature.
14. The apparatus of claim 10, wherein the preset condition comprises: the number of video interaction behaviors is greater than a preset number.
15. The apparatus of claim 10, wherein the video screening module further comprises:
a model input unit configured to input at least the target application information into a pre-trained video recall model, wherein the video recall model is used for screening the video set matched with the platform account from the video library;
wherein the video recall model is trained with training samples comprising positive samples and negative samples, the positive samples are generated based on basic information of accounts of the same type as the platform account and information of videos on which video interaction behaviors occurred, the negative samples are generated based on the basic information of the accounts of the same type as the platform account and information of videos sampled from the video library, and the accounts of the same type as the platform account are accounts whose video interaction behaviors on the current platform do not meet the preset condition.
16. The apparatus of claim 15, wherein the training process of the video recall model comprises:
acquiring a training sample, wherein the training sample comprises sample application information corresponding to an installed application on a sample device and a video identifier corresponding to a sample video, and the video identifier corresponding to the sample video is labeled with an actual hit result of the sample device on the sample video;
inputting the training sample into the video recall model, calculating, by the video recall model, a feature vector corresponding to the training sample, and outputting a predicted hit result of the sample device on the sample video; and
adjusting model parameters of the video recall model based on a difference between the predicted hit result and the actual hit result.
17. The apparatus of claim 16, wherein acquiring the training sample comprises:
acquiring a video identifier of a sample video that was recommended to a sample device corresponding to an account of the same type as the platform account and on which the corresponding account performed video interaction behavior, and combining the video identifier with sample application information corresponding to the sample device where the corresponding account is located to form a positive sample whose actual hit result is a hit; and
acquiring video identifiers corresponding to videos sampled from the video library, and combining the video identifiers with the sample application information corresponding to each sample device to form negative samples whose actual hit result is a miss.
18. The apparatus of claim 16, wherein calculating, by the video recall model, the feature vector corresponding to the training sample and outputting the predicted hit result of the sample device on the sample video comprises:
determining a device feature vector corresponding to the sample device based on the sample application information, wherein the device feature vector is used for representing a device feature of the sample device;
calculating a sample video feature vector corresponding to the sample video based on the video identifier corresponding to the sample video; and
converting a similarity between the device feature vector and the sample video feature vector into the predicted hit result of the sample device on the sample video, wherein the similarity is used for representing a degree of similarity between the device feature vector and the sample video feature vector.
19. An electronic device, comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to execute the instructions to implement the video recall method of any one of claims 1 to 9.
20. A computer-readable storage medium, characterized in that instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the video recall method of any one of claims 1 to 9.
CN202010478591.9A 2020-05-29 2020-05-29 Video recall method, device, electronic equipment and storage medium Active CN113742522B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010478591.9A CN113742522B (en) 2020-05-29 2020-05-29 Video recall method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113742522A CN113742522A (en) 2021-12-03
CN113742522B true CN113742522B (en) 2024-05-10

Family

ID=78724961

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010478591.9A Active CN113742522B (en) 2020-05-29 2020-05-29 Video recall method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113742522B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106776873A (en) * 2016-11-29 2017-05-31 珠海市魅族科技有限公司 A kind of recommendation results generation method and device
CN107908686A (en) * 2017-10-31 2018-04-13 广东欧珀移动通信有限公司 Information-pushing method, device, server and readable storage medium storing program for executing
CN109714643A (en) * 2018-12-06 2019-05-03 北京达佳互联信息技术有限公司 Recommended method, system and the server and storage medium of video data
CN110688576A (en) * 2019-09-25 2020-01-14 北京达佳互联信息技术有限公司 Content recommendation method and device, electronic equipment and storage medium
CN114996509A (en) * 2022-04-24 2022-09-02 腾讯音乐娱乐科技(深圳)有限公司 Method and device for training video feature extraction model and video recommendation
CN115687690A (en) * 2022-10-09 2023-02-03 北京奇艺世纪科技有限公司 Video recommendation method and device, electronic equipment and storage medium
CN115984734A (en) * 2022-12-12 2023-04-18 北京奇艺世纪科技有限公司 Model training method, video recall method, model training device, video recall device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant