CN111767428A - Video recommendation method and device, electronic equipment and storage medium - Google Patents

Video recommendation method and device, electronic equipment and storage medium

Info

Publication number
CN111767428A
Authority
CN
China
Prior art keywords
recommended
video
videos
image quality
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010535770.1A
Other languages
Chinese (zh)
Inventor
冯亚楠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
MIGU Culture Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
MIGU Culture Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd and MIGU Culture Technology Co Ltd
Priority to CN202010535770.1A
Publication of CN111767428A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 - Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73 - Querying
    • G06F16/735 - Filtering based on additional data, e.g. user or group profiles
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 - Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using metadata automatically derived from the content

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Library & Information Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The embodiments of the present application relate to the field of information processing technology, and in particular to a video recommendation method and apparatus, an electronic device, and a storage medium. The video recommendation method includes: acquiring a set of videos to be recommended and the feature information of each video to be recommended; calculating the image quality score of each video to be recommended according to its feature information and a preset image quality score model, where the image quality score model is trained on the feature information of videos; acquiring a set of recommended videos from the set of videos to be recommended according to the image quality scores of the videos to be recommended; and recommending the set of recommended videos. With the embodiments of the present application, videos can be recommended with their picture quality taken into account, thereby improving the user's viewing experience.

Description

Video recommendation method and device, electronic equipment and storage medium
Technical Field
The embodiments of the present application relate to the field of information processing technology, and in particular to a video recommendation method and apparatus, an electronic device, and a storage medium.
Background
With the development of the Internet and the popularization of smart devices, more and more people watch online videos on smart devices, and their requirements on video quality keep rising. Generally, a platform such as a video website or a video app analyzes a user's preferences and recommends videos that may interest the user accordingly. However, the inventors found the following problem in the related art: the videos recommended to a user are usually selected only according to the user's preferences, without considering whether the selected videos actually deliver a good viewing experience. As a result, videos with poor picture quality may appear among the recommendations, which clearly harms the viewing experience and makes it hard to keep the user watching.
Disclosure of Invention
An object of the embodiments of the present application is to provide a video recommendation method and apparatus, an electronic device, and a storage medium that recommend videos with the picture quality of the videos taken into account, thereby improving the user's viewing experience.
In order to solve the above technical problem, an embodiment of the present application provides a video recommendation method, including: acquiring a set of videos to be recommended and characteristic information of each video to be recommended; calculating the image quality score of each video to be recommended according to the feature information of each video to be recommended and a preset image quality score model, wherein the image quality score model is obtained by training according to the feature information of the video; acquiring a set of recommended videos from the set of videos to be recommended according to the image quality scores of the videos to be recommended; and recommending the set of recommended videos.
An embodiment of the present application further provides a video recommendation apparatus, including a first acquisition module, a scoring module, a second acquisition module, and a recommendation module. The first acquisition module is configured to acquire a set of videos to be recommended and the feature information of each video to be recommended; the scoring module is configured to calculate the image quality score of each video to be recommended according to its feature information and a preset image quality scoring model, where the image quality scoring model is trained on the feature information of videos; the second acquisition module is configured to acquire a set of recommended videos from the set of videos to be recommended according to the image quality scores of the videos to be recommended; and the recommendation module is configured to recommend the set of recommended videos.
An embodiment of the present application further provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the video recommendation method described above.
Embodiments of the present application also provide a computer-readable storage medium storing a computer program, which when executed by a processor implements the video recommendation method described above.
Compared with the prior art, the embodiments of the present application acquire a set of videos to be recommended and the feature information of each video to be recommended; calculate the image quality score of each video to be recommended according to its feature information and a preset image quality score model that is trained on the feature information of videos; acquire a set of recommended videos from the set of videos to be recommended according to the image quality scores; and recommend the set of recommended videos. Because the feature information of a video reflects its picture quality well, the trained image quality score model can calculate an image quality score that measures picture quality truthfully and accurately, so the calculated score has high reference value. The acquired set of videos to be recommended already matches the user's preferences; on that basis, the set of recommended videos is obtained according to the image quality scores, that is, picture quality also becomes one of the factors considered when recommending videos. Both the content quality and the picture quality of the recommended videos are therefore guaranteed, which improves the user's viewing experience and the user stickiness of the video platform.
In addition, the feature information includes encoding feature information and image feature information, where the encoding feature information includes one of the following or any combination thereof: resolution and frame rate; and the image feature information includes one of the following or any combination thereof: blur degree, blockiness value, and noise value. Such quantitatively expressible feature information reflects the picture quality of a video more intuitively.
In addition, the image quality scoring model is obtained by training in the following way: generating a training data set according to the feature information of videos, and obtaining the image quality scoring model from the training data set based on a preset regression algorithm. A regression algorithm builds a model on a data set and trains it, and a model obtained through a regression algorithm can predict a specific numerical value more accurately.
In addition, recommending the set of recommended videos includes: acquiring the content score of each recommended video; calculating the recommendation score of each recommended video according to its content score, its image quality score, and preset video weights, where the preset video weights include a content weight and an image quality weight; and recommending the recommended videos in the order of their recommendation scores. By combining content quality and picture quality and reordering the recommended videos according to the preset video weights, the overall quality of the recommended videos is guaranteed and the recommendation result better meets the recommendation requirement.
In addition, the blur degree of the video is calculated by a formula provided as image BDA0002537007050000031 in the original filing, in which L represents a threshold of the gray values in an image of the video, gi represents the i-th gray value in the image, p(gi) represents the probability that the i-th gray value appears in the image, and u represents the average gray value of the image of the video. A way of calculating the blur degree of a video is thereby provided.
In addition, the regression algorithm is an AdaBoost regression algorithm. When the AdaBoost regression algorithm is used to train the model, the training process is simple and efficient, overfitting can be avoided, and the resulting model has high precision, which helps calculate more accurate image quality scores.
In addition, acquiring the set of videos to be recommended includes: acquiring the set of videos to be recommended according to a user-based collaborative filtering algorithm. When user-based collaborative filtering is used to obtain the user's set of videos to be recommended, the interests and preferences of other users can be drawn on, which reduces the chance that the obtained set is incomplete or one-sided, allows videos with novel content to be obtained for the user, and helps discover interests and preferences the user has not yet shown.
Drawings
One or more embodiments are illustrated by the corresponding figures in the drawings, which are not meant to be limiting.
FIG. 1 is a flow chart of a video recommendation method in a first embodiment of the present application;
FIG. 2 is a flow chart of a video recommendation method in a second embodiment of the present application;
FIG. 3 is a block diagram showing the structure of a video recommendation apparatus according to a third embodiment of the present application;
FIG. 4 is a block diagram showing the structure of an electronic device according to a fourth embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions, and advantages of the embodiments of the present application clearer, each embodiment of the present application is described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that numerous technical details are set forth in the embodiments to help the reader better understand the present application; the technical solutions claimed in the present application can, however, be implemented without some of these technical details and with various changes and modifications based on the following embodiments.
A first embodiment of the present application relates to a video recommendation method, a specific flow is shown in fig. 1, and the method includes:
Step 101, acquiring a set of videos to be recommended and characteristic information of each video to be recommended;
Step 102, calculating the image quality score of each video to be recommended according to the feature information of each video to be recommended and a preset image quality score model;
Step 103, acquiring a set of recommended videos from the set of videos to be recommended according to the image quality scores of the videos to be recommended;
Step 104, recommending the set of recommended videos.
The implementation details of the video recommendation method of this embodiment are described below. The following details are provided only for ease of understanding and are not required to implement this embodiment.
The video recommendation method of this embodiment can be applied to scenarios in which a user watches videos through a video website, a video app, or the like. When the user opens the video website or app, a video recommendation apparatus in the backend system of the video platform executes the video recommendation method of this embodiment to recommend videos to the user.
In step 101, a set of videos to be recommended and the feature information of each video to be recommended are obtained. In one example, the set of videos to be recommended can be obtained with a user-based collaborative filtering algorithm. For instance, taking the current user as the target, the similarity between the current user and the other users of the video platform is calculated with the cosine similarity formula, and the users similar to the current user are determined from those similarities; the videos associated with the interests and preferences of the similar users are then collected, and the videos that are most strongly associated with those interests and have not been watched by the current user are selected as the current user's set of videos to be recommended. Obtaining the candidate set through user-based collaborative filtering means drawing on the interests and preferences of similar users, which reduces the chance that the candidate set is incomplete or one-sided, brings the user videos with novel content, and helps discover interests the user has not yet shown. It is understood that the set of videos to be recommended can also be obtained in other ways (with other algorithms).
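To make the candidate-generation step concrete, the following is a minimal sketch of user-based collaborative filtering with cosine similarity, assuming a binary user-video interaction matrix; the function and parameter names (candidate_videos, user_item, top_k_users, top_n_videos) are illustrative and not taken from the patent.

import numpy as np

def candidate_videos(user_item: np.ndarray, user_idx: int,
                     top_k_users: int = 10, top_n_videos: int = 50) -> list:
    """Return indices of videos liked by similar users but not yet watched by user_idx."""
    target = user_item[user_idx]
    # Cosine similarity between the current user and every other user.
    norms = np.linalg.norm(user_item, axis=1) * np.linalg.norm(target) + 1e-12
    sims = user_item @ target / norms
    sims[user_idx] = -1.0                       # exclude the current user
    similar = np.argsort(sims)[::-1][:top_k_users]
    # Score each video by the similarity-weighted interest of the similar users.
    scores = sims[similar] @ user_item[similar]
    scores[target > 0] = -np.inf                # drop videos already watched
    return list(np.argsort(scores)[::-1][:top_n_videos])

# Example: 4 users x 6 videos, generating candidates for user 0.
interactions = np.array([[1, 0, 1, 0, 0, 0],
                         [1, 0, 1, 1, 0, 0],
                         [0, 1, 0, 0, 1, 1],
                         [1, 0, 1, 1, 1, 0]], dtype=float)
print(candidate_videos(interactions, user_idx=0, top_k_users=2, top_n_videos=3))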
Specifically, the feature information of the video to be recommended includes encoding feature information and image feature information.
The encoding feature information includes one of the following or any combination thereof: resolution and frame rate. However, the content of the encoding feature information is not specifically limited; for example, the encoding feature information may also include spatial complexity, motion complexity, decoding error rate, and so on. These exemplary encoding features are described below.
(1) Resolution: the total number of pixel points in each frame of the video, expressed as width × height; it can be obtained directly;
(2) Frame rate: the number of images displayed per second when the video is played; it can be obtained directly;
(3) Spatial complexity: the complexity of the video in the spatial domain, which can be calculated by a formula provided as image BDA0002537007050000041 in the original filing, in which C represents the spatial complexity, bits_I represents the sum of the bit rates of the I frames (intra-coded picture frames) of the video, and QP_I represents the average quantization parameter of the I frames of the video.
(4) Motion complexity: the complexity of the video in the time domain, which can be calculated by a formula provided as image BDA0002537007050000042 in the original filing, in which M represents the motion complexity, bits_P represents the sum of the bit rates of the P frames (forward predictive coded picture frames) of the video, and QP_P represents the average quantization parameter of the P frames of the video.
(5) Decoding error rate: the error rate observed when extracting, from the video, the image frames used for calculating the feature information; it can be calculated by a formula provided as image BDA0002537007050000051 in the original filing.
The image feature information includes one of the following or any combination thereof: blur degree, blockiness value, and noise value. However, the content of the image feature information is not specifically limited; for example, the image feature information may include brightness, contrast, sharpness, blur degree, blockiness value, entropy, noise value, and so on. When the image feature information is obtained, a certain number of image frames can be extracted from the video for the calculation. These exemplary image features are described below.
(1) Brightness: the average brightness of an extracted image frame; the image frame is in YUV format, and the mean of its Y values can be taken as the brightness;
(2) Contrast: the contrast of an extracted image frame, which can be calculated as
Contrast = Σ(i,j) δ(i, j)^2 · P(i, j), with δ(i, j) = |i - j|,
where δ(i, j) is the gray difference between adjacent pixel points and P(i, j) is the distribution probability of that gray difference between adjacent pixel points;
(3) Sharpness: the sharpness of an extracted image frame, which can be calculated by a formula provided as image BDA0002537007050000052 in the original filing. Define the image frame as I and apply low-pass filtering to I to obtain the reference frame I_r = LPF(I); extract the gradient image G of I and the gradient image G_r of I_r; take the N image blocks of G with the richest gradient information, denoted {x_i, i = 1, 2, ..., N}, and the N image blocks of G_r with the richest gradient information, denoted {y_i, i = 1, 2, ..., N}; SSIM denotes the structural similarity calculation, which is not repeated here;
(4) Blur degree: the degree of blur of an extracted image frame, which can be calculated by a formula provided as image BDA0002537007050000053 in the original filing, in which L represents a threshold of the gray values in the image frame, gi represents the i-th gray value in the image frame, p(gi) represents the probability that the i-th gray value appears in the image frame, and u represents the average gray value of the image frame;
(5) Blockiness value: the blockiness of an extracted image frame; first the gray differences of the pixels on the edges of the 8 × 8 blocks in the image frame are calculated, and their average value is then taken as the blockiness value of the image frame;
(6) Entropy: the information entropy of an extracted image frame, which can be calculated as
H = - Σ(i=0..L-1) p_i · log2(p_i),
where p_i represents the probability that the i-th gray value appears in the image frame, and L represents the total number of gray levels, typically 256 (so L - 1 is typically 255);
(7) Noise value: the noise value of an extracted image frame; first the edge pixel points of the image frame are detected and the gray difference between each edge pixel point and its adjacent pixel points is calculated (the gray difference is kept when it is smaller than the average gray value and set to 0 when it is greater than or equal to the average gray value); the average of these gray differences is then taken as the noise value of the image frame.
It is to be understood that the above encoding characteristic information and image characteristic information are merely examples and do not constitute a specific limitation.
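As a rough illustration only, the sketch below computes three of the frame-level features described above (brightness, information entropy, and a simple blockiness value) from a grayscale frame with NumPy. The exact definitions used in the patent may differ, and the function names are assumptions.

import numpy as np

def brightness(gray: np.ndarray) -> float:
    # Mean luma of the frame (the patent takes the Y plane of a YUV frame).
    return float(gray.mean())

def entropy(gray: np.ndarray, levels: int = 256) -> float:
    # H = -sum_i p_i * log2(p_i) over the gray-level histogram.
    hist = np.bincount(gray.ravel().astype(np.int64), minlength=levels).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def blockiness(gray: np.ndarray, block: int = 8) -> float:
    # Mean absolute gray difference across vertical and horizontal 8x8 block borders.
    g = gray.astype(np.float64)
    v = np.abs(g[:, block::block] - g[:, block - 1:-1:block]).mean()
    h = np.abs(g[block::block, :] - g[block - 1:-1:block, :]).mean()
    return float((v + h) / 2.0)

# Example on a random 8-bit frame.
frame = np.random.randint(0, 256, size=(720, 1280), dtype=np.uint8)
print(brightness(frame), entropy(frame), blockiness(frame))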
In step 102, the image quality score of each video to be recommended is calculated according to the feature information of each video to be recommended and a preset image quality score model. Specifically, the preset image quality score model in this embodiment may be obtained by training in the following manner:
(1) generating a training data set according to the encoding feature information and the image feature information of the videos;
(2) obtaining the image quality scoring model from the generated training data set based on a preset regression algorithm.
Specifically, the videos used for training may be videos already stored in the database of the video website or video app; their encoding feature information and image feature information are the same as described in step 101 and are not repeated here. The training data set generated from the encoding feature information and image feature information of the videos may include, for each encoding feature and each image feature: the mean, the standard deviation, the skewness, the mean of the largest 25% of the values, and the mean of the smallest 25% of the values. This set of statistics is taken as the training data set, and a regression algorithm is applied to it for regression training to obtain the image quality scoring model. It can be understood that the feature information of a video intuitively reflects its picture quality, so the higher the picture quality of a video, the higher the image quality score calculated by the image quality scoring model.
The regression algorithm in this embodiment may be an AdaBoost regression algorithm. When the AdaBoost regression algorithm is used to train the model, the training process is simple and efficient, overfitting can be avoided, and the resulting model has high precision, which helps calculate more accurate image quality scores.
It is to be understood that the data included in the training data set, the training process of the model, and the regression algorithm are all examples and are not limited in particular.
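The training step can be pictured roughly as follows: the per-frame values of each feature are aggregated into the statistics listed above (mean, standard deviation, skewness, mean of the largest 25%, mean of the smallest 25%), and an AdaBoost regressor is fitted against quality labels. This is a hedged sketch using scikit-learn and SciPy; the feature layout, the synthetic data, and the source of the labels are assumptions made for illustration.

import numpy as np
from scipy.stats import skew
from sklearn.ensemble import AdaBoostRegressor

def aggregate(values) -> list:
    """Summary statistics of one feature over the sampled frames of one video."""
    v = np.sort(np.asarray(values, dtype=np.float64))
    q = max(1, len(v) // 4)
    return [float(v.mean()), float(v.std()), float(skew(v)),
            float(v[-q:].mean()),   # mean of the largest 25% of the values
            float(v[:q].mean())]    # mean of the smallest 25% of the values

def build_row(per_feature_values: dict) -> list:
    """One training row: aggregated statistics of every encoding and image feature."""
    row = []
    for name in sorted(per_feature_values):     # fixed feature order
        row.extend(aggregate(per_feature_values[name]))
    return row

# Tiny synthetic example: 20 "videos", 3 features, 30 sampled frames each.
rng = np.random.default_rng(0)
X = [build_row({f: rng.normal(size=30) for f in ("blur", "blockiness", "noise")})
     for _ in range(20)]
y = rng.uniform(1, 10, size=20)                 # stand-in quality labels
model = AdaBoostRegressor(n_estimators=50, random_state=0).fit(X, y)
print(model.predict([X[0]]))                    # image quality score of one video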
After the feature information of a video to be recommended has been obtained, it is fed as input into the preset image quality scoring model, and the output value of the model is the image quality score of that video to be recommended.
In step 103, a set of recommended videos is obtained from the set of videos to be recommended according to the image quality score of each video to be recommended. It can be understood that the higher the image quality score, the better the picture quality of the video; therefore, the N videos with the highest image quality scores in the set of videos to be recommended can be taken as the set of recommended videos.
In step 104, the set of recommended videos is recommended. Specifically, since the recommended videos were obtained according to their image quality scores, they can be sorted by image quality score and recommended to the user from the highest score to the lowest; in practice, this means the recommended videos are displayed on the video website or in the video app in descending order of image quality score.
Compared with the prior art, in this embodiment a set of videos to be recommended and the feature information of each video to be recommended are acquired; the image quality score of each video to be recommended is calculated according to its feature information and a preset image quality score model that is trained on the feature information of videos; a set of recommended videos is obtained from the set of videos to be recommended according to the image quality scores; and the set of recommended videos is recommended. Because the feature information of a video reflects its picture quality well, the trained image quality score model can calculate an image quality score that measures picture quality truthfully and accurately, so the calculated score has high reference value. The acquired set of videos to be recommended already matches the user's preferences; on that basis, the set of recommended videos is obtained according to the image quality scores, that is, picture quality also becomes one of the factors considered when recommending videos. Both the content quality and the picture quality of the recommended videos are therefore guaranteed, which improves the user's viewing experience and the user stickiness of the video platform.
A second embodiment of the present application relates to a video recommendation method. This embodiment is substantially the same as the first embodiment and mainly differs from it in that it provides specific implementations for obtaining the set of recommended videos and for recommending that set. The specific flow of the video recommendation method of this embodiment is shown in fig. 2 and is described below:
step 201, acquiring a set of videos to be recommended and characteristic information of each video to be recommended; this step is substantially similar to step 101 and will not be described herein again.
Step 202, calculating the image quality score of each video to be recommended according to the feature information of each video to be recommended and a preset image quality score model; this step is substantially similar to step 102 and will not be described again here.
Step 203, screening the videos to be recommended according to their image quality scores and a preset lower score limit to obtain the set of recommended videos.
Specifically, after the image quality score of each video to be recommended has been calculated, the videos to be recommended are screened against a preset lower score limit: videos whose image quality score is below the limit are filtered out, and the remaining videos to be recommended form the set of recommended videos. Videos with poor picture quality are thus effectively filtered out and will not be recommended to the user, which safeguards the user's viewing experience.
More specifically, a number of recommended videos can be preset. If, after the videos with poor picture quality have been filtered out, the number of remaining recommended videos is smaller than the preset number, popular videos whose image quality scores are above the lower score limit can be selected from the videos already stored in the database of the video website or video app and added to the set of recommended videos. The number of recommended videos is thus kept from becoming too small, which avoids offering the user recommendations that are not rich enough.
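A minimal sketch of this filtering and top-up logic is given below; the field name quality, the function name, and the assumed pool of popular library videos are illustrative, not taken from the patent.

def filter_candidates(candidates, min_score, target_count, popular_pool):
    """Keep candidates at or above the lower score limit; top up from popular videos if needed."""
    kept = [v for v in candidates if v["quality"] >= min_score]
    if len(kept) < target_count:
        extra = [v for v in popular_pool
                 if v["quality"] >= min_score and v not in kept]
        kept.extend(extra[: target_count - len(kept)])
    return kept

# Example with illustrative scores and a lower limit of 5.0.
candidates = [{"id": 1, "quality": 7.2}, {"id": 2, "quality": 3.1}]
popular = [{"id": 9, "quality": 8.5}, {"id": 10, "quality": 4.0}]
print(filter_candidates(candidates, min_score=5.0, target_count=2, popular_pool=popular))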
Step 204, obtaining content scores of recommended videos; calculating the recommendation scores of the recommended videos according to the content scores, the image quality scores and the preset video weights of the recommended videos; and recommending the recommended videos in sequence according to the recommendation scores of the recommended videos.
Specifically, as described for step 101, when the set of videos to be recommended is obtained with a user-based collaborative filtering algorithm, the videos that have the highest degree of association with the interests and preferences of the users similar to the current user and that the current user has not watched are selected as the current user's set of videos to be recommended; this degree of association can therefore be regarded as the content score of a video. Since the recommended videos are screened out of the videos to be recommended, the content score of each recommended video can be obtained directly. After the content score and the image quality score of each recommended video have been obtained, its recommendation score is recalculated from these two scores and the preset video weights, and the recommended videos are recommended in the order of their recommendation scores. In this step, the preset video weights can be understood as consisting of a content weight and an image quality weight: for example, (content weight : image quality weight) = 0.6 : 0.4 means the recommendation score is calculated with emphasis on the content of the video, while (content weight : image quality weight) = 0.2 : 0.8 means the recommendation score is calculated with emphasis on the picture quality of the video.
In one example, the preset weights are content weight : image quality weight = 0.6 : 0.4; recommended video A has a content score of 8 and an image quality score of 4, and recommended video B has a content score of 2 and an image quality score of 8. From these data, the recommendation score of video A is 8 × 0.6 + 4 × 0.4 = 6.4, and the recommendation score of video B is 2 × 0.6 + 8 × 0.4 = 4.4. Since 6.4 > 4.4, video A is placed before video B in the recommendation order. In this way the overall quality of the recommended videos is guaranteed and the recommendation result meets the recommendation requirement.
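The worked example above can be reproduced with a short weighted-ranking sketch; the dictionary keys and the function name are illustrative assumptions.

def recommend_order(videos, w_content=0.6, w_quality=0.4):
    """Sort videos by content * w_content + quality * w_quality, highest first."""
    return sorted(videos,
                  key=lambda v: v["content"] * w_content + v["quality"] * w_quality,
                  reverse=True)

ranked = recommend_order([{"name": "A", "content": 8, "quality": 4},
                          {"name": "B", "content": 2, "quality": 8}])
print([v["name"] for v in ranked])   # ['A', 'B']: A scores 6.4, B scores 4.4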
Compared with the prior art, this embodiment screens the videos to be recommended according to their image quality scores and a preset lower score limit to obtain the set of recommended videos, so videos with poor picture quality are effectively filtered out and are not recommended to the user, safeguarding the user's viewing experience; and when the recommended videos are recommended, they are ordered according to the preset video weights, so that the recommendation result better meets the recommendation requirement.
A third embodiment of the present application relates to a video recommendation apparatus, as shown in fig. 3, including: a first acquisition module 301, a scoring module 302, a second acquisition module 303, and a recommendation module 304.
A first obtaining module 301, configured to obtain a set of videos to be recommended and feature information of each of the videos to be recommended;
the scoring module 302 is configured to calculate an image quality score of each video to be recommended according to feature information of each video to be recommended and a preset image quality scoring model, where the image quality scoring model is obtained by training according to the feature information of the video;
a second obtaining module 303, configured to obtain a set of recommended videos from the set of videos to be recommended according to the image quality scores of the videos to be recommended;
a recommending module 304, configured to recommend the set of recommended videos.
In one example, the feature information includes encoding feature information and image feature information, wherein the encoding feature information includes one of the following or any combination thereof: resolution and frame rate; and the image feature information includes one of the following or any combination thereof: blur degree, blockiness value, and noise value.
In one example, the image quality scoring model is trained by: generating a training data set according to the feature information of videos; and obtaining the image quality scoring model from the training data set based on a preset regression algorithm.
In one example, recommendation module 304 recommends the set of recommended videos, including: acquiring content scores of the recommended videos; calculating the recommendation score of each recommended video according to the content score, the image quality score and a preset video weight of each recommended video, wherein the preset video weight comprises a content weight and an image quality weight; and recommending the recommended videos in sequence according to the recommendation scores of the recommended videos.
In one example, the blur degree is calculated by a formula provided as image BDA0002537007050000091 in the original filing, in which L represents a threshold of the gray values in an image of the video, gi represents the i-th gray value in the image of the video, p(gi) represents the probability that the i-th gray value appears in the image, and u represents the average gray value of the image of the video.
In one example, the regression algorithm is an AdaBoost regression algorithm.
In one example, the first obtaining module 301 obtains a set of videos to be recommended, including: and acquiring a set of videos to be recommended according to a user collaborative filtering algorithm.
It should be understood that this embodiment is an apparatus embodiment corresponding to the first or second embodiment and can be implemented in cooperation with the first or second embodiment. The related technical details mentioned in the first or second embodiment remain valid in this embodiment and are not repeated here to reduce repetition; correspondingly, the related technical details mentioned in this embodiment can also be applied to the first or second embodiment.
It should be noted that all the modules involved in this embodiment are logical modules. In practical applications, a logical unit may be one physical unit, a part of one physical unit, or a combination of several physical units. In addition, in order to highlight the innovative part of the present application, units that are not closely related to solving the technical problem proposed by the present application are not introduced in this embodiment, which does not mean that no other units exist in this embodiment.
A fourth embodiment of the present application relates to an electronic device, as shown in fig. 4, including at least one processor 401; a memory 402 communicatively coupled to the at least one processor 401; and a communication component 403 connected to the at least one processor 401, the communication component 403 receiving and transmitting data under the control of the processor 401. The memory 402 stores instructions executable by the at least one processor 401, and the instructions are executed by the at least one processor 401 so as to: acquire a set of videos to be recommended and the feature information of each video to be recommended; calculate the image quality score of each video to be recommended according to its feature information and a preset image quality score model, where the image quality score model is trained on the feature information of videos; acquire a set of recommended videos from the set of videos to be recommended according to the image quality scores of the videos to be recommended; and recommend the set of recommended videos.
Specifically, the electronic device includes one or more processors 401 and a memory 402, with one processor 401 shown as an example in fig. 4. The processor 401 and the memory 402 may be connected by a bus or in another manner; fig. 4 takes a bus connection as an example. The memory 402, as a computer-readable storage medium, may be used to store computer software programs, computer-executable programs, and modules. The processor 401 runs the computer software programs, instructions, and modules stored in the memory 402 to execute the various functional applications and data processing of the device, that is, to implement the video recommendation method described above.
The memory 402 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store a list of options, etc. Further, the memory 402 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, memory 402 may optionally include memory located remotely from processor 401, which may be connected to an external device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
One or more modules are stored in the memory 402 and when executed by the one or more processors 401 perform the video recommendation method of any of the method embodiments described above.
The above product can execute the method provided by the embodiments of the present application and has the corresponding functional modules and beneficial effects of executing the method. For technical details not described in detail in this embodiment, reference may be made to the method provided by the embodiments of the present application.
A fifth embodiment of the present application relates to a computer-readable storage medium storing a computer program. The computer program, when executed by a processor, implements the above-described video recommendation method embodiments.
Those skilled in the art can understand that all or part of the steps of the methods in the above embodiments may be implemented by a program instructing related hardware. The program is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples for carrying out the present application, and that various changes in form and details may be made therein without departing from the spirit and scope of the present application in practice.

Claims (10)

1. A method for video recommendation, comprising:
acquiring a set of videos to be recommended and characteristic information of each video to be recommended;
calculating the image quality score of each video to be recommended according to the feature information of each video to be recommended and a preset image quality score model, wherein the image quality score model is obtained by training according to the feature information of the video;
acquiring a set of recommended videos from the set of videos to be recommended according to the image quality scores of the videos to be recommended;
and recommending the set of recommended videos.
2. The video recommendation method according to claim 1, wherein the feature information comprises encoding feature information and image feature information, wherein the encoding feature information comprises one of the following or any combination thereof: resolution and frame rate; and the image feature information comprises one of the following or any combination thereof: blur degree, blockiness value, and noise value.
3. The video recommendation method according to claim 1 or 2, wherein the image quality score model is trained by:
generating a training data set according to the characteristic information of the video;
and obtaining the image quality score model from the training data set based on a preset regression algorithm.
4. The video recommendation method according to claim 1, wherein said recommending the set of recommended videos comprises:
acquiring content scores of the recommended videos;
calculating the recommendation score of each recommended video according to the content score, the image quality score and a preset video weight of each recommended video, wherein the preset video weight comprises a content weight and an image quality weight;
and recommending the recommended videos in sequence according to the recommendation scores of the recommended videos.
5. The video recommendation method according to claim 2, wherein the blur degree is calculated by a formula provided as image FDA0002537007040000011 in the original filing, wherein L represents a threshold of the gray values in an image of the video; gi represents the i-th gray value in the image of the video; p(gi) represents the probability that the i-th gray value appears in the image; and u represents the average gray value of the image of the video.
6. The video recommendation method according to claim 2, wherein said regression algorithm is an AdaBoost regression algorithm.
7. The video recommendation method according to claim 1, wherein said obtaining a set of videos to be recommended comprises:
and acquiring a set of videos to be recommended according to a user collaborative filtering algorithm.
8. A video recommendation apparatus, comprising: the system comprises a first acquisition module, a grading module, a second acquisition module and a recommendation module;
the first acquisition module is used for acquiring a set of videos to be recommended and characteristic information of each video to be recommended;
the scoring module is used for calculating the image quality score of each video to be recommended according to the feature information of each video to be recommended and a preset image quality scoring model, wherein the image quality scoring model is obtained by training according to the feature information of the videos;
the second acquisition module is used for acquiring a set of recommended videos from the set of videos to be recommended according to the image quality scores of the videos to be recommended;
and the recommending module is used for recommending the set of recommended videos.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the video recommendation method of any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the video recommendation method of any one of claims 1 to 7.
CN202010535770.1A 2020-06-12 2020-06-12 Video recommendation method and device, electronic equipment and storage medium Pending CN111767428A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010535770.1A CN111767428A (en) 2020-06-12 2020-06-12 Video recommendation method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010535770.1A CN111767428A (en) 2020-06-12 2020-06-12 Video recommendation method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111767428A true CN111767428A (en) 2020-10-13

Family

ID=72720829

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010535770.1A Pending CN111767428A (en) 2020-06-12 2020-06-12 Video recommendation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111767428A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103533367A (en) * 2013-10-23 2014-01-22 传线网络科技(上海)有限公司 No-reference video quality evaluation method and device
US20180152763A1 (en) * 2016-11-30 2018-05-31 Facebook, Inc. Recommendation system to enhance video content recommendation
CN111104550A (en) * 2018-10-09 2020-05-05 北京奇虎科技有限公司 Video recommendation method and device, electronic equipment and computer-readable storage medium
CN111163338A (en) * 2019-12-27 2020-05-15 广州市百果园网络科技有限公司 Video definition evaluation model training method, video recommendation method and related device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113709570A (en) * 2020-09-25 2021-11-26 天翼智慧家庭科技有限公司 Apparatus and method for recommending bandwidth based on IPTV probe data
CN112203152A (en) * 2020-11-30 2021-01-08 华东交通大学 Multi-modal confrontation learning type video recommendation method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination