CN115861890A - Video analysis method and device, electronic equipment and storage medium - Google Patents

Video analysis method and device, electronic equipment and storage medium

Publication number: CN115861890A
Application number: CN202211626678.1A
Authority: CN (China)
Language: Chinese (zh)
Inventor: 赵瑞书
Applicant and current assignee: Beijing IQIYI Science and Technology Co Ltd
Legal status: Pending
Classification: Television Signal Processing For Recording (AREA)
Abstract

The present application relates to a video analysis method and apparatus, an electronic device, and a storage medium. The method includes: acquiring a video to be analyzed; determining a first target evaluation and a second target evaluation corresponding to a target video segment according to intra-video evaluation information and inter-video evaluation information corresponding to the video to be analyzed; determining a target highlight evaluation value corresponding to the target video segment based on the intra-video evaluation information, the inter-video evaluation information, the first target evaluation, and the second target evaluation; and determining the highlight category corresponding to the target video segment according to the target highlight evaluation value corresponding to the target video segment. This method effectively solves the technical problems in the related art that manually labeling the highlight degree of a video is highly subjective, slow, and unable to produce data rapidly in batches.

Description

Video analysis method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of video analysis technologies, and in particular, to a video analysis method and apparatus, an electronic device, and a storage medium.
Background
In the process of developing a highlight analysis algorithm for video, a large amount of training data needs to be prepared to train the model of the highlight analysis algorithm. However, the highlight degree of a video is highly subjective. In particular, when the highlight degree of a segment is represented by a score of 0-10 rather than by a binary highlight/non-highlight label, it is difficult for human annotators to give a reasonable score objectively and accurately. Moreover, while a training data set is being extended, the people performing the annotation may be replaced for various reasons, making a uniform labeling standard even harder to maintain. Meanwhile, manual annotation is slow and cannot produce data rapidly in batches.
Therefore, the related art of manually labeling the highlight degree of a video suffers from the technical problems of strong subjectivity, slow labeling speed, and the inability to produce data rapidly in batches.
Disclosure of Invention
In order to solve the technical problems that manually labeling the highlight degree of a video is highly subjective, slow, and unable to produce data rapidly in batches, the present application provides a video analysis method and apparatus, an electronic device, and a storage medium.
In a first aspect, an embodiment of the present application provides a video analysis method, including:
acquiring a video to be analyzed;
determining a first target evaluation and a second target evaluation corresponding to a target video segment according to intra-video evaluation information and inter-video evaluation information corresponding to the video to be analyzed, wherein the intra-video evaluation information includes the first target evaluation corresponding to the target video segment, the inter-video evaluation information includes the second target evaluation corresponding to the target video segment, the first target evaluation is determined according to the evaluation of each candidate video segment in the video to be analyzed, the second target evaluation is a global evaluation determined according to the evaluation of each video segment in a target video set, all the candidate video segments include the target video segment, each candidate video segment has a unique corresponding time period in the video to be analyzed, and the target video set includes the video to be analyzed;
determining a target highlight evaluation value corresponding to the target video segment based on the intra-video evaluation information, the inter-video evaluation information, the first target evaluation and the second target evaluation;
and determining the highlight category corresponding to the target video segment according to the target highlight evaluation value corresponding to the target video segment.
Optionally, as in the foregoing method, the determining a target highlight evaluation value corresponding to the target video segment based on the intra-video evaluation information, the inter-video evaluation information, the first target evaluation, and the second target evaluation includes:
determining a first candidate evaluation corresponding to each candidate video segment according to the intra-video evaluation information;
calculating the average value of all the first candidate evaluations to obtain a first average value;
determining a second candidate evaluation corresponding to each candidate video clip according to the inter-video evaluation information;
calculating the average value of all the second candidate evaluations to obtain a second average value;
determining a first magnitude relationship between the first target evaluation and the first average value, and determining a second magnitude relationship between the second target evaluation and the second average value;
and determining a target highlight evaluation value corresponding to the target video segment based on the first magnitude relationship and the second magnitude relationship.
Optionally, as in the foregoing method, the determining a first magnitude relationship between the first target evaluation and the first average value includes:
determining a first minimum candidate evaluation with the lowest evaluation value and a first highest candidate evaluation with the highest evaluation value from all the first candidate evaluations;
dividing the interval between the first minimum candidate evaluation and the first average value to obtain a first preset number of first low evaluation value intervals, and dividing the interval between the first average value and the first highest candidate evaluation to obtain a second preset number of first high evaluation value intervals;
and determining, among all first evaluation value intervals, a first target evaluation value interval containing the first target evaluation, so as to obtain the first magnitude relationship between the first target evaluation and the first average value, wherein all the first evaluation value intervals include the first low evaluation value intervals and the first high evaluation value intervals.
Optionally, as in the foregoing method, the determining a second magnitude relationship between the second target evaluation and the second average value includes:
determining a second minimum candidate evaluation with the lowest evaluation value and a second highest candidate evaluation with the highest evaluation value from all the second candidate evaluations;
dividing the interval between the second minimum candidate evaluation and the second average value to obtain a third preset number of second low evaluation value intervals; determining a designated second evaluation as the minimum of the second highest candidate evaluation and a preset second evaluation upper limit, and dividing the interval between the second average value and the designated second evaluation to obtain a fourth preset number of second high evaluation value intervals;
and determining, among all second evaluation value intervals, a second target evaluation value interval containing the second target evaluation, so as to obtain the second magnitude relationship between the second target evaluation and the second average value, wherein all the second evaluation value intervals include the second low evaluation value intervals and the second high evaluation value intervals.
Optionally, as in the foregoing method, the determining a target highlight evaluation value corresponding to the target video segment based on the first magnitude relationship and the second magnitude relationship includes:
determining a first highlight value corresponding to each first evaluation value interval, and determining a second highlight value corresponding to each second evaluation value interval;
determining a first target highlight value corresponding to the first target evaluation value interval according to the first highlight value corresponding to each first evaluation value interval, and determining a second target highlight value corresponding to the second target evaluation value interval according to the second highlight value corresponding to each second evaluation value interval;
calculating the first target highlight value and the second target highlight value in a preset weighting manner to obtain the target highlight evaluation value;
and assigning the maximum target highlight evaluation value to the target video segment in the case that the second target evaluation is higher than or equal to the preset second evaluation upper limit.
Optionally, as in the foregoing method, the determining, according to the target highlight evaluation value corresponding to the target video segment, the highlight category corresponding to the target video segment includes:
performing behavior detection on the target video segment to obtain a behavior detection result;
determining the highlight category corresponding to the target video segment as a high highlight category when the behavior detection result indicates that a behavior of a preset behavior type exists in the target video segment and the target highlight evaluation value is greater than or equal to a preset threshold;
and determining the highlight category corresponding to the target video segment as a low highlight category when the behavior detection result indicates that a behavior of a preset behavior type exists in the target video segment and the target highlight evaluation value is smaller than the preset threshold.
Optionally, as in the foregoing method, after determining the highlight category corresponding to the target video segment according to the target highlight evaluation value corresponding to the target video segment, the method further includes:
determining a designated highlight category corresponding to each designated video segment, wherein all the designated video segments include the target video segment;
taking each designated video segment whose designated highlight category is the high highlight category as a high highlight segment;
taking each designated video segment whose designated highlight category is the low highlight category as a low highlight segment;
and selecting a first number of high highlight training segments from all the high highlight segments and a second number of low highlight training segments from all the low highlight segments according to a preset quantity relationship, wherein the first number and the second number satisfy the preset quantity relationship.
In a second aspect, an embodiment of the present application provides a video analysis apparatus, including:
the acquisition module is used for acquiring a video to be analyzed;
a first determining module, configured to determine, according to intra-video evaluation information and inter-video evaluation information corresponding to the video to be analyzed, a first target evaluation and a second target evaluation corresponding to a target video segment, where the intra-video evaluation information includes the first target evaluation corresponding to the target video segment, the inter-video evaluation information includes the second target evaluation corresponding to the target video segment, the first target evaluation is determined according to an evaluation of each candidate video segment in the video to be analyzed, the second target evaluation is a global evaluation determined according to an evaluation of each video segment in a target video set, all the candidate video segments include the target video segment, each candidate video segment has a unique corresponding time period in the video to be analyzed, and the target video set includes the video to be analyzed;
a second determining module, configured to determine a target highlight evaluation value corresponding to the target video segment based on the intra-video evaluation information, the inter-video evaluation information, the first target evaluation, and the second target evaluation;
and a highlight category determining module, configured to determine the highlight category corresponding to the target video segment according to the target highlight evaluation value corresponding to the target video segment.
In a third aspect, an embodiment of the present application provides an electronic device, including: the system comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory are communicated with each other through the communication bus;
the memory is used for storing a computer program;
and the processor is configured to implement, when executing the computer program, the method according to any one of the foregoing embodiments.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, which includes a stored program, where the program, when executed, performs the method according to any one of the foregoing embodiments.
Compared with the prior art, the technical scheme provided by the embodiment of the application has the following advantages:
the method provided by the embodiment of the application provides an implementation mode capable of automatically determining the category of the wonderful degree corresponding to the video segment, and compared with the method for manually marking the wonderful degree of the video in the related technology, the method can effectively guarantee the unification of the judgment standard and improve the efficiency of determining the category of the wonderful degree, and further can effectively overcome the technical problems that the manual marking of the wonderful degree of the video in the related technology has strong subjectivity, slow marking speed and incapability of realizing rapid batch data production.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.
Fig. 1 is a schematic flowchart of a video analysis method according to an embodiment of the present disclosure;
FIG. 2 is a diagram illustrating filter scores and filter-related scores corresponding to video 1 in an application example of the present application;
FIG. 3 is a diagram illustrating filter scores and filter-related scores corresponding to video 2 in an application example of the present application;
fig. 4 is a block diagram of a video analysis apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
According to an aspect of an embodiment of the present application, a video analysis method is provided. Optionally, in this embodiment, the video analysis method may be applied to a hardware environment formed by a terminal and a server. The server is connected with the terminal through a network and can provide services (such as advertisement push services and application services) for the terminal or for a client installed on the terminal; a database may be provided on the server, or independently of the server, to provide data storage services for the server.
The network may include, but is not limited to, at least one of: a wired network, a wireless network. The wired network may include, but is not limited to, at least one of: a wide area network, a metropolitan area network, a local area network. The wireless network may include, but is not limited to, at least one of: WIFI (Wireless Fidelity), Bluetooth. The terminal may be, but is not limited to, a PC, a mobile phone, a tablet computer, and the like.
The video analysis method in the embodiments of the present application may be executed by a server, by a terminal, or by the server and the terminal together. When the terminal executes the video analysis method, it may also be executed by a client installed on the terminal.
Taking the video analysis method in this embodiment being executed by the server as an example, fig. 1 shows a video analysis method provided in this embodiment, which includes the following steps:
and step S101, acquiring a video to be analyzed.
The video analysis method in this embodiment may be applied to scenarios in which highlight and non-highlight video segments need to be identified in a video, for example, a scenario in which the model of a highlight analysis algorithm is trained with highlight and non-highlight video segments.
Taking the analysis of a video to be analyzed as an example, the highlight degree of one or more candidate video segments in the video to be analyzed is determined by identifying the one or more candidate video segments.
The video to be analyzed may be obtained from the video library actively by the server implementing the method of the present embodiment, or may be uploaded to the server.
The video to be analyzed may include one or more videos, and the steps of the method of this embodiment may be performed sequentially among different videos or may be performed concurrently.
For example, the server obtains one of the videos (e.g., a video of a tv series, a movie video, etc.) from the database of the video platform as the video to be analyzed.
Step S102, according to intra-video evaluation information and inter-video evaluation information corresponding to a video to be analyzed, determining a first target evaluation and a second target evaluation corresponding to a target video clip, wherein the intra-video evaluation information includes the first target evaluation corresponding to the target video clip, the inter-video evaluation information includes the second target evaluation corresponding to the target video clip, the first target evaluation is determined according to the evaluation of each candidate video clip in the video to be analyzed, the second target evaluation is a global evaluation determined according to the evaluation of each video clip in a target video set, all the candidate video clips include the target video clip, each candidate video clip has a unique corresponding time period in the video to be analyzed, and the target video set includes the video to be analyzed.
After the video to be analyzed is acquired, the intra-video evaluation information and the inter-video evaluation information corresponding to the video to be analyzed may be acquired.
The intra-video rating information and the inter-video rating information may be information stored in a designated database in association with the video to be analyzed.
In the video to be analyzed, each candidate video segment has a unique corresponding time period (for example, the duration is 1 second, 2 seconds, and the like), there is no intersection between the time periods corresponding to different candidate video segments, and in general, the candidate video segments are consecutive in time sequence.
For a video to be analyzed on a video website, since the audience of videos on the website is on the order of hundreds of millions and spans all age groups, the corresponding intra-video evaluation information (scores distributed from 0 to 100) and inter-video evaluation information (scores distributed from 0 to +∞) are generated for each second of playback of each film according to the actual viewing situation of users.
Each candidate video segment in the video to be analyzed has a corresponding first rating and second rating.
The first evaluation of any candidate video segment indicates how the viewing situation (e.g., the number of plays, the number of comments, the number of bullet screens, etc.) of that segment relates to the viewing situations of the other candidate video segments in the video to be analyzed. For example, the candidate video segment I with the lowest first evaluation in the video to be analyzed has a first evaluation of 0, and the candidate video segment II with the highest first evaluation has a first evaluation of 100. For any candidate video segment i, the corresponding first evaluation may be obtained by interpolating between the first viewing condition a anchored at first evaluation 0 and the first viewing condition b anchored at first evaluation 100. For example, when the viewing condition includes only the number of plays, the first viewing condition a is 1000 plays, the first viewing condition b is 11000 plays, and candidate video segment i is played 5000 times, the first evaluation of candidate video segment i may be:
(100 × (5000 - 1000)) / (11000 - 1000) = 40;
In addition, other calculation manners and other parameter types of viewing conditions may be adopted to determine the first evaluation of any candidate video segment, which is not limited here.
The second evaluation of any candidate video segment indicates how the viewing situation (e.g., the number of plays, the number of comments, the number of bullet screens, etc.) of that segment relates to the viewing situations of the other video segments in the target video set (which includes videos other than the video to be analyzed). For example, the video segment a with the lowest viewing condition a has a second evaluation of 0, and the video segment b with viewing condition b has a second evaluation of 400; the second evaluation of any candidate video segment can then be determined based on its own viewing condition and these two anchors, with reference to the method for determining the first evaluation.
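By way of illustration only, the following minimal Python sketch shows one way the interpolation between the two anchor viewing conditions described above might be implemented. The function name normalize_evaluation and the choice of play count as the only viewing parameter are assumptions for the example, not part of the disclosure:

```python
def normalize_evaluation(plays: float, plays_low: float, plays_high: float,
                         score_low: float = 0.0, score_high: float = 100.0) -> float:
    """Linearly map a raw viewing statistic onto an evaluation scale.

    plays_low / plays_high anchor the lowest and highest evaluations
    (0 and 100 for the intra-video first evaluation).
    """
    if plays_high == plays_low:
        return score_low  # degenerate case: all segments viewed equally
    fraction = (plays - plays_low) / (plays_high - plays_low)
    return score_low + fraction * (score_high - score_low)

# Candidate video segment i from the example above:
print(normalize_evaluation(5000, 1000, 11000))  # -> 40.0
```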
Optionally, the intra-video evaluation information and the inter-video evaluation information may be obtained in advance, before the video to be analyzed is acquired; in that case, the first target evaluation and the second target evaluation of the target video segment can be determined directly from the intra-video evaluation information and the inter-video evaluation information.
Step S103, determining a target highlight evaluation value corresponding to the target video segment based on the intra-video evaluation information, the inter-video evaluation information, the first target evaluation, and the second target evaluation.
After the intra-video evaluation information, the inter-video evaluation information, the first target evaluation, and the second target evaluation are determined, the target highlight evaluation value corresponding to the target video segment may be determined based on them, for example: determine the first highlight values corresponding to different first evaluations and the second highlight values corresponding to different second evaluations, then determine the first highlight value corresponding to the first target evaluation and the second highlight value corresponding to the second target evaluation, and finally determine the target highlight evaluation value from these two values.
Step S104, determining the highlight category corresponding to the target video segment according to the target highlight evaluation value corresponding to the target video segment.
After the target highlight evaluation value of the target video segment is determined, the highlight category corresponding to the target video segment can be judged according to that value.
For example, highlight evaluation value intervals corresponding to different highlight categories may be preset, and the highlight category corresponding to the target video segment may then be determined according to the interval into which the target highlight evaluation value falls.
Further, when determining the highlight category corresponding to the target video segment, the video content actually displayed in the target video segment can also be examined, and the highlight category can be determined by combining the video content with the target highlight evaluation value.
Through the method in this embodiment, an implementation that automatically determines the highlight category corresponding to a video segment is provided. Compared with manually labeling the highlight degree of a video in the related art, it effectively guarantees a unified judgment standard and improves the efficiency of determining the highlight category, thereby effectively overcoming the technical problems that manual labeling of the highlight degree of a video in the related art is highly subjective, slow, and unable to produce data rapidly in batches.
As an alternative embodiment, as in the foregoing method, the step S103 of determining the target highlight evaluation value corresponding to the target video segment based on the intra-video evaluation information, the inter-video evaluation information, the first target evaluation, and the second target evaluation includes the following steps:
step S201, according to the intra-video evaluation information, a first candidate evaluation corresponding to each candidate video segment is determined.
The intra-video evaluation information is generally a curve describing how the evaluation changes over time; after it is obtained, the first candidate evaluation corresponding to each candidate video segment can be determined from it.
Step S202, average value calculation is carried out on all the first candidate evaluations to obtain a first average value.
After obtaining all the first candidate evaluations, an average value of all the first candidate evaluations may be calculated, so as to obtain a first average value corresponding to the video to be analyzed.
Step S203, determining a second candidate evaluation corresponding to each candidate video clip according to the inter-video evaluation information;
Similarly, the inter-video evaluation information is generally a curve describing how the evaluation changes over time (unlike the intra-video evaluation curve, which has an upper limit, for example 100, the inter-video evaluation curve has no upper limit); after it is obtained, the second candidate evaluation corresponding to each candidate video segment can be determined from it.
And step S204, calculating the average value of all the second candidate evaluations to obtain a second average value.
After all the second candidate evaluations are obtained, the average value of all the second candidate evaluations may be calculated, and then the second average value corresponding to the video to be analyzed is obtained.
Step S205, determining a first magnitude relationship between the first target evaluation and the first average value, and determining a second magnitude relationship between the second target evaluation and the second average value.
After the first target evaluation and the first average value are obtained, the first magnitude relationship between them can be determined. As an alternative embodiment, the determining a first magnitude relationship between the first target evaluation and the first average value includes the following steps:
in step S301, a first minimum candidate evaluation with the lowest evaluation value and a first maximum candidate evaluation with the highest evaluation value are determined among all the first candidate evaluations.
After all the first candidate evaluations are determined, a first minimum candidate evaluation having the lowest evaluation value and a first highest candidate evaluation having the highest evaluation value may be determined among all the first candidate evaluations by means of one-by-one comparison.
Step S302, dividing the interval between the first minimum candidate evaluation and the first average value to obtain a first preset number of first low evaluation value intervals, and dividing the interval between the first average value and the first highest candidate evaluation to obtain a second preset number of first high evaluation value intervals.
After the first minimum candidate evaluation and the first average value are obtained, an evaluation interval with the first minimum candidate evaluation as its minimum and the first average value as its maximum is obtained; this interval is divided into the first preset number of first low evaluation value intervals, optionally by uniform division. There is no intersection between any two different first low evaluation value intervals.
After the first highest candidate evaluation and the first average value are obtained, an evaluation interval with the first average value as its minimum and the first highest candidate evaluation as its maximum is obtained; this interval is divided into the second preset number of first high evaluation value intervals, optionally by uniform division. There is no intersection between any two different first high evaluation value intervals.
In step S303, a first target evaluation value interval containing the first target evaluation is determined among all the first evaluation value intervals, so as to obtain the first magnitude relationship between the first target evaluation and the first average value, where all the first evaluation value intervals include the first low evaluation value intervals and the first high evaluation value intervals.
After the first low evaluation value intervals and the first high evaluation value intervals are obtained, all the first evaluation value intervals are available; the first evaluation value interval into which the first target evaluation falls is determined as the first target evaluation value interval, and this interval expresses the first magnitude relationship between the first target evaluation and the first average value.
For example, when the first average value is 56, the first preset number is 7, and the second preset number is 4, the first low evaluation value intervals obtained by division are [0,8), [8,16), [16,24), [24,32), [32,40), [40,48), [48,56), and the first high evaluation value intervals are [56,67), [67,78), [78,89), [89,100]. If the first target evaluation is 66, the first high evaluation value interval [56,67) is determined as the first target evaluation value interval.
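The uniform division and lookup of steps S301-S303 can be sketched as follows. This is a simplified illustration, and the helper names divide_intervals and locate_interval are hypothetical:

```python
def divide_intervals(low, high, n):
    """Uniformly divide [low, high) into n half-open evaluation value intervals."""
    width = (high - low) / n
    return [(low + i * width, low + (i + 1) * width) for i in range(n)]

def locate_interval(value, intervals):
    """Index of the interval containing value; the last interval is treated as closed."""
    for i, (lo, hi) in enumerate(intervals):
        if lo <= value < hi or (i == len(intervals) - 1 and value == hi):
            return i
    raise ValueError("value lies outside all intervals")

# Values from the example: first average 56, 7 low intervals, 4 high intervals.
first_intervals = divide_intervals(0, 56, 7) + divide_intervals(56, 100, 4)
print(first_intervals[locate_interval(66, first_intervals)])  # -> (56.0, 67.0)
```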
As an alternative embodiment, the determining a second magnitude relationship between the second target evaluation and the second average value according to the foregoing method includes the following steps:
in step S401, the second minimum candidate evaluation with the lowest evaluation value and the second highest candidate evaluation with the highest evaluation value are determined among all the second candidate evaluations.
After all the second candidate evaluations are determined, a second minimum candidate evaluation having the lowest evaluation value and a second highest candidate evaluation having the highest evaluation value may be determined among all the second candidate evaluations by means of one-by-one comparison.
Step S402, dividing the interval between the second minimum candidate evaluation and the second average value to obtain a third preset number of second low evaluation value intervals; and determining a designated second evaluation as the minimum of the second highest candidate evaluation and a preset second evaluation upper limit, and dividing the interval between the second average value and the designated second evaluation to obtain a fourth preset number of second high evaluation value intervals.
After the second minimum candidate evaluation and the second average value are obtained, an evaluation interval with the second minimum candidate evaluation as its minimum and the second average value as its maximum is obtained; this interval is divided into the third preset number of second low evaluation value intervals, optionally by uniform division. There is no intersection between any two different second low evaluation value intervals.
After the second highest candidate evaluation and the second average value are obtained, the high-side evaluation value intervals are divided accordingly. Because the second highest candidate evaluation may reach a peak far above the average value, the designated second evaluation is first determined as the minimum of the second highest candidate evaluation and the preset second evaluation upper limit.
The preset second evaluation upper limit may be a previously set evaluation upper limit. E.g., 400, 500, etc.
Illustratively, when the second evaluation upper limit is 400 and the second highest candidate evaluation is 300, then 300 is taken as the designated second evaluation; when the second evaluation upper limit is 400 and the second highest candidate evaluation is 500, 400 is taken as the designated second evaluation.
After the designated second evaluation is determined, an evaluation interval with the second average value as its minimum and the designated second evaluation as its maximum is obtained; this interval is divided into the fourth preset number of second high evaluation value intervals, optionally by uniform division. There is no intersection between any two different second high evaluation value intervals.
In step S403, a second target evaluation value interval containing the second target evaluation is determined among all the second evaluation value intervals, so as to obtain the second magnitude relationship between the second target evaluation and the second average value, where all the second evaluation value intervals include the second low evaluation value intervals and the second high evaluation value intervals.
After the second low evaluation value intervals and the second high evaluation value intervals are obtained, all the second evaluation value intervals are available; the second evaluation value interval into which the second target evaluation falls is determined as the second target evaluation value interval, and this interval expresses the second magnitude relationship between the second target evaluation and the second average value.
For example, when the second average value is 70, the third preset number is 7, the second highest candidate evaluation is 310, the preset second evaluation upper limit is 400, and the fourth preset number is 4, the designated second evaluation is determined to be 310, the second low evaluation value intervals obtained by division are [0,10), [10,20), [20,30), [30,40), [40,50), [50,60), [60,70), and the second high evaluation value intervals are [70,130), [130,190), [190,250), [250,310]. If the second target evaluation is 66, the second low evaluation value interval [60,70) is determined as the second target evaluation value interval; if the second target evaluation is 199, the second high evaluation value interval [190,250) is determined as the second target evaluation value interval.
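Reusing divide_intervals and locate_interval from the previous sketch, the second-evaluation variant with the upper-limit clamp might look as follows; again, this is an illustrative sketch and the constant name is assumed:

```python
SECOND_EVAL_UPPER_LIMIT = 400  # the preset second evaluation upper limit

# Values from the example: second average 70, second highest candidate evaluation 310.
designated_second_eval = min(310, SECOND_EVAL_UPPER_LIMIT)  # -> 310

second_intervals = (divide_intervals(0, 70, 7)                          # [0,10) ... [60,70)
                    + divide_intervals(70, designated_second_eval, 4))  # [70,130) ... [250,310]
print(second_intervals[locate_interval(66, second_intervals)])   # -> (60.0, 70.0)
print(second_intervals[locate_interval(199, second_intervals)])  # -> (190.0, 250.0)
```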
In step S206, a target highlight evaluation value corresponding to the target video segment is determined based on the first magnitude relationship and the second magnitude relationship.
After the first magnitude relationship and the second magnitude relationship are determined, the target highlight evaluation value corresponding to the target video segment may be determined from them. As an alternative embodiment, as in the foregoing method, the determining a target highlight evaluation value corresponding to the target video segment based on the first magnitude relationship and the second magnitude relationship includes the following steps:
step S501, determining a first saliency value corresponding to each first evaluation value section; a second highlight value corresponding to each second evaluation value section is determined.
After all the first evaluation value intervals are obtained, a first highlight value can be assigned to each first evaluation value interval, and then a first highlight value corresponding to each first evaluation value interval can be obtained. For example, when 11 first evaluation value sections are included in order from low to high in accordance with the size of each first evaluation value section, the corresponding first saliency values may be 0,1, 2, 3, 4, 5, 6, 7, 8, 9,10 in order. Similarly, a second highlight value corresponding to each second evaluation value section may be determined.
Step S502, determining a first target highlight value corresponding to the first target evaluation value interval according to the first highlight value corresponding to each first evaluation value interval, and determining a second target highlight value corresponding to the second target evaluation value interval according to the second highlight value corresponding to each second evaluation value interval.
Once the first highlight value corresponding to each first evaluation value interval is determined, and since the first target evaluation value interval is one of the first evaluation value intervals, the first target highlight value corresponding to the first target evaluation value interval is determined.
Similarly, once the second highlight value corresponding to each second evaluation value interval is determined, the second target highlight value corresponding to the second target evaluation value interval is determined.
Step S503, calculating the first target highlight value and the second target highlight value in a preset weighting manner to obtain the target highlight evaluation value.
After the first target highlight value and the second target highlight value are obtained, they can be combined according to the preset weighting manner.
The preset weighting manner may define a first weight for the first target highlight value and a second weight for the second target highlight value; in general, both weights may be 0.5, that is, the target highlight evaluation value is the average of the first target highlight value and the second target highlight value.
For example, when the first target highlight value is 7, the second target highlight value is 9, and the preset weighting manner is averaging, the target highlight evaluation value is (7+9)/2=8.
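A minimal sketch of steps S501-S503 under the assumptions above (interval index used as highlight value, equal weights of 0.5); the names are ours, not the application's:

```python
# One possible assignment: the i-th evaluation value interval (ordered from low
# to high) gets highlight value i, so 11 intervals yield values 0 through 10.
first_highlight_values = list(range(len(first_intervals)))  # 0, 1, ..., 10

def target_highlight(score1, score2, w1=0.5, w2=0.5):
    """Combine the two target highlight values in the preset weighting manner."""
    return w1 * score1 + w2 * score2

print(target_highlight(7, 9))  # -> 8.0, matching the example above
```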
Step S504, assigning the maximum target highlight evaluation value to the target video segment in the case that the second target evaluation is higher than or equal to the preset second evaluation upper limit.
For a segment whose second target evaluation is higher than the preset second evaluation upper limit (for example, 400; such a video segment receives an absolutely large number of positive viewing operations), the maximum target highlight evaluation value is assigned, for example, the segment is directly determined to have a highlight score of 10.
By the method in this embodiment, an implementation that automatically calculates the target highlight evaluation value corresponding to the target video segment is provided, which effectively improves the efficiency of determining the target highlight evaluation value and avoids the strong subjectivity and low efficiency of manual determination.
As an alternative embodiment, as in the foregoing method, the step S104 of determining the highlight category corresponding to the target video segment according to the target highlight evaluation value corresponding to the target video segment includes the following steps:
step S601, performing behavior detection on the target video clip to obtain a behavior detection result.
After the target video segment is obtained, behavior detection can be performed on the target video segment through a preset video behavior detection algorithm, so that a behavior detection result corresponding to the target video segment is obtained.
The behavior detection result may indicate the behaviors present in the target video segment, such as skiing, fighting, laughing, crying, and the like.
Step S602, when the behavior detection result indicates that a behavior of a preset behavior type exists in the target video segment and the target highlight evaluation value is greater than or equal to a preset threshold, determining that the highlight category corresponding to the target video segment is the high highlight category.
After the behavior detection result is determined, it is judged whether the result indicates that a behavior of a preset behavior type exists in the target video segment.
A preset behavior type may be type information indicating a behavior with potential actual highlight meaning, such as skiing, fighting, laughing, crying, and the like, as described above. Preset behavior types can be added, deleted, or modified according to the behavior types to be detected in the actual application.
After the target highlight evaluation value is determined, its relationship with a preset threshold can be determined.
The preset threshold is a value set in advance to distinguish high highlight evaluation values from low ones. For example, when the preset threshold is 6, a target highlight evaluation value greater than or equal to 6 is a high highlight evaluation value, and otherwise it is a low highlight evaluation value.
If the behavior detection result indicates that a behavior of a preset behavior type exists in the target video segment and the target highlight evaluation value is greater than or equal to the preset threshold, the segment contains a behavior with actual highlight meaning and its evaluation value is high. This reflects that the target video segment is not judged highlight merely because the video to be analyzed as a whole is popular, but because the segment itself contains a highly engaging behavior segment (e.g., a fighting scene, a quarrel scene, a kiss scene, a laughing scene, a crying scene, etc.); therefore the highlight category corresponding to the target video segment is determined to be the high highlight category.
Step S603, when the behavior detection result indicates that a behavior of a preset behavior type exists in the target video segment and the target highlight evaluation value is smaller than the preset threshold, determining that the highlight category corresponding to the target video segment is the low highlight category.
When the behavior detection result indicates that a behavior of a preset behavior type exists in the target video segment but the target highlight evaluation value is smaller than the preset threshold, the segment does contain a behavior with potential highlight meaning, yet its low evaluation value shows that it did not actually attract viewers, much like an empty-shot scene, an ordinary chat scene, or a scene in which the actors have no dialogue, no action, and no emotional expression. In this case the highlight category corresponding to the target video segment is determined to be the low highlight category, which better matches the viewing habits of users.
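A compact sketch of the decision logic of steps S602/S603, assuming example preset behavior types and the example threshold of 6; none of these constants are mandated by the disclosure:

```python
PRESET_BEHAVIORS = {"ski", "fight", "laugh", "cry"}  # example preset behavior types
PRESET_THRESHOLD = 6.0  # example preset threshold separating high from low

def highlight_category(detected_behaviors, target_score):
    """Steps S602/S603: classify a segment from its behavior detection result
    and its target highlight evaluation value."""
    if not set(detected_behaviors) & PRESET_BEHAVIORS:
        return None  # no behavior of a preset type: neither rule applies
    return "high highlight" if target_score >= PRESET_THRESHOLD else "low highlight"

print(highlight_category({"fight"}, 8))   # -> "high highlight"
print(highlight_category({"cry"}, 3.5))   # -> "low highlight"
```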
As an alternative embodiment, as in the foregoing method, after the step S104 of determining the highlight category corresponding to the target video segment according to the target highlight evaluation value corresponding to the target video segment, the method further includes the following steps:
Step S701, determining a designated highlight category corresponding to each designated video segment, wherein all the designated video segments include the target video segment.
Before step S701, a target request for generating training data for training a highlight algorithm may be received; in response to the target request, all designated video segments are obtained, and the designated highlight category corresponding to each designated video segment is determined according to the method in the foregoing embodiments.
Step S702, taking each designated video segment whose designated highlight category is the high highlight category as a high highlight segment.
Step S703, taking each designated video segment whose designated highlight category is the low highlight category as a low highlight segment.
Step S704, selecting a first number of high highlight training segments from all the high highlight segments and a second number of low highlight training segments from all the low highlight segments according to a preset quantity relationship, where the first number and the second number satisfy the preset quantity relationship.
After the designated highlight category corresponding to each designated video segment is determined, all the designated video segments can be classified by highlight category: those whose designated highlight category is the high highlight category become high highlight segments, and those whose designated highlight category is the low highlight category become low highlight segments.
In order to provide training data for later training of a highlight algorithm, a first number of high highlight training segments can be selected from all the high highlight segments and a second number of low highlight training segments from all the low highlight segments according to the preset quantity relationship.
The preset quantity relationship may be a proportional relationship between the number of high highlight segments and the number of low highlight segments in the training data, or may directly define the two numbers.
Further, once the preset quantity relationship is determined, the first number and the second number can be determined, and then the first number of high highlight training segments can be selected from all the high highlight segments and the second number of low highlight training segments from all the low highlight segments.
The high highlight training segments serve as positive sample data for training the highlight algorithm, and the low highlight training segments serve as negative sample data.
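A possible sketch of step S704, under the assumption that the preset quantity relationship is expressed as a ratio (1:1 by default, as in the application example below); the function name is hypothetical:

```python
import random

def select_training_segments(high_segments, low_segments, ratio=1.0):
    """Keep the high highlight segments and sample low highlight segments so
    that the two counts satisfy the preset quantity relationship (a ratio)."""
    first_number = len(high_segments)
    second_number = min(len(low_segments), round(first_number * ratio))
    return list(high_segments), random.sample(list(low_segments), second_number)

positives, negatives = select_training_segments(["clip_a", "clip_b"],
                                                ["clip_c", "clip_d", "clip_e"])
print(len(positives), len(negatives))  # -> 2 2
```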
In addition, training data for the highlight algorithm can be dynamically added according to newly added video content, so that the training data keeps meeting current requirements.
By the method in this embodiment, the high highlight training segments and low highlight training segments for training the highlight algorithm can be determined quickly, which effectively improves the production efficiency of training data.
An application example applying any of the foregoing embodiments is provided below:
(1) Convert the existing filter scores (i.e., the intra-video evaluation information) and filter-related scores (i.e., the inter-video evaluation information) into specific highlight scores (i.e., highlight evaluation values). The specific conversion algorithm is as follows:
first, the distribution of scores, average scores, and the like of different videos are different regardless of the filter data or the filter-related data. The filter related data is mainly used for explaining the distribution of the film viewing situation in each second in the whole film, the filter related data is mainly used for explaining the film viewing situation in each second under the global situation (for example, all videos contained in the whole video platform), and for the distribution situation of all films, two scores need to be analyzed respectively in the process of score conversion.
As shown in fig. 2 and fig. 3, the filter scores and filter-related scores corresponding to two films on the video platform (i.e., video 1 and video 2) are shown. The abscissa is each time point in seconds, and the ordinate is the score corresponding to that time point.
As can be seen from fig. 2 and fig. 3, the score distributions differ greatly between videos, and the filter-related score has no specific upper limit (its scores are distributed over [0, +∞)). Meanwhile, for the same video, the distributions of its filter scores and filter-related scores also differ to some extent. Despite these differences, it can be observed that, in each distribution, positions with relatively high scores correspond to more positive viewing operations (repeated playing, many bullet screens, etc.), while positions with relatively low scores correspond to more negative viewing operations (quitting, double-speed playing, few bullet screens, etc.).
For this distribution situation, a general score division algorithm is provided. The specific steps of converting filter scores and filter-related scores into highlight scores are as follows:
1. For each video, calculate the first average score (i.e., the first average value) of its filter scores and the second average score (i.e., the second average value) avg_score of its filter-related scores, representing the average viewing condition of the video to be analyzed.
2. Any segment whose filter-related score is higher than 400 (such a segment receives an absolutely large number of positive viewing operations) is directly judged to be a video segment with a highlight score of 10.
3. All candidate video clips of a video to be analyzed are divided into 2 parts by calculating an average score, the candidate video clips higher than the average score are regarded as wonderful clips, and the candidate video clips lower than the average score are regarded as common clips, and the calculation method is as follows:
firstly, the difference value delta _ score1 of the highest and average scores is calculated, and max_score-
avg _ score, delta _ score1 is the fractional difference, which is 100 points if max _ score is the maximum 5 value of the filter data (i.e., the first highest candidate evaluation), and max _ score is the filter data
Max _ score = min (400, max relative u lving _score) when the maximum value of the correlation data (i.e., specifying the second evaluation) is the second highest candidate evaluation in the foregoing embodiment, and 400 is the second upper evaluation in the foregoing embodiment
And (4) limiting. Then, 4 equal parts of the value one _ part0 (i.e., the first highest scoring interval in the filter data, or the second highest scoring interval in the filter-related data) are divided for delta _ score
Evaluation value interval) is one _ part = delta _ score 1/4. For the result in [ avg _ score +0
one _ part, avg _ score +1 × one _ part) score range is 7 points as a highlight score, for the score range [ avg _ score +1 × one _ part, avg _ score +2 × one _ part ]
Segments within score 8 are scored as highlights, segments within score range of [ avg _ score +2 × one _ part, avg _ score5+3 × one _ part) are scored as highlights 9, and segments within score range of [ avg _ score +2 × one _ part ] are scored as highlights
The segments within the score range of +3 × one _ part, max _ score ] are scored as 10 points for highlights. Because the filter fraction and the filter related fraction can calculate the corresponding wonderness fraction, the fraction of the segment with 7 or more wonderness is taken as the final wonderness segment
Taking the average of the two scores (i.e., the preset weighting mode is the average calculation mode): 0final \ score = (score 1+ score 2)/2; score1 is a first target sharpness value, score2
Is a second target wonderness value.
For video segments below the average score, the difference between the average score and the lowest score is first calculated: delta_score2 = avg_score − min_score. Here min_score is the lowest score of the video in the filter data or the filter-related data (i.e., the first minimum candidate evaluation in the filter data, or the second minimum candidate evaluation in the filter-related data). Then delta_score2 is divided into 8 equal parts, the value of each part being one_part = delta_score2 / 8. Segments whose score falls within [min_score + 0 × one_part, min_score + 1 × one_part) are marked with a highlight score of 0, and similarly the intervals for highlight scores 1-5 are obtained by the formula [min_score + i × one_part, min_score + (i + 1) × one_part), where i denotes the specific highlight score. The interval corresponding to a highlight score of 6 is [min_score + 6 × one_part, avg_score]. Likewise, because the filter score and the filter-related score each yield a corresponding highlight score, the final score of a segment with a highlight score of 6 or less is taken as the average of the two scores (i.e., the preset weighting mode is the average calculation mode): final_score = (score1 + score2) / 2.
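The following is a minimal Python sketch of steps 1-3 above. The variable names avg_score, max_score, min_score, one_part, score1, score2 and final_score follow the description; the function boundaries and the handling of degenerate cases (e.g., a flat score distribution) are illustrative assumptions rather than the exact implementation of this disclosure.

```python
def score_to_highlight(score, scores, max_score):
    """Map one raw score to a 0-10 highlight value within its own video."""
    avg_score = sum(scores) / len(scores)   # step 1: per-video average
    min_score = min(scores)
    if score >= avg_score:
        # Highlight side: 4 equal parts above the average map to scores 7..10.
        one_part = (max_score - avg_score) / 4
        if one_part <= 0:
            return 10                       # assumed fallback for a flat distribution
        i = int((min(score, max_score) - avg_score) / one_part)
        return min(7 + i, 10)
    # Ordinary side: 8 equal parts; highlight scores 0..5 take one part each,
    # and score 6 covers the remainder [min_score + 6 * one_part, avg_score].
    one_part = (avg_score - min_score) / 8
    if one_part <= 0:
        return 0
    return min(int((score - min_score) / one_part), 6)


def segment_highlight(filter_score, related_score, filter_scores, related_scores):
    """Combine the two per-video highlight values by simple averaging."""
    if related_score > 400:                 # step 2: direct 10-point judgment
        return 10.0
    score1 = score_to_highlight(filter_score, filter_scores, 100)
    score2 = score_to_highlight(related_score, related_scores,
                                min(400, max(related_scores)))
    return (score1 + score2) / 2            # final_score = (score1 + score2) / 2
```

Under this mapping, a segment sitting exactly at its video's average lands at highlight score 7, and the 400-point cap keeps a single extremely popular segment from stretching the high-score intervals of the filter-related data.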
4. Judge whether the high-scoring highlight video segments are video segments that contain behaviors of actual highlight significance.
For the highlight video segments obtained from the video to be analyzed, the video segments with a highlight score higher than 7 are identified by an existing video behavior detection algorithm (i.e., an algorithm for behavior detection), so as to obtain information such as the label of the behavior contained in each video segment (i.e., the behavior detection result) and its confidence level (which can be set according to the actual use scene, such as 95%, 90%, etc.); video segments without actual highlight content are then filtered out by a relatively high confidence threshold together with the set of behavior labels to be retained.
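As a hedged illustration of step 4, the sketch below filters the high-scoring segments with a behavior detector. The detect_behaviors callable, the label whitelist and the dictionary keys are assumptions made for this example; the behavior detection algorithm itself is an existing external component, not part of this disclosure.

```python
KEPT_LABELS = {"dunk", "goal", "skiing"}   # illustrative whitelist of behavior labels
CONF_THRESHOLD = 0.95                      # e.g. 95%; set according to the actual use scene

def filter_high_score_segments(segments, detect_behaviors):
    """Drop segments scored above 7 whose detected behavior is not a confident highlight."""
    kept = []
    for seg in segments:
        if seg["highlight_score"] <= 7:
            kept.append(seg)               # lower scores pass through unchanged
            continue
        label, confidence = detect_behaviors(seg["clip"])
        if label in KEPT_LABELS and confidence >= CONF_THRESHOLD:
            seg["behavior_label"] = label  # i.e., the behavior detection result
            kept.append(seg)
    return kept
```

Note that low-scoring segments pass through untouched, which matches the rule below: a segment whose target highlight evaluation value is under the preset threshold remains low-highlight even if the detector tags it with a highlight behavior.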
When the highlight analysis algorithm has a new requirement, corresponding training sample data can be produced dynamically and quickly. For example, when the behavior type of skiing needs to be added as output content of the highlight algorithm, data whose behavior type is skiing can be quickly produced by the video behavior detection algorithm and used for training the highlight algorithm.
For a video segment with a low highlight value (that is, the target highlight evaluation value is smaller than the preset threshold), even if the video segment contains label data obtained by the video behavior detection algorithm, such as a behavior segment that would otherwise carry actual highlight significance, the video segment is still regarded as a video segment with a low highlight degree, which better conforms to the viewing habits of users.
In the actual process of training the highlight algorithm, for a large amount of video data, the amount of finally produced data is unbalanced across the different highlight scores; generally, after the data are filtered by the behavior recognition algorithm, the number of segments with high highlight scores is far lower than the number of segments with low highlight scores. Here it is only necessary to retain the obtained high-highlight video segments as far as possible, and to partially retain the low-highlight videos so that the number of video segments corresponding to each highlight score stands in a ratio of about 1:1 (i.e., the preset quantity relationship is satisfied).
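A minimal sketch of this balancing rule follows, assuming that "about 1:1" means down-sampling each low-score bucket to roughly the size of the largest high-score bucket; the disclosure fixes only the ratio, so the exact retention target is an assumption.

```python
import random
from collections import defaultdict

def balance_by_highlight_score(segments, high_threshold=7, seed=0):
    """Keep all high-highlight segments; partially retain low ones at about 1:1 per score."""
    buckets = defaultdict(list)
    for seg in segments:
        buckets[seg["highlight_score"]].append(seg)
    high_sizes = [len(v) for s, v in buckets.items() if s >= high_threshold]
    target = max(high_sizes) if high_sizes else 0   # per-score retention target
    rng = random.Random(seed)
    balanced = []
    for score, bucket in buckets.items():
        if score >= high_threshold:
            balanced.extend(bucket)                 # retain high scores as much as possible
        else:
            rng.shuffle(bucket)
            balanced.extend(bucket[:target])        # partial retention of low scores
    return balanced
```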
As shown in fig. 4, according to an embodiment of another aspect of the present application, there is also provided a video analysis apparatus including:
the acquisition module 1 is used for acquiring a video to be analyzed;
the first determining module 2 is configured to determine a first target evaluation and a second target evaluation corresponding to a target video segment according to intra-video evaluation information and inter-video evaluation information corresponding to a video to be analyzed, where the intra-video evaluation information includes the first target evaluation corresponding to the target video segment, the inter-video evaluation information includes the second target evaluation corresponding to the target video segment, the first target evaluation is determined according to an evaluation of each candidate video segment in the video to be analyzed, the second target evaluation is a global evaluation determined according to an evaluation of each video segment in a target video set, all the candidate video segments include the target video segment, each candidate video segment has a unique corresponding time period in the video to be analyzed, and the target video set includes the video to be analyzed;
a second determining module 3, configured to determine a target highlight evaluation value corresponding to the target video segment based on the intra-video evaluation information, the inter-video evaluation information, the first target evaluation, and the second target evaluation;
and the highlight category determining module 4 is configured to determine a highlight category corresponding to the target video segment according to the target highlight evaluation value corresponding to the target video segment.
Specifically, for the specific process of implementing the functions of each module in the apparatus according to the embodiments of the present application, reference may be made to the related description in the method embodiments, which is not repeated here.
According to another embodiment of the present application, there is also provided an electronic device. As shown in fig. 5, the electronic device may include a processor 1501, a communication interface 1502, a memory 1503 and a communication bus 1504, wherein the processor 1501, the communication interface 1502 and the memory 1503 communicate with each other through the communication bus 1504.
A memory 1503 for storing a computer program;
the processor 1501 is configured to implement the steps of the above-described method embodiments when executing the program stored in the memory 1503.
The bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this is not intended to represent only one bus or type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), for example at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
The embodiments of the present application further provide a computer-readable storage medium, where the storage medium includes a stored program, and when the program is run, the method steps of the foregoing method embodiments are performed.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element.
The above description is merely illustrative of particular embodiments of the invention that enable those skilled in the art to understand or practice the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method of video analysis, comprising:
acquiring a video to be analyzed;
determining a first target evaluation and a second target evaluation corresponding to a target video segment according to intra-video evaluation information and inter-video evaluation information corresponding to the video to be analyzed, wherein the intra-video evaluation information includes the first target evaluation corresponding to the target video segment, the inter-video evaluation information includes the second target evaluation corresponding to the target video segment, the first target evaluation is determined according to an evaluation of each candidate video segment in the video to be analyzed, the second target evaluation is a global evaluation determined according to an evaluation of each video segment in a target video set, all the candidate video segments include the target video segment, each candidate video segment has a unique corresponding time period in the video to be analyzed, and the target video set includes the video to be analyzed;
determining a target highlight evaluation value corresponding to the target video segment based on the intra-video evaluation information, the inter-video evaluation information, the first target evaluation and the second target evaluation;
and determining the highlight category corresponding to the target video clip according to the target highlight evaluation value corresponding to the target video clip.
2. The method according to claim 1, wherein the determining the target highlight evaluation value corresponding to the target video segment based on the intra-video evaluation information, the inter-video evaluation information, the first target evaluation and the second target evaluation comprises:
determining a first candidate evaluation corresponding to each candidate video clip according to the video internal evaluation information;
calculating the average value of all the first candidate evaluations to obtain a first average value;
determining a second candidate evaluation corresponding to each candidate video clip according to the inter-video evaluation information;
calculating the average value of all the second candidate evaluations to obtain a second average value;
determining a first size relationship between the first target evaluation and the first average value; determining a second size relationship between the second target evaluation and the second average value;
and determining the target highlight evaluation value corresponding to the target video clip based on the first size relationship and the second size relationship.
3. The method of claim 2, wherein the determining the first size relationship between the first target evaluation and the first average value comprises:
determining a first minimum candidate evaluation with the lowest evaluation value and a first highest candidate evaluation with the highest evaluation value from all the first candidate evaluations;
dividing the interval between the first minimum candidate evaluation and the first average value to obtain a first preset number of first low evaluation value intervals; dividing the interval between the first highest candidate evaluation and the first average value to obtain a second preset number of first high evaluation value intervals;
and determining, in all first evaluation value intervals, a first target evaluation value interval containing the first target evaluation, to obtain the first size relationship between the first target evaluation and the first average value, wherein all the first evaluation value intervals include the first low evaluation value intervals and the first high evaluation value intervals.
4. The method of claim 3, wherein the determining the second size relationship between the second target evaluation and the second average value comprises:
determining a second minimum candidate evaluation with the lowest evaluation value and a second highest candidate evaluation with the highest evaluation value from all the second candidate evaluations;
dividing the interval between the second minimum candidate evaluation and the second average value to obtain a third preset number of second low evaluation value intervals; determining a specified second evaluation as the minimum of the second highest candidate evaluation and a preset second evaluation upper limit, and dividing the interval between the specified second evaluation and the second average value to obtain a fourth preset number of second high evaluation value intervals;
and determining, in all second evaluation value intervals, a second target evaluation value interval containing the second target evaluation, to obtain the second size relationship between the second target evaluation and the second average value, wherein all the second evaluation value intervals include the second low evaluation value intervals and the second high evaluation value intervals.
5. The method of claim 4, wherein determining the target highlight rating value corresponding to the target video segment based on the first size relationship and the second size relationship comprises:
determining a first highlight value corresponding to each first evaluation value interval; determining a second highlight value corresponding to each second evaluation value interval;
determining a first target highlight value corresponding to the first target evaluation value interval according to the first highlight value corresponding to each first evaluation value interval; determining a second target highlight value corresponding to the second target evaluation value interval according to the second highlight value corresponding to each second evaluation value interval;
calculating the first target highlight value and the second target highlight value according to a preset weighting mode to obtain the target highlight evaluation value;
and assigning the maximum target highlight evaluation value to the target video clip under the condition that the second target evaluation is higher than or equal to the preset second evaluation upper limit.
6. The method according to claim 1, wherein the determining the highlight category corresponding to the target video segment according to the target highlight evaluation value corresponding to the target video segment comprises:
performing behavior detection on the target video clip to obtain a behavior detection result;
determining the highlight category corresponding to the target video clip as a high highlight category when the behavior detection result indicates that a behavior of a preset behavior type exists in the target video clip and the target highlight evaluation value is greater than or equal to a preset threshold value;
and determining the highlight category corresponding to the target video clip as a low highlight category when the behavior detection result indicates that a behavior of a preset behavior type exists in the target video clip and the target highlight evaluation value is smaller than the preset threshold value.
7. The method according to any one of claims 1 to 6, wherein after the determining the highlight category corresponding to the target video segment according to the target highlight evaluation value corresponding to the target video segment, the method further comprises:
determining a designated highlight category corresponding to each designated video clip, wherein all the designated video clips include the target video clip;
taking a designated video clip whose corresponding designated highlight category is the high highlight category as a high-highlight clip;
taking a designated video clip whose corresponding designated highlight category is the low highlight category as a low-highlight clip;
and selecting, according to a preset quantity relationship, a first quantity of high-highlight training segments from all the high-highlight clips and a second quantity of low-highlight training segments from all the low-highlight clips, wherein the preset quantity relationship is satisfied between the first quantity and the second quantity.
8. A video analysis apparatus, comprising:
the acquisition module is used for acquiring a video to be analyzed;
a first determining module, configured to determine, according to intra-video evaluation information and inter-video evaluation information corresponding to the video to be analyzed, a first target evaluation and a second target evaluation corresponding to a target video segment, where the intra-video evaluation information includes the first target evaluation corresponding to the target video segment, the inter-video evaluation information includes the second target evaluation corresponding to the target video segment, the first target evaluation is determined according to an evaluation of each candidate video segment in the video to be analyzed, the second target evaluation is a global evaluation determined according to an evaluation of each video segment in a target video set, all the candidate video segments include the target video segment, each candidate video segment has a unique corresponding time period in the video to be analyzed, and the target video set includes the video to be analyzed;
a second determining module, configured to determine a target highlight evaluation value corresponding to the target video segment based on the intra-video evaluation information, the inter-video evaluation information, the first target evaluation and the second target evaluation;
and the highlight category determining module is used for determining the highlight category corresponding to the target video clip according to the target highlight evaluation value corresponding to the target video clip.
9. An electronic device, comprising: the system comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory are communicated with each other through the communication bus;
the memory is used for storing a computer program;
the processor, when executing the computer program, implementing the method of any of claims 1 to 7.
10. A computer-readable storage medium, comprising a stored program, wherein the program when executed performs the method of any of claims 1 to 7.
CN202211626678.1A 2022-12-16 2022-12-16 Video analysis method and device, electronic equipment and storage medium Pending CN115861890A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211626678.1A CN115861890A (en) 2022-12-16 2022-12-16 Video analysis method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115861890A true CN115861890A (en) 2023-03-28

Family

ID=85673848

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211626678.1A Pending CN115861890A (en) 2022-12-16 2022-12-16 Video analysis method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115861890A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116503791A (en) * 2023-06-30 2023-07-28 腾讯科技(深圳)有限公司 Model training method and device, electronic equipment and storage medium
CN116503791B (en) * 2023-06-30 2023-09-15 腾讯科技(深圳)有限公司 Model training method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN109753601B (en) Method and device for determining click rate of recommended information and electronic equipment
WO2020259572A1 (en) Tag determination method for negative feedback, video recommendation method, apparatus and device, and storage medium
CN109688479B (en) Bullet screen display method, bullet screen display device and bullet screen display server
JP2018530847A (en) Video information processing for advertisement distribution
CN106776528B (en) Information processing method and device
CN111107416B (en) Bullet screen shielding method and device and electronic equipment
CN108335131B (en) Method and device for estimating age bracket of user and electronic equipment
CN112995690B (en) Live content category identification method, device, electronic equipment and readable storage medium
WO2020135059A1 (en) Search engine evaluation method, apparatus and device, and readable storage medium
CN108768743B (en) User identification method and device and server
CN109583228B (en) Privacy information management method, device and system
CN115861890A (en) Video analysis method and device, electronic equipment and storage medium
CN111159563A (en) Method, device and equipment for determining user interest point information and storage medium
CN112995765B (en) Network resource display method and device
CN107682427B (en) Message pushing method, device, equipment and storage medium
US11527091B2 (en) Analyzing apparatus, control method, and program
US20150227970A1 (en) System and method for providing movie file embedded with advertisement movie
CN112287225A (en) Object recommendation method and device
TWI725375B (en) Data search method and data search system thereof
CN108882024B (en) Video playing method and device and electronic equipment
CN109963174B (en) Flow related index estimation method and device and computer readable storage medium
CN109561350B (en) User interest degree evaluation method and system
CN111324733A (en) Content recommendation method, device, equipment and storage medium
CN108764021B (en) Cheating video identification method and device
CN110996177B (en) Video recommendation method, device and equipment for video-on-demand cinema

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination