CN110611848A - Information processing method, system, terminal, server and readable storage medium


Info

Publication number
CN110611848A
Authority
CN
China
Prior art keywords
video
target
clip
time point
target content
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910939507.6A
Other languages
Chinese (zh)
Inventor
李立锋
叶军
吴嘉旭
颜伟婷
王�琦
杜欧杰
蒋伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
MIGU Video Technology Co Ltd
MIGU Culture Technology Co Ltd
Original Assignee
MIGU Video Technology Co Ltd
MIGU Culture Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by MIGU Video Technology Co Ltd, MIGU Culture Technology Co Ltd filed Critical MIGU Video Technology Co Ltd
Priority to CN201910939507.6A priority Critical patent/CN110611848A/en
Publication of CN110611848A publication Critical patent/CN110611848A/en
Pending legal-status Critical Current


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23418Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/266Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
    • H04N21/2668Creating a channel for a dedicated end-user group, e.g. insertion of targeted commercials based on end-user profiles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/432Content retrieval operation from a local storage medium, e.g. hard-disk
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47202End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting content on demand, e.g. video on demand
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/475End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
    • H04N21/4756End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data for rating content, e.g. scoring a recommended movie
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/812Monomedia components thereof involving advertisement data

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Information Transfer Between Computers (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses an information processing method, a terminal, a server and a storage medium, relates to the technical field of communications, and aims to solve the problem that the existing information processing mode has a poor display effect. The method comprises the following steps: after a first operation aiming at target content in a video playing page is received, determining a target video clip in the video that is associated with the target content, wherein the score of the target video clip meets a preset requirement; acquiring a playing time point of the target video clip; and identifying the playing time point. According to the embodiment of the invention, by clicking the identified playing time point the user can view the target video clips in the video associated with the operated target content, which deepens the impression of the target content and thereby improves the information display effect.

Description

Information processing method, system, terminal, server and readable storage medium
Technical Field
The present invention relates to the field of communications technologies, and in particular, to an information processing method, system, terminal, server, and readable storage medium.
Background
Currently, during video playback, video content is usually played in a preset playing sequence. When a user is interested in particular content in a video (for example, an advertisement), the user often clicks on the object of interest. In the prior art, after a user clicks an advertisement, for example, the player jumps to a page associated with the advertisement or displays more details of the advertisement, and returns to the original video to continue playing after the advertisement has been shown. Thus, while watching the video, the user's impression of the advertisement may remain limited to that single click. The existing information processing mode therefore suffers from a poor display effect.
Disclosure of Invention
The embodiment of the invention provides an information processing method, a terminal, a server and a storage medium, which aim to solve the problem of poor display effect of the existing information processing mode.
In a first aspect, an embodiment of the present invention provides an information processing method, which is applied to a terminal, and the method includes:
after a first operation aiming at target content in a video playing page is received, determining a target video clip associated with the target content in the video, wherein the score of the target video clip meets a preset requirement;
acquiring a playing time point of the target video clip;
and identifying the playing time point.
Optionally, the determining a target video segment in the video, which is associated with the target content, includes:
acquiring a classification label of the target content;
acquiring scoring information of the video clips matched with the classification labels of the target content;
and selecting a target video clip from the video clips according to the grading information.
Optionally, the obtaining scoring information of the video segments matched with the classification tags includes:
and scoring the video clips according to at least one of the duration of the video clips matched with the classification labels, the area ratio of elements in the video clips related to the target content, the element engagement degree and the clip heat degree, so as to obtain scoring information of the video clips.
Optionally, the scoring the video segments according to at least one of the duration of the video segments matched with the classification tags, the area ratio of elements in the video segments related to the target content, the degree of engagement of the elements, and the segment heat degree to obtain scoring information of the video segments includes:
according to the formulaCalculating the score of the video clip, wherein S represents the clip score, M represents the area ratio of elements related to the target content in the video clip, F represents the degree of engagement of the elements in the video clip, H represents the clip heat degree, t represents the clipThe period of time.
Optionally, before the obtaining of the play time point of the target video segment, the method further includes:
acquiring a target identification file aiming at the video, wherein the target identification file comprises playing time point information of a video clip associated with the target content;
the obtaining of the playing time point of the target video clip includes:
and searching the playing time point of the target video clip from the target identification file.
Optionally, the determining a target video segment in the video, which is associated with the target content, includes:
sending a first request to a server, wherein the first request comprises the target content;
receiving a playing time point of a video clip which is sent by the server and is associated with the target content;
determining a target video segment in the video segments;
the obtaining of the playing time point of the target video clip includes:
and determining the playing time point of the target video clip according to the playing time point of the video clip.
In a second aspect, an embodiment of the present invention further provides an information processing method, which is applied to a server, where the method includes:
acquiring a first request sent by a terminal, wherein the first request comprises target content in a video playing page;
and feeding back the playing time point of the video clip associated with the target content in the video to the terminal.
Optionally, the feeding back, to the terminal, a playing time point of a video segment associated with the target content in the video includes:
identifying a classification label for the target content;
determining a video segment matching the classification label of the target content;
identifying a playing time point of the video clip in the video;
and feeding back the playing time point of the video clip to the terminal.
Optionally, after determining the video segment matching the classification label of the target content, the method further includes:
scoring the video clips to obtain scoring information of the video clips;
and sending the grading information of the video clips to the terminal.
Optionally, the scoring the video segments includes:
and scoring the video clips according to at least one of the duration of the video clips, the area ratio of elements related to the target content in the video clips, the fitness of the elements and the heat of the clips.
Optionally, the scoring the video segments according to at least one of duration of the video segments, area ratio of elements in the video segments related to the target content, element engagement degree, and segment heat degree includes:
according to the formulaAnd calculating the score of the video clip, wherein S represents the clip score, M represents the area ratio of elements related to the target content in the video clip, F represents the degree of engagement of the elements in the video clip, H represents the clip heat degree, and t represents the clip duration.
Optionally, before the obtaining the first request sent by the terminal, the method further includes:
identifying a target segment in the video to obtain target information of the target segment, wherein the target information comprises at least one of a classification label, playing time point information, position information, size information and scoring information;
sending the target information of the target segment to the terminal;
the acquiring the first request sent by the terminal includes:
and under the condition that the target information does not comprise the playing time point information, acquiring a first request sent by the terminal.
In a third aspect, an embodiment of the present invention further provides an information processing system, including a terminal and a server, where the terminal is configured to send a first request including target content to the server after receiving a first operation for the target content in a video playback page;
the server is used for receiving the first request sent by the terminal; sending the playing time point of a video clip associated with the target content in the video to the terminal;
the terminal is also used for receiving the playing time point of the video clip sent by the server; determining a target video clip with a score meeting a preset requirement in the video clips; and identifying the playing time point of the target video clip.
In a fourth aspect, an embodiment of the present invention further provides a terminal, including: a transceiver, a processor, a memory, and a computer program stored on the memory and executable on the processor, where the processor is configured to read the program in the memory to implement the steps in the information processing method according to the first aspect of the invention.
In a fifth aspect, an embodiment of the present invention further provides a server, including: a transceiver, a processor, a memory, and a computer program stored on the memory and executable on the processor, where the processor is configured to read the program in the memory to implement the steps in the information processing method according to the second aspect.
In a sixth aspect, the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, implements the steps in the information processing method according to the first aspect or the steps in the information processing method according to the second aspect.
In the embodiment of the invention, after receiving a first operation of a user on a target content in a video playing page, a terminal can determine a target video clip associated with the target content operated by the user in a video, and then identify a playing time point of the target video clip by acquiring the playing time point of the target video clip. Therefore, the user can check the target video clip associated with the operated target content in the video by clicking the marked playing time point, so that the impression on the target content can be deepened, and the aim of improving the information display effect is fulfilled.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive exercise.
FIG. 1 is a flow chart of an information processing method provided by an embodiment of the invention;
FIG. 2 is a schematic diagram illustrating an example of identifying a playing time point of a target video segment according to an embodiment of the present invention;
FIG. 3 is a second flowchart of an information processing method according to an embodiment of the present invention;
fig. 4 is one of the structural diagrams of a terminal provided in the embodiment of the present invention;
fig. 5a is one of the structural diagrams of a determination module in a terminal according to an embodiment of the present invention;
fig. 5b is a second structural diagram of a determination module in the terminal according to the embodiment of the present invention;
fig. 6 is a second structural diagram of a terminal according to an embodiment of the present invention;
FIG. 7 is one of the block diagrams of a server provided by the embodiments of the present invention;
fig. 8 is a structural diagram of a first sending module in the server according to the embodiment of the present invention;
FIG. 9 is a second block diagram of a server according to an embodiment of the present invention;
FIG. 10 is a third block diagram of a server according to an embodiment of the present invention;
fig. 11 is a third structural diagram of a terminal according to an embodiment of the present invention;
fig. 12 is a fourth structural diagram of a server according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of an information processing method provided by an embodiment of the present invention, and is applied to a terminal, as shown in fig. 1, including the following steps:
step 101, after receiving a first operation for a target content in a video playing page, determining a target video segment associated with the target content in the video, wherein a score of the target video segment meets a preset requirement.
The target content may be specific content of interest to the user in the video playing page, such as advertisement content, content including a specific star or thing, and the like. In the embodiment of the invention, during the process of watching the video, if the user sees the interested target content, a first operation aiming at the content can be executed to obtain more information related to the content, wherein the first operation can be a targeted operation such as clicking operation, long-time pressing operation, pressing operation and the like.
After receiving the first operation for the target content, the terminal may determine, in response to the first operation, one or more target video segments in the currently played video that are associated with the target content. A segment associated with the target content may be a segment containing the same or similar elements; for example, when the target content is an automobile advertisement, the associated segments may be the segments of the video in which an automobile appears, or in which an automobile of the same brand appears.
Specifically, the video clips associated with the target content, that is, the video clips containing elements that are the same as or similar to the elements in the target content, may be determined by identifying the elements of each image frame in the video, and a target video clip with a higher association with the target content is then determined, for example by taking as the target video clip a clip that contains the same element as the target content, or the clip that contains the most of that element.
Alternatively, to determine the associated target video segments more quickly, a first request including information of the target content may be sent to a server after the first operation is received, so as to request the server to return one or more video segments in the video that are associated with the target content. The information of the target content may be an image frame or a video segment corresponding to the target content. After receiving the first request, the server may identify the target content, such as identifying an element in the target content and a corresponding classification tag, may further identify one or more video segments in the video that are associated with the target content together with an association degree of each video segment with the target content (for example, represented by a score), and may return each video segment and the corresponding association degree to the terminal, so that the terminal determines the target video clip according to the association degree.
Or, the server may identify the video in advance and send the target identification file for the video to the terminal, and if the terminal starts playing the video, the server may send the target identification file for the video to the terminal. Taking the target content as the advertisement content as an example, the server may identify video segments related to the advertisement in the video in advance, and mark the classification tag of each video segment, so that the terminal may determine, according to the target identification file, the video segments related to the target content operated by the user and having the same classification tag.
The score of the target video segment associated with the target content meets a preset requirement, and specifically, the score of the target video segment may be higher than a preset score, or the score of the target video segment is a video segment with a score higher than a preset score in all video segments associated with the target content in the video, such as a video segment with a score ranking 5. In order to determine the score of the target video segment, after the video segments associated with the target content are determined, the score information of the associated video segments may be obtained first, for example, the segment score information may be obtained from a server, and the server may perform the score according to the element engagement degree, the playing heat degree, the element area ratio, and the like of each associated video segment with the target content.
It should be noted that, when the target content is an advertisement, and when the first request is sent to the server, the advertisement slot ID and the channel ID may also be reported to the server, where the channel ID may be a delivery channel of the advertisement, for example, may be a playing platform, and the server may obtain a classification tag of the advertisement from a corresponding advertisement platform according to the advertisement slot ID and the channel ID, and then issue the classification tag to the terminal.
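By way of a non-limiting illustration, a first request carrying an advertisement could be assembled as in the following sketch; the field names and the Python representation are assumptions made for readability, not the wire format used by the embodiment.

```python
# Hypothetical payload of the first request for an advertisement; all field
# names are illustrative and not prescribed by the embodiment.
captured_frame = b"<jpeg bytes of the frame containing the clicked advertisement>"

first_request = {
    "target_content": captured_frame,     # image frame or clip of the clicked content
    "ad_slot_id": "slot-001",             # advertisement slot ID reported to the server
    "channel_id": "playback-platform-A",  # delivery channel, e.g. the playing platform
}
```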
Optionally, the determining a target video segment in the video, which is associated with the target content, includes:
acquiring a classification label of the target content;
acquiring scoring information of the video clips matched with the classification labels of the target content;
and selecting a target video clip from the video clips according to the grading information.
In an alternative embodiment, the video segment associated with the target content may be determined based on the classification label of the target content, specifically, the classification label of the target content may be obtained first, such as determining the corresponding classification label through an element in the target content, or the classification label of the target content may also be identified by a server by sending information of the target content to the server. For example, if the user clicks on milk in a milk advertisement in a video, the classification tag for acquiring the milk advertisement may be milk or xxx milk.
Then, based on the classification label of the target content, a video segment in the video matching with the classification label of the target content may be further determined, where the matching with the classification label of the target content may be the same as or similar to the classification label of the target content, specifically, the video segment matching with the classification label of the target content may be determined by segmenting the video by content and assigning the classification label to each video segment, or the server identifies the classification label of each segment in the video and determines the video segment matching therewith according to the classification label of the target content.
The above-mentioned obtaining of the rating information of the video clip may be obtaining of the rating information of the video clip from a server in real time, and specifically, the server may score the video clip according to the information of the video clip, such as a clip duration, an area ratio of an element in the video clip related to the target content, an element engagement degree, a clip heat degree, and the like, so that the terminal may obtain the rating information of the video clip from the server. Alternatively, the above-mentioned obtaining of the rating information of the video segments may be to search the rating information of the video segments from a pre-received target identification file sent by the server, where the target identification file includes the rating information of each video segment in the video.
Or, the above-mentioned obtaining of the rating information of the video clip may be that the terminal scores the video clip according to the information of the video clip, such as the clip duration, the area ratio of the elements in the video clip related to the target content, the element engagement degree, the clip popularity, and the like, so as to obtain the rating of the video clip, wherein the clip popularity of the video clip may be obtained from the cloud big data statistics.
In this way, the terminal may select, according to the scoring information of the video segments, a target video segment whose score meets a preset requirement from the video segments, and specifically, when the number of the video segments is multiple, may select, from the video segments, a target video segment whose score is higher or higher than a preset score, where the number of the target video segments may be a preset number, for example, 5 target video segments whose scores rank in the top 5 are selected.
The specific manner of scoring the video segments may be as follows: and scoring the video clips according to at least one of the duration of the video clips matched with the classification labels, the area ratio of elements in the video clips related to the target content, the element engagement degree and the clip heat degree.
The duration of the video clip is the playing duration of the video clip, and the area ratio of the element in the video clip related to the target content may be obtained by identifying the area size M1 of the element in the video clip related to the target content, obtaining the entire screen size M2 of the video clip, and obtaining the area ratio M1/M2 according to the ratio of the two.
The element engagement degree may be understood as the degree to which an element related to the target content in a video segment matches user preferences. Taking a video segment that is an advertisement featuring a certain star as an example: if the gender of the star is the same as the gender of the users who click on the segment most, the element engagement degree may be the ratio of the number of users of the gender that clicks the segment most to the number of users of the gender that clicks the segment least; otherwise, the element engagement degree may be the ratio of the number of users of the gender that clicks the segment least to the number of users of the gender that clicks the segment most. The gender that clicks on the video segment most may be determined from the male-to-female ratio among the users who click on the segment, as counted by the server, and the server may infer a user's gender from the operation behavior on the user device that clicks on the video segment.
The segment heat degree may represent the degree of user attention the video segment receives and is generally related to the play count. For example, if no star appears in the video segment, the segment heat degree may be the current play count of the video; if a star appears in the video segment, the segment heat degree may be the current play count of the video × (the star's video play count this month / the average play count this month of the top-N ranked stars), where N may be determined according to actual needs and the star may be a leading actor in the video, such as the lead of a movie.
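To make the three quantities above concrete, the following sketch computes them as just described; the click statistics, play counts and element areas are assumed to be supplied by the server's analytics, and the helper names are illustrative rather than part of the embodiment.

```python
def area_ratio(element_area_px, frame_area_px):
    # M = M1 / M2: area of the element related to the target content divided
    # by the full frame area of the video clip
    return element_area_px / frame_area_px

def element_engagement(star_gender, clicks_by_gender):
    # F: if the star's gender matches the gender that clicks the clip most,
    # F = (most clicks by a gender) / (fewest clicks by a gender); otherwise
    # the inverse ratio is used
    majority_gender = max(clicks_by_gender, key=clicks_by_gender.get)
    most = max(clicks_by_gender.values())
    fewest = min(clicks_by_gender.values()) or 1   # guard against zero clicks
    return most / fewest if star_gender == majority_gender else fewest / most

def segment_heat(video_plays, star_plays_this_month=None, top_n_avg_plays=None):
    # H: the current play count when no star appears; otherwise weighted by
    # the star's monthly plays relative to the average of the top-N ranked stars
    if star_plays_this_month is None:
        return video_plays
    return video_plays * (star_plays_this_month / top_n_avg_plays)
```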
In this way, the score of each video clip may be determined according to one or more of the duration of each video clip, the area ratio of the element in the clip related to the target content, the element engagement degree, and the clip heat degree, where the score of each video clip may be positively related to the clip duration, the area ratio, the element engagement degree, and the clip heat degree, that is, the longer the clip duration, the larger the area ratio of the related element, the higher the element engagement degree, or the larger the clip heat degree, the higher the score.
For example, the score of a video segment may be calculated as the product of its segment duration and a fixed coefficient, as the product of the area ratio of the elements related to the target content in the segment and a fixed coefficient, as the product of its element engagement degree and a fixed coefficient, or as the product of its segment heat degree and a fixed coefficient, where the specific values of the fixed coefficients may be set as required. The score may also be calculated as the product of two or more of the segment duration, the area ratio of the elements related to the target content, the element engagement degree, and the segment heat degree; the specific calculation formula is not limited here.
More specifically, to obtain a more reliable segment score, the server or the terminal may calculate the score of the video segment from the duration of the video segment, the area ratio of the elements in the video segment related to the target content, the element engagement degree and the segment heat degree according to the formula S = M × F × H × √t, wherein S represents the segment score, M represents the area ratio of elements related to the target content in the video segment, F denotes the degree of engagement of the elements in the video segment, H denotes the segment heat, and t denotes the segment duration. In this way, the score of a video segment is in direct proportion to the area ratio of the relevant elements, the element engagement degree, the segment heat, and the square root of the segment duration.
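A minimal sketch of the segment score under the formula above; the feature values M, F, H and t are assumed to have been obtained as described in the preceding paragraphs.

```python
import math

def clip_score(m, f, h, t_seconds):
    # S = M * F * H * sqrt(t): proportional to the area ratio of the related
    # elements, the element engagement degree, the segment heat degree, and
    # the square root of the segment duration
    return m * f * h * math.sqrt(t_seconds)

# e.g. clip_score(0.3, 1.5, 1_200_000, 20) scores a 20-second segment
```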
It should be noted that, for the identification of the classification tags, a model may be trained in advance on collected related materials to recognize the classification tags corresponding to different materials. Taking advertisement material as an example: first, data collection may be performed, for example by using a crawler or a third-party open-source gallery to search for materials of the same type via the text tags corresponding to the target advertisements, or by using the search-by-image function of an existing search engine. Then, data labelling is performed, for example by annotating the elements in the advertisement material, such as people, things and logos, with an image annotation tool such as labelImg or Annotorious. Next, a neural network model or another classification model may be trained on the labelled advertisement material, so that the trained model determines the classification label of a material by recognizing the elements in it. Finally, data verification may be performed, for example by extracting 20% of the training data and 10% of non-training data to verify the training result, and the classification label recognition model is finally obtained through training. In this way, with the trained classification label recognition model, the classification label of each segment can be recognized quickly and accurately.
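The embodiment does not prescribe a particular model, so the sketch below merely assumes a PyTorch/torchvision image classifier fine-tuned on the collected and labelled advertisement material, with the described 20%/10% split reserved for the verification step; paths and hyperparameters are placeholders.

```python
import torch
from torch import nn
from torchvision import datasets, models, transforms

# Assumed layout: labelled ad material stored as data/train/<classification_label>/*.jpg
tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_set = datasets.ImageFolder("data/train", transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))  # one output per classification label
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):                       # small epoch count, sketch only
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```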
And 102, acquiring the playing time point of the target video clip.
The obtaining of the playing time point of the target video segment may be a step of obtaining the playing time point of the target video segment in real time from a server, such as the playing start time of the target video segment on the playing progress bar of the video, or a step of searching the playing time point of the target video segment according to a pre-received target identification file for the video sent by the server, where the playing time point information of each video segment in the video may be identified by the server.
Optionally, the step 101 includes:
sending a first request to a server, wherein the first request comprises the target content;
receiving a playing time point of a video clip which is sent by the server and is associated with the target content;
determining a target video segment in the video segments;
the step 102 includes:
and determining the playing time point of the target video clip according to the playing time point of the video clip.
In this embodiment, in order to obtain the playing time point of the target video segment associated with the target content more quickly, a first request including the target content may be sent to the server after the first operation is received, either when no identification file for the video sent by the server has been received in advance, or when an identification file for the video has been received in advance but does not include the playing time point information of each video segment. Based on the first request, the server identifies the target content, such as identifying an element in the target content and a corresponding classification tag, analyses the video according to the identified information, such as finding segments in the video that contain the same element or the same classification tag as the target content, thereby determines the video segments in the video that are associated with the target content, and finally returns the playing time points of the video segments associated with the target content to the terminal.
After receiving the playing time point of the video clip associated with the target content returned by the server, the terminal may determine a target video clip in the video clip, and specifically may determine, according to the score information of the video clip, a target video clip whose score meets a preset requirement. Then, the playing time point of the target video clip can be found out according to the playing time point of the video clip.
Therefore, by obtaining the playing time point of the target video clip from the server, the system resource of the terminal can be saved, and the efficiency of identifying the playing time point can be improved.
Optionally, before step 102, the method further includes:
acquiring a target identification file aiming at the video, wherein the target identification file comprises playing time point information of a video clip associated with the target content;
the step 102 comprises:
and searching the playing time point of the target video clip from the target identification file.
In this manner, the video may be identified in advance by the server, for example, before receiving a first operation for a target content in the video, the server invokes an artificial intelligence AI service to identify a target segment in the video, to obtain target information of the target segment, and generates a target identification file for the video by combining the target information of each target segment, where the target identification file may include playing time point information of a video segment associated with the target content, so that the terminal may search from the target identification file to obtain playing time point information of each segment, and may further determine a playing time point of the target video segment.
The server may identify a target segment in the video, where the target segment may be an advertisement segment, a segment of a certain star concerned by a user, or a segment of a specific scene, and may further identify one or more items of a classification tag, a play start time, a play end time, position information of a target element, size information, and segment score information of each target segment, so as to obtain a target identification file including the target information of each target segment. It should be noted that the target identification file may be stored in different formats, such as a database, json, xml, txt, or a custom format.
When the target information includes the classification tag, the playing start time, the playing end time and the segment score information of each target segment, the terminal may search for at least one video segment that is the same as the classification tag of the target content from the target identification file by acquiring the classification tag of the target content and obtain the score information of each video segment when receiving a first operation for the target content in the video, select a target video segment that has a higher score and the same classification tag as the target content according to the score information of each video segment, and then search for the playing start time and the playing end time of the target video segment from the target identification file, so as to obtain the playing time point information of the target video segment.
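As an illustration of the terminal-side lookup just described, the sketch below assumes a JSON-like target identification file whose segment records carry a classification label, play start/end times and a score; the field names and the top-5 cut-off are assumptions, the embodiment also allowing database, xml, txt or custom formats.

```python
def lookup_target_segments(identification_file, target_label, top_k=5):
    # keep only segments whose classification label matches the target content
    matches = [seg for seg in identification_file["segments"]
               if seg["label"] == target_label]
    # pick the highest-scoring segments as the target video segments
    matches.sort(key=lambda seg: seg["score"], reverse=True)
    # return their playing time points (play start and end times)
    return [(seg["start"], seg["end"]) for seg in matches[:top_k]]

# e.g. marks = lookup_target_segments(ad_identification_file, "milk")
```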
And step 103, identifying the playing time point.
After obtaining the playing time point of the target video segment, the playing time point of the target video segment may be identified, specifically, the playing time point of the target video segment may be identified by using a preset symbol on the playing progress bar of the video, where the preset symbol may be a dot with a specific color or a symbol with another shape, and the playing time point may be a playing start time of the target video segment.
For example, as shown in fig. 2, dots may be used on a playing progress bar of a currently playing video to identify the determined playing time points of the target video segments related to the target content clicked by the user, so that the user knows the playing positions of the related segments, and may view the corresponding target video segments by clicking the identifiers. When the user clicks the marks at other positions, the original marks can be restored, and the newly clicked marks can be displayed by different symbols.
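To illustrate how the identified playing time points could be placed on the progress bar, the following sketch maps each play start time to a fractional bar position; the actual rendering of the preset symbol is player-specific and is therefore omitted.

```python
def marker_positions(play_start_times, video_duration):
    # fraction along the progress bar at which to draw each preset symbol
    # (0.0 = start of the video, 1.0 = end)
    return [t / video_duration for t in play_start_times]

# e.g. marker_positions([12.0, 95.5, 301.0], 3600.0) -> [0.0033, 0.0265, 0.0836]
```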
Optionally, after the step 103, the method further includes:
and when the number of the target video clips is multiple, sequentially playing each target video clip according to the playing sequence of each target video clip.
In order to further deepen the impression of the user on the operated target content, after the playing time point of the target video segment is identified, each target video segment can be sequentially played according to the playing sequence of each target video segment, so that the user can quickly view the segments related to the target content in the current video, for example, if the user clicks a milk advertisement in the video, the segments with milk elements can be identified on the playing progress bar, and the segments with milk elements can be sequentially played for the user.
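A sketch of this sequential playback under an assumed player interface; seek and play_until are hypothetical terminal player calls, not part of the embodiment.

```python
def play_target_segments(player, target_segments):
    # target_segments: list of (start, end) playing time points of the
    # target video clips, possibly unordered
    for start, end in sorted(target_segments):  # playing order = order within the video
        player.seek(start)        # hypothetical player API
        player.play_until(end)    # hypothetical player API
```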
In the information processing method in this embodiment, after receiving a first operation of a user for a target content in a video playing page, a terminal may determine a target video segment associated with the target content operated by the user in a video, and then identify a playing time point of the target video segment by obtaining the playing time point of the target video segment. Therefore, the user can check the target video clip associated with the operated target content in the video by clicking the marked playing time point, so that the impression on the target content can be deepened, and the aim of improving the information display effect is fulfilled.
The following description will be given, by way of example, with reference to fig. 2, to illustrate an embodiment of the present invention:
the AI server can perform offline recognition on the video played by the terminal side, recognize the advertisement-related segment therein, and when the terminal enters the video playing page 20, can acquire the advertisement recognition file corresponding to the video from the AI server.
In the process of watching the video, the user may click on an advertisement of interest on the video playing page 20, such as click on milk 21 in a milk advertisement or click on a car 22 in a car advertisement, the terminal may send a request to the AI server, where the request carries the content of the advertisement clicked by the user, and the AI server will identify a classification tag of the content of the advertisement, such as a car and milk, and issue the identification result, i.e., the classification tag, to the terminal.
After the terminal obtains the classification label of the advertisement clicked by the user, the playing time point and the score of the relevant segment corresponding to the classification label can be found out from the advertisement identification file obtained in advance based on the classification label, the terminal can select 5 relevant segments with the highest score according to a preset score rule, then pop up a video playing progress bar 23, and mark is carried out on the playing time point corresponding to each relevant segment, as shown by a small black dot 24 in the figure.
Referring to fig. 3, fig. 3 is a flowchart of an information processing method according to an embodiment of the present invention, and as shown in fig. 3, the method includes the following steps:
step 301, a first request sent by a terminal is obtained, where the first request includes target content in a video playing page.
Step 302, feeding back the playing time point of the video segment associated with the target content in the video to the terminal.
Optionally, the step 302 includes:
identifying a classification label for the target content;
determining a video segment matching the classification label of the target content;
identifying a playing time point of the video clip in the video;
and feeding back the playing time point of the video clip to the terminal.
Optionally, after determining the video segment matching the classification label of the target content, the method further includes:
scoring the video clips to obtain scoring information of the video clips;
and sending the grading information of the video clips to the terminal.
Optionally, the scoring the video segments includes:
and scoring the video clips according to at least one of the duration of the video clips, the area ratio of elements related to the target content in the video clips, the fitness of the elements and the heat of the clips.
Optionally, the scoring the video segments according to at least one of duration of the video segments, area ratio of elements in the video segments related to the target content, element engagement degree, and segment heat degree includes:
calculating the score of the video clip according to the formula S = M × F × H × √t, wherein S represents the clip score, M represents the area ratio of elements related to the target content in the video clip, F represents the degree of engagement of the elements in the video clip, H represents the clip heat degree, and t represents the clip duration.
Optionally, before step 301, the method further includes:
identifying a target segment in the video to obtain target information of the target segment, wherein the target information comprises at least one of a classification label, playing time point information, position information, size information and scoring information;
and sending the target information of the target segment to the terminal.
The acquiring the first request sent by the terminal includes:
and under the condition that the target information does not comprise the playing time point information, acquiring a first request sent by the terminal.
It should be noted that, this embodiment is taken as an implementation of the server side corresponding to the embodiment shown in fig. 1, and specific implementation thereof may refer to relevant descriptions in the embodiment shown in fig. 1, and for avoiding repeated descriptions, this embodiment is not described again.
The information processing method in the embodiment acquires a first request sent by a terminal, wherein the first request comprises target content in a video playing page; and feeding back the playing time point of the video clip associated with the target content in the video to the terminal. In this way, the server can feed back the playing time point of the video clip associated with the target content in the video to the terminal by acquiring the target content in the playing page of the terminal, so that the terminal can identify the playing time point of the target video clip associated with the target content in the video based on the playing time point, the impression of the user on the target content is deepened, and the information display effect is further improved.
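A compact sketch of the server-side flow of steps 301 and 302 under stated assumptions: classifier stands for the trained classification label recognition model, video_index for a per-video list of pre-identified segments with their feature values, and the request is assumed to also identify the video being played.

```python
import math

def handle_first_request(request, video_index, classifier):
    # classification label of the target content carried in the first request
    label = classifier.predict(request["target_content"])
    # video segments whose classification label matches the target content
    segments = [seg for seg in video_index[request["video_id"]]
                if seg["label"] == label]
    # score each matching segment with S = M * F * H * sqrt(t)
    for seg in segments:
        seg["score"] = seg["M"] * seg["F"] * seg["H"] * math.sqrt(seg["t"])
    # playing time points (and scores) fed back to the terminal
    return {"segments": [{"start": seg["start"], "score": seg["score"]}
                         for seg in segments]}
```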
The embodiment of the invention also provides an information processing system, which comprises a terminal and a server, wherein the terminal is used for sending a first request comprising the target content to the server after receiving the first operation aiming at the target content in the video playing page;
the server is used for receiving the first request sent by the terminal; sending the playing time point of a video clip associated with the target content in the video to the terminal;
the terminal is also used for receiving the playing time point of the video clip sent by the server; determining a target video clip with a score meeting a preset requirement in the video clips; and identifying the playing time point of the target video clip.
The target content may be specific content of interest to the user in the video playing page, such as advertisement content, content including a specific star or thing, and the like. In this embodiment, if the user sees the target content of interest while watching the video, a first operation on the content may be performed to obtain more information related to the content, where the first operation may be a click operation, a long-press operation, a press operation, or other specific operations.
After receiving the first operation for the target content, the terminal may send a first request including the target content to a server, so that the server identifies the target content based on the first request, for example, identifies elements, corresponding classification tags, and the like in the target content, and further analyzes the video according to the identified information, for example, analyzes a segment in the video that includes the same elements or the same classification tags as the target content, thereby determining a video segment in the video that is associated with the target content, and finally, may send a play time point of the video segment associated with the target content to the terminal.
After receiving the playing time point of the video clip associated with the target content returned by the server, the terminal may first determine a target video clip in the video clip, specifically, may determine the target video clip of which the score meets a preset requirement by acquiring the score information of the video clip. Then, the playing time point of the target video clip can be found out according to the playing time point of the video clip, and finally, the playing time point of the target video clip can be identified.
For a specific implementation manner of obtaining the scoring information of the video segment and identifying the playing time point, reference may be made to related descriptions in the foregoing method embodiments, and in order to avoid repetition, details are not described here again in this embodiment.
In the information processing system in the embodiment of the invention, after receiving a first operation of a user on a target content in a video playing page, a terminal can send a first request including the target content to a server, and the server identifies a playing time point of a video clip in a video, which is associated with the target content operated by the user, and feeds the playing time point back to the terminal, so that the terminal can further determine the target video clip with a score meeting a preset requirement, and identify the playing time point of the target video clip according to the playing time point fed back by the server. Therefore, the user can check the target video clip associated with the operated target content in the video by clicking the marked playing time point, so that the impression on the target content can be deepened, and the aim of improving the information display effect is fulfilled.
The embodiment of the invention also provides the terminal. Referring to fig. 4, fig. 4 is a structural diagram of a terminal according to an embodiment of the present invention. Because the principle of solving the problem of the terminal is similar to the information processing method in the embodiment of the invention, the implementation of the terminal can refer to the implementation of the method, and repeated details are not repeated.
As shown in fig. 4, the terminal 400 includes:
the determining module 401 is configured to determine, after receiving a first operation for a target content in a video playing page, a target video segment associated with the target content in the video, where a score of the target video segment meets a preset requirement;
a first obtaining module 402, configured to obtain a playing time point of the target video segment;
an identification module 403, configured to identify the playing time point.
Optionally, as shown in fig. 5a, the determining module 401 includes:
a first obtaining unit 4011, configured to obtain a classification label of the target content;
a second obtaining unit 4012, configured to obtain scoring information of a video segment that matches the classification tag of the target content;
the selecting unit 4013 is configured to select a target video segment from the video segments according to the scoring information of the video segments.
Optionally, the second obtaining unit 4012 is configured to score the video segments according to at least one of duration of the video segments matched with the classification tags, area ratio of elements in the video segments related to the target content, element fitness, and segment popularity, so as to obtain scoring information of the video segments.
Optionally, the second obtaining unit 4012 is configured to calculate the score of the video clip according to the formula S = M × F × H × √t, wherein S represents the clip score, M represents the area ratio of elements related to the target content in the video clip, F represents the degree of engagement of the elements in the video clip, H represents the clip heat degree, and t represents the clip duration.
Optionally, as shown in fig. 5b, the determining module 401 includes:
a sending unit 4014, configured to send a first request to a server, where the first request includes the target content;
a receiving unit 4015, configured to receive a playing time point of a video clip sent by the server and associated with the target content;
a determining unit 4016, configured to determine a target video segment in the video segments;
the first obtaining module 402 is configured to determine a playing time point of the target video segment according to the playing time point of the video segment.
Optionally, as shown in fig. 6, the terminal 400 further includes:
a second obtaining module 404, configured to obtain a target identification file for the video, where the target identification file includes playing time point information of a video segment associated with the target content;
the first obtaining module 402 is configured to search for a playing time point of the target video segment from the target identification file.
The terminal provided by the embodiment of the present invention can execute the above method embodiment, and the implementation principle and technical effect are similar, which are not described herein again.
The terminal 400 of the embodiment of the present invention may determine, after receiving a first operation of a user for a target content in a video playing page, a target video segment in a video that is associated with the target content operated by the user, and then identify a playing time point of the target video segment by acquiring the playing time point of the target video segment. Therefore, the user can check the target video clip associated with the operated target content in the video by clicking the marked playing time point, so that the impression on the target content can be deepened, and the aim of improving the information display effect is fulfilled.
The embodiment of the invention also provides a server. Referring to fig. 7, fig. 7 is a block diagram of a server according to an embodiment of the present invention. Because the principle of solving the problem of the server is similar to the information processing method in the embodiment of the invention, the implementation of the server can refer to the implementation of the method, and repeated details are not repeated.
As shown in fig. 7, the server 700 includes:
an obtaining module 701, configured to obtain a first request sent by a terminal, where the first request includes target content in a video playing page;
a first sending module 702, configured to feed back, to the terminal, a playing time point of a video segment in the video that is associated with the target content.
Optionally, as shown in fig. 8, the first sending module 702 includes:
a first identifying unit 7021, configured to identify a classification label of the target content;
a determining unit 7022, configured to determine a video segment matching the classification label of the target content;
a second identifying unit 7023, configured to identify a playing time point of the video segment in the video;
a sending unit 7024, configured to feed back the playing time point of the video segment to the terminal.
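A hedged Python sketch of this server-side path is given below; the classifier and the segment index are placeholders standing in for whatever recognition and indexing the server actually uses.

```python
from typing import Callable, Dict, List, Tuple

def handle_first_request(
    target_content: str,
    classify: Callable[[str], str],                        # stand-in for unit 7021
    segment_index: Dict[str, List[Tuple[float, float]]],   # label -> [(start, duration), ...]
) -> dict:
    """Resolve the target content to a classification label, match video segments,
    and return their playing time points for the terminal. Placeholder helpers only."""
    label = classify(target_content)                        # first identifying unit 7021
    segments = segment_index.get(label, [])                 # determining unit 7022
    time_points = [start for start, _duration in segments]  # second identifying unit 7023
    return {"label": label, "time_points": time_points}     # payload for sending unit 7024
```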
Optionally, as shown in fig. 9, the server 700 further includes:
a scoring module 703, configured to score the video segments to obtain scoring information of the video segments;
a second sending module 704, configured to send the scoring information of the video segment to the terminal.
Optionally, the scoring module 703 is configured to score the video segment according to at least one of a duration of the video segment, an area ratio of an element in the video segment related to the target content, an element engagement degree, and a segment heat degree.
Optionally, the scoring module 703 is configured to calculate the score of the video clip according to a formula, wherein S represents the clip score, M represents the area ratio of elements related to the target content in the video clip, F represents the degree of engagement of the elements in the video clip, H represents the clip heat degree, and t represents the clip duration.
Optionally, as shown in fig. 10, the server 700 further includes:
the identification module 705 is configured to identify a target segment in the video to obtain target information of the target segment, where the target information includes at least one of a classification tag, playing time point information, position information, size information, and score information;
a third sending module 706, configured to send target information of the target segment to the terminal;
the obtaining module 701 is configured to obtain the first request sent by the terminal when the target information does not include the play time point information.
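To illustrate this pre-identification path, the sketch below models the pushed target information and the condition under which the terminal still issues the first request; all field names are assumptions made for the sketch.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class TargetInfo:
    label: Optional[str] = None                 # classification label
    play_time_point: Optional[float] = None     # playing time point information
    position: Optional[Tuple[int, int]] = None  # position of the element in the frame
    size: Optional[Tuple[int, int]] = None      # size of the element
    score: Optional[float] = None               # scoring information

def needs_first_request(info: TargetInfo) -> bool:
    # The terminal falls back to sending the first request only when the pushed
    # target information does not already include a playing time point.
    return info.play_time_point is None
```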
The server provided by the embodiment of the present invention may execute the above method embodiment, and the implementation principle and technical effect are similar, which are not described herein again.
By acquiring the first request sent by the terminal, which includes the target content in the video playing page, the server 700 of the embodiment of the present invention can feed back to the terminal the playing time point of the video segment in the video that is associated with the target content, so that the terminal can mark the playing time point of the target video segment based on this feedback, deepening the user's impression of the target content and further improving the information display effect.
The embodiment of the invention also provides another terminal. Because the principle by which the terminal solves the problem is similar to that of the information processing method in the embodiment of the invention, the implementation of the terminal may refer to the implementation of the method, and repeated details are not described again. As shown in fig. 11, the terminal according to the embodiment of the present invention includes: a processor 1100, which reads the program in the memory 1120 and performs the following processes:
after a first operation aiming at target content in a video playing page is received, determining a target video clip associated with the target content in the video, wherein the score of the target video clip meets a preset requirement;
acquiring a playing time point of the target video clip;
and identifying the playing time point.
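As a sketch of the marking step, the helper below maps a playing time point onto a progress-bar position so the player can draw a marker there; the pixel-based rendering model is an assumption, since the patent does not prescribe how the time point is displayed.

```python
def mark_time_point(progress_bar_width_px: int,
                    video_duration_s: float,
                    play_time_point_s: float) -> int:
    """Convert a playing time point into a pixel offset on the progress bar,
    illustrating the 'identify the playing time point' step in a player UI."""
    fraction = min(max(play_time_point_s / video_duration_s, 0.0), 1.0)
    return round(fraction * progress_bar_width_px)
```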
A transceiver 1110 for receiving and transmitting data under the control of the processor 1100.
In fig. 11, the bus architecture may include any number of interconnected buses and bridges, linking together one or more processors, represented by processor 1100, and various circuits of memory, represented by memory 1120. The bus architecture may also link together various other circuits such as peripherals, voltage regulators and power management circuits, which are well known in the art and therefore are not described any further herein. The bus interface provides an interface. The transceiver 1110 may be a number of elements, including a transmitter and a receiver, providing a means for communicating with various other apparatus over a transmission medium. For different user devices, the user interface 1130 may also be an interface capable of connecting to required devices, including but not limited to a keypad, a display, a speaker, a microphone, a joystick, and the like.
The processor 1100 is responsible for managing the bus architecture and general processing, and the memory 1120 may store data used by the processor 1100 in performing operations.
The processor 1100 is also configured to read the program in the memory 1120, and execute the following steps:
acquiring a classification label of the target content;
acquiring scoring information of the video clips matched with the classification labels of the target content;
and selecting a target video clip from the video clips according to the grading information of the video clips.
The processor 1100 is also configured to read the program in the memory 1120, and execute the following steps:
and scoring the video clips according to at least one of the duration of the video clips matched with the classification labels, the area ratio of elements in the video clips related to the target content, the element engagement degree and the clip heat degree, so as to obtain scoring information of the video clips.
The processor 1100 is also configured to read the program in the memory 1120, and execute the following steps:
and calculating the score of the video clip according to a formula, wherein S represents the clip score, M represents the area ratio of elements related to the target content in the video clip, F represents the degree of engagement of the elements in the video clip, H represents the clip heat degree, and t represents the clip duration.
The processor 1100 is also configured to read the program in the memory 1120, and execute the following steps:
acquiring a target identification file for the video through the transceiver 1110, wherein the target identification file includes playing time point information of a video segment associated with the target content;
and searching the playing time point of the target video clip from the target identification file.
The processor 1100 is also configured to read the program in the memory 1120, and execute the following steps:
sending a first request to a server through a transceiver 1110, the first request including the target content;
receiving, by the transceiver 1110, a play time point of a video clip associated with the target content transmitted by the server;
determining a target video segment in the video segments;
and determining the playing time point of the target video clip according to the playing time point of the video clip.
The terminal provided by the embodiment of the present invention can execute the above method embodiment, and the implementation principle and technical effect are similar, which are not described herein again.
The embodiment of the invention also provides a server. Because the principle by which the server solves the problem is similar to that of the information processing method in the embodiment of the invention, the implementation of the server may refer to the implementation of the method, and repeated details are not described again. As shown in fig. 12, the server according to the embodiment of the present invention includes: a processor 1200, configured to read the program in the memory 1220 and execute the following processes:
acquiring a first request sent by a terminal through a transceiver 1210, wherein the first request comprises target content in a video playing page;
the playing time point of the video segment associated with the target content in the video is fed back to the terminal through the transceiver 1210.
A transceiver 1210 for receiving and transmitting data under the control of the processor 1200.
In fig. 12, the bus architecture may include any number of interconnected buses and bridges, linking together one or more processors, represented by processor 1200, and various circuits of memory, represented by memory 1220. The bus architecture may also link together various other circuits such as peripherals, voltage regulators and power management circuits, which are well known in the art and therefore are not described any further herein. The bus interface provides an interface. The transceiver 1210 may be a number of elements, including a transmitter and a receiver, that provide a means for communicating with various other apparatus over a transmission medium.
The processor 1200 is responsible for managing the bus architecture and general processing, and the memory 1220 may store data used by the processor 1200 in performing operations.
The processor 1200 is further configured to read the computer program and execute the following steps:
identifying a classification label for the target content;
determining a video segment matching the classification label of the target content;
identifying a playing time point of the video clip in the video;
and feeding back the playing time point of the video clip to the terminal.
The processor 1200 is further configured to read the computer program and execute the following steps:
scoring the video clips to obtain scoring information of the video clips;
and sending the grading information of the video clips to the terminal.
The processor 1200 is further configured to read the computer program and execute the following steps:
and scoring the video clips according to at least one of the duration of the video clips, the area ratio of elements related to the target content in the video clips, the fitness of the elements and the heat of the clips.
The processor 1200 is further configured to read the computer program and execute the following steps:
and calculating the score of the video clip according to a formula, wherein S represents the clip score, M represents the area ratio of elements related to the target content in the video clip, F represents the degree of engagement of the elements in the video clip, H represents the clip heat degree, and t represents the clip duration.
The processor 1200 is further configured to read the computer program and execute the following steps:
identifying a target segment in the video to obtain target information of the target segment, wherein the target information comprises at least one of a classification label, playing time point information, position information, size information and scoring information;
sending the target information of the target segment to the terminal;
and under the condition that the target information does not comprise the playing time point information, acquiring a first request sent by the terminal.
The server provided by the embodiment of the present invention may execute the above method embodiment, and the implementation principle and technical effect are similar, which are not described herein again.
Furthermore, the computer-readable storage medium of the embodiment of the present invention is used for storing a computer program, and in one implementation, the computer program can be executed by a processor to implement the following steps:
after a first operation aiming at target content in a video playing page is received, determining a target video clip associated with the target content in the video, wherein the score of the target video clip meets a preset requirement;
acquiring a playing time point of the target video clip;
and identifying the playing time point.
The computer program is further executable by a processor to perform the steps of:
acquiring a classification label of the target content;
acquiring scoring information of the video clips matched with the classification labels of the target content;
and selecting a target video clip from the video clips according to the grading information of the video clips.
The computer program is further executable by a processor to perform the steps of:
and scoring the video clips according to at least one of the duration of the video clips matched with the classification labels, the area ratio of elements in the video clips related to the target content, the element engagement degree and the clip heat degree, so as to obtain scoring information of the video clips.
The computer program is further executable by a processor to perform the steps of:
and calculating the score of the video clip according to a formula, wherein S represents the clip score, M represents the area ratio of elements related to the target content in the video clip, F represents the degree of engagement of the elements in the video clip, H represents the clip heat degree, and t represents the clip duration.
The computer program is further executable by a processor to perform the steps of:
acquiring a target identification file aiming at the video, wherein the target identification file comprises playing time point information of a video clip associated with the target content;
and searching the playing time point of the target video clip from the target identification file.
The computer program is further executable by a processor to perform the steps of:
sending a first request to a server, wherein the first request comprises the target content;
receiving a playing time point of a video clip which is sent by the server and is associated with the target content;
determining a target video segment in the video segments;
and determining the playing time point of the target video clip according to the playing time point of the video clip.
In another embodiment, the computer program is executable by a processor to perform the steps of:
acquiring a first request sent by a terminal, wherein the first request comprises target content in a video playing page;
and feeding back the playing time point of the video clip associated with the target content in the video to the terminal.
The computer program is executable by a processor to implement the steps of:
identifying a classification label for the target content;
determining a video segment matching the classification label of the target content;
identifying a playing time point of the video clip in the video;
and sending the playing time point of the video clip to the terminal.
The computer program is executable by a processor to implement the steps of:
scoring the video clips to obtain scoring information of the video clips;
and sending the grading information of the video clips to the terminal.
The computer program is executable by a processor to implement the steps of:
and scoring the video clips according to at least one of the duration of the video clips, the area ratio of elements related to the target content in the video clips, the fitness of the elements and the heat of the clips.
The computer program is executable by a processor to implement the steps of:
and calculating the score of the video clip according to a formula, wherein S represents the clip score, M represents the area ratio of elements related to the target content in the video clip, F represents the degree of engagement of the elements in the video clip, H represents the clip heat degree, and t represents the clip duration.
The computer program is executable by a processor to implement the steps of:
identifying a target segment in the video to obtain target information of the target segment, wherein the target information comprises at least one of a classification label, playing time point information, position information, size information and scoring information;
sending the target information of the target segment to the terminal;
and under the condition that the target information does not comprise the playing time point information, acquiring a first request sent by the terminal.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be physically included alone, or two or more units may be integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute some of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (16)

1. An information processing method applied to a terminal, the method comprising:
after a first operation aiming at target content in a video playing page is received, determining a target video clip associated with the target content in the video, wherein the score of the target video clip meets a preset requirement;
acquiring a playing time point of the target video clip;
and identifying the playing time point.
2. The method of claim 1, wherein the determining a target video segment of the video associated with the target content comprises:
acquiring a classification label of the target content;
acquiring scoring information of the video clips matched with the classification labels;
and selecting a target video clip from the video clips according to the grading information.
3. The method of claim 2, wherein the obtaining scoring information of the video segments matching the classification label comprises:
and scoring the video clips according to at least one of the duration of the video clips matched with the classification labels, the area ratio of elements in the video clips related to the target content, the element engagement degree and the clip heat degree, so as to obtain scoring information of the video clips.
4. The method according to claim 3, wherein the scoring the video segments according to at least one of duration of the video segments matching the classification tags, area ratio of elements in the video segments related to the target content, degree of engagement of the elements, and segment heat degree to obtain scoring information of the video segments comprises:
calculating the score of the video clip according to a formula, wherein S represents the clip score, M represents the area ratio of elements related to the target content in the video clip, F represents the degree of engagement of the elements in the video clip, H represents the clip heat degree, and t represents the clip duration.
5. The method according to claim 1, wherein before the obtaining the playing time point of the target video segment, the method further comprises:
acquiring a target identification file aiming at the video, wherein the target identification file comprises playing time point information of a video clip associated with the target content;
the obtaining of the playing time point of the target video clip includes:
and searching the playing time point of the target video clip from the target identification file.
6. The method of claim 1, wherein the determining a target video segment of the video associated with the target content comprises:
sending a first request to a server, wherein the first request comprises the target content;
receiving a playing time point of a video clip which is sent by the server and is associated with the target content;
determining a target video segment in the video segments;
the obtaining of the playing time point of the target video clip includes:
and determining the playing time point of the target video clip according to the playing time point of the video clip.
7. An information processing method applied to a server is characterized by comprising the following steps:
acquiring a first request sent by a terminal, wherein the first request comprises target content in a video playing page;
and feeding back the playing time point of the video clip associated with the target content in the video to the terminal.
8. The method according to claim 7, wherein the feeding back the playing time point of the video segment associated with the target content in the video to the terminal comprises:
identifying a classification label for the target content;
determining a video segment matching the classification label of the target content;
identifying a playing time point of the video clip in the video;
and feeding back the playing time point of the video clip to the terminal.
9. The method of claim 8, wherein after determining the video segments that match the classification label of the target content, the method further comprises:
scoring the video clips to obtain scoring information of the video clips;
and sending the grading information of the video clips to the terminal.
10. The method of claim 9, wherein scoring the video segments comprises:
and scoring the video clips according to at least one of the duration of the video clips, the area ratio of elements related to the target content in the video clips, the fitness of the elements and the heat of the clips.
11. The method of claim 10, wherein scoring the video segments according to at least one of duration of the video segments, area ratio of elements in the video segments related to the target content, degree of engagement of the elements, and degree of hotness of the segments comprises:
calculating the score of the video clip according to a formula, wherein S represents the clip score, M represents the area ratio of elements related to the target content in the video clip, F represents the degree of engagement of the elements in the video clip, H represents the clip heat degree, and t represents the clip duration.
12. The method of claim 7, wherein before the obtaining the first request sent by the terminal, the method further comprises:
identifying a target segment in the video to obtain target information of the target segment, wherein the target information comprises at least one of a classification label, playing time point information, position information, size information and scoring information;
sending the target information of the target segment to the terminal;
the acquiring the first request sent by the terminal includes:
and under the condition that the target information does not comprise the playing time point information, acquiring a first request sent by the terminal.
13. An information processing system is characterized by comprising a terminal and a server, wherein the terminal is used for sending a first request comprising target content to the server after receiving a first operation aiming at the target content in a video playing page;
the server is used for receiving the first request sent by the terminal; sending the playing time point of a video clip associated with the target content in the video to the terminal;
the terminal is also used for receiving the playing time point of the video clip sent by the server; determining a target video clip with a score meeting a preset requirement in the video clips; and identifying the playing time point of the target video clip.
14. A terminal comprising a transceiver, a processor, a memory and a computer program stored on the memory and executable on the processor, characterized in that the processor is configured to read a program in the memory to implement the steps in the information processing method according to any one of claims 1 to 6.
15. A server comprising a transceiver, a processor, a memory and a computer program stored on the memory and executable on the processor, wherein the processor is configured to read a program in the memory to implement the steps in the information processing method according to any one of claims 7 to 12.
16. A computer-readable storage medium for storing a computer program, wherein the computer program, when executed by a processor, implements the steps in the information processing method according to any one of claims 1 to 6; or implementing the steps in the information processing method of any of claims 7 to 12.
CN201910939507.6A 2019-09-30 2019-09-30 Information processing method, system, terminal, server and readable storage medium Pending CN110611848A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910939507.6A CN110611848A (en) 2019-09-30 2019-09-30 Information processing method, system, terminal, server and readable storage medium

Publications (1)

Publication Number Publication Date
CN110611848A true CN110611848A (en) 2019-12-24

Family

ID=68893979

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910939507.6A Pending CN110611848A (en) 2019-09-30 2019-09-30 Information processing method, system, terminal, server and readable storage medium

Country Status (1)

Country Link
CN (1) CN110611848A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111770386A (en) * 2020-05-29 2020-10-13 维沃移动通信有限公司 Video processing method, video processing device and electronic equipment

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110035373A1 (en) * 2009-08-10 2011-02-10 Pixel Forensics, Inc. Robust video retrieval utilizing audio and video data
US20150256808A1 (en) * 2014-03-04 2015-09-10 Gopro, Inc. Generation of video from spherical content using edit maps
CN105792000A (en) * 2014-12-23 2016-07-20 北京数码视讯科技股份有限公司 Video recommendation method and device
CN105893443A (en) * 2015-12-15 2016-08-24 乐视网信息技术(北京)股份有限公司 Video recommendation method and apparatus, and server
US20170168660A1 (en) * 2015-12-15 2017-06-15 Le Holdings (Beijing) Co., Ltd. Voice bullet screen generation method and electronic device
CN106844446A (en) * 2016-12-16 2017-06-13 飞狐信息技术(天津)有限公司 Video methods of marking, device and video system based on user's viewing behavior
CN106851407A (en) * 2017-01-24 2017-06-13 维沃移动通信有限公司 A kind of control method and terminal of video playback progress
CN110121093A (en) * 2018-02-06 2019-08-13 优酷网络技术(北京)有限公司 The searching method and device of target object in video
CN109005463A (en) * 2018-08-20 2018-12-14 聚好看科技股份有限公司 Page presentation and page data method for pushing and device
CN109068180A (en) * 2018-09-28 2018-12-21 武汉斗鱼网络科技有限公司 A kind of method and relevant device of determining video selection collection

Similar Documents

Publication Publication Date Title
EP3407285B1 (en) Target user orientation method and device, and computer storage medium
CN109034864A (en) Improve method, apparatus, electronic equipment and storage medium that precision is launched in advertisement
CN108269128B (en) Advertisement putting method, device, equipment and storage medium
CN109522531B (en) Document generation method and device, storage medium and electronic device
CN110035314A (en) Methods of exhibiting and device, storage medium, the electronic device of information
KR20160055930A (en) Systems and methods for actively composing content for use in continuous social communication
CN110278466B (en) Short video advertisement putting method, device and equipment
US9043828B1 (en) Placing sponsored-content based on images in video content
US10290028B2 (en) Computer implemented system for managing advertisements and a method thereof
US8346604B2 (en) Facilitating bidding on images
CN106688215A (en) Automated click type selection for content performance optimization
US20170213248A1 (en) Placing sponsored-content associated with an image
CN105678317B (en) Information processing method and server
CN107277573A (en) Video-frequency advertisement put-on method, device and computer-readable recording medium
US9449231B2 (en) Computerized systems and methods for generating models for identifying thumbnail images to promote videos
CN113382301A (en) Video processing method, storage medium and processor
CN105160545A (en) Delivered information pattern determination method and device
CN103425993A (en) Method and system for recognizing images
CN111309940A (en) Information display method, system, device, electronic equipment and storage medium
US20170213239A1 (en) Audience reach of different online advertising publishers
KR20170021101A (en) A smart studying apparatus, a method, and a computer readable storage medium for providing an user with scoring and studying information
CN110796480A (en) Real-time advertisement putting management method, device and system
US20190050890A1 (en) Video dotting placement analysis system, analysis method and storage medium
CN104680393A (en) Interactive advertisement method based on image contents and matching
CN111259257A (en) Information display method, system, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191224