CN111556326A - Public class video clip pushing method and device, electronic equipment and storage medium - Google Patents

Public class video clip pushing method and device, electronic equipment and storage medium

Info

Publication number
CN111556326A
Authority
CN
China
Prior art keywords
content
user
public class
pushed
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010231146.2A
Other languages
Chinese (zh)
Inventor
涂序文
韩静
汪世超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Ping An Education Technology Co.,Ltd.
Original Assignee
Tutorabc Network Technology Shanghai Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tutorabc Network Technology Shanghai Co ltd filed Critical Tutorabc Network Technology Shanghai Co ltd
Priority to CN202010231146.2A priority Critical patent/CN111556326A/en
Publication of CN111556326A publication Critical patent/CN111556326A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73Querying
    • G06F16/735Filtering based on additional data, e.g. user or group profiles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/75Clustering; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/20Education
    • G06Q50/205Education administration or guidance
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23418Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/251Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25866Management of end-user data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/4508Management of client data or end-user data
    • H04N21/4532Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/466Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/4668Learning process for intelligent management, e.g. learning user preferences for recommending movies for recommending content, e.g. movies
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8455Structuring of content, e.g. decomposing content into time segments involving pointers to the content, e.g. pointers to the I-frames of the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Tourism & Hospitality (AREA)
  • Strategic Management (AREA)
  • Educational Technology (AREA)
  • Educational Administration (AREA)
  • Computing Systems (AREA)
  • Library & Information Science (AREA)
  • Computational Linguistics (AREA)
  • Computer Graphics (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • General Business, Economics & Management (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a public class video clip pushing method and device, an electronic device, and a storage medium, wherein the method comprises: acquiring user information of users associated with a user to be pushed; forming a candidate public class video set according to the user information of the associated users, wherein the candidate public class video set comprises public class videos that the associated users have participated in, and each public class video has content tags with associated timestamps; acquiring the social platform account of the user to be pushed; acquiring user tags of the user to be pushed from the social platform according to the social platform account of the user to be pushed; intercepting video clips from the public class videos by matching the user tags against the content tags of the public class videos in the candidate public class video set; and pushing the intercepted video clips to the user to be pushed. The method and the device provided by the invention realize the recommendation of public class video clips through an effective tagging mechanism.

Description

Public class video clip pushing method and device, electronic equipment and storage medium
Technical Field
The invention relates to the field of computer applications, and in particular to a public class video clip pushing method and device, an electronic device, and a storage medium.
Background
With the development of the internet, online education has emerged to broaden the audience for education. Current online education mostly takes the form of online video courses, online interactive video courses, and the like. To promote its online courses, an online education platform typically offers public classes so that a wider audience can experience the online teaching approach.
However, the existing public class model has several drawbacks: first, there is no targeted content setting or targeted pushing to recipients, so the promotional effect is limited; second, the promotional effect of public classes is difficult to track and measure; third, public classes place high demands on network bandwidth, and when bandwidth is constrained they are prone to stuttering, which degrades the online education experience.
Therefore, how to push public classes in a targeted manner while accounting for the influence of network bandwidth is a technical problem to be solved in the field.
Disclosure of Invention
The present invention is directed to overcoming the above-mentioned shortcomings of the related art by providing a public class video clip pushing method, apparatus, electronic device, and storage medium that overcome, at least to some extent, one or more problems due to the limitations and disadvantages of the related art.
According to an aspect of the present invention, there is provided a video clip pushing method for a public class, including:
acquiring user information of users associated with a user to be pushed;
forming a candidate public class video set according to the user information of the associated users, wherein the candidate public class video set comprises public class videos that the associated users have participated in, and each public class video has content tags with associated timestamps;
acquiring the social platform account of the user to be pushed;
acquiring a user tag of the user to be pushed from the social platform according to the social platform account of the user to be pushed;
intercepting video clips from the public class videos by matching the user tags against the content tags of the public class videos in the candidate public class video set; and
and pushing the intercepted video clip to the user to be pushed.
In some embodiments of the present invention, the content tag of the associated timestamp of each of the public class videos is tagged by the users participating in the public class video.
In some embodiments of the present invention, the public classes are live online classes, and the content tag of the associated timestamp of each video of the public classes is generated by:
during the live broadcast of the public class, receiving in real time quasi-content tags provided by users participating in the public class, and recording the marking time at which each user provided the quasi-content tag;
aggregating identical quasi-content tags whose marking time differences are smaller than a predetermined time difference, to generate a plurality of content classifications;
for each quasi-content tag that differs from every content classification, calculating a first similarity between the quasi-content tag and the temporally preceding adjacent content classification, and a second similarity between the quasi-content tag and the temporally following adjacent content classification;
judging whether the first similarity and the second similarity are both larger than a preset similarity threshold value;
if the first similarity and the second similarity are both larger than a preset similarity threshold, judging whether the first similarity is larger than or equal to the second similarity;
if the first similarity is greater than or equal to the second similarity, adding the quasi-content tag to the temporally preceding adjacent content classification;
if the first similarity is smaller than the second similarity, adding the quasi-content tag to the temporally following adjacent content classification;
and determining the content label of the associated timestamp of each public class video according to each content classification.
In some embodiments of the present invention, if at least one of the first similarity and the second similarity is less than or equal to the predetermined similarity threshold, it is judged whether both of them are less than or equal to the predetermined similarity threshold;
and if only one of the first similarity and the second similarity is less than or equal to the predetermined similarity threshold, the quasi-content tag is added to the content classification whose similarity is greater than the predetermined similarity threshold.
In some embodiments of the invention, the quasi-content tag is deleted if the first similarity and the second similarity are both less than or equal to a predetermined similarity threshold.
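Taken together, the branches above decide whether a stray quasi-content tag joins the preceding classification, joins the following classification, or is discarded. The following Python sketch is an illustrative reading of the disclosure, not code from the patent; it assumes the two similarity values have already been computed by whatever text-similarity measure an implementation chooses.

```python
def assign_quasi_tag(sim_prev, sim_next, threshold):
    """Place a quasi-content tag per the branches in the disclosure.

    sim_prev: similarity to the temporally preceding adjacent classification
    sim_next: similarity to the temporally following adjacent classification
    Returns "prev", "next", or None (the tag is deleted).
    """
    if sim_prev > threshold and sim_next > threshold:
        # Both exceed the threshold: the preceding classification wins ties.
        return "prev" if sim_prev >= sim_next else "next"
    if sim_prev > threshold:
        return "prev"
    if sim_next > threshold:
        return "next"
    # Neither similarity exceeds the threshold: discard the quasi-content tag.
    return None
```

Note that the tie-breaking rule (first similarity greater than or equal to the second goes to the preceding classification) follows the disclosure directly.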
In some embodiments of the present invention, the determining the content tag of the associated timestamp of each of the public class videos according to each of the content classifications comprises:
for each content classification:
taking the most frequent quasi-content tag in the content classification as the content tag of that classification; and
taking the earliest marking time among the quasi-content tags in the content classification as the start timestamp associated with the content tag, and the latest marking time as the end timestamp associated with the content tag.
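The two steps above reduce one content classification to a single timestamped content tag. A minimal sketch, with quasi-content tags represented as hypothetical (label, mark-time-in-seconds) pairs:

```python
from collections import Counter

def classification_to_content_tag(quasi_tags):
    """quasi_tags: list of (label, mark_time_seconds) in one classification.

    The most frequent label becomes the content tag; the earliest mark time
    becomes the start timestamp and the latest the end timestamp.
    """
    labels = [label for label, _ in quasi_tags]
    times = [t for _, t in quasi_tags]
    content_label = Counter(labels).most_common(1)[0][0]
    return content_label, min(times), max(times)
```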
In some embodiments of the present invention, before receiving in real time the quasi-content tags provided by users participating in the public class during the live broadcast of the public class, the method comprises:
generating a content tag set according to user tags of users on the social platform;
screening a candidate content tag set from the content tag set;
and providing a list of candidate content tags in the set of candidate content tags to users participating in the public class.
In some embodiments of the present invention, the filtering the candidate content tag set from the content tag set comprises:
acquiring course information of the public class, and calculating the relevance between the course information of the public class and each content tag in the content tag set;
sorting the content tags in the content tag set by relevance;
and adding a plurality of content tags with the highest relevance to a candidate content tag set.
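The screening above can be sketched as a rank-and-truncate step. The relevance measure is left open by the disclosure; the `token_overlap` function below is a hypothetical stand-in (Jaccard overlap of word sets), named here purely for illustration.

```python
def top_k_candidate_tags(course_info, content_tags, k, relevance):
    """Rank content tags by relevance to the course info; keep the top k."""
    ranked = sorted(content_tags,
                    key=lambda tag: relevance(course_info, tag),
                    reverse=True)
    return ranked[:k]

def token_overlap(a, b):
    # Hypothetical relevance measure: Jaccard overlap of lowercase word sets.
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0
```

Because Python's `sorted` is stable, tags with equal relevance keep their original order.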
In some embodiments of the present invention, the filtering the candidate content tag set from the content tag set comprises:
acquiring the user tags, on a social platform, of the users participating in the public class and/or of the teacher of the public class;
and selecting content tags from the content tag set according to those user tags to add to a candidate content tag set.
In some embodiments of the present invention, the determining the content tag of the associated timestamp of each of the public class videos according to each of the content classifications further comprises:
counting the number of quasi-content tags in each content classification;
setting the weight of each content classification's content tag according to its number of quasi-content tags;
correspondingly, when video clips of a plurality of public class videos are intercepted according to the matching of the user tags and the content tags of the public class videos in the candidate public class video set, sorting the intercepted video clips by the weight of their content tags;
and selecting a plurality of video clips to be pushed from the video clips of the public class videos according to the sorting result;
correspondingly, the pushing the intercepted video clip to the user to be pushed includes:
and pushing the video clip to be pushed to the user to be pushed.
In some embodiments of the present invention, the pushing the to-be-pushed video clip to the to-be-pushed user includes:
pushing the plurality of video clips to be pushed to the user to be pushed one by one in the sorted order; or
splicing the plurality of video clips to be pushed into a single video and pushing it to the user to be pushed.
In some embodiments of the present invention, the selecting a plurality of video clips to be pushed from the video clips of the plurality of public class videos according to the sorting result further includes:
when a plurality of video clips to be pushed carry the same content tag, retaining only the video clip to be pushed corresponding to the content tag with the highest weight.
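The weighting, sorting, and deduplication steps above can be combined into one selection pass. A minimal sketch, with clips represented as hypothetical dictionaries whose `weight` is the quasi-content-tag count of the clip's classification:

```python
def select_clips_to_push(clips, n):
    """clips: list of {"tag": str, "weight": int} dicts.

    Keep at most one clip per content tag (highest weight wins), sort the
    survivors by weight, and return the top n, per the steps above.
    """
    best = {}
    for clip in clips:
        tag = clip["tag"]
        if tag not in best or clip["weight"] > best[tag]["weight"]:
            best[tag] = clip
    ranked = sorted(best.values(), key=lambda c: c["weight"], reverse=True)
    return ranked[:n]
```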
According to another aspect of the present invention, there is also provided a video clip pushing apparatus for a public class, including:
the first acquisition module is used for acquiring user information of users associated with a user to be pushed;
a forming module, configured to form a candidate public class video set according to the user information of the associated user, where the candidate public class video set includes public class videos that the associated users have participated in, and each public class video has a content tag with an associated timestamp;
the second acquisition module is used for acquiring the social platform account of the user to be pushed;
the third acquisition module is used for acquiring the user tag of the user to be pushed from the social platform according to the social platform account of the user to be pushed;
the matching module is used for intercepting video clips of the public class videos according to the matching of the user tags and the content tags of the public class videos in the candidate public class video set; and
and the pushing module is used for pushing the intercepted video clip to the user to be pushed.
According to still another aspect of the present invention, there is also provided an electronic apparatus, including: a processor; a storage medium having stored thereon a computer program which, when executed by the processor, performs the steps as described above.
According to yet another aspect of the present invention, there is also provided a storage medium having stored thereon a computer program which, when executed by a processor, performs the steps as described above.
Compared with the prior art, the invention has the advantages that:
the invention pushes the open course in a targeted manner, and simultaneously considers the influence of network bandwidth, and pushes the video clip instead of the whole open course video, thereby improving the push efficiency and the push effect of the open course.
Drawings
The above and other features and advantages of the present invention will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings.
Fig. 1 shows a flowchart of a video clip pushing method for a public class according to an embodiment of the invention.
Fig. 2-4 illustrate a flow diagram for generating content tags for a public class video, according to a specific embodiment of the present invention.
FIG. 5 illustrates a flow diagram for generating content tags based on content classification in accordance with a specific embodiment of the present invention.
Fig. 6 is a flowchart illustrating a process for providing a list of candidate content tags to users participating in a public class according to an embodiment of the present invention.
FIG. 7 illustrates a flow diagram for generating a set of candidate content tags, in accordance with a specific embodiment of the present invention.
Fig. 8 is a block diagram illustrating a video clip pushing apparatus for a public class according to an embodiment of the present invention.
Fig. 9 schematically illustrates a computer-readable storage medium in an exemplary embodiment of the invention.
Fig. 10 schematically illustrates an electronic device in an exemplary embodiment of the invention.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Furthermore, the drawings are merely schematic illustrations of the invention and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the steps. For example, some steps may be decomposed, and some steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
Fig. 1 shows a flowchart of a video clip pushing method for a public class according to an embodiment of the invention. The video clip pushing method for the public class comprises the following steps:
step S110: and acquiring user information of a user to be pushed, which is related to the user.
Specifically, an associated user of the user to be pushed may be a user whom the user to be pushed follows, a follower of the user to be pushed, or a mutual follower, on a predetermined social platform. In other embodiments, an associated user of the user to be pushed may be a contact in the mobile phone address book of the user to be pushed. The invention is not limited in this respect.
Specifically, the user information may include one or more items of a user account number, a user name, a user identifier, and a user mobile phone number. The invention is not so limited.
Further, in the embodiments of the present invention, acquiring the user information of the users associated with the user to be pushed may be performed only after authorization by the user to be pushed.
Step S120: and forming a candidate public class video set according to the user information of the associated users, wherein the candidate public class video set comprises public class videos participated by the associated users, and each public class video has a content tag of an associated timestamp.
Specifically, in step S120, the online education platform can be queried with the associated users' user information to determine whether each associated user has participated in public classes, thereby obtaining the public class videos that each associated user has participated in.
In particular, the content tags described herein describe the main content played in the video at the corresponding timestamps. The content tags with associated timestamps of each public class video may be manually tagged by users or automatically generated by the system based on video content recognition. For example, the content tags of a public class video on English study may include travel (timestamp 3:57-10:39), Britain (10:40-18:21), the United States (18:22-29:05), and so on. The invention is not limited thereto.
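The English-study example above suggests a simple data shape for one video's timestamped content tags. This is an illustrative representation, not one the patent prescribes (times are in seconds from the start of the video):

```python
# One public-class video's content tags with associated timestamps,
# mirroring the English-study example: travel 3:57-10:39,
# Britain 10:40-18:21, United States 18:22-29:05.
video_content_tags = [
    {"tag": "travel",        "start": 3 * 60 + 57,  "end": 10 * 60 + 39},
    {"tag": "Britain",       "start": 10 * 60 + 40, "end": 18 * 60 + 21},
    {"tag": "United States", "start": 18 * 60 + 22, "end": 29 * 60 + 5},
]
```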
Step S130: and acquiring the social platform account of the user to be pushed.
Specifically, the social platform account of the user to be pushed is, for example, any one of an identifier, a name, a mailbox, and a mobile phone of the user on the social platform. Step S130 may be performed via authorization of the user to be pushed.
Step S140: and acquiring a user tag of the user to be pushed from the social platform according to the social platform account of the user to be pushed.
Specifically, the user tag of the social platform may be, for example, a user tag in a user portrait provided by the social platform, or a user tag generated after various types of data of the user are acquired from the social platform according to an account of the social platform of the user, which is not limited in the present invention. The user tags described in the present invention may be tags of content of interest to the user. For example, a user tag for a user may include animation, English learning, travel, British, and so on.
Step S150: and intercepting video clips of the public class videos according to the matching of the user tags and the content tags of the public class videos in the candidate public class video set.
Specifically, step S150 intercepts video clips from one or more public class videos by matching the user tags against the content tags. Thus, the video clips pushed to the user to be pushed in the subsequent step are clips likely to interest that user.
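The matching in step S150 can be sketched as follows. This is a minimal illustration under the assumption that matching means case-insensitive equality of a user tag and a content tag; the disclosure does not fix the matching rule, and the clip spans returned here would be cut from the video files by a downstream step.

```python
def intercept_clips(user_tags, candidate_videos):
    """candidate_videos: {video_id: [{"tag", "start", "end"}, ...]}.

    Return (video_id, start, end, tag) for every content tag that matches
    one of the user's tags (case-insensitive equality assumed).
    """
    user_set = {t.lower() for t in user_tags}
    clips = []
    for video_id, tags in candidate_videos.items():
        for entry in tags:
            if entry["tag"].lower() in user_set:
                clips.append((video_id, entry["start"], entry["end"], entry["tag"]))
    return clips
```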
Step S160: and pushing the intercepted video clip to the user to be pushed.
According to the video clip pushing method for the public class, disclosed classes are pushed in a targeted manner, the influence of network bandwidth is considered, and the video clip is pushed instead of the whole public class video, so that the pushing efficiency and the pushing effect of the public class are improved.
In some embodiments of the present invention, the content tag of the associated timestamp of each of the public class videos is tagged by the users participating in the public class video. In other embodiments, the content tag of the associated timestamp of each of the public class videos may be obtained by automatic prediction through a machine learning model (e.g., a supervised learning model or an unsupervised learning model). The invention is not so limited.
In embodiments where the content tags with associated timestamps of each public class video are tagged by users participating in the public class video, the public class may be a live online lesson. The generation of these content tags is described below in conjunction with figs. 2-4.
Step S201: in the process of live broadcasting of the public class, receiving, in real time, the quasi-content tags provided by the users participating in the public class, together with the marking times at which the users associated with the quasi-content tags provided them.
Specifically, the users participating in the public class can provide quasi-content tags for the public class video during the live broadcast through manual input, option selection and the like.
In particular, users participating in the public class may provide quasi-content tags on their own initiative or after being prompted by the system. Further, only a portion of the users participating in the public class may be prompted to provide quasi-content tags.
Considering that there may be deviations among the quasi-content tags provided by the respective users, the quasi-content tags are aggregated and processed by the following steps so as to obtain accurate content tags. Further, if the quasi-content tags provided by the respective users were used directly, the amount of data stored in the system would be large, and the processing efficiency of the system would be reduced when performing step S150.
Step S202: the same quasi-content tags with the marking time difference smaller than the preset time difference are aggregated to generate a plurality of content classifications.
Specifically, the predetermined time difference may be a time difference within 5 seconds to 10 minutes, for example, 30 seconds, 1 minute, 3 minutes, 5 minutes, etc., and may be set according to actual requirements, which is not limited by the present invention.
Specifically, suppose the predetermined time difference is set to 30 seconds and the quasi-content tags are: tag A (0:30, zero minutes thirty seconds), tag A (0:35, zero minutes thirty-five seconds), tag B (0:45, zero minutes forty-five seconds), tag C (0:50, zero minutes fifty seconds), tag A (0:55, zero minutes fifty-five seconds), tag A (1:34, one minute thirty-four seconds), tag A (2:35, two minutes thirty-five seconds).
In one implementation, aggregating the same quasi-content tags whose marking time differences are smaller than the predetermined time difference means that a quasi-content tag to be judged can be added to a content classification as long as that classification contains the same quasi-content tag with a marking time difference smaller than the predetermined time difference. For example, in the above example, tag A (0:30, zero minutes thirty seconds), tag A (0:35, zero minutes thirty-five seconds), tag A (0:55, zero minutes fifty-five seconds), and tag A (1:34, one minute thirty-four seconds) are aggregated into one content classification.
In another implementation, aggregating the same quasi-content tags whose marking time differences are smaller than the predetermined time difference means that the time difference between the earliest marking time and the latest marking time in the content classification must be smaller than the predetermined time difference. For example, in the above example, tag A (0:30, zero minutes thirty seconds), tag A (0:35, zero minutes thirty-five seconds), and tag A (0:55, zero minutes fifty-five seconds) are aggregated into one content classification.
By analogy, a plurality of content classifications can be obtained, and the present invention is not limited thereto.
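A minimal sketch of the second interpretation above (a tag joins a classification only while the span from the earliest marking time stays below the predetermined time difference) might look as follows; the function and data layout are assumptions for illustration, with times in seconds and the tag sequence from the example:

```python
# Hypothetical sketch of step S202 (second interpretation): identical
# quasi-content tags are grouped while the span between the earliest
# and the current marking time stays below the predetermined difference.

def aggregate(tags, max_span):
    """tags: list of (label, marking_time_seconds), assumed time-sorted."""
    classifications = []
    for label, t in tags:
        for cls in classifications:
            # Join an existing classification of the same label if the
            # span from its earliest marking time stays under max_span.
            if cls["label"] == label and t - cls["times"][0] < max_span:
                cls["times"].append(t)
                break
        else:
            # No suitable classification: start a new one.
            classifications.append({"label": label, "times": [t]})
    return classifications

tags = [("A", 30), ("A", 35), ("B", 45), ("C", 50),
        ("A", 55), ("A", 94), ("A", 155)]
result = aggregate(tags, max_span=30)
```

Under this interpretation, tag A at 0:30, 0:35, and 0:55 forms one classification, while A at 1:34 and A at 2:35 each start new ones, matching the example above.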
Step S203: for quasi-content labels different from each content classification, calculating a first similarity of the quasi-content labels and the content classifications adjacent to the quasi-content labels in the front time and a second similarity of the content classifications adjacent to the quasi-content labels in the back time.
In one particular implementation, the content classification temporally immediately preceding and temporally subsequent to the quasi-content tag refers to the content classification having the content tag closest to the marking time of the quasi-content tag among the generated content classifications.
For example, for content class A', it includes tag A (0:30, zero minutes thirty seconds), tag A (0:35, zero minutes thirty-five seconds), tag A (0:55, zero minutes fifty-five seconds); content class B' which includes label B (1:03, one-zero-three seconds), label B (1:15, one-fifteen seconds), label B (1:35, one-thirty-five seconds); content category C' includes label C (1:00, one minute), label B (1:45, one minute forty-five seconds). For quasi-content tag D to be processed (0:58, zero minutes fifty-eight seconds), the content temporally preceding and adjacent to the quasi-content tag is classified as content class a ', and the content temporally succeeding and adjacent to the quasi-content tag is classified as content class B'.
Further, the similarity between the quasi-content label and the content classification is calculated as the similarity between the quasi-content label and the quasi-content label with the largest number in the content classification.
Step S204: judging whether the first similarity and the second similarity are both greater than a predetermined similarity threshold.
Specifically, the similarity calculation may be performed by cosine similarity or another inter-word similarity measure, which is not limited by the present invention. The predetermined similarity threshold may be set according to the similarity algorithm employed.
If the first similarity and the second similarity are both greater than the predetermined similarity threshold, step S205 is executed: judging whether the first similarity is greater than or equal to the second similarity.
Specifically, when the first similarity and the second similarity are both greater than the predetermined similarity threshold, it indicates that the quasi-content tag can be classified into one of the two content classifications of step S203.
If the first similarity is greater than or equal to the second similarity, step S206 is executed: the quasi-content tag is added to the content classification temporally preceding and adjacent to it.
If the first similarity is smaller than the second similarity, step S207 is executed: the quasi-content tag is added to the content classification temporally succeeding and adjacent to it.
If at least one of the first similarity and the second similarity is less than or equal to the predetermined similarity threshold, step S209 is executed: judging whether the first similarity and the second similarity are both less than or equal to the predetermined similarity threshold.
If only one of the first similarity and the second similarity is less than or equal to the predetermined similarity threshold, step S210 is executed: the quasi-content tag is added to the content classification whose similarity, of the first similarity and the second similarity, is greater than the predetermined similarity threshold.
If the first similarity and the second similarity are both less than or equal to the predetermined similarity threshold, step S211 is executed: the quasi-content tag is deleted.
Specifically, if it is determined in step S209 that the first similarity and the second similarity are both less than or equal to the predetermined similarity threshold, the quasi-content tag may have been marked by a user by mistake; therefore, the quasi-content tag can be deleted in step S211, which makes the clustering result more accurate and further reduces the data processing amount.
Thus, through the foregoing steps, each remaining quasi-content tag is either added to a content classification or deleted, so as to reduce the number of content classifications.
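The branch logic above (merge the leftover quasi-content tag into whichever adjacent classification is similar enough, otherwise discard it) can be sketched as follows; the similarity function here is a toy stand-in, since the patent leaves the actual measure open, and all names are illustrative:

```python
# Hypothetical sketch of the assignment of a leftover quasi-content
# tag to its temporally preceding or succeeding content classification.

def assign(tag, prev_cls, next_cls, similarity, threshold):
    s1 = similarity(tag, prev_cls)  # first similarity (preceding)
    s2 = similarity(tag, next_cls)  # second similarity (succeeding)
    if s1 > threshold and s2 > threshold:
        # Both plausible: merge into the more similar classification,
        # preferring the preceding one on a tie.
        (prev_cls if s1 >= s2 else next_cls).append(tag)
    elif s1 > threshold:
        prev_cls.append(tag)
    elif s2 > threshold:
        next_cls.append(tag)
    # else: both at or below threshold -> likely mis-marked, drop it.

# Toy similarity: 1.0 for an exact label match, else 0.0.
sim = lambda tag, cls: 1.0 if tag in cls else 0.0
prev_cls, next_cls = ["A", "A"], ["B"]
assign("A", prev_cls, next_cls, sim, threshold=0.5)
```

With this toy measure, tag "A" is merged into the preceding classification, while a tag matching neither classification would simply be dropped.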
Step S208: determining the content tag of the associated timestamp of each public class video according to each content classification.
Specifically, a specific implementation of step S208 may refer to fig. 5, which shows a flowchart of generating a content tag from a content classification according to an embodiment of the present invention. The following two steps of the flowchart shown in fig. 5 may be performed for each content classification:
Step S221: taking the most numerous quasi-content tag in the content classification as the content tag of the content classification.
Specifically, each quasi-content tag has been added to a content classification or deleted through the aforementioned steps, so that a plurality of different quasi-content tags may exist in each content classification. In step S221, the number of each quasi-content tag in the content classification is counted, and the most numerous quasi-content tag is used as the content tag of the content classification.
For example, content classification A' includes tag A (0:30, zero minutes thirty seconds), tag A (0:35, zero minutes thirty-five seconds), tag A (0:55, zero minutes fifty-five seconds), and tag D (0:58, zero minutes fifty-eight seconds). The number of quasi-content tags A is 3 and the number of quasi-content tags D is 1, so the most numerous quasi-content tag A is used as the content tag of content classification A'.
Step S222: taking the earliest marking time among the quasi-content tags in the content classification as the start timestamp associated with the content tag, and the latest marking time as the end timestamp associated with the content tag.
For example, content classification A' includes tag A (0:30, zero minutes thirty seconds), tag A (0:35, zero minutes thirty-five seconds), tag A (0:55, zero minutes fifty-five seconds), and tag D (0:58, zero minutes fifty-eight seconds). The earliest time, 0:30, may be used as the start timestamp, and the latest time, 0:58, may be used as the end timestamp. The start timestamp and the end timestamp are used for the interception of the video clip in step S150.
Step S222 makes the video clip intercepted in step S150 more complete. Further, the interception of the video clip may also include a step of adjusting the start timestamp and the end timestamp, so that the intercepted video clip starts and/or ends at a break between sentences rather than in the middle of a sentence, which is not limited by the present invention.
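Steps S221 and S222 can be sketched together as follows; the data layout (label, time-in-seconds pairs) is an assumption for illustration, and the numbers follow the content classification A' example above:

```python
# Hypothetical sketch of steps S221-S222: the most frequent
# quasi-content tag becomes the content tag, and the earliest/latest
# marking times become the start and end timestamps.
from collections import Counter

def summarize(classification):
    """classification: list of (label, marking_time_seconds)."""
    labels = [label for label, _ in classification]
    times = [t for _, t in classification]
    # Majority vote for the content tag (S221).
    content_tag = Counter(labels).most_common(1)[0][0]
    # Earliest/latest marking times bound the clip (S222).
    return content_tag, min(times), max(times)

cls_a = [("A", 30), ("A", 35), ("A", 55), ("D", 58)]
tag, start, end = summarize(cls_a)
```

For classification A' this yields content tag "A" with start timestamp 0:30 and end timestamp 0:58, which are then used to intercept the clip in step S150.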
Referring now to fig. 6, fig. 6 illustrates a flowchart of providing a list of candidate content tags to users who want to participate in a public class, according to a specific embodiment of the present invention. In some embodiments of the present invention, before step S201 of receiving, in real time during the live broadcasting of the public class, the quasi-content tags provided by the users participating in the public class together with their marking times, the method may further include the following steps:
Step S231: generating a content tag set according to the user tags of users on the social platform.
In some embodiments, step S231 may obtain the user tags of all users on the social platform through user authorization, and after performing deduplication, use the user tags as content tags to generate a content tag set.
In other embodiments, step S231 may obtain the user tags of all users of the online education platform in the social platform through user authorization, and after performing deduplication, use the user tags as content tags to generate the content tag set.
The present invention can also be implemented in many different ways, which are not described herein.
Step S232: screening a candidate content tag set from the content tag set.
Specifically, step S232 may filter content tags from the content tag set according to different criteria (e.g., similarity, relevance, etc.) and generate the candidate content tag set.
Step S233: providing a list of the candidate content tags in the candidate content tag set to the users participating in the public class.
Therefore, the users participating in the public class can directly select content tags from the candidate content tag list for marking. In the matching of step S150, since the user tags and the content tags are then homologous, the computation required for matching is reduced, and the matching efficiency and accuracy are increased.
Turning next to FIG. 7, FIG. 7 illustrates a flow diagram for generating a set of candidate content tags, in accordance with a specific embodiment of the present invention. The step S232 of filtering the candidate content tag set from the content tag sets in fig. 6 may include the following steps:
Step S241: acquiring the course information of the public class, and calculating the association degree between the course information of the public class and each content tag in the content tag set.
In some embodiments, the association degree may be computed, for example, by an algorithm that uses the course information of the public class and a content tag as a keyword pair and retrieves the keyword pair with a search engine; the more links/web pages retrieved, the higher the association degree. In other embodiments, the association degree may be computed by an algorithm that uses the course information of the public class as a keyword, searches the keyword with a search engine, and obtains from the search results the frequency with which the course information and the content tag occur in the same sentence, the same paragraph, or the same web page; the higher the frequency, the higher the association degree. The present invention can also employ other association-degree algorithms, which are not described herein.
Specifically, the course information of the public course may include one or more of a course name, a course keyword extracted from the course introduction, and a course teacher, for example, which is not limited by the invention.
Step S242: sorting the content tags in the content tag set according to the association degree.
Step S243: adding the content tags with the highest association degrees into the candidate content tag set.
For example, step S243 may add the N content tags with the highest association degree to the candidate content tag set; as another example, step S243 may add the M% of content tags with the highest association degree to the candidate content tag set. Here N is an integer greater than 0, and M is a constant greater than 0. The values of N and M may be determined, for example, according to the size of the display area of the candidate content tag list, which is not intended to limit the present invention.
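The rank-and-select of steps S241-S243 can be sketched as follows; the association-degree function here is a naive stand-in (shared-word count) since the patent leaves the actual algorithm open, and all names and data are illustrative:

```python
# Hypothetical sketch of steps S241-S243: score each content tag
# against the course information, sort by association degree, and
# keep the N highest-scoring tags as candidates.

def top_n_candidates(course_info, content_tags, n, relevance):
    ranked = sorted(content_tags,
                    key=lambda t: relevance(course_info, t),
                    reverse=True)  # highest association degree first
    return ranked[:n]

# Toy association degree: number of words shared with the course info.
relevance = lambda info, tag: len(
    set(info.lower().split()) & set(tag.lower().split()))

course = "English travel conversation"
tags = ["animation", "English learning", "travel", "cooking"]
candidates = top_n_candidates(course, tags, n=2, relevance=relevance)
```

Because Python's sort is stable, tags with equal association degree keep their original order, which gives a deterministic candidate list.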
In some variant embodiments of the present invention, step S232 of fig. 6 of screening the candidate content tag set from the content tag set may include the following steps: acquiring the user tags, on a social platform, of the users participating in the public class and/or the teacher of the public class; and selecting, from the content tag set, the content tags that are the same as those user tags to join the candidate content tag set (a content tag deduplication step may further be included).
Therefore, the social platform user tags of the users related to the public class serve as candidate content tags, which increases the screening speed and screening efficiency of step S232.
In some embodiments of the present invention, the step S208 shown in fig. 2 of determining the content tag of the associated timestamp of each public class video according to each content classification may further include the following steps: counting the quasi-content tags of each content classification; and setting the weight of the content tag of each content classification according to the number of quasi-content tags in that classification. A larger number of quasi-content tags indicates that more users marked the content classification, and hence that its content tag is more accurate. The weight of the content tag can thus further optimize the video clip interception of step S150 and the pushing of video clips in step S160 shown in fig. 1.
For example, step S150 shown in fig. 1 may include the following steps: when video clips of multiple public class videos are intercepted according to the matching of the user tags and the content tags of the public class videos in the candidate public class video set, sorting the intercepted video clips according to the weights of their content tags; and selecting a plurality of video clips to be pushed from the video clips of the multiple public class videos according to the sorting result. Correspondingly, step S160 shown in fig. 1 may include: pushing the video clips to be pushed to the user to be pushed. Specifically, the video clips to be pushed may be selected according to a predetermined number or a predetermined percentage, which is not limited by the present invention.
In other words, although the video clips have been intercepted, too many video clips place a higher demand on bandwidth. Sorting and screening the intercepted video clips by the weight of their content tags therefore reduces the number of video clips to be pushed, while the video clips that remain are those that match the content tags most closely.
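The weighted selection described above can be sketched as follows; the clip layout and the use of the marking-user count as the weight are illustrative assumptions:

```python
# Hypothetical sketch of the weighted variant of steps S150/S160:
# sort intercepted clips by the weight of their content tag and keep
# a predetermined number of clips to push.

def clips_to_push(clips, limit):
    """clips: list of (content_tag, weight, start_s, end_s)."""
    ranked = sorted(clips, key=lambda c: c[1], reverse=True)
    return ranked[:limit]  # predetermined number of clips to push

clips = [("travel", 2, 63, 95),
         ("English learning", 5, 30, 58),
         ("animation", 3, 100, 140)]
push = clips_to_push(clips, limit=2)
```

A predetermined percentage instead of a fixed count would only change the slice bound (e.g. `ranked[:int(len(ranked) * pct)]`).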
In some specific implementations of the foregoing embodiments, the step of pushing the video clips to be pushed to the user to be pushed may include: pushing the plurality of video clips to be pushed to the user to be pushed separately, in the sorted order. In this way, each video clip is sent independently, and the user can click a pushed video clip to play and watch it.
In another specific implementation of the foregoing embodiments, the step of pushing the video clips to be pushed to the user to be pushed may include: splicing the plurality of video clips to be pushed into a single video clip and pushing it to the user to be pushed. In this way, after the plurality of video clips are spliced together, the user can conveniently play and watch them in one piece.
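One common way to splice already-cut clip files is FFmpeg's concat demuxer; the sketch below only builds the list file contents and the command line (it does not execute anything), the file names are hypothetical, and stream-copy concatenation assumes the clips share compatible codecs:

```python
# Hypothetical sketch of the splicing variant: prepare an ffmpeg
# concat-demuxer list and command for joining clips into one file.

def build_concat_command(clip_paths, list_path, output_path):
    # Contents of the concat list file: one "file '<path>'" line per clip.
    lines = "\n".join(f"file '{p}'" for p in clip_paths)
    # Stream-copy concatenation; assumes clips have matching streams.
    command = ["ffmpeg", "-f", "concat", "-safe", "0",
               "-i", list_path, "-c", "copy", output_path]
    return lines, command

lines, cmd = build_concat_command(
    ["clip1.mp4", "clip2.mp4"], "list.txt", "spliced.mp4")
```

In a real pipeline one would write `lines` to `list.txt` and run `cmd` with `subprocess.run`; re-encoding instead of `-c copy` would be needed for clips with mismatched codecs.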
In the above embodiments, the selecting of the plurality of video clips to be pushed from the video clips of the multiple public class videos according to the sorting result may further include the following step: when a plurality of video clips to be pushed have the same content tag, only the video clip corresponding to the content tag with the highest weight is kept. This prevents the user from repeatedly watching video clips pushed for the same content tag; meanwhile, the number of video clips to be pushed is reduced, and the clips that remain are those closest to the content tags.
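The de-duplication step can be sketched as keeping, per content tag, only the clip with the highest weight; the data layout is an illustrative assumption:

```python
# Hypothetical sketch of the de-duplication step: among clips sharing
# a content tag, keep only the one whose tag weight is highest.

def dedupe_by_tag(clips):
    """clips: list of (content_tag, weight, start_s, end_s)."""
    best = {}
    for clip in clips:
        tag, weight = clip[0], clip[1]
        # Replace the stored clip only if this one's weight is higher.
        if tag not in best or weight > best[tag][1]:
            best[tag] = clip
    return list(best.values())

clips = [("travel", 2, 63, 95),
         ("travel", 4, 200, 230),
         ("animation", 3, 100, 140)]
unique = dedupe_by_tag(clips)
```

Each content tag then appears at most once among the clips to be pushed, with the highest-weight clip retained.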
The above are merely a plurality of specific implementations of the present invention, and each of the specific implementations may be implemented alone or in combination, and the present invention is not limited thereto.
Fig. 8 is a block diagram illustrating a video clip pushing apparatus for a public class according to an embodiment of the present invention. The device 300 for pushing the video clips of the public class comprises a first obtaining module 310, a forming module 320, a second obtaining module 330, a third obtaining module 340, a matching module 350 and a pushing module 360.
The first obtaining module 310 is configured to obtain, for a user to be pushed, the user information of the users associated with the user to be pushed;
the forming module 320 is configured to form a candidate public class video set according to the user information of the associated user, where the candidate public class video set includes public class videos that the associated users have participated in, and each public class video has a content tag with an associated timestamp;
the second obtaining module 330 is configured to obtain the social platform account of the user to be pushed;
the third obtaining module 340 is configured to obtain, from the social platform, a user tag of the user to be pushed according to the social platform account of the user to be pushed;
the matching module 350 is configured to intercept a video clip of an open course video according to matching between the user tag and a content tag of each open course video in the candidate open course video set; and
the pushing module 360 is configured to push the intercepted video clip to the user to be pushed.
In the video clip pushing device for the public class according to the exemplary embodiment of the present invention, public classes are pushed in a targeted manner while the influence of network bandwidth is considered: video clips are pushed instead of whole public class videos, so that the pushing efficiency and the pushing effect of the public class are improved.
Fig. 8 is merely a schematic illustration of the device 300 for pushing public class video clips provided by the present invention; the splitting, merging and adding of modules are within the scope of the present invention without departing from its concept. The device 300 for pushing public class video clips provided by the present invention may be implemented by software, hardware, firmware, plug-ins and any combination thereof, which is not limited by the present invention.
In an exemplary embodiment of the present invention, a computer-readable storage medium is further provided, on which a computer program is stored, which when executed by, for example, a processor, can implement the steps of the video clip pushing method for a public class described in any of the above embodiments. In some possible embodiments, the aspects of the present invention may also be implemented in the form of a program product comprising program code for causing a terminal device to perform the steps according to various exemplary embodiments of the present invention described in the above-mentioned section of the opening video clip pushing method of the present specification, when the program product is run on the terminal device.
Referring to fig. 9, a program product 700 for implementing the above method according to an embodiment of the present invention is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., through the Internet using an Internet service provider).
In an exemplary embodiment of the invention, there is also provided an electronic device that may include a processor and a memory for storing executable instructions of the processor. Wherein the processor is configured to execute the steps of the video clip pushing method for public classes in any of the above embodiments via executing the executable instructions.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or program product. Thus, various aspects of the invention may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.) or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," module "or" system.
An electronic device 500 according to this embodiment of the invention is described below with reference to fig. 10. The electronic device 500 shown in fig. 10 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 10, the electronic device 500 is embodied in the form of a general purpose computing device. The components of the electronic device 500 may include, but are not limited to: at least one processing unit 510, at least one memory unit 520, a bus 530 that couples various system components including the memory unit 520 and the processing unit 510, a display unit 540, and the like.
Wherein the storage unit stores program code, which can be executed by the processing unit 510, so that the processing unit 510 executes the steps according to various exemplary embodiments of the present invention described in the above-mentioned section of the video clip pushing method of the present specification. For example, the processing unit 510 may perform the steps shown in any of fig. 1 to 6.
The memory unit 520 may include a readable medium in the form of a volatile memory unit, such as a random access memory unit (RAM)5201 and/or a cache memory unit 5202, and may further include a read only memory unit (ROM) 5203.
The memory unit 520 may also include a program/utility 5204 having a set (at least one) of program modules 5205, such program modules 5205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 530 may be one or more of any of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 500 may also communicate with one or more external devices 600 (e.g., keyboard, pointing device, Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 500, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 500 to communicate with one or more other computing devices. Such communication may occur through input/output (I/O) interfaces 550. Also, the electronic device 500 may communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN) and/or a public network, such as the Internet) via the network adapter 560. The network adapter 560 may communicate with other modules of the electronic device 500 via the bus 530. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 500, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiment of the present invention can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which can be a personal computer, a server, or a network device, etc.) to execute the above-mentioned public class video clip pushing method according to the embodiment of the present invention.
Compared with the prior art, the invention has the advantages that:
the invention pushes the open course in a targeted manner, and simultaneously considers the influence of network bandwidth, and pushes the video clip instead of the whole open course video, thereby improving the push efficiency and the push effect of the open course.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.

Claims (15)

1. A video clip pushing method for a public class is characterized by comprising the following steps:
acquiring, for a user to be pushed, user information of users associated with the user to be pushed;
forming a candidate public class video set according to the user information of the associated users, wherein the candidate public class video set comprises public class videos participated by the associated users, and each public class video has a content tag of an associated timestamp;
acquiring the social platform account of the user to be pushed;
acquiring a user tag of the user to be pushed from the social platform according to the social platform account of the user to be pushed;
intercepting video clips of the public class videos according to the matching of the user tags and the content tags of the public class videos in the candidate public class video set; and
pushing the intercepted video clip to the user to be pushed.
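Purely for illustration (not part of the claims), the matching-and-interception step of claim 1 can be sketched in Python. The data layout and all names here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ContentTag:
    label: str    # content tag text
    start: float  # associated start timestamp (seconds)
    end: float    # associated end timestamp (seconds)

def match_clips(user_tags, candidate_videos):
    """Return (video_id, start, end) spans whose content tag matches
    one of the user tags obtained from the social platform."""
    clips = []
    for video_id, tags in candidate_videos.items():
        for tag in tags:
            if tag.label in user_tags:
                clips.append((video_id, tag.start, tag.end))
    return clips

candidate_videos = {
    "v1": [ContentTag("grammar", 0, 120), ContentTag("listening", 120, 300)],
    "v2": [ContentTag("listening", 30, 90)],
}
print(match_clips({"listening"}, candidate_videos))
# → [('v1', 120, 300), ('v2', 30, 90)]
```

The returned spans would then be cut out of the stored videos and pushed, so only the matched segments (not the whole class recordings) travel over the network.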
2. The method of claim 1, wherein the content tag of the associated timestamp of each of the public class videos is tagged by a user participating in the public class video.
3. The method as claimed in claim 2, wherein the public class is an online live class, and the content tag with associated timestamp of each public class video is generated by the following steps:
in the process of live broadcasting of the public class, receiving in real time quasi-content tags provided by users participating in the public class, together with the marking time at which each associated user provided the quasi-content tag;
aggregating identical quasi-content labels whose marking time differences are smaller than a preset time difference, to generate a plurality of content classifications;
for a quasi-content label that differs from every content classification, calculating a first similarity between the quasi-content label and its temporally preceding adjacent content classification, and a second similarity between the quasi-content label and its temporally following adjacent content classification;
judging whether the first similarity and the second similarity are both larger than a preset similarity threshold value;
if the first similarity and the second similarity are both larger than a preset similarity threshold, judging whether the first similarity is larger than or equal to the second similarity;
if the first similarity is larger than or equal to the second similarity, adding the quasi-content label into its temporally preceding adjacent content classification;
if the first similarity is smaller than the second similarity, adding the quasi-content label into its temporally following adjacent content classification;
and determining the content label of the associated timestamp of each public class video according to each content classification.
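As a non-normative sketch of the aggregation and assignment steps of claims 3-5: the aggregation groups identical labels by marking-time proximity, and leftover labels are attached to the nearest classification whose label is similar enough, or discarded. A simple string-similarity ratio from `difflib` stands in for whatever similarity measure an implementation would actually use; all names are hypothetical:

```python
from difflib import SequenceMatcher

def similarity(a, b):
    # stand-in for the (unspecified) similarity measure of claim 3
    return SequenceMatcher(None, a, b).ratio()

def aggregate(tag_events, max_gap=30.0):
    """Group identical quasi-content labels whose marking times differ by
    less than max_gap seconds into content classifications."""
    classes = []
    for label, t in sorted(tag_events, key=lambda e: e[1]):
        for c in classes:
            if c["label"] == label and t - c["times"][-1] < max_gap:
                c["times"].append(t)
                break
        else:
            classes.append({"label": label, "times": [t]})
    return classes

def assign(label, t, classes, sim_threshold=0.6):
    """Attach a leftover quasi-content label to the temporally preceding or
    following adjacent classification (claims 3-4), or drop it (claim 5)."""
    before = max((c for c in classes if c["times"][-1] <= t),
                 key=lambda c: c["times"][-1], default=None)
    after = min((c for c in classes if c["times"][0] >= t),
                key=lambda c: c["times"][0], default=None)
    s1 = similarity(label, before["label"]) if before else 0.0
    s2 = similarity(label, after["label"]) if after else 0.0
    if s1 > sim_threshold and s2 > sim_threshold:
        target = before if s1 >= s2 else after   # claim 3: prefer preceding
    elif s1 > sim_threshold:
        target = before
    elif s2 > sim_threshold:
        target = after
    else:
        return None  # claim 5: both similarities below threshold -> delete
    target["times"] = sorted(target["times"] + [t])
    return target
```

For example, a misspelled tag "grammer" marked shortly after a "grammar" classification would be absorbed into that classification, while an unrelated tag below both similarity thresholds would be dropped.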
4. The public class video clip pushing method of claim 3, wherein:
if at least one of the first similarity and the second similarity is smaller than or equal to the preset similarity threshold, judging whether both the first similarity and the second similarity are smaller than or equal to the preset similarity threshold;
and if only one of the first similarity and the second similarity is smaller than or equal to the preset similarity threshold, adding the quasi-content label into the content classification whose similarity is greater than the preset similarity threshold.
5. The method of claim 4, wherein the quasi-content tag is deleted if the first similarity and the second similarity are both less than or equal to a predetermined similarity threshold.
6. The method of claim 3, wherein determining the content tag of the associated timestamp of each of the public class videos according to each of the content classifications comprises:
for each content classification:
taking the most frequent quasi-content label in the content classification as the content label of the content classification;
and taking the earliest marking time in each quasi-content label in the content classification as a starting time stamp related to the content label, and taking the latest marking time as an ending time stamp related to the content label.
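A minimal sketch of claim 6's label-and-timestamp determination, assuming a content classification is represented as a list of (quasi-label, marking-time) pairs (a hypothetical layout):

```python
from collections import Counter

def finalize(classification):
    """Claim 6: take the most frequent quasi-content label as the content
    label, the earliest marking time as the start timestamp, and the
    latest marking time as the end timestamp."""
    labels = [label for label, _ in classification]
    times = [t for _, t in classification]
    content_label, _ = Counter(labels).most_common(1)[0]
    return {"label": content_label, "start": min(times), "end": max(times)}

print(finalize([("listening", 12), ("listening", 30), ("audio", 18)]))
# → {'label': 'listening', 'start': 12, 'end': 30}
```

The resulting (start, end) pair is what later lets claim 1 cut the matching clip out of the full recording.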
7. The method as claimed in claim 3, wherein the receiving, in real time during the live broadcasting of the public class, of the quasi-content tags provided by users participating in the public class, together with the marking times at which the associated users provided them, comprises:
generating a content tag set according to user tags of users on the social platform;
screening a candidate content tag set from the content tag set;
and providing a list of candidate content tags in the set of candidate content tags to users participating in the public class.
8. The public class video clip pushing method according to claim 7, wherein screening the candidate content tag set from the content tag set comprises:
acquiring course information of the public class, and calculating the association degree between the course information of the public class and each content label in the content label set;
ranking each content label in the content label set according to the association degree;
and adding the content labels with the highest association degrees to the candidate content tag set.
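Claim 8's relevance-based screening could look like the following sketch; a shared-word count with the course description stands in for whatever association measure an implementation would choose, and all names are hypothetical:

```python
def screen_candidates(course_info, content_tags, top_n=5):
    """Score each content tag against the course information and keep the
    top_n highest-scoring tags as the candidate content tag set."""
    course_words = set(course_info.lower().split())

    def relevance(tag):
        # toy association measure: words shared with the course info
        return len(set(tag.lower().split()) & course_words)

    return sorted(content_tags, key=relevance, reverse=True)[:top_n]

print(screen_candidates("english listening practice",
                        ["listening skills", "cooking", "practice tests"],
                        top_n=2))
# → ['listening skills', 'practice tests']
```

Only the screened candidates are then shown to participants during the live class, keeping the tag list short and on-topic.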
9. The public class video clip pushing method according to claim 7, wherein screening the candidate content tag set from the content tag set comprises:
acquiring user tags of users participating in the public class and/or teachers of the public class on a social platform;
and selecting content tags from the content tag set, according to the user tags of the users participating in the public class and/or of the teacher on the social platform, to join the candidate content tag set.
10. The method of claim 3, wherein determining the content tag of the associated timestamp of each of the public class videos according to each of the content classifications further comprises:
counting the number of quasi-content tags in each content classification;
setting the weight of the content label of each content classification according to the number of quasi-content tags in that content classification;
correspondingly, when video clips of a plurality of public class videos are intercepted according to the matching of the user tag with the content tag of each public class video in the candidate public class video set, ranking the intercepted video clips of the public class videos according to the weights of their content tags;
selecting a plurality of video clips to be pushed from the video clips of the public class videos according to the ranking result;
correspondingly, the pushing the intercepted video clip to the user to be pushed includes:
and pushing the video clip to be pushed to the user to be pushed.
11. The public class video clip pushing method according to claim 10, wherein the pushing the video clip to be pushed to the user to be pushed comprises:
respectively pushing the plurality of video clips to be pushed to the user to be pushed in ranked order; or
splicing the plurality of video clips to be pushed into a single video clip and pushing it to the user to be pushed.
12. The method as claimed in claim 10, wherein the selecting a plurality of video clips to be pushed from the video clips of the plurality of public class videos according to the ranking result further comprises:
when a plurality of video clips to be pushed have the same content tags, only the video clip to be pushed corresponding to the content tag with the highest weight is reserved.
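Claims 10-12 together describe weight-based ranking with per-tag deduplication; a hypothetical sketch:

```python
def select_clips(clips, top_n=3):
    """clips: list of (content_tag, weight, video_id, start, end), where
    weight reflects how many quasi-content tags backed the content tag.
    Rank by weight, keep only the highest-weighted clip per content tag
    (claim 12), and return the top_n clips to push (claim 10)."""
    ranked = sorted(clips, key=lambda c: c[1], reverse=True)
    seen, selected = set(), []
    for clip in ranked:
        tag = clip[0]
        if tag in seen:
            continue  # lower-weighted clip with the same tag is dropped
        seen.add(tag)
        selected.append(clip)
    return selected[:top_n]

clips = [("grammar", 5, "v1", 0, 120),
         ("listening", 3, "v2", 0, 90),
         ("grammar", 2, "v3", 60, 180)]
print(select_clips(clips, top_n=2))
# → [('grammar', 5, 'v1', 0, 120), ('listening', 3, 'v2', 0, 90)]
```

Per claim 11, the selected clips would then be pushed either one by one in this ranked order, or spliced into a single video before pushing.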
13. A public class video clip pushing device, characterized by comprising:
a first acquisition module, configured to acquire user information of a user to be pushed and of users associated with the user;
a forming module, configured to form a candidate public class video set according to the user information of the associated user, where the candidate public class video set includes public class videos that the associated users have participated in, and each public class video has a content tag with an associated timestamp;
a second acquisition module, configured to acquire the social platform account of the user to be pushed;
a third acquisition module, configured to acquire the user tag of the user to be pushed from the social platform according to the social platform account of the user to be pushed;
a matching module, configured to intercept video clips of the public class videos according to the matching of the user tags with the content tags of the public class videos in the candidate public class video set; and
a pushing module, configured to push the intercepted video clips to the user to be pushed.
14. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory having stored thereon a computer program which, when executed by the processor, performs the public class video clip pushing method according to any one of claims 1 to 12.
15. A storage medium having stored thereon a computer program which, when executed by a processor, performs the video clip pushing method for a public class according to any one of claims 1 to 12.
CN202010231146.2A 2020-03-27 2020-03-27 Public class video clip pushing method and device, electronic equipment and storage medium Pending CN111556326A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010231146.2A CN111556326A (en) 2020-03-27 2020-03-27 Public class video clip pushing method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN111556326A true CN111556326A (en) 2020-08-18

Family

ID=72007264

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010231146.2A Pending CN111556326A (en) 2020-03-27 2020-03-27 Public class video clip pushing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111556326A (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101887459A (en) * 2010-06-28 2010-11-17 中国科学院计算技术研究所 Network video topic detection method and system thereof
CN104469508A (en) * 2013-09-13 2015-03-25 中国电信股份有限公司 Method, server and system for performing video positioning based on bullet screen information content
CN107071587A (en) * 2017-04-25 2017-08-18 腾讯科技(深圳)有限公司 The acquisition methods and device of video segment
US20180192138A1 (en) * 2016-12-29 2018-07-05 Arris Enterprises Llc Recommendation of segmented content
CN109145280A (en) * 2017-06-15 2019-01-04 北京京东尚科信息技术有限公司 The method and apparatus of information push
US20190082236A1 (en) * 2017-09-11 2019-03-14 The Provost, Fellows, Foundation Scholars, and the other Members of Board, of the College of the Determining Representative Content to be Used in Representing a Video
CN109587578A (en) * 2018-12-21 2019-04-05 麒麟合盛网络技术股份有限公司 The processing method and processing device of video clip


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
BU Xusong: "Research on the Improvement of a Personalized Video Recommendation Algorithm Based on Item-Based Collaborative Filtering", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112235613A (en) * 2020-09-17 2021-01-15 百度在线网络技术(北京)有限公司 Video processing method and device, electronic equipment and storage medium
CN113271478A (en) * 2021-05-17 2021-08-17 北京大米科技有限公司 Learning video recommendation method, information interaction method and device
CN113271478B (en) * 2021-05-17 2023-01-10 北京大米科技有限公司 Learning video recommendation method, information interaction method and device

Similar Documents

Publication Publication Date Title
US10613719B2 (en) Generating a form response interface in an online application
CN109635155B (en) Method and device for pushing video to user, electronic equipment and storage medium
US20130325977A1 (en) Location estimation of social network users
CN110941738B (en) Recommendation method and device, electronic equipment and computer-readable storage medium
US10565401B2 (en) Sorting and displaying documents according to sentiment level in an online community
CN107193974B (en) Regional information determination method and device based on artificial intelligence
CN112749326B (en) Information processing method, information processing device, computer equipment and storage medium
CN109271509B (en) Live broadcast room topic generation method and device, computer equipment and storage medium
CN109275047B (en) Video information processing method and device, electronic equipment and storage medium
CN111259192A (en) Audio recommendation method and device
CN111314732A (en) Method for determining video label, server and storage medium
WO2024099171A1 (en) Video generation method and apparatus
CN111556326A (en) Public class video clip pushing method and device, electronic equipment and storage medium
CN112307318B (en) Content publishing method, system and device
CN114390368B (en) Live video data processing method and device, equipment and readable medium
CN111538830A (en) French retrieval method, French retrieval device, computer equipment and storage medium
CN115203539A (en) Media content recommendation method, device, equipment and storage medium
CN114491149A (en) Information processing method and apparatus, electronic device, storage medium, and program product
CN113011169B (en) Method, device, equipment and medium for processing conference summary
CN110475158B (en) Video learning material providing method and device, electronic equipment and readable medium
TWI725375B (en) Data search method and data search system thereof
CN116955591A (en) Recommendation language generation method, related device and medium for content recommendation
Aichroth et al. Mico-media in context
US11558471B1 (en) Multimedia content differentiation
US11575638B2 (en) Content analysis message routing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20201231

Address after: 200030 unit 01, room 801, 166 Kaibin Road, Xuhui District, Shanghai

Applicant after: Shanghai Ping An Education Technology Co.,Ltd.

Address before: 152, 86 Tianshui Road, Hongkou District, Shanghai

Applicant before: TUTORABC NETWORK TECHNOLOGY (SHANGHAI) Co.,Ltd.

WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200818