CN111414494A - Multimedia work display method and device, electronic equipment and storage medium


Info

Publication number
CN111414494A
Authority
CN
China
Prior art keywords
work
label
sample
labels
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010097599.0A
Other languages
Chinese (zh)
Inventor
刘新宇
严引
程骏
杨秋歌
Current Assignee
Reach Best Technology Co Ltd
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Reach Best Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Reach Best Technology Co Ltd filed Critical Reach Best Technology Co Ltd
Priority to CN202010097599.0A
Publication of CN111414494A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40 Information retrieval of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/44 Browsing; Visualisation therefor
    • G06F16/45 Clustering; Classification
    • G06F16/48 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/483 Retrieval using metadata automatically derived from the content


Abstract

According to the multimedia work display method provided by the embodiments of the present disclosure, abnormal work labels are removed from multimedia works identified as image-text works. This prevents a work's labels from contradicting the work's content, improves the accuracy of work labels, enables users to obtain the resources they need according to the labels, and improves the user experience.

Description

Multimedia work display method and device, electronic equipment and storage medium
Technical Field
The present disclosure belongs to the technical field of network information, and in particular, to a method and an apparatus for displaying multimedia works, an electronic device, and a storage medium.
Background
With the popularization of the internet and the development of multimedia technology, people have become accustomed to obtaining the information they need by playing multimedia works.
When creating and publishing a multimedia work, its author usually sets a cover for the work and adds some text labels related to it, so as to obtain an image-text work with richer content and attract other users to click and play it.
However, limited by the author's creative ability or other factors, the labels an author adds often do not match the content of the work, so other users cannot accurately obtain the works they need according to the labels, which affects the user experience.
Summary
The present disclosure provides a method, an apparatus, an electronic device, and a storage medium for displaying multimedia works, so as to at least solve the problem in the related art that the labels of image-text works do not match their content.
The technical solution of the present disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, a method for displaying a multimedia work is provided, which includes:
identifying a target multimedia work to be displayed, and determining whether the target multimedia work is an image-text work;
in a case where the target multimedia work is determined to be an image-text work, obtaining candidate work labels of the target multimedia work;
and removing abnormal work labels which do not accord with preset display conditions from the candidate work labels to obtain target work labels, and displaying the target multimedia works through the target work labels.
Optionally, the preset display condition includes that the candidate work label is related to the work content of the target multimedia work, and the abnormal work label which is not in accordance with the preset display condition is removed from the candidate work label to obtain the target work label, including:
inputting the candidate work labels into a label identification model for identification, and determining abnormal work labels that are unrelated to the content of the image-text work;
and removing the abnormal work label from the candidate work label to obtain a target work label.
Optionally, the preset label recognition model is obtained by pre-training through the following steps:
acquiring a target sample label from a label library, wherein the target sample label is a verification sample label for identifying the abnormal work label;
training the target sample label based on a preset machine-learning algorithm to obtain the preset label identification model.
Optionally, the step of obtaining the target sample label from the label library includes:
acquiring a preset number of candidate sample labels from a label library, wherein the labels in the label library are arranged in descending order of usage frequency, and the candidate sample labels are the labels ranked at the top;
obtaining sample work content corresponding to the candidate sample label;
respectively obtaining content characteristic vectors of the sample work content and label characteristic vectors of the candidate sample labels;
acquiring the similarity between the label characteristic vector and the content characteristic vector;
and if the similarity is smaller than a similarity threshold value, taking the candidate sample label as a target sample label.
Optionally, the step of respectively obtaining the content feature vector of the sample work content and the label feature vector of the candidate sample label includes:
performing word segmentation processing on the candidate sample labels and the corresponding sample work content respectively to obtain sample label word segmentation and sample content word segmentation;
removing useless participles of preset types from the sample label participles and the sample content participles, respectively, to obtain a label feature vector of the sample label and a content feature vector, wherein the preset types include at least one of: adverbs, auxiliary words, and symbols.
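The sample-label selection steps above can be sketched as follows. This is a minimal illustration under stated assumptions, not the claimed implementation: real Chinese text would need a proper word segmenter (e.g. jieba) with part-of-speech filtering, whereas this sketch splits on word characters, drops a placeholder stop-list, builds term-frequency vectors, and keeps the frequently used labels whose cosine similarity to their works' content falls below the threshold. All function names and the stop-list are hypothetical.

```python
import math
import re
from collections import Counter

# Placeholder stand-in for the "useless participles of preset types"
# (adverbs, auxiliary words, symbols); a real system would filter by
# part-of-speech tags from a segmenter such as jieba.
STOP_TOKENS = {"的", "了", "啊"}

def tokenize(text):
    # Stand-in for real word segmentation: keep word-character runs,
    # lowercase them, and drop the placeholder stop tokens.
    return [t for t in re.findall(r"\w+", text.lower()) if t not in STOP_TOKENS]

def tf_vector(tokens):
    # Term-frequency vector as a sparse Counter.
    return Counter(tokens)

def cosine(a, b):
    shared = set(a) & set(b)
    num = sum(a[t] * b[t] for t in shared)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def select_target_sample_labels(label_library, works_by_label, top_n=100, threshold=0.1):
    """Take the top_n most-used labels, then keep those whose similarity
    to their sample work content is below the threshold; these become
    the verification sample labels for training."""
    candidates = sorted(label_library, key=label_library.get, reverse=True)[:top_n]
    targets = []
    for label in candidates:
        content = works_by_label.get(label, "")
        sim = cosine(tf_vector(tokenize(label)), tf_vector(tokenize(content)))
        if sim < threshold:
            targets.append(label)
    return targets
```

For example, a heavily used label whose text never co-occurs with its works' description would score near zero similarity and be selected as a verification sample.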
Optionally, after the step of training the target sample label based on a preset machine algorithm to obtain a preset label recognition model, the method further includes:
and periodically entering the step of obtaining the target sample label from the label library according to a preset time interval so as to update the preset label identification model.
Optionally, the step of identifying the target multimedia work to be displayed and determining whether the target multimedia work is an image-text work includes:
acquiring a characteristic image frame in a target multimedia work to be displayed, wherein the characteristic image frame comprises one of a cover image frame and a key image frame;
and inputting the cover image frame or the key image frame into a preset image-text recognition model for recognition, and determining whether the target multimedia work is an image-text work.
Optionally, the preset image-text recognition model is obtained by pre-training through the following steps:
obtaining a sample work, wherein a sample cover image frame or a sample key image frame is marked in the sample work in advance;
training the sample cover image frame or the sample key image frame based on a neural network algorithm to obtain a preset image-text recognition model.
Optionally, the step of obtaining the candidate work label of the target multimedia work includes:
determining description information in the multimedia work;
and extracting information in accordance with a preset label format from the description information to serve as a candidate work label.
According to a second aspect of the embodiments of the present disclosure, there is provided a presentation device of multimedia works, comprising:
the identification module is configured to identify a target multimedia work to be displayed and determine whether the target multimedia work is an image-text work;
the acquisition module is configured to acquire candidate work labels of the target multimedia work in a case where the target multimedia work is determined to be an image-text work;
and the display module is configured to remove abnormal work labels which do not accord with preset display conditions from the candidate work labels to obtain target work labels, so that the target multimedia works are displayed through the target work labels.
Optionally, the preset display condition includes that the candidate work label is related to the work content of the target multimedia work, and the display module includes:
the first identification submodule is configured to input the candidate work labels into the label identification model for identification, and determine abnormal work labels that are unrelated to the content of the image-text work;
and the screening submodule is configured to remove the abnormal work label from the candidate work labels to obtain a target work label.
Optionally, the preset label recognition model is obtained by pre-training through a first model training module, where the first model training module includes:
a first sample obtaining sub-module configured to obtain a target sample label from a label library, where the target sample label is a verification sample label for identifying the abnormal work label;
the first training submodule is configured to train the target sample label based on a preset machine algorithm to obtain a preset label recognition model.
Optionally, the first sample obtaining sub-module includes:
the system comprises a first sample acquisition unit, a second sample acquisition unit and a third sample acquisition unit, wherein the first sample acquisition unit is configured to acquire a preset number of candidate sample labels from a label library, the label library comprises labels which are arranged in a descending order according to the used frequency, and the candidate sample labels refer to the labels which are arranged in the front order;
a second sample obtaining unit, configured to obtain sample work content corresponding to the candidate sample label;
a third sample obtaining unit configured to obtain content feature vectors of the sample work content and label feature vectors of the candidate sample labels, respectively;
a fourth sample obtaining unit configured to obtain a similarity between the tag feature vector and a content feature vector;
a fifth sample acquiring unit configured to take the candidate sample label as a target sample label if the similarity is smaller than a similarity threshold.
Optionally, the third sample acquiring unit includes:
the first processing subunit is configured to perform word segmentation processing on the candidate sample labels and the corresponding sample work content respectively to obtain sample label word segmentation and sample content word segmentation;
a second processing subunit, configured to remove the sample label participle and the useless participle of a preset type in the sample content participle, respectively, to obtain a label feature vector and a content feature vector of the sample label, where the preset type includes: at least one of adverb type, auxiliary word type, and symbol type.
Optionally, the first model training module further includes:
and the updating submodule is configured to periodically enter the step of obtaining the target sample label from the label library according to a preset time interval so as to update the preset label identification model.
Optionally, the identification module includes:
the first obtaining sub-module is configured to obtain a feature image frame in a target multimedia work to be displayed, wherein the feature image frame comprises one of a cover image frame and a key image frame;
and the second identification submodule is configured to input the cover image frame or the key image frame into a preset image-text identification model for identification, and determine whether the target multimedia work is an image-text work.
Optionally, the preset image-text recognition model is obtained by pre-training through a second model training module, where the second model training module includes:
the second obtaining submodule is configured to obtain a sample work, and a sample cover image frame or a sample key image frame is marked in the sample work in advance;
and the second training submodule is configured to train the sample cover image frame or the sample key image frame based on a neural network algorithm to obtain a preset image-text recognition model.
Optionally, the obtaining module includes:
a determination submodule configured to determine description information in the multimedia work;
and the extracting sub-module is configured to extract information conforming to a preset label format from the description information to serve as a candidate work label.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the method for displaying a multimedia work according to any one of the first aspect when executing the computer program.
According to a fourth aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored, and when the computer program is executed by a processor, the method for displaying a multimedia work according to any one of the first aspect is implemented.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising one or more instructions which, when executed by a processor of an electronic device, enable the electronic device to perform the method of presenting a multimedia work.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
the invention provides a multimedia work display method, a multimedia work display device, electronic equipment and a storage medium, wherein abnormal work labels in multimedia works identified as image-text works are removed, so that the phenomenon that the work labels in the multimedia works are inconsistent with the content of the work labels per se is avoided, the accuracy of the work labels in the multimedia works is improved, a user can obtain required resources according to the labels, and user experience is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure; they are not to be construed as limiting the disclosure.
FIG. 1 is a flow chart illustrating a method of presentation of a multimedia work in accordance with an exemplary embodiment;
FIG. 2 is a flow chart illustrating another method of presentation of a multimedia work in accordance with an exemplary embodiment;
FIG. 3 is a flow diagram illustrating a method of training an image-text recognition model according to an exemplary embodiment;
FIG. 4 is a flow diagram illustrating a method of training a label identification model in accordance with an exemplary embodiment;
FIG. 5 is a flow chart illustrating a method of obtaining a target specimen label in accordance with an exemplary embodiment;
FIG. 6 is a flow diagram illustrating a method of feature vector determination in accordance with an exemplary embodiment;
FIG. 7 is a block diagram illustrating a presentation apparatus for a multimedia work, according to an exemplary embodiment;
fig. 8 is a block diagram illustrating a structure of an electronic device according to an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above drawings are used to distinguish similar objects and do not necessarily describe a particular order or sequence. It is to be understood that the data so used are interchangeable under appropriate circumstances, so that the embodiments of the disclosure described herein can be practiced in sequences other than those illustrated or described herein. The implementations described in the exemplary embodiments below do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Fig. 1 is a method for displaying a multimedia work, which is provided by an embodiment of the present disclosure, and the method includes:
step 101, identifying a target multimedia work to be displayed, and determining whether the target multimedia work is a picture and text work.
In the embodiment of the present disclosure, a target multimedia work may be a plain video or image that is produced by the author who provides the work and contains no description information; other users who obtain such a work can only interpret its content by themselves. A target multimedia work may also be a video or image that the author has produced with accompanying text or voice description information, which explains or conveys the expressed content to other users; for image-text works containing description information, the actual content of the resource can be determined from that description information. An image-text work refers to a video or image containing text and/or voice description information.
Before a multimedia resource platform shows multimedia works to other users, in order to ensure that the work labels carried by the image-text works to be shown match their content, the image-text works need to be screened out first so that their work labels can be examined.
Step 102, in a case where the target multimedia work is determined to be an image-text work, obtaining candidate work labels of the target multimedia work.
In the embodiment of the present disclosure, the candidate work tag refers to description information that is added to the target multimedia work in a preset tag format in advance, and is used for describing the content of the target multimedia work.
Generally, before a multimedia work on a multimedia resource platform is released, the platform requires the author to add corresponding work labels to the work so as to reflect its content theme and generate an image-text work. The platform can then classify the multimedia work according to the added labels and provide it to other users.
Step 103, removing abnormal work labels that do not meet the preset display condition from the candidate work labels to obtain target work labels, and displaying the target multimedia work through the target work labels.
In the embodiment of the disclosure, an abnormal work label is a candidate work label whose meaning does not match the content of the target multimedia work; it can be identified by setting a preset display condition. For example, a multimedia resource platform may add specific classification labels to target multimedia works according to their play counts, activation counts, and interaction counts, and allocate more promotion to the labeled image-text works. To obtain more promotion resources, some authors privately add these specific classification labels to the works they publish even though the labels are unrelated to the works' content. In this case, a preset display condition can be set to determine whether a candidate work label is such a specific classification label, so that any candidate work label found to be one is automatically filtered out as an abnormal label.
Of course, the preset display condition may also involve corpus analysis of the candidate work label and the work content. First, useless participles are removed to obtain the key participles of the candidate work label and of the work content; then each participle is weighted, and the participles with the highest weights in the candidate work label and in the work content are taken as the corresponding feature words; finally, the feature vectors of the candidate work label and of the work content are compared to determine their similarity, and if the similarity is below a similarity threshold, the candidate work label is determined to be an abnormal work label inconsistent with the work content.
Specifically, the work content may correspond to multiple candidate work labels, and the feature vector of the work content may have many dimensions. Therefore, if a candidate work label is not an abnormal label of the image-text work, there will certainly be some similarity between the label and the content feature vector, but it will not be very high. Accordingly, the similarity threshold should not be set too high; it only needs to capture the association between a candidate work label and the work content.
After the abnormal work labels among the candidate work labels of the target multimedia work are removed, the remaining target work labels match the content of the work. At this point, the multimedia resource platform can display to other users the image-text works with abnormal labels removed, together with the other, non-image-text multimedia works.
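Steps 101 to 103 can be summarized as a small pipeline. The hooks `is_teletext`, `extract_labels`, and `is_abnormal` are hypothetical stand-ins for, respectively, the image-text recognition model, the candidate-label extractor, and the label identification model described above; the dictionary-based work record is likewise only illustrative.

```python
def present_work(work, is_teletext, extract_labels, is_abnormal):
    """Sketch of steps 101-103 with injected hook functions:
    identify whether the work is an image-text work, obtain its
    candidate labels, and keep only the non-abnormal ones."""
    if not is_teletext(work):
        return work  # non-image-text works are shown unchanged
    candidates = extract_labels(work)
    # Remove abnormal labels; the survivors are the target work labels
    # through which the work is displayed.
    work["labels"] = [t for t in candidates if not is_abnormal(t, work)]
    return work
```

The filtering predicate could be backed by the similarity comparison or the trained label identification model; the pipeline shape stays the same either way.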
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
according to the multimedia work display method provided by the embodiment of the disclosure, the abnormal work labels in the multimedia works identified as the image-text works are removed, so that the phenomenon that the work labels in the multimedia works do not accord with the content of the work labels per se is avoided, the accuracy of the work labels in the multimedia works is improved, other users obtaining multimedia resources can obtain the required resources according to the labels, and the user experience is improved.
Fig. 2 is a method for displaying another multimedia work provided by an embodiment of the present disclosure, where the method includes:
step 201, obtaining a feature image frame in a target multimedia work to be displayed, where the feature image frame includes one of a cover image frame and a key image frame.
In the embodiment of the present disclosure, a feature image frame is an image frame of the target multimedia work that contains its main content; for a multimedia work this is usually the cover image frame at the start of the work, or one of its key frames. The multimedia resource platform can check the labels of multimedia works as soon as they are uploaded by authors, so that works carrying inaccurate labels are not released to the platform, where they would degrade the accuracy with which other users obtain works according to the labels.
Step 202, inputting the cover image frame or the key image frame into a preset image-text recognition model for recognition, and determining whether the target multimedia work is an image-text work.
In the embodiment of the present disclosure, whether a multimedia work is an image-text work can be determined from its cover image and key frame images. Specifically, for different multimedia works, the key frame images and cover image generally contain features that indicate the type of the work, such as text information and the work title. Therefore, whether a multimedia work is an image-text work can be identified by inputting its key image frames and/or cover image frame into a pre-trained image-text recognition model for prediction.
Specifically, for a video-type multimedia work, if its feature image frame contains description information, the work can be determined to be an image-text work. An image-type multimedia work generally has no key image frames and contains only a single image bearing the description information, so it can be determined to be an image-text work directly from the cover image. Identifying image-text works with a pre-trained image-text recognition model avoids the influence of manual involvement on the identification and improves both its efficiency and its accuracy.
Step 203, determining the description information in the multimedia work.
In the embodiment of the present disclosure, the description information refers to the textual description contained in a multimedia work that is an image-text work; it exists in the work in the form of text data, for example: the work title, work introduction, and work subtitles.
Step 204, extracting information that conforms to the preset label format from the description information to serve as candidate work labels.
In the embodiment of the present disclosure, to make work labels identifiable, the multimedia resource platform adds them to multimedia works in a preset label format; for example, a "#" may be added at both ends of a label to highlight it and make it easy to recognize. The textual description information in a multimedia work can therefore be checked against the preset label format, and any text matching that format is determined to be a candidate work label of the work. Of course, a candidate work label may also be added by the system after the work is released; in that case it does not necessarily match the content of the work and remains in an undetermined state.
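Assuming the preset label format is a label wrapped in "#" at both ends, as the passage suggests, extraction from the description information could look like the following sketch; the exact pattern would depend on the platform's actual format, so both the pattern and the function name are assumptions.

```python
import re

def extract_candidate_labels(description: str):
    """Return the pieces of text wrapped as #label# in the description.

    The #...# convention is assumed from the passage above; adjust the
    pattern to whatever preset label format the platform actually uses.
    """
    return re.findall(r"#([^#\s]+)#", description)
```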
According to the method and the device, the candidate work label in the text description information of the multimedia work is determined according to the preset label format, so that the accuracy of the candidate work label is guaranteed, the interference of irrelevant description information on the determination of the candidate work label is avoided, and the accuracy of the obtained candidate work label is improved.
Step 205, inputting the candidate work labels into a label identification model for identification, and determining abnormal work labels that are unrelated to the content of the image-text work.
In the embodiment of the disclosure, the label recognition model is a machine model with a label recognition function obtained by pre-training according to a label of a sample work.
Step 206, removing the abnormal work labels from the candidate work labels to obtain the target work labels, and displaying the target multimedia work through the target work labels.
In the embodiment of the disclosure, the tag identification model is used for identifying the abnormal work tag in the target multimedia work, thereby reducing human participation and improving the accuracy of screening the abnormal work tag in the multimedia work.
Optionally, referring to fig. 3, the preset image-text recognition model in step 202 is obtained through the following steps A1 to A2:
step A1, obtaining a sample work, wherein the sample work is marked with a sample cover image frame or a sample key image frame in advance.
In the embodiment of the disclosure, multimedia works that are image-text works of relatively high quality are extracted from the work library of the multimedia resource platform to serve as sample works, and the sample cover image frame or sample key image frame in each sample work is labeled for subsequent model training.
And A2, training the sample cover image frame or the sample key image frame based on a neural network algorithm to obtain a preset image-text recognition model.
In the embodiment of the disclosure, a neural network algorithm is a machine learning approach modeled on the human brain: it is formed by connecting a large number of neurons with adjustable connection weights, and is characterized by large-scale parallel processing, distributed information storage, and good self-organizing and self-learning capability. An initial model constructed with a neural network algorithm can learn the features of the sample cover image frames or sample key image frames in the sample works, yielding an image-text recognition model with an image-text recognition function. During training, after each round the model can be verified on a held-out test set of sample cover image frames or sample key image frames to obtain a loss value; training continues while the loss value is greater than a loss threshold, and ends once the loss value is less than or equal to the threshold, producing the preset image-text recognition model.
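The loss-threshold stopping rule described above can be sketched as follows. `train_epoch` and `eval_loss` are hypothetical stand-ins for a real neural-network training pass and validation-loss computation; the disclosure does not specify either.

```python
def train_until_threshold(train_epoch, eval_loss, loss_threshold, max_epochs=100):
    # Run one training pass per epoch, then check the validation loss;
    # training stops once the loss drops to or below the threshold.
    loss = float("inf")
    for epoch in range(max_epochs):
        train_epoch()
        loss = eval_loss()
        if loss <= loss_threshold:
            break
    return epoch + 1, loss

# Simulated run: validation loss shrinks each epoch until it clears 0.3.
losses = iter([0.9, 0.5, 0.2])
epochs, final_loss = train_until_threshold(lambda: None, lambda: next(losses), 0.3)
# epochs -> 3, final_loss -> 0.2
```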
Identifying multimedia works with an image-text recognition model obtained through a neural network algorithm reduces labor input, improves the efficiency of identifying image-text works, reduces the risk of error introduced by human involvement, and improves identification accuracy.
Optionally, referring to fig. 4, the label identification model in step 205 is obtained through the following steps B1 to B3:
Step B1: obtain target sample labels from a label library, where a target sample label is a verification sample label used to identify abnormal work labels.
In the embodiment of the present disclosure, each work label in the label library is typically added by being referenced by multimedia works, and carries a corresponding reference count. Because some author users, whether through limited creative effort or in the hope that citing popular labels will raise exposure, add common abnormal labels that do not match the content of their works, such frequently referenced but content-inconsistent labels can be identified in the label library and used as the specific verification sample labels.
Step B2: train on the target sample labels based on a preset machine learning algorithm to obtain the preset label recognition model.
In the embodiment of the disclosure, iterative learning is performed on the semantic features of the target sample labels through a machine learning algorithm, and the label recognition model is tested after each iteration to obtain its accuracy. If the accuracy is below the accuracy threshold, the model has not reached the expected standard; its parameters can be adjusted according to the loss value of the predicted output, and iterative training continues until the accuracy is greater than or equal to the threshold. If the model still fails to meet the expected standard after prolonged training, the sample labels can be replaced to improve sample quality.
Identifying abnormal work labels with a label recognition model obtained through a machine learning algorithm reduces labor input, improves identification efficiency, reduces the risk of error introduced by human involvement, and improves identification accuracy.
Step B3: periodically re-enter the step of obtaining target sample labels from the label library at a preset time interval, so as to update the preset label identification model.
In the embodiment of the disclosure, since the specific labels change over time, they can be periodically re-obtained from the preset label library to retrain the label identification model; the period may be, for example, one quarter, one month, or one week. This keeps the label identification model able to accurately identify abnormal labels in multimedia works and improves the timeliness of abnormal label identification.
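A periodic update of the kind described in step B3 can be sketched as a simple loop. `retrain` is a hypothetical callback that would re-run steps B1 and B2; in practice the interval would be a quarter, a month, or a week rather than seconds, and the loop would run under a scheduler.

```python
import time

def update_periodically(retrain, interval_seconds, rounds):
    # Step B3: re-enter the "obtain target sample labels" step at a preset
    # interval so the label recognition model tracks newly popular labels.
    for _ in range(rounds):
        retrain()
        time.sleep(interval_seconds)

calls = []
update_periodically(lambda: calls.append("retrained"), 0, 3)
# calls -> ["retrained", "retrained", "retrained"]
```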
Optionally, referring to fig. 5, the step B1 includes:
Sub-step B11: obtain a preset number of candidate sample labels from the label library, where the label library contains labels arranged in descending order of usage frequency, and the candidate sample labels are the top-ranked labels.
In the embodiment of the disclosure, the label library stores the work labels contained in the published works on the multimedia resource platform, together with the frequency with which each label is used by multimedia works. The work labels are arranged in descending order of usage frequency, and a preset number of the top-ranked labels are screened out as candidate sample labels. It will be appreciated that the more frequently a work label is used, the more likely it is that author users exploit it to gain extra push volume for their multimedia works.
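Selecting the top-ranked candidate sample labels from a usage-frequency table (sub-step B11) can be sketched as below; the `tag_usage` mapping is an assumed in-memory representation of the label library, not a structure specified by the disclosure.

```python
def top_candidate_tags(tag_usage, preset_number):
    # Sub-step B11: rank the label library in descending order of usage
    # frequency and keep the top preset_number labels as candidates.
    ranked = sorted(tag_usage, key=tag_usage.get, reverse=True)
    return ranked[:preset_number]

candidates = top_candidate_tags({"funny": 900, "travel": 300, "food": 700}, 2)
# candidates -> ["funny", "food"]
```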
Sub-step B12: obtain the sample work content corresponding to the candidate sample labels.
In the embodiment of the disclosure, the associated sample works are extracted from the work library of the multimedia resource platform according to the determined candidate sample labels, and the cover image frame or key image frame of each sample work is extracted. Speech recognition converts any audio data in the cover image frame or key image frame into text, which is then analyzed as a corpus together with the text data carried by the sample work to remove useless tokens such as punctuation, auxiliary words, and adverbs, yielding the sample work content of the sample work.
Sub-step B13: obtain the content feature vector of the sample work content and the label feature vector of the candidate sample label, respectively.
In the embodiment of the disclosure, after word segmentation is performed on the sample work content and the candidate sample label, the content feature vector of the sample work content and the label feature vector of the candidate sample label are determined through word frequency analysis.
Sub-step B14: obtain the similarity between the label feature vector and the content feature vector.
Sub-step B15: if the similarity is smaller than a similarity threshold, take the candidate sample label as a target sample label.
In the embodiment of the present disclosure, if the similarity between the label feature vector and the content feature vector is smaller than the similarity threshold, it can be determined that the candidate sample label is unrelated to the content of its sample work, that is, it behaves as an abnormal work label; the candidate sample label can therefore be used as a target sample label.
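Sub-steps B13 to B15 can be sketched with simple bag-of-words frequency vectors and cosine similarity. The disclosure does not fix the vectorization or the similarity measure, so both choices here are assumptions; the token lists stand in for the word-segmented label and work content.

```python
import math
from collections import Counter

def cosine_similarity(a, b):
    # Sub-step B14: similarity between two bag-of-words frequency vectors.
    dot = sum(cnt * b[tok] for tok, cnt in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def select_target_labels(pairs, threshold=0.1):
    # Sub-step B15: a candidate label whose similarity to its work content
    # is below the threshold is kept as a target (verification) sample.
    targets = []
    for label_tokens, content_tokens in pairs:
        sim = cosine_similarity(Counter(label_tokens), Counter(content_tokens))
        if sim < threshold:
            targets.append(label_tokens)
    return targets

pairs = [
    (["funny"], ["sunset", "beach", "waves"]),  # unrelated -> kept
    (["travel"], ["travel", "beach"]),          # related  -> dropped
]
targets = select_target_labels(pairs)
# targets -> [["funny"]]
```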
According to the embodiment of the disclosure, target sample labels are determined by whether the candidate sample labels are correlated with the content of their corresponding sample works. This captures labels that users add to published multimedia works in the hope of gaining push volume despite the labels being abnormal, and thereby improves the quality of the target sample labels.
Optionally, referring to fig. 6, the sub-step B13 includes:
Sub-step B131: perform word segmentation on the candidate sample label and the corresponding sample work content, respectively, to obtain sample label tokens and sample content tokens.
In the embodiment of the disclosure, the candidate sample label and the sample work content can be segmented with a Chinese word segmentation tool such as jieba, so as to extract the label tokens and the content tokens from the sample label and content.
Sub-step B132: remove useless tokens of preset types from the sample label tokens and the sample content tokens, respectively, to obtain the label feature vector and the content feature vector of the sample label, where the preset types include at least one of the adverb type, the auxiliary word type, and the symbol type.
Since tokens unrelated to the semantics of the label may be present, and such tokens are generally auxiliary words or adverbs in the grammatical structure, the auxiliary-word-type and adverb-type tokens among the label tokens and content tokens can be removed, leaving label feature vectors and content feature vectors that express the content of the work.
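Assuming jieba-style part-of-speech tags, sub-step B132 reduces to filtering tokens by POS type. The tagged token list here is hand-written for illustration; in practice a segmenter such as jieba would supply the (word, POS) pairs.

```python
# jieba-style part-of-speech tags: 'd' adverb, 'u' auxiliary word, 'x' symbol.
DROP_POS = {"d", "u", "x"}

def keep_content_tokens(tagged_tokens):
    # Sub-step B132: drop adverb-, auxiliary-word- and symbol-type tokens,
    # keeping only tokens that carry the work's content.
    return [word for word, pos in tagged_tokens if pos not in DROP_POS]

tokens = keep_content_tokens(
    [("sunset", "n"), ("very", "d"), ("of", "u"), ("!", "x"), ("beach", "n")]
)
# tokens -> ["sunset", "beach"]
```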
According to the embodiment of the disclosure, removing the useless tokens from the candidate sample labels and the candidate work content avoids their interference and improves the accuracy of the obtained label feature vectors and content feature vectors.

In the further multimedia work display method provided by the embodiment of the disclosure, the preset image-text recognition model identifies the image-text works among the multimedia works, and the preset label recognition model determines and removes the abnormal work labels. This improves the accuracy and efficiency of removing abnormal labels from image-text works, avoids situations in which the work labels of an image-text work do not match its content, lets users obtain multimedia works that match their needs, and improves the user experience. The label recognition model is also updated periodically, improving the timeliness of abnormal label identification.
FIG. 7 is a block diagram illustrating the structure of a presentation device 30 of a multimedia work, which may include:
the identification module 301 is configured to identify a target multimedia work to be displayed, and determine whether the target multimedia work is a graphic work.
An obtaining module 302 configured to obtain candidate work labels of the target multimedia work under the condition that the target multimedia work is determined to be a graphic work.
The display module 303 is configured to remove the abnormal work label that does not meet the preset display condition from the candidate work labels to obtain a target work label, so as to display the target multimedia work through the target work label.
Optionally, the preset display condition includes that the candidate work label is related to the work content of the target multimedia work, and the display module 303 includes:
the first identification submodule 3031 is configured to input the candidate work labels to a label identification model for identification, and determine abnormal work labels which are irrelevant to the content between the image-text works.
A screening submodule 3032 configured to remove the abnormal work tag from the candidate work tags, so as to obtain a target work tag.
Optionally, the preset tag identification model is obtained by pre-training through a first model training module C1, where the first model training module C1 includes:
a first sample obtaining sub-module C11 configured to obtain a target sample label from a label library, the target sample label being a verification sample label for identifying the abnormal work label.
And the first training submodule C12 is configured to train on the target sample label based on a preset machine algorithm, so as to obtain a preset label recognition model.
Optionally, the first sample obtaining sub-module C11 includes:
the first sample acquiring unit C111 is configured to acquire a preset number of candidate sample tags from a tag library, where the tag library includes tags arranged in a descending order according to a frequency of being used, and the candidate sample tags are tags arranged in a top order.
A second sample obtaining unit C112 configured to obtain sample work content corresponding to the candidate sample label.
A third sample obtaining unit C113 configured to obtain content feature vectors of the sample work content and label feature vectors of the candidate sample labels, respectively.
A fourth sample obtaining unit C114 configured to obtain a similarity between the label feature vector and the content feature vector.
A fifth sample acquiring unit C115 configured to take the candidate sample label as a target sample label if the similarity is smaller than a similarity threshold.
Optionally, the third sample acquiring unit C113 includes:
the first processing subunit C1131 is configured to perform word segmentation processing on the candidate sample labels and the corresponding sample work content, respectively, to obtain sample label word segments and sample content word segments.
A second processing subunit C1132, configured to remove the sample label participle and the useless participle of a preset type in the sample content participle, respectively, to obtain a label feature vector and a content feature vector of the sample label, where the preset type includes: at least one of adverb type, auxiliary word type, and symbol type.
Optionally, the first model training module C1 further includes:
and the updating submodule C13 is configured to periodically enter the step of obtaining the target sample label from the label library according to a preset time interval so as to update the preset label identification model.
Optionally, the identifying module 301 includes:
the first obtaining sub-module 3011 is configured to obtain a feature image frame in a target multimedia work to be displayed, where the feature image frame includes one of a cover image frame and a key image frame.
And the second recognition sub-module 3012 is configured to input the cover image frame or the key image frame into a preset image-text recognition model for recognition, and determine whether the target multimedia work is an image-text work.
Optionally, the preset image-text recognition model is obtained by pre-training through a second model training module D1, where the second model training module D1 includes:
a second obtaining sub-module D11 configured to obtain a sample work in which a sample cover image frame or a sample key image frame is previously marked.
And the second training submodule D12 is configured to train the sample cover image frame or the sample key image frame based on a neural network algorithm to obtain a preset image-text recognition model.
Optionally, the obtaining module 302 includes:
a determination submodule 3021 configured to determine description information in the multimedia work.
An extracting sub-module 3022 configured to extract information conforming to a preset tag format from the description information as a candidate work tag.
The display device of multimedia works provided by the embodiment of the disclosure identifies the image-text works among the multimedia works through the preset image-text recognition model, and uses the preset label recognition model to determine and remove the abnormal work labels. This improves the accuracy and efficiency of removing abnormal labels from image-text works, avoids situations in which the work labels of an image-text work do not match its content, lets users obtain multimedia works that match their needs, and improves the user experience. The label recognition model is also updated periodically, improving the timeliness of abnormal label identification.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
The embodiment of the present disclosure further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the above-mentioned method for displaying multimedia works, and can achieve the same technical effect, and in order to avoid repetition, the details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
Fig. 8 is a block diagram illustrating an electronic device 400 according to an example embodiment. The electronic device may be a mobile terminal or a server; in the embodiment of the present disclosure, the electronic device is described by taking a mobile terminal as an example. For example, the electronic device 400 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 8, electronic device 400 may include one or more of the following components: a processing component 402, a memory 404, a power component 406, a multimedia component 408, a voice component 410, an input/output (I/O) interface 412, a sensor component 414, and a communication component 416.
The processing component 402 generally controls overall operation of the electronic device 400, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 402 may include one or more processors 420 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 402 can include one or more modules that facilitate interaction between the processing component 402 and other components. For example, the processing component 402 can include a multimedia module to facilitate interaction between the multimedia component 408 and the processing component 402.
The memory 404 is configured to store various types of data to support operations at the electronic device 400. Examples of such data include instructions for any application or method operating on the electronic device 400, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 404 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 406 provides power to the various components of the electronic device 400. Power components 406 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for electronic device 400.
The multimedia component 408 includes a screen that provides an output interface between the electronic device 400 and a user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
The voice component 410 is configured to output and/or input voice signals. For example, the voice component 410 includes a microphone (MIC) configured to receive external voice signals when the electronic device 400 is in an operational mode, such as a call mode, a recording mode, or a voice recognition mode. The received voice signal may further be stored in the memory 404 or transmitted via the communication component 416. In some embodiments, the voice component 410 further comprises a speaker for outputting voice signals.
The I/O interface 412 provides an interface between the processing component 402 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 414 includes one or more sensors for providing status assessments of various aspects of the electronic device 400. For example, the sensor component 414 may detect an open/closed state of the electronic device 400 and the relative positioning of components, such as the display and keypad of the electronic device 400. The sensor component 414 may also detect a change in the position of the electronic device 400 or one of its components, the presence or absence of user contact with the electronic device 400, the orientation or acceleration/deceleration of the electronic device 400, and a change in its temperature. The sensor component 414 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact, and may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 414 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 416 is configured to facilitate wired or wireless communication between the electronic device 400 and other devices. The electronic device 400 may access a wireless network based on a communication standard, such as WiFi, a carrier network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 416 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 416 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 400 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the above-described presentation method of multimedia works shown in fig. 1-2.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 404 comprising instructions, executable by the processor 420 of the electronic device 400 to perform the method of presenting a multimedia work as illustrated in fig. 1-6 above, is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, there is also provided a computer program product, wherein the instructions of the computer program product, when executed by the processor 420 of the electronic device 400, cause the electronic device 400 to perform the presentation method of a multimedia work as illustrated in fig. 1 to 6 described above.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be readily apparent to a person skilled in the art, any combination of the above embodiments is possible; each such combination therefore constitutes an embodiment of the disclosure, but for the sake of brevity these combinations are not detailed here.
The method of presentation of a multimedia work provided herein is not inherently related to any particular computer, virtual system, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing a system incorporating aspects of the present disclosure will be apparent from the foregoing description. Moreover, this disclosure is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the present disclosure as described herein, and any descriptions above of specific languages are provided for disclosure of enablement and best mode of the present disclosure.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the disclosure may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the disclosure, various features of the disclosure are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various disclosed aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that is, the claimed disclosure requires more features than are expressly recited in each claim. Rather, as the following claims reflect, disclosed aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this disclosure.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Moreover, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the disclosure and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
Various component embodiments of the disclosure may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the method of presenting a multimedia work according to embodiments of the present disclosure. The present disclosure may also be embodied as device or apparatus programs (e.g., computer programs and computer program products) configured to perform a portion or all of the methods described herein. Such programs implementing the present disclosure may be stored on a computer-readable medium or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the disclosure, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The disclosure may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any ordering; these words may be interpreted as names.

Claims (10)

1. A method for displaying a multimedia work, comprising:
identifying a target multimedia work to be displayed, and determining whether the target multimedia work is a picture and text work;
under the condition that the target multimedia works are determined to be image-text works, candidate work labels of the target multimedia works are obtained;
and removing abnormal work labels which do not accord with preset display conditions from the candidate work labels to obtain target work labels, and displaying the target multimedia works through the target work labels.
2. The method according to claim 1, wherein the preset display condition includes that the candidate work label is related to the work content of the target multimedia work, and the step of removing the abnormal work label which does not meet the preset display condition from the candidate work label to obtain the target work label comprises:
inputting the candidate work label into a label identification model for identification, and determining an abnormal work label unrelated to the content of the image-text work;
and removing the abnormal work label from the candidate work label to obtain a target work label.
3. The method of claim 2, wherein the preset label recognition model is pre-trained by:
acquiring a target sample label from a label library, wherein the target sample label is a verification sample label for identifying the abnormal work label;
training the target sample label based on a preset machine algorithm to obtain a preset label identification model.
4. The method of claim 3, wherein the step of obtaining the target sample label from the label library comprises:
acquiring a preset number of candidate sample labels from a label library, wherein the label library comprises labels which are arranged in a descending order according to the used frequency, and the candidate sample labels refer to the labels which are arranged in the front order;
obtaining sample work content corresponding to the candidate sample label;
respectively obtaining content characteristic vectors of the sample work content and label characteristic vectors of the candidate sample labels;
acquiring the similarity between the label characteristic vector and the content characteristic vector;
and if the similarity is smaller than a similarity threshold value, taking the candidate sample label as a target sample label.
5. The method of claim 4, wherein the step of acquiring the content feature vector of the sample work content and the label feature vector of the candidate sample label comprises:
performing word segmentation on the candidate sample labels and the corresponding sample work content, respectively, to obtain sample label word segments and sample content word segments;
and removing useless word segments of preset types from the sample label word segments and the sample content word segments, respectively, to obtain the label feature vector and the content feature vector, wherein the preset types comprise at least one of an adverb type, an auxiliary word type, and a symbol type.
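Claims 4 and 5 together describe a tokenize-filter-vectorize-compare pipeline. A minimal sketch, assuming bag-of-words vectors and cosine similarity (the patent does not fix a particular vectorization or similarity measure) and part-of-speech tags supplied by an external word segmenter:

```python
import math
from collections import Counter

# Part-of-speech types dropped per claim 5: adverb, auxiliary word, symbol.
USELESS_POS_TYPES = {"adv", "aux", "sym"}

def feature_vector(tagged_tokens):
    """Bag-of-words vector from (word, pos_type) pairs, dropping the
    useless word-segment types named in claim 5."""
    return Counter(w for w, pos in tagged_tokens if pos not in USELESS_POS_TYPES)

def cosine_similarity(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b.get(k, 0) for k in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def is_target_sample(label_tokens, content_tokens, threshold=0.1):
    """Claim 4: a candidate sample label becomes a target (verification)
    sample when its similarity to the sample work content is BELOW the
    threshold, i.e. the label looks unrelated to the content."""
    sim = cosine_similarity(feature_vector(label_tokens),
                            feature_vector(content_tokens))
    return sim < threshold
```

The threshold value and the POS tag names are placeholders; in practice the tagged tokens would come from a segmenter such as a Chinese word-segmentation library.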
6. The method of claim 3, wherein after the step of training on the target sample label based on the preset machine learning algorithm to obtain the preset label recognition model, the method further comprises:
and periodically returning, at a preset time interval, to the step of acquiring the target sample label from the label library, so as to update the preset label recognition model.
7. The method of claim 1, wherein the step of identifying the target multimedia work to be presented and determining whether the target multimedia work is an image-text work comprises:
acquiring a feature image frame in the target multimedia work to be presented, wherein the feature image frame comprises one of a cover image frame and a key image frame;
and inputting the cover image frame or the key image frame into a preset image-text recognition model for recognition, and determining whether the target multimedia work is an image-text work.
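A sketch of this claim's frame-based check, assuming the preset image-text recognition model is available as a callable that returns True for an image-text (text-over-picture) frame; the recognizer, the frame representation, and the threshold are hypothetical placeholders:

```python
def pick_feature_frame(frames, key_index=None):
    """Use the cover frame (first frame) by default, or a designated
    key frame when one has been selected upstream."""
    if not frames:
        raise ValueError("work has no frames")
    return frames[0] if key_index is None else frames[key_index]

def is_image_text_work(frames, recognize_image_text, key_index=None):
    """Run the chosen feature frame through the recognition model."""
    frame = pick_feature_frame(frames, key_index)
    return bool(recognize_image_text(frame))

# Hypothetical usage: frames are stand-in dicts, and the recognizer
# flags frames whose embedded-text coverage exceeds a threshold.
frames = [{"text_ratio": 0.6}, {"text_ratio": 0.1}]
print(is_image_text_work(frames, lambda f: f["text_ratio"] > 0.3))  # True (cover frame)
```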
8. A device for presenting multimedia works, comprising:
an identification module configured to identify a target multimedia work to be presented and determine whether the target multimedia work is an image-text work;
an acquisition module configured to acquire candidate work labels of the target multimedia work when the target multimedia work is determined to be an image-text work;
and a presentation module configured to remove, from the candidate work labels, abnormal work labels that do not meet a preset display condition to obtain target work labels, so that the target multimedia work is presented through the target work labels.
9. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of presenting a multimedia work of any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the method of presentation of a multimedia work of any one of claims 1 to 7.
CN202010097599.0A 2020-02-17 2020-02-17 Multimedia work display method and device, electronic equipment and storage medium Pending CN111414494A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010097599.0A CN111414494A (en) 2020-02-17 2020-02-17 Multimedia work display method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111414494A true CN111414494A (en) 2020-07-14

Family

ID=71492713

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010097599.0A Pending CN111414494A (en) 2020-02-17 2020-02-17 Multimedia work display method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111414494A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101448100A (en) * 2008-12-26 2009-06-03 西安交通大学 Method for extracting video captions quickly and accurately
US9280742B1 (en) * 2012-09-05 2016-03-08 Google Inc. Conceptual enhancement of automatic multimedia annotations
CN109684513A (en) * 2018-12-14 2019-04-26 北京奇艺世纪科技有限公司 A kind of low quality video recognition methods and device
CN110177295A (en) * 2019-06-06 2019-08-27 北京字节跳动网络技术有限公司 Processing method, device and the electronic equipment that subtitle crosses the border
CN110287375A (en) * 2019-05-30 2019-09-27 北京百度网讯科技有限公司 The determination method, apparatus and server of video tab
CN110781347A (en) * 2019-10-23 2020-02-11 腾讯科技(深圳)有限公司 Video processing method, device, equipment and readable storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination