CN110688529A - Method and device for retrieving video and electronic equipment

Method and device for retrieving video and electronic equipment

Info

Publication number
CN110688529A
CN110688529A
Authority
CN
China
Prior art keywords
video
word
retrieving
text
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910917233.0A
Other languages
Chinese (zh)
Inventor
李伟健
王长虎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN201910917233.0A
Publication of CN110688529A
Current legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7844 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using original textual content or text extracted from visual content or transcript of audio data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73 Querying
    • G06F16/735 Filtering based on additional data, e.g. user or group profiles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/7867 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/953 Querying, e.g. by the use of web search engines
    • G06F16/9535 Search customisation based on user profiles and personalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Library & Information Science (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiments of the present disclosure disclose a method and an apparatus for retrieving videos, and an electronic device. One embodiment of the method comprises: acquiring an associated text of a target video; performing word segmentation on the associated text to obtain at least one word; retrieving at least one video related to the target video; for each video in the at least one video, acquiring the associated text of the video, and determining the number of times that the associated text of the video includes a word of the at least one word; and sorting the at least one video according to the determined numbers of times to generate a video sequence. The embodiment adjusts the video retrieval result, thereby making effective use of the information of the videos and facilitating targeted video pushing.

Description

Method and device for retrieving video and electronic equipment
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular, to a method and an apparatus for retrieving videos, and an electronic device.
Background
With the continuing progress of science and technology and the popularization of the internet, more and more people transmit information and share moments of life through videos. However, since the total number of videos is so large, there is a need to retrieve useful videos from the huge amount available.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose methods, apparatuses and electronic devices for retrieving videos.
In a first aspect, some embodiments of the present disclosure provide a method for retrieving video, the method comprising: acquiring an associated text of a target video; performing word segmentation on the associated text to obtain at least one word; retrieving at least one video related to the target video; for each video in the at least one video, acquiring the associated text of the video, and determining the number of times that the associated text of the video includes a word of the at least one word; and sorting the at least one video according to the determined numbers of times to generate a video sequence.
In a second aspect, some embodiments of the present disclosure provide a method for retrieving video, the method comprising: acquiring an associated text of a target video; performing word segmentation on the associated text to obtain at least one word; retrieving at least one video related to the target video; for each video in the at least one video, acquiring the associated text of the video, and determining the number of times that the associated text of the video includes a word of the at least one word; acquiring user attention information of the at least one video; sorting the at least one video according to the user attention information and the determined numbers of times to generate a video sequence; and pushing the video sequence to a target terminal.
In a third aspect, some embodiments of the present disclosure provide an apparatus for retrieving video, the apparatus comprising: an acquisition unit configured to acquire the associated text of a target video; a processing unit configured to perform word segmentation on the associated text to obtain at least one word, retrieve at least one video based on the associated text, acquire the associated text of each video in the at least one video, and determine the number of times that the associated text of each video includes a word of the at least one word; and a generating unit configured to sort the at least one video according to the numbers of times to generate a video sequence.
In a fourth aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; a storage device having one or more programs stored thereon which, when executed by one or more processors, cause the one or more processors to implement a method as in any one of the first and second aspects.
In a fifth aspect, some embodiments of the disclosure provide a computer readable medium having a computer program stored thereon, where the program when executed by a processor implements a method as in any of the first and second aspects.
One of the above embodiments of the present disclosure has the following beneficial effect: the associated text of the target video is acquired and word segmentation is performed on it to obtain at least one word. Then, at least one video related to the target video is retrieved. Further, for each video in the at least one video, the associated text of the video is acquired, and the number of times the associated text of the video includes a word of the at least one word is determined. Finally, the at least one video is sorted according to the determined numbers of times to generate a video sequence. According to actual needs, the video sequence can be used as the video retrieval result; the retrieval result is thereby adjusted, so that the information of the videos is used effectively, which facilitates targeted video pushing.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1 is a schematic diagram of an application scenario of a method of retrieving video according to some embodiments of the present disclosure.
Fig. 2 is a flow diagram of some embodiments of a method for retrieving video according to the present disclosure.
Fig. 3 is a flow diagram of further embodiments of methods for retrieving video according to the present disclosure.
Fig. 4 is a schematic block diagram of some embodiments of an apparatus for retrieving video according to the present disclosure.
FIG. 5 is a schematic structural diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the relevant invention are shown in the drawings. The embodiments and the features of the embodiments in the present disclosure may be combined with each other in the absence of conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It should be noted that the modifiers "a", "an" and "the" in this disclosure are intended to be illustrative rather than limiting; those skilled in the art will understand that they mean "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 shows a schematic diagram of an application scenario for a method for retrieving video to which some embodiments of the present disclosure may be applied.
In the application scenario of fig. 1, first, a target video is obtained, and the text associated with the target video is obtained. Next, the electronic device 101 (shown as a server in the figure) performs word segmentation processing 104 on the associated text 103 of the target video 102 to obtain at least one word. Then, step 105 is performed to retrieve at least one video related to the target video. Next, for each video in the at least one video, the associated text 106 of the video is obtained, and step 107 is performed to determine the number of times the associated text of the video includes a word of the at least one word. Finally, the at least one video is sorted according to the determined numbers of times, generating a video sequence 108.
As an example, the user selects a video entitled "the first strand of sunshine in the early morning" as the target video. Next, the electronic device 101 (shown as a server in the figure) performs word segmentation on the associated text (the title "the first strand of sunshine in the early morning") and obtains "early morning", "first strand" and "sunshine". Then, retrieval is performed according to the obtained words, yielding four videos related to the target video, whose associated texts are "push-open window in the early morning", "air in the early morning", "first strand of sunshine on my face" and "beautiful sunshine in the early morning" respectively. Next, the numbers of times the associated texts include the words are determined to be 1, 2, 2 and 3 respectively. Finally, the videos are sorted by these numbers to generate a video sequence. The titles of the videos in the video sequence are displayed in the order "beautiful sunshine in the early morning", "air in the early morning", "first strand of sunshine on my face" and "push-open window in the early morning".
The electronic device 101 may be hardware or software. When it is hardware, it may be implemented as a distributed cluster formed by a plurality of servers or terminal devices, or as a single server or terminal device. When it is software, it may be implemented as a plurality of pieces of software or software modules (for example, to provide distributed services), or as a single piece of software or software module. No specific limitation is made here.
With continued reference to fig. 2, a flow 200 of some embodiments of a method of retrieving video in accordance with the present disclosure is shown. The method for retrieving the video comprises the following steps:
Step 201, acquiring the associated text of the target video.
In some embodiments, an execution subject of the method for retrieving video (e.g., the server shown in fig. 1) may obtain the associated text of the target video through a wired or wireless connection. Here, the target video may be a video specified by a user, or a video selected according to a condition, for example, a video whose play count exceeds a preset threshold within a preset time period. The associated text may be the title of the target video, or text obtained by performing speech recognition on the target video. For example, given a landscape video titled "city landscape", this video may be set as the target video, and its associated text may be "city landscape".
Step 202, performing word segmentation processing on the associated text to obtain at least one word.
In some embodiments, the execution subject may perform word segmentation on the associated text. Here, word segmentation means splitting a piece of text into individual words, yielding at least one word; a word may be a single character or a term comprising at least two characters. For example, given a target video whose associated text is "city landscape", word segmentation may yield "city" and "landscape".
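As an illustrative sketch only (the disclosure does not name a tokenizer; jieba is assumed here purely for demonstration), the word segmentation step might look like this:

```python
# Minimal word-segmentation sketch. The tokenizer choice (jieba) is an
# assumption; the disclosure only requires splitting text into words.
import jieba

def segment(associated_text: str) -> list[str]:
    """Split a piece of associated text into individual words."""
    return [w for w in jieba.cut(associated_text) if w.strip()]

# The original associated texts are presumably Chinese; "城市风光" is
# the "city landscape" example from the text.
print(segment("城市风光"))  # e.g. ['城市', '风光'] ("city", "landscape")
```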
Step 203, at least one video related to the target video is retrieved.
In some embodiments, the execution subject may retrieve at least one video related to the target video in various ways. As an example, the execution subject may retrieve by the title of the target video. For example, if the title of the target video is "guitar teaching", the execution subject may retrieve with the query "guitar teaching" to obtain at least one video.
In some optional implementations of some embodiments, the execution subject may perform word embedding on each word in the at least one word to obtain a word vector of the word. Then, a text vector of the associated text is generated based on the obtained at least one word vector. Finally, at least one video is retrieved from a target video library according to the text vector, as the at least one video related to the target video.
Here, word embedding is the collective name for a set of language-modeling and representation-learning techniques in Natural Language Processing (NLP). Conceptually, it embeds a high-dimensional space, whose dimension is the number of all words, into a continuous vector space of much lower dimension, mapping each word or phrase to a vector over the real numbers. The word vectors are then combined to obtain the text vector of the associated text. For example, the text "city landscape" is segmented into "city" and "landscape". The segmented words are mapped to vectors, each word obtaining a unique vector. For example, in the above example, the vector for "city" may be "0" and the vector for "landscape" may be "1".
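A minimal sketch of this step, assuming mean pooling as the way the word vectors are combined (the disclosure only says they are combined) and using a hypothetical toy embedding table:

```python
import numpy as np

def text_vector(words: list[str], embeddings: dict, dim: int) -> np.ndarray:
    """Combine per-word vectors into one text vector (mean pooling assumed)."""
    vecs = [embeddings[w] for w in words if w in embeddings]
    if not vecs:
        return np.zeros(dim)  # no known words: fall back to a zero vector
    return np.mean(vecs, axis=0)

# Hypothetical 2-dimensional embedding table for the example words.
emb = {"city": np.array([0.1, 0.3]), "landscape": np.array([0.5, 0.2])}
print(text_vector(["city", "landscape"], emb, dim=2))  # [0.3  0.25]
```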
In some optional implementations of some embodiments, the execution subject may input the target video into a pre-trained recommendation model to obtain the video features of the target video. Here, the recommendation model may be a neural network model for outputting the video features of a video, for example a Recurrent Neural Network (RNN). Specifically, a video feature may be information about the target video, for example the release time of the video or image features of the video.
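The disclosure does not specify the recommendation model beyond its being a pre-trained neural network (an RNN is given as an example) that outputs video features. A minimal sketch under that reading, with the GRU architecture, all dimensions, and the per-frame input features assumed:

```python
import torch
import torch.nn as nn

class RecommendationModel(nn.Module):
    """Maps a sequence of per-frame features to one video-feature vector."""
    def __init__(self, frame_dim: int = 512, feature_dim: int = 128):
        super().__init__()
        self.rnn = nn.GRU(frame_dim, feature_dim, batch_first=True)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, n_frames, frame_dim)
        _, last_hidden = self.rnn(frames)
        return last_hidden.squeeze(0)   # (batch, feature_dim) video feature

model = RecommendationModel()           # in practice, pre-trained weights
video = torch.randn(1, 30, 512)         # 30 frames of hypothetical features
print(model(video).shape)               # torch.Size([1, 128])
```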
In some optional implementations of some embodiments, the execution subject may retrieve at least one video from the target video library according to the text vector and the video features. Here, the target video library stores videos in association with their text vectors and video features.
In some optional implementations of some embodiments, the retrieving at least one video from the target video library according to the text vector may further be performed as follows: first, a first number of videos are retrieved from the target video library based on the text vector. Then, a second number of videos are retrieved from the target video library according to the video characteristics. Finally, the at least one video is generated based on the first number of videos and the second number of videos. Here, the above-mentioned "first number" and "second number" may be numbers set in advance.
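A sketch of this two-branch retrieval follows. The cosine-similarity metric, the dictionary record layout, and the de-duplicating union used as the merge rule are all assumptions for illustration; the disclosure fixes none of them.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def top_k(query_vec, library, field, k):
    """Top-k library entries by cosine similarity on one field."""
    return sorted(library, key=lambda v: cosine(query_vec, v[field]),
                  reverse=True)[:k]

def retrieve(target, library, first_number=10, second_number=10):
    # First branch: first_number videos by text vector.
    by_text = top_k(target["text_vector"], library, "text_vector", first_number)
    # Second branch: second_number videos by video feature.
    by_feat = top_k(target["video_feature"], library, "video_feature",
                    second_number)
    # Merge the two result sets, dropping duplicates by video id.
    seen, merged = set(), []
    for v in by_text + by_feat:
        if v["id"] not in seen:
            seen.add(v["id"])
            merged.append(v)
    return merged
```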
In some optional implementations of some embodiments, the retrieving at least one video from the target video library according to the text vector may further be performed as follows: first, a third number of videos are retrieved from the target video library according to the first video information of the target video. Then, for each video of the third number of videos, a similarity between second video information of the video and second video information of the target video is determined. And finally, selecting a video from the third number of videos according to the obtained similarity to obtain the at least one video. Wherein the first video information and the second video information are respectively one of the following information of a video and are different from each other: text vectors, video features.
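A sketch of this coarse-then-fine variant, assuming the text vector serves as the first video information and the video feature as the second (the disclosure permits the opposite assignment, as long as the two differ):

```python
import numpy as np

def cos(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def retrieve_then_rerank(target, library, third_number=20, final_k=10):
    # Coarse stage: third_number candidates by the first video
    # information (text vector assumed).
    coarse = sorted(library,
                    key=lambda v: cos(target["text_vector"], v["text_vector"]),
                    reverse=True)[:third_number]
    # Fine stage: re-rank candidates by similarity of the second video
    # information (video feature assumed) and keep the best final_k.
    coarse.sort(key=lambda v: cos(target["video_feature"], v["video_feature"]),
                reverse=True)
    return coarse[:final_k]
```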
Step 204, for each video in the at least one video, acquiring a related text of the video, and determining the number of times that the related text of the video includes a word in the at least one word.
In some embodiments, for each video in the at least one video retrieved in step 203, the associated text of the video may first be obtained. Then, the number of times the associated text of the video includes a word of the at least one word may be determined. As an example, the execution subject may match the associated text of the video against each word of the at least one word as a keyword, obtaining the number of times the associated text of the video includes the words.
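One concrete reading of this counting step (substring matching via str.count; the disclosure does not fix the matching rule):

```python
def count_hits(associated_text: str, words: list[str]) -> int:
    # Non-overlapping occurrences of each word, summed over the word
    # list; one reading of "the number of times the associated text
    # includes a word in the at least one word".
    return sum(associated_text.count(w) for w in words)

print(count_hits("the morning sun warms the morning air",
                 ["morning", "sun"]))  # 3
```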
Step 205, sorting the at least one video according to the determined numbers of times to generate a video sequence.
In some embodiments, the execution subject may sort the at least one video in order of the determined numbers of times to generate a video sequence.
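And a minimal sketch of the sorting step itself, under the same counting assumption as above:

```python
def generate_sequence(videos: list[dict], words: list[str]) -> list[dict]:
    """Sort retrieved videos by occurrence count, highest first."""
    def hits(v):
        return sum(v["associated_text"].count(w) for w in words)
    return sorted(videos, key=hits, reverse=True)

videos = [{"title": "A", "associated_text": "city night scene"},
          {"title": "B", "associated_text": "city landscape in the city"}]
print([v["title"] for v in generate_sequence(videos, ["city", "landscape"])])
# ['B', 'A']: B scores 3 (two "city" plus one "landscape"), A scores 1
```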
With continued reference to fig. 3, a flow 300 of further embodiments of a method of retrieving video in accordance with the present disclosure is shown. The method for retrieving the video comprises the following steps:
step 301, acquiring a related text of a target video.
Step 302, performing word segmentation processing on the associated text to obtain at least one word.
Step 303, retrieving at least one video related to the target video.
Step 304, for each video in the at least one video, acquiring an associated text of the video, and determining the number of times that the associated text of the video includes a word in the at least one word.
In some embodiments, the specific implementations and technical effects of steps 301 to 304 may refer to the embodiment corresponding to fig. 2, and are not described again here.
And 305, acquiring user attention information of the at least one video.
In some embodiments, the execution subject may acquire the user attention information of the at least one video in various ways. For example, it may acquire the user attention information from background statistics. Here, the user attention information includes, but is not limited to, at least one of: the number of times a video is liked, the number of times a video is forwarded, and the number of times a video is played. For example, the user attention information of a video may be: liked 56 times, forwarded 108 times, and played 166 times.
Step 306, sorting the at least one video according to the user attention information and the numbers of times to generate a video sequence.
In some embodiments, the execution subject may sort the at least one video according to the user attention information and the determined numbers of times. For example, the user attention information may be the play count of a video.
As an example, the associated text of the target video is "the first strand of sunshine in the early morning". First, word segmentation is performed on the associated text to obtain "early morning", "first strand" and "sunshine". Second, retrieval is performed according to the obtained words, yielding four videos related to the target video, whose associated texts are "push-open window in the early morning", "air in the early morning", "first strand of sunshine on my face" and "beautiful sunshine in the early morning" respectively. Third, the numbers of times the associated texts include the words are determined to be 1, 2, 2 and 3 respectively, and the play counts of the videos are 15, 28, 36 and 49 respectively. Finally, the videos are sorted by the user attention information (the play counts) to obtain a video sequence, and that sequence is then adjusted according to the determined numbers of times to generate the final video sequence. The titles of the videos in the video sequence are displayed in the order "beautiful sunshine in the early morning", "first strand of sunshine on my face", "air in the early morning" and "push-open window in the early morning".
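One ordering rule consistent with this worked example is a play-count sort followed by a stable re-sort on the occurrence counts, so that videos with equal counts keep their attention-based order. A sketch under that assumption (the field names are hypothetical):

```python
def rank_with_attention(videos: list[dict], words: list[str]) -> list[dict]:
    def hits(v):
        return sum(v["associated_text"].count(w) for w in words)
    # First order by the attention signal (play count assumed) ...
    by_attention = sorted(videos, key=lambda v: v["play_count"], reverse=True)
    # ... then adjust by the occurrence counts; Python's sort is stable,
    # so equal-count videos keep their attention-based order.
    return sorted(by_attention, key=hits, reverse=True)
```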
Step 307, the video sequence is pushed to the target terminal.
In some embodiments, the execution subject may push the video sequence to the target terminal in various ways, for example over a Bluetooth connection with the target terminal.
With further reference to fig. 4, as an implementation of the methods shown in the above figures, the present disclosure provides some embodiments of an apparatus for retrieving video. These apparatus embodiments correspond to the method embodiments shown in fig. 2, and the apparatus may specifically be applied in various electronic devices.
As shown in fig. 4, the apparatus 400 for retrieving video of some embodiments includes: an acquisition unit 401, a word segmentation unit 402, a retrieval unit 403, a processing unit 404, and a generation unit 405. The acquisition unit 401 is configured to acquire the associated text of the target video. The word segmentation unit 402 is configured to perform word segmentation on the associated text to obtain at least one word. The retrieval unit 403 is configured to retrieve at least one video related to the target video. The processing unit 404 is configured to, for each video of the at least one video, acquire the associated text of the video and determine the number of times the associated text of the video includes a word of the at least one word. The generation unit 405 is configured to sort the at least one video according to the determined numbers of times to generate a video sequence.
In some optional implementations of some embodiments, the retrieving unit 403 may include: a word embedding subunit configured to perform word embedding on each word in the at least one word to obtain a word vector of the word; a generating subunit configured to generate a text vector of the associated text based on the obtained at least one word vector; and the first retrieval subunit is configured to retrieve at least one video from the target video library according to the text vector.
In some optional implementations of some embodiments, the apparatus further includes: and the input model unit is configured to input the target video into a pre-trained recommendation model to obtain the video characteristics of the target video.
In some optional implementations of some embodiments, the input model unit may be further configured as a second retrieving subunit configured to retrieve at least one video from the target video library according to the text vector and the video feature.
In some optional implementations of some embodiments, the input model unit may include: a third retrieving subunit, configured to retrieve a first number of videos from the target video library according to the text vector; a fourth retrieving subunit, configured to retrieve a second number of videos from the target video library according to the video characteristics; a second generating subunit configured to generate the at least one video based on the first number of videos and the second number of videos.
In some optional implementations of some embodiments, the input model unit may include: a fifth retrieving subunit configured to retrieve a third number of videos from the target video library according to the first video information of the target video; a determining subunit configured to determine, for each of the third number of videos, a similarity between second video information of the video and second video information of the target video; a selecting subunit configured to select a video from the third number of videos according to the obtained similarity to obtain the at least one video, wherein the first video information and the second video information are respectively one of the following information of the videos and are different from each other: text vectors, video features.
Referring now to fig. 5, a schematic diagram of an electronic device (e.g., the server of fig. 1) 500 suitable for use in implementing some embodiments of the present disclosure is shown. The server shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 5, electronic device 500 may include a processing means (e.g., central processing unit, graphics processor, etc.) 501 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage means 508 into a Random Access Memory (RAM) 503. The RAM 503 also stores various programs and data necessary for the operation of the electronic device 500. The processing device 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
Generally, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage devices 508 including, for example, magnetic tape, hard disk, etc.; and a communication device 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 illustrates an electronic device 500 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 5 may represent one device or may represent multiple devices as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network via the communication means 509, or installed from the storage means 508, or installed from the ROM 502. The computer program, when executed by the processing device 501, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described above in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the apparatus, or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire an associated text of a target video; perform word segmentation on the associated text to obtain at least one word; retrieve at least one video related to the target video; for each video in the at least one video, acquire the associated text of the video, and determine the number of times that the associated text of the video includes a word of the at least one word; and sort the at least one video according to the determined numbers of times to generate a video sequence.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software, and may also be implemented by hardware. The described units may also be provided in a processor, and may be described as: a processor includes an acquisition unit, a word segmentation unit, a retrieval unit, a processing unit, and a generation unit. Here, the names of these units do not constitute a limitation to the unit itself in some cases, and for example, the acquisition unit may also be described as a unit of "acquiring the associated text of the target video".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In accordance with one or more embodiments of the present disclosure, there is provided a method for retrieving video, including: acquiring an associated text of a target video; performing word segmentation on the associated text to obtain at least one word; retrieving at least one video related to the target video; for each video in the at least one video, acquiring the associated text of the video, and determining the number of times that the associated text of the video includes a word of the at least one word; and sorting the at least one video according to the determined numbers of times to generate a video sequence.
According to one or more embodiments of the present disclosure, retrieving at least one video related to the target video includes: performing word embedding on each word in the at least one word to obtain a word vector of the word; generating a text vector of the associated text based on the obtained at least one word vector; and retrieving at least one video from the target video library according to the text vector.
In accordance with one or more embodiments of the present disclosure, the method further comprises: and inputting the target video into a pre-trained recommendation model to obtain the video characteristics of the target video.
According to one or more embodiments of the present disclosure, retrieving at least one video from a target video library comprises: and retrieving at least one video from the target video library according to the text vector and the video characteristics.
According to one or more embodiments of the present disclosure, retrieving at least one video from a target video library according to the text vector includes: retrieving a first number of videos from the target video library based on the text vector; retrieving a second number of videos from the target video library based on the video characteristics; generating the at least one video based on the first number of videos and the second number of videos.
According to one or more embodiments of the present disclosure, retrieving at least one video from a target video library according to the text vector includes: retrieving a third number of videos from the target video library according to the first video information of the target video; for each video in the third number of videos, determining similarity between second video information of the video and second video information of the target video; selecting a video from the third number of videos according to the obtained similarity to obtain the at least one video; wherein the first video information and the second video information are respectively one of the following information of a video and are different from each other: text vectors, video features.
According to one or more embodiments of the present disclosure, sorting the at least one video according to the numbers of times to generate a video sequence includes: acquiring user attention information of the at least one video; sorting the at least one video according to the user attention information and the numbers of times to generate a video sequence; and pushing the video sequence to a target terminal.
According to one or more embodiments of the present disclosure, there is provided an apparatus for retrieving video, including: an acquisition unit configured to acquire the associated text of a target video; a word segmentation unit configured to perform word segmentation on the associated text to obtain at least one word; a retrieval unit configured to retrieve at least one video related to the target video; a processing unit configured to, for each video of the at least one video, acquire the associated text of the video and determine the number of times that the associated text of the video includes a word of the at least one word; and a generating unit configured to sort the at least one video according to the numbers of times to generate a video sequence.
According to one or more embodiments of the present disclosure, the above retrieval unit includes: a word embedding subunit configured to perform word embedding on each word in the at least one word to obtain a word vector of the word; a generating subunit configured to generate a text vector of the associated text based on the obtained at least one word vector; and the first retrieval subunit is configured to retrieve at least one video from the target video library according to the text vector.
According to one or more embodiments of the present disclosure, the apparatus further includes: and the input model unit is configured to input the target video into a pre-trained recommendation model to obtain the video characteristics of the target video.
According to one or more embodiments of the present disclosure, the input model unit may be further configured as a second retrieving subunit configured to retrieve at least one video from the target video library according to the text vector and the video feature.
According to one or more embodiments of the present disclosure, the input model unit may include: a third retrieving subunit, configured to retrieve a first number of videos from the target video library according to the text vector; a fourth retrieving subunit, configured to retrieve a second number of videos from the target video library according to the video characteristics; a second generating subunit configured to generate the at least one video based on the first number of videos and the second number of videos.
According to one or more embodiments of the present disclosure, the input model unit may include: a fifth retrieving subunit configured to retrieve a third number of videos from the target video library according to the first video information of the target video; a determining subunit configured to determine, for each of the third number of videos, a similarity between second video information of the video and second video information of the target video; a selecting subunit configured to select a video from the third number of videos according to the obtained similarity to obtain the at least one video, wherein the first video information and the second video information are respectively one of the following information of the videos and are different from each other: text vectors, video features.
According to one or more embodiments of the present disclosure, there is provided an electronic device including: one or more processors; a storage device having one or more programs stored thereon which, when executed by one or more processors, cause the one or more processors to implement a method as described in any of the embodiments above.
According to one or more embodiments of the present disclosure, a computer-readable medium is provided, on which a computer program is stored, wherein the program, when executed by a processor, implements the method as described in any of the embodiments above.
The foregoing description is only of preferred embodiments of the present disclosure and an explanation of the technical principles employed. Those skilled in the art will appreciate that the inventive scope of the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features; it also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with technical features of similar function disclosed in (but not limited to) the embodiments of the present disclosure.

Claims (12)

1. A method for retrieving video, comprising:
acquiring a related text of a target video;
performing word segmentation processing on the associated text to obtain at least one word;
retrieving at least one video related to the target video;
for each video in the at least one video, obtaining associated text of the video, and determining the number of times that the associated text of the video includes a word in the at least one word;
and sequencing the at least one video according to the determined times to generate a video sequence.
2. The method of claim 1, wherein said retrieving at least one video related to the target video comprises:
performing word embedding on each word in the at least one word to obtain a word vector of the word;
generating a text vector of the associated text based on the obtained at least one word vector;
and retrieving at least one video from a target video library according to the text vector.
3. The method of claim 2, wherein the method further comprises:
and inputting the target video into a pre-trained recommendation model to obtain the video characteristics of the target video.
4. The method of claim 3, wherein said retrieving at least one video from a target video library comprises:
and retrieving at least one video from the target video library according to the text vector and the video characteristics.
5. The method of claim 3, wherein said retrieving at least one video from a target video library based on said text vector comprises:
retrieving a first number of videos from the target video library according to the text vector;
retrieving a second number of videos from the target video library according to the video characteristics;
generating the at least one video based on the first number of videos and the second number of videos.
6. The method of claim 3, wherein said retrieving at least one video from a target video library based on said text vector comprises:
retrieving a third number of videos from the target video library according to the first video information of the target video;
for each video of the third number of videos, determining a similarity of second video information of the video and second video information of the target video;
selecting a video from the third number of videos according to the obtained similarity to obtain the at least one video;
wherein the first video information and the second video information are respectively one of the following information of a video and are different from each other: text vectors, video features.
7. The method of claim 1, wherein said sorting said at least one video according to said number of times, generating a video sequence, comprises:
acquiring user attention information of the at least one video;
sorting the at least one video according to the user attention information and the numbers of times to generate a video sequence;
and pushing the video sequence to a target terminal.
8. An apparatus for retrieving video, comprising:
an acquisition unit configured to acquire an associated text of a target video;
the word segmentation unit is configured to perform word segmentation processing on the associated text to obtain at least one word;
a retrieval unit configured to retrieve at least one video related to the target video;
a processing unit configured to, for each of the at least one video, obtain associated text of the video and determine a number of times the associated text of the video includes a term of the at least one term;
and the generating unit is configured to sort the at least one video according to the times to generate a video sequence.
9. The apparatus of claim 8, wherein the retrieving unit comprises:
a word embedding subunit configured to perform word embedding on each word in the at least one word to obtain a word vector of the word;
a generating subunit configured to generate a text vector of the associated text based on the obtained at least one word vector;
a first retrieving subunit configured to retrieve at least one video from a target video library based on the text vector.
10. The apparatus of claim 8, wherein the apparatus further comprises:
and the input model unit is configured to input the target video into a pre-trained recommendation model to obtain the video characteristics of the target video.
11. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
12. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-7.
CN201910917233.0A 2019-09-26 2019-09-26 Method and device for retrieving video and electronic equipment Pending CN110688529A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910917233.0A CN110688529A (en) 2019-09-26 2019-09-26 Method and device for retrieving video and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910917233.0A CN110688529A (en) 2019-09-26 2019-09-26 Method and device for retrieving video and electronic equipment

Publications (1)

Publication Number Publication Date
CN110688529A (en) 2020-01-14

Family

ID=69110415

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910917233.0A Pending CN110688529A (en) 2019-09-26 2019-09-26 Method and device for retrieving video and electronic equipment

Country Status (1)

Country Link
CN (1) CN110688529A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230297613A1 (en) * 2020-09-30 2023-09-21 Nec Corporation Video search system, video search method, and computer program

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104462573A (en) * 2014-12-29 2015-03-25 北京奇艺世纪科技有限公司 Method and device for displaying video retrieval results
CN106919651A (en) * 2017-01-22 2017-07-04 北京奇艺世纪科技有限公司 The search ordering method and device of external website video
CN107066621A (en) * 2017-05-11 2017-08-18 腾讯科技(深圳)有限公司 A kind of search method of similar video, device and storage medium
CN108228915A (en) * 2018-03-29 2018-06-29 华南理工大学 A kind of video retrieval method based on deep learning


Similar Documents

Publication Publication Date Title
CN110688528B (en) Method, apparatus, electronic device, and medium for generating classification information of video
CN110969012B (en) Text error correction method and device, storage medium and electronic equipment
CN111738010B (en) Method and device for generating semantic matching model
CN109933217B (en) Method and device for pushing sentences
CN111414543B (en) Method, device, electronic equipment and medium for generating comment information sequence
CN112650841A (en) Information processing method and device and electronic equipment
CN111354345B (en) Method, apparatus, device and medium for generating speech model and speech recognition
US11763204B2 (en) Method and apparatus for training item coding model
CN110245334B (en) Method and device for outputting information
CN110457325B (en) Method and apparatus for outputting information
US20230367972A1 (en) Method and apparatus for processing model data, electronic device, and computer readable medium
CN116128055A (en) Map construction method, map construction device, electronic equipment and computer readable medium
CN112182255A (en) Method and apparatus for storing media files and for retrieving media files
CN111008213A (en) Method and apparatus for generating language conversion model
CN113468344A (en) Entity relationship extraction method and device, electronic equipment and computer readable medium
CN110598049A (en) Method, apparatus, electronic device and computer readable medium for retrieving video
CN110688529A (en) Method and device for retrieving video and electronic equipment
CN115203378B (en) Retrieval enhancement method, system and storage medium based on pre-training language model
WO2022121859A1 (en) Spoken language information processing method and apparatus, and electronic device
CN111754984B (en) Text selection method, apparatus, device and computer readable medium
CN114925680A (en) Logistics interest point information generation method, device, equipment and computer readable medium
CN110633476B (en) Method and device for acquiring knowledge annotation information
CN112328751A (en) Method and device for processing text
CN111666449A (en) Video retrieval method, video retrieval device, electronic equipment and computer readable medium
CN111782933A (en) Method and device for recommending book list

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination