CN112423112B - Method and equipment for releasing video information - Google Patents

Method and equipment for releasing video information

Info

Publication number
CN112423112B
CN112423112B CN202011279513.2A
Authority
CN
China
Prior art keywords
information
original video
video information
video
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011279513.2A
Other languages
Chinese (zh)
Other versions
CN112423112A (en)
Inventor
钟名铖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Yijiao Wenshu Technology Co ltd
Original Assignee
Beijing Yijiao Wenshu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yijiao Wenshu Technology Co ltd filed Critical Beijing Yijiao Wenshu Technology Co ltd
Priority to CN202011279513.2A priority Critical patent/CN112423112B/en
Publication of CN112423112A publication Critical patent/CN112423112A/en
Application granted granted Critical
Publication of CN112423112B publication Critical patent/CN112423112B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44222Analytics of user selections, e.g. selection of programs or purchase activity
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440245Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display the reformatting operation being performed only on part of the stream, e.g. a region of the image or a time segment
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4405Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving video stream decryption
    • H04N21/44055Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving video stream decryption by partially decrypting, e.g. decrypting a video stream that has been partially encrypted
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The purpose of the application is to provide a method and device for publishing video information, wherein the method comprises the following steps: in response to a learning trigger operation performed by a first user on previous video information, restoring the previous video information according to the processing procedure information corresponding to the previous video information, to obtain first original video information corresponding to the previous video information; acquiring second original video information shot by the first user with reference to the first original video information; processing the second original video information according to the processing procedure information to obtain subsequent video information; and publishing the subsequent video information, wherein the subsequent video information includes identification information of the previous video information.

Description

Method and equipment for releasing video information
Technical Field
The present application relates to the field of communications, and in particular, to a technique for distributing video information.
Background
With the development of communication technology, people increasingly use terminal devices for entertainment and interaction. In particular, short-video production has developed rapidly, and a large number of short videos are uploaded and published every day. Users publish the short videos they shoot and process themselves through corresponding applications on their terminal devices, and interact with people who share the same hobbies and interests. The greatest advantage of short video over long video is user-generated content (UGC), but many users lack the skill to shoot good short videos.
Disclosure of Invention
An object of the present application is to provide a method and apparatus for distributing video information.
According to an aspect of the present application, there is provided a method of distributing video information, the method comprising:
responding to a learning trigger operation executed by a first user for previous video information, and restoring the previous video information according to processing process information corresponding to the previous video information to obtain first original video information corresponding to the previous video information;
acquiring second original video information shot by the first user with reference to the first original video information;
processing the second original video information according to the processing process information to obtain subsequent video information;
and releasing the subsequent video information, wherein the subsequent video information comprises the identification information of the prior video information.
According to an aspect of the present application, there is provided a first user equipment for distributing video information, the apparatus comprising:
the one-to-one module is used for responding to the learning triggering operation executed by a first user aiming at the prior video information, and restoring the prior video information according to the processing process information corresponding to the prior video information to obtain first original video information corresponding to the prior video information;
a second module, configured to obtain second original video information that is shot by the first user with reference to the first original video information;
a third module, configured to process the second original video information according to the processing procedure information, and obtain subsequent video information;
and a fourth module for publishing the subsequent video information, wherein the subsequent video information comprises the identification information of the previous video information.
According to an aspect of the present application, there is provided an apparatus for distributing video information, wherein the apparatus includes:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to:
responding to a learning trigger operation executed by a first user for previous video information, and restoring the previous video information according to processing process information corresponding to the previous video information to obtain first original video information corresponding to the previous video information;
acquiring second original video information shot by the first user with reference to the first original video information;
processing the second original video information according to the processing process information to obtain subsequent video information;
and releasing the subsequent video information, wherein the subsequent video information comprises the identification information of the prior video information.
According to one aspect of the application, there is provided a computer-readable medium storing instructions that, when executed, cause a system to:
responding to a learning trigger operation executed by a first user for previous video information, and restoring the previous video information according to processing process information corresponding to the previous video information to obtain first original video information corresponding to the previous video information;
acquiring second original video information shot by the first user with reference to the first original video information;
processing the second original video information according to the processing process information to obtain post video information;
and releasing the subsequent video information, wherein the subsequent video information comprises the identification information of the prior video information.
Compared with the prior art, for the shooting user of an original video, having other users learn the original video increases its reach, and the spread of the corresponding subsequent videos also spreads recognition of the original work, forming a closed loop of benefits that facilitates the sharing and promotion of the original video. For a video-learning user, the method improves video shooting and processing skills, producing higher-quality videos and greatly improving video production efficiency. For a target application (for example, a video application), more creators can become skilled producers, enriching the application's content ecosystem; furthermore, the combination of social networking and video can be promoted, strengthening users' willingness to publish videos in the target application and advancing the construction of its social user system.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments made with reference to the following drawings:
FIG. 1 illustrates a flow diagram of a method of distributing video information according to one embodiment of the present application;
FIG. 2 illustrates a block diagram of a first user equipment for distributing video information, according to one embodiment of the present application;
FIG. 3 illustrates an exemplary system that can be used to implement the various embodiments described in this application.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
The present application is described in further detail below with reference to the attached figures.
In a typical configuration of the present application, the terminal, the device serving the network, and the trusted party each include one or more processors (e.g., Central Processing Units (CPUs)), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory. Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, Phase-Change Memory (PCM), Programmable Random Access Memory (PRAM), Static Random-Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Disc (DVD) or other optical storage, magnetic tape storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
The device referred to in this application includes, but is not limited to, a user device, a network device, or a device formed by integrating a user device and a network device through a network. The user device includes, but is not limited to, any mobile electronic product capable of human-computer interaction with a user (e.g., through a touch panel), such as a smartphone or a tablet computer, and the mobile electronic product may employ any operating system, such as the Android operating system or the iOS operating system. The network device includes an electronic device capable of automatically performing numerical calculation and information processing according to preset or stored instructions, whose hardware includes, but is not limited to, a microprocessor, an Application-Specific Integrated Circuit (ASIC), a Programmable Logic Device (PLD), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like. The network device includes but is not limited to a computer, a network host, a single network server, a set of multiple network servers, or a cloud of multiple servers; here, the cloud is composed of a large number of computers or network servers based on cloud computing, a kind of distributed computing in which a collection of loosely coupled computers forms one virtual supercomputer. The network includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a VPN, a wireless Ad Hoc network, etc. Preferably, the device may also be a program running on the user device, the network device, or a device formed by integrating the user device and the network device, the touch terminal, or the network device and the touch terminal through a network.
Of course, those skilled in the art will appreciate that the foregoing is by way of example only, and that other existing or future devices, which may be suitable for use in the present application, are also encompassed within the scope of the present application and are hereby incorporated by reference.
In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
Fig. 1 shows a flowchart of a method for distributing video information according to an embodiment of the present application, the method comprising step S11, step S12, step S13 and step S14. In step S11, in response to a learning trigger operation performed by a first user on previous video information, the first user device restores the previous video information according to processing procedure information corresponding to the previous video information, to obtain first original video information corresponding to the previous video information; in step S12, the first user device obtains second original video information captured by the first user with reference to the first original video information; in step S13, the first user device processes the second original video information according to the processing procedure information to obtain subsequent video information; in step S14, the first user device publishes the subsequent video information, where the subsequent video information includes the identification information of the previous video information.
In step S11, in response to a learning trigger operation performed by a first user on previous video information, the first user device restores the previous video information according to the processing procedure information corresponding to the previous video information, thereby obtaining the first original video information corresponding to the previous video information. In some embodiments, the previous video information is the video information of the video to be learned, including but not limited to the video playing address, video ID, video name, video profile, video popularity, video rating, video actor information, video type, video interaction information (bullet-screen comments, likes, comments), and the like. Preferably, the previous video is a short video, i.e., a video whose duration is less than or equal to a predetermined threshold (e.g., 5 minutes). In some embodiments, the first user selects the previous video information to be learned from a plurality of pieces of video information to be learned that have been published by a server of a target application (e.g., a video application), where each piece of video information to be learned includes identification information indicating that learning is allowed. In some embodiments, the server sends (e.g., pushes) previous video information directly to the first user device based on the first user's profile (portrait) information.
In some embodiments, the processing procedure information is the process data produced when the second user corresponding to the previous video information (i.e., its shooting user) processed the original video information after shooting it. The process data includes one or more processing steps (e.g., filters, speed changes, clipping, sound configuration, music addition, etc.). The previous video information can be restored according to the processing procedure information (e.g., according to the inverse processing procedure information corresponding to it), completely returning the previous video information to its initial as-shot state and yielding the first original video information corresponding to the previous video information.
In step S12, the first user device obtains second original video information captured by the first user with reference to the first original video information. In some embodiments, the first user shoots the second original video information while comparing against the first original video information. The comparison may be performed synchronously with shooting or asynchronously. In the synchronous mode, the first original video and the live shooting picture may be displayed on the same screen, either side by side, in picture-in-picture, or with the first original video processed (for example, rendered with transparency or in grayscale) and the live shooting picture superimposed directly on top of it. In the asynchronous mode, the first original video may be played once first and then shot, and after shooting is completed the second original video may be compared with the first original video. If the first original video comprises a plurality of segmented video segments (where the segmentation operation may be performed by the shooting user of the previous video information, or by a server of the target application through recognition of the video content of the previous video information), each segmented video segment is played once first and then shot; after shooting is completed, the first user can compare the second original video with the first original video, either segment by segment (after each segmented shot, comparing the captured second original segment separately with the corresponding first original segment) or as a whole (after the complete shoot, comparing the whole second original video with the first original video).
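The segmented asynchronous flow above (play a segment, shoot it, compare it) can be sketched as follows. This is an illustrative outline, not the patent's implementation; the function names, the segment boundaries, and the plan dictionary are all hypothetical, with boundaries assumed to come from the shooting user or from server-side content analysis.

```python
# Hypothetical sketch: split the first original video at given segment
# boundaries and build a play/shoot/compare plan for each segment.

def split_into_segments(duration, boundaries):
    """Return (start, end) pairs covering [0, duration], split at `boundaries` (seconds)."""
    points = [0.0] + sorted(boundaries) + [duration]
    return [(points[i], points[i + 1]) for i in range(len(points) - 1)]

def pair_segments(first_original_duration, boundaries):
    """Each segment is played back first, then shot, then compared."""
    segments = split_into_segments(first_original_duration, boundaries)
    plan = []
    for start, end in segments:
        plan.append({"play": (start, end), "shoot": (start, end), "compare": True})
    return plan

# A 60-second reference video segmented at 15s and 40s yields three segments.
plan = pair_segments(60.0, [15.0, 40.0])
```

Comparing segment by segment lets the first user redo one short piece rather than the whole shoot, which matches the patent's per-segment comparison option.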
In step S13, the first user device processes the second original video information according to the processing procedure information to obtain the subsequent video information. In some embodiments, the whole processing procedure may be applied to the second original video information in one tap, directly overlaying it to obtain the subsequent video information. In some embodiments, the second original video information may be processed according to one or more pieces of processing step information in the processing procedure information to obtain the subsequent video information. In some embodiments, the one or more pieces of processing step information may be presented to the first user, who selects at least one piece of target processing step information from them; the second original video information is then processed according to the at least one piece of target processing step information to obtain the subsequent video information. In some embodiments, after the subsequent video information is obtained, its overall shooting and processing effect may be reviewed, and the subsequent video information may be compared with the previous video information, or the second original video information may be compared and presented against the first original video information.
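The two processing modes described above, one-tap application of the full procedure versus application of only user-selected target steps, can be sketched as below. The step names and `apply` callables are illustrative stand-ins, not operations named by the patent.

```python
# Hypothetical sketch: apply either the full processing procedure ("one-tap")
# or only the target steps the first user selected, preserving step order.

def apply_procedure(video, steps, selected=None):
    """Apply processing steps in their original order; `selected` restricts to target steps."""
    for step in steps:
        if selected is None or step["name"] in selected:
            video = step["apply"](video)
    return video

# Toy steps that tag a string in place of real video transformations.
steps = [
    {"name": "filter", "apply": lambda v: v + "+filter"},
    {"name": "speed",  "apply": lambda v: v + "+2x"},
    {"name": "music",  "apply": lambda v: v + "+music"},
]

one_tap = apply_procedure("raw", steps)                          # all steps
partial = apply_procedure("raw", steps, selected={"filter", "music"})
```

Because the loop always walks `steps` in order, a partial selection still respects the original processing sequence, which the patent's processing sequence information requires.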
In step S14, the first user device publishes the subsequent video information, wherein the subsequent video information includes the identification information of the previous video information. In some embodiments, publishing may occur directly after the subsequent video information is generated, or in response to a publishing trigger operation by the first user. In some embodiments, the subsequent video information may be published on a server of a target application (e.g., a video application); after successful publication, other users of the target application can browse and view it. In some embodiments, the first user may instead publish (e.g., share) the subsequent video information to at least one other user in the target application, or to at least one friend of the first user in the target application. In some embodiments, the subsequent video information includes identification information (e.g., a video link address, video name, video ID, etc.) of its corresponding previous video information; in response to a predetermined operation (e.g., a click) performed by another user on this identification information, that user is taken to the playing page of the corresponding previous video information, facilitating the sharing and promotion of the original video.
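One way to carry the previous video's identification information inside the published subsequent video is a structured publish payload, sketched below. The field names (`video`, `source`, `video_id`, `link`) are hypothetical; the patent only requires that the subsequent video include identification information such as a link address, name, or ID of the previous video.

```python
# Hypothetical sketch: a publish payload in which the subsequent video carries
# identification info of the previous video, so viewers can jump back to it.

def build_publish_payload(subsequent_video, previous_video_id, previous_video_url):
    """Bundle the subsequent video with identification info of its source video."""
    return {
        "video": subsequent_video,
        "source": {
            "video_id": previous_video_id,  # identification info of the previous video
            "link": previous_video_url,     # clicking this jumps to its playing page
        },
    }

payload = build_publish_payload("clip.mp4", "v123", "https://example.com/v123")
```

A client rendering this payload would attach the click handler for the jump-to-original behavior to the `source` fields.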
In some embodiments, for the shooting user of an original video, having other users learn the original video increases its reach, and the spread of the corresponding subsequent videos also spreads recognition of the original work, forming a closed loop of benefits that facilitates the sharing and promotion of the original video. For a video-learning user, the method improves video shooting and processing skills, producing higher-quality videos and greatly improving video production efficiency. For a target application (for example, a video application), more creators can become skilled producers, enriching the application's content ecosystem; furthermore, the combination of social networking and video can be promoted, strengthening users' willingness to publish videos in the target application and advancing the construction of its social user system.
In some embodiments, the method further comprises: the first user device, in response to a selection operation performed by the first user on one or more published pieces of video information to be learned, obtains the previous video information selected by the first user from among them, wherein each of the one or more pieces of video information to be learned includes identification information indicating that learning is allowed. In some embodiments, this identification information may take any style: it may be specific text, such as the word "learnable," or a specific icon, such as a "book"-style icon.
In some embodiments, each piece of video information to be learned further includes learning difficulty information, determined according to the processing procedure information corresponding to that video. In some embodiments, the learning difficulty information may be discrete, such as "easy," "simple," "general," "difficult," or "extremely difficult." In some embodiments, it may instead be continuous, such as a difficulty coefficient taking any value from 0 to 100. The learning difficulty may be determined by the number of pieces of processing step information in the processing procedure information (more steps indicating higher difficulty), by the complexity of those steps (more complex steps indicating higher difficulty), or by both together. In some embodiments, the learning difficulty information may also be specified by the shooting user corresponding to each piece of video information to be learned.
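A minimal sketch of the difficulty determination described above, combining step count and step complexity, might look as follows. The weighting, the 1–5 complexity scale, the 10-step normalization, and the mapping of scores to labels are all illustrative assumptions; the patent specifies only that more or more complex steps indicate higher difficulty.

```python
# Hypothetical sketch: derive a 0-100 difficulty coefficient from the number
# and complexity of processing steps, then map it to a discrete label.

DIFFICULTY_LABELS = ["easy", "simple", "general", "difficult", "extremely difficult"]

def difficulty_score(steps, max_score=100):
    """Equal-weight blend of step count (normalized at 10) and mean complexity (1-5)."""
    if not steps:
        return 0
    count_part = min(len(steps) / 10.0, 1.0)                       # more steps -> harder
    complexity_part = sum(s["complexity"] for s in steps) / (5.0 * len(steps))
    return round(max_score * (0.5 * count_part + 0.5 * complexity_part))

def difficulty_label(score):
    """Bucket the continuous coefficient into the five discrete labels."""
    return DIFFICULTY_LABELS[min(score // 20, 4)]

steps = [{"complexity": 2}, {"complexity": 4}, {"complexity": 5}]
score = difficulty_score(steps)
```

Either output could be attached to the video-to-learn listing: the coefficient as the continuous form, the label as the discrete form.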
In some embodiments, each piece of video information to be learned further includes learning cost information. In some embodiments, the learning cost information indicates whether a fee is required to learn the video information and, if so, the fee amount, so that the shooting user of the original video can be rewarded when other users learn from it. In some embodiments, the learning cost information is specified by the shooting user corresponding to each piece of video information to be learned; alternatively, it may be determined by a server of the target application (e.g., a video application) according to the video quality, the learning difficulty, or the historical learning feedback or learning evaluations from users corresponding to each piece of video information to be learned.
In some embodiments, the processing procedure information includes at least one piece of processing step information and processing sequence information corresponding to the at least one piece of processing step information. In some embodiments, the processing steps include, but are not limited to, filters, speed changes, clipping, sound configuration, music addition, etc., and each piece of processing step information includes, but is not limited to, the starting state of the step, the course of the step, the ending state of the step, etc. In some embodiments, the processing sequence information indicates the order in which each piece of processing step information is applied, for example, processing step A is performed first, then processing step B, and then processing step C.
In some embodiments, restoring the previous video information according to the processing procedure information corresponding to the previous video information to obtain first original video information corresponding to the previous video information includes: according to the reverse-order information corresponding to the processing sequence information, sequentially restoring the previous video information according to the inverse processing step information corresponding to each piece of processing step information, to obtain the first original video information corresponding to the previous video information. For example, if the processing sequence information is to execute processing step A first, then processing step B, and then processing step C, the reverse-order information is to execute processing step C first, then processing step B, and then processing step A. According to the reverse-order information, the previous video information is restored according to inverse processing step C1 corresponding to processing step C to obtain first temporary video information; the first temporary video information is then restored according to inverse processing step B1 corresponding to processing step B to obtain second temporary video information; and the second temporary video information is restored according to inverse processing step A1 corresponding to processing step A to obtain the first original video information corresponding to the previous video information.
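The reverse-order restoration described above can be sketched as follows. This is an illustrative sketch, not the patent's implementation: `ProcessingStep`, `restore_original`, and the string-valued stand-in for a video are all assumed names.

```python
from dataclasses import dataclass
from typing import Callable, List

# Illustrative stand-in: a "video" is just a string, and each processing step
# records both its forward transform and its inverse (the "inverse processing
# step information" of the patent).
@dataclass
class ProcessingStep:
    name: str
    apply: Callable[[str], str]
    invert: Callable[[str], str]

def restore_original(prior_video: str, steps: List[ProcessingStep]) -> str:
    """Undo each processing step in reverse of the recorded processing order,
    recovering the first original video information."""
    video = prior_video
    for step in reversed(steps):  # e.g. invert C first, then B, then A
        video = step.invert(video)
    return video
```

Applying steps A, B, C forward and then calling `restore_original` returns the video to its initial shooting state.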
In some embodiments, the step S12 includes: the first user equipment obtains second original video information that the first user shoots in a synchronous manner with reference to the first original video information, wherein the shooting picture of the second original video information and the first original video information are displayed on the same screen. In some embodiments, the first user shoots the second original video information while synchronously comparing it against the first original video information, i.e., comparison and shooting take place at the same time; the first original video information and the video shooting picture of the second original video information are displayed on the same screen, in a display mode including, but not limited to, parallel display, picture-in-picture, overlay display, and the like.
In some embodiments, the on-screen display mode includes but is not limited to:
1) Parallel display
In some embodiments, the parallel display mode refers to that the video shooting pictures of the first original video information and the second original video information are displayed in parallel without overlapping on the screen of the first user equipment, and includes, but is not limited to, a horizontal parallel display, a vertical parallel display, and the like.
2) Picture-in-picture display
In some embodiments, the picture-in-picture display mode means that one of the video shooting pictures of the first original video information and the second original video information is presented in a larger window on the screen of the first user equipment, while the other is presented in a smaller window superimposed on the larger window.
3) Overlay display
In some embodiments, in the overlay display mode, the first original video information is processed (e.g., made transparent, rendered in gray scale, etc.), and the video shooting picture of the second original video information is superimposed directly on top of the processed first original video information.
In some embodiments, the step S12 includes a step S121 (not shown). In step S121, the first user equipment plays the first original video information, and after the playing is completed, obtains second original video information that the first user shoots in an asynchronous manner with reference to the first original video information. In some embodiments, the first original video may be played first, and after the playing is completed, the first user shoots the second original video information, comparing against the first original video information asynchronously. In some embodiments, after the second original video information is shot, it may be compared against the first original video information.
In some embodiments, the first original video information comprises one or more pieces of first original video clip information, and the second original video information comprises one or more pieces of second original video clip information; wherein the step S121 includes a step S1211 (not shown), a step S1212 (not shown), and a step S1213 (not shown). In step S1211, the first user equipment takes, according to a video clip order corresponding to the one or more pieces of first original video clip information, the piece of first original video clip information that is first in the order as the target original video clip information. In step S1212, the first user equipment plays the target original video clip information, and after the playing is completed, obtains second original video clip information corresponding to the target original video clip information, which the first user shoots in an asynchronous manner with reference to the target original video clip information. In step S1213, the first user equipment takes the next piece of first original video clip information after the target original video clip information as the new target original video clip information according to the video clip order, and repeats step S1212 until the target original video clip information is the last piece of first original video clip information in the order.
For example, the first original video information includes first original video clip information A, first original video clip information B, and first original video clip information C, and the corresponding video clip order is A, then B, then C. According to this order, first original video clip information A, which is first in the order, is taken as the target original video clip information. First original video clip information A is played once, and after the playing is completed, second original video clip information A1, which the first user shoots in an asynchronous manner with reference to first original video clip information A, is obtained. Then, according to the video clip order, first original video clip information B is taken as the new target original video clip information and the above steps are repeated, until second original video clip information C1 corresponding to the last clip in the order, first original video clip information C, is obtained. The pieces of second original video clip information obtained in this way together constitute the second original video information.
In some embodiments, the step S1213 includes: the first user equipment, in response to a comparison operation performed by the first user for the second original video clip information, compares and presents the second original video clip information and the corresponding first original video clip information; in response to a confirmation operation performed by the first user for the comparison operation, if the confirmation operation indicates that the comparison of the second original video clip information is successful, the first user equipment takes the next piece of first original video clip information after the target original video clip information as the new target original video clip information according to the video clip order, and repeats step S1212 until the target original video clip information is the last piece of first original video clip information in the order. In some embodiments, after obtaining second original video clip information that the first user shoots in an asynchronous manner with reference to a certain piece of first original video clip information, the first user may compare and play the newly shot second original video clip information against that first original video clip information along the same timeline; if the first user confirms that the comparison is successful, the next piece of first original video clip information is taken as the new target original video clip information according to the corresponding video clip order, and the above steps are repeated.
In some embodiments, the method further comprises: if the confirmation operation indicates that the comparison of the second original video clip information fails, the first user equipment re-obtains second original video clip information corresponding to the target original video clip information, which the first user shoots again in an asynchronous manner with reference to the target original video clip information. In some embodiments, after obtaining second original video clip information that the first user shoots in an asynchronous manner with reference to a certain piece of first original video clip information, the first user may compare the newly shot second original video clip information against that first original video clip information along the same timeline; if the first user confirms that the comparison fails, new second original video clip information corresponding to that first original video clip information must be obtained by having the first user shoot again with reference to it.
In some embodiments, the method further comprises: if the first user equipment has obtained the one or more pieces of second original video clip information, it compares and presents the one or more pieces of second original video clip information against the one or more pieces of first original video clip information in response to a comparison operation performed by the first user; in response to a confirmation operation performed by the first user for the comparison operation, if the confirmation operation indicates that the comparison fails for at least one piece of the first original video clip information, the first user equipment re-obtains the corresponding second original video clip information, which the first user shoots again in an asynchronous manner with reference to that first original video clip information.
In some embodiments, if all of the one or more pieces of second original video clip information, which the first user shoots in an asynchronous manner with reference to the one or more pieces of first original video clip information, have been obtained, and the union of the one or more pieces of second original video clip information is the second original video information, the first user may compare and play the one or more pieces of second original video clip information against the one or more pieces of first original video clip information as a whole along the same timeline; if the first user confirms that the comparison fails, the corresponding second original video clip information must be re-obtained by having the first user shoot again in an asynchronous manner with reference to the corresponding first original video clip information.
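The segment-by-segment asynchronous flow of steps S1211–S1213, including re-shooting a clip when the user judges the comparison a failure, might be sketched as follows. This is an assumed sketch: `play`, `capture`, and `compare_ok` are hypothetical callbacks that a real UI layer would supply.

```python
def capture_by_segments(clips, play, capture, compare_ok):
    """For each first original video clip (already in video clip order):
    play it once, let the user shoot a matching clip, and advance only
    when the user confirms the comparison succeeded; otherwise re-shoot."""
    captured = []
    for clip in clips:
        while True:
            play(clip)                     # asynchronous mode: play first
            attempt = capture(clip)        # then shoot the matching clip
            if compare_ok(clip, attempt):  # user confirms comparison success
                captured.append(attempt)
                break                      # advance to the next target clip
    return captured  # the union is the second original video information
```

If the user rejects a comparison, the same target clip is played and shot again before the loop advances, matching the retry behavior described above.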
In some embodiments, the processing procedure information includes at least one piece of processing step information; wherein the step S13 includes a step S131 (not shown). In step S131, the first user equipment processes the second original video information according to the at least one piece of processing step information to obtain the subsequent video information corresponding to the second original video information. In some embodiments, the second original video information may be processed directly according to one or more pieces of processing step information in the processing procedure information to obtain the subsequent video information. In some embodiments, one or more pieces of processing step information of the processing procedure information may be presented to the first user, the first user selects at least one piece of target processing step information from them, and the second original video information is processed according to the at least one piece of target processing step information to obtain the subsequent video information.
In some embodiments, the step S131 includes: the first user equipment presents the at least one piece of processing step information, and in response to the first user selecting one or more pieces of target processing step information from the at least one piece of processing step information, processes the second original video information according to the one or more pieces of target processing step information to obtain the subsequent video information corresponding to the second original video information. In some embodiments, one or more pieces of processing step information of the processing procedure information may be presented to the first user, the first user selects at least one piece of target processing step information from them, and the second original video information is processed according to the at least one piece of target processing step information to obtain the subsequent video information corresponding to the second original video information.
In some embodiments, the processing procedure information further includes processing sequence information corresponding to the at least one piece of processing step information; wherein the step S131 includes: the first user equipment sequentially processes the second original video information according to the processing sequence information and each piece of processing step information to obtain the subsequent video information corresponding to the second original video information. In some embodiments, the second original video information may be processed sequentially according to each piece of processing step information, in the order given by the processing sequence information corresponding to the one or more pieces of processing step information in the processing procedure information, to obtain the subsequent video information corresponding to the second original video information.
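Step S131 with processing sequence information might look like the sketch below; representing processing steps as plain functions over a string-valued video, and the sequence as a list of step indices, are illustrative assumptions rather than the patent's implementation.

```python
def apply_processing(second_original, steps, order):
    """Apply each recorded processing step to the newly shot video, in the
    order given by the processing sequence information, to produce the
    subsequent video information."""
    video = second_original
    for idx in order:              # order: indices into steps, e.g. [0, 1, 2]
        video = steps[idx](video)  # apply one processing step
    return video
```

Changing `order` reorders the same steps, which is how a different recorded processing sequence would yield a differently processed subsequent video.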
In some embodiments, the step S14 includes: the first user equipment, in response to a publishing trigger operation of the first user, publishes the subsequent video information to friends of the first user, wherein the subsequent video information includes the identification information of the previous video information and does not include any presentation information for revenue. In some embodiments, in response to a publishing trigger operation of the first user, the subsequent video information is published (e.g., shared) to at least one friend of the first user in the target application, but may not be published to other users in the target application who are not friends of the first user, and the subsequent video information does not include any presentation information for revenue (e.g., revenue-based advertisements). In this way the social user architecture of the target application can be strengthened, and users can quickly improve their video shooting level by learning original videos shot by friends.
In some embodiments, the subsequent video information further includes identification information indicating that learning is allowed, and descendant video information obtained by learning the subsequent video information includes the identification information of the previous video information. In some embodiments, after the subsequent video information is shot and processed, it includes identification information indicating that learning is allowed by default, or that identification information is attached by default when the subsequent video information is published. In some embodiments, the descendant video information obtained by learning the subsequent video information includes the identification information of the previous video information; alternatively, the descendant video information includes not only the identification information of its corresponding previous video information but also the identification information of its corresponding subsequent video information, wherein the descendant video may be obtained by learning the subsequent video information and shooting and processing accordingly, or by learning another descendant video.
For example, consider previous video information A, subsequent video information B obtained by learning previous video information A, descendant video information C obtained by learning subsequent video information B, and descendant video information D obtained by learning descendant video information C. The subsequent video information B includes the identification information of previous video information A; the descendant video information C may include only the identification information of previous video information A, or the identification information of both previous video information A and subsequent video information B; and the descendant video information D may include only the identification information of previous video information A, or the identification information of both previous video information A and descendant video information C, or the identification information of all of previous video information A, subsequent video information B, and descendant video information C.
In some embodiments, the previous video information participates in the revenue sharing of the subsequent video information and the descendant video information. In some embodiments, the previous video information participates in the revenue sharing of its corresponding subsequent video information and descendant video information; on this basis, the subsequent video information may also participate in the revenue sharing of its corresponding descendant video information, with a share less than or equal to that of the previous video information; and further, a piece of descendant video information may also participate in the revenue sharing of other descendant video information that directly or indirectly learns from it. For example, the previous video information receives 20% of the revenue of its corresponding subsequent video information and 15% of the revenue of its corresponding descendant video information, while the subsequent video information receives 10% of the revenue of that descendant video information. In some embodiments, a chain relationship is established among the descendant video information, the subsequent video information, and the previous video information; a video information node in the chain participates in the revenue sharing of the video information nodes after it, and nodes closer to the front of the chain receive a larger share.
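One possible reading of this chained revenue sharing, where earlier nodes receive larger shares, is sketched below. `share_revenue`, `base_share`, and `decay` are illustrative assumptions; the patent itself only gives 20%/15%/10% as example percentages, not a formula.

```python
def share_revenue(chain, revenue, base_share=0.25, decay=0.5):
    """Split one video's revenue along its learning chain.
    chain lists videos from the original (front) to the earning video (last);
    nodes closer to the front of the chain receive a larger share, and the
    earning video's creator keeps the remainder."""
    shares = {}
    remaining = revenue
    share = base_share
    for ancestor in chain[:-1]:      # every node before the earning video
        amount = revenue * share
        shares[ancestor] = amount
        remaining -= amount
        share *= decay               # later ancestors receive less
    shares[chain[-1]] = remaining    # the creator keeps the rest
    return shares
```

With `chain=["previous", "subsequent", "descendant"]` and revenue 100, the previous video receives 25.0, the subsequent video 12.5, and the descendant's creator keeps 62.5.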
In some embodiments, the previous video information participates in the revenue sharing of its corresponding subsequent video information and descendant video information, which protects the earnings of the shooting user of the original video, ensures that the shooting user of the original video is willing to publish the original video for others to learn from, and thereby raises the overall creator level across the target application.
In some embodiments, the method further comprises step S15 (not shown). In step S15, the first user equipment determines popularity information of the previous video information according to the subsequent video information and the descendant video information, where the popularity information is used to recommend the previous video information. In some embodiments, the learning popularity of an original video is an important indicator of its popularity, and video recommendation can be performed according to this popularity; since the data of the subsequent videos and the descendant videos is aggregated under the original video, the chance of the original video being discovered is greatly increased. In some embodiments, the popularity information of the previous video information may be determined according to learning-related information (e.g., the number of times the previous video information was learned by its corresponding subsequent video information and descendant video information, learning feedback, learning evaluation, etc.). In some embodiments, the popularity information of the previous video information may be determined according to user-play-related information (e.g., play counts, play completion rates, etc.) and/or user-interaction-related information (e.g., like counts, favorite counts, bullet-comment counts, comment counts, etc.) of the subsequent video information and the descendant video information corresponding to the previous video information. In some embodiments, a popularity bonus for the previous video information may also be determined according to the subsequent video information and the descendant video information, so as to determine the final popularity of the previous video information after the bonus is added.
In some embodiments, the step S15 includes: the first user equipment determines the popularity information of the previous video information according to the learning-related information of the previous video information being learned by the subsequent video information and the descendant video information. In some embodiments, the popularity information of the previous video information may be determined according to learning-related information (e.g., the number of learning times, learning feedback, learning evaluation, etc. of the previous video information being learned by its corresponding subsequent video information and descendant video information).
In some embodiments, the step S15 includes: the first user equipment determines the popularity information of the previous video information according to the user-play-related information and/or the user-interaction-related information of the subsequent video information and the descendant video information. In some embodiments, the popularity information of the previous video information may be determined according to user-play-related information (e.g., play counts, play completion rates, etc.) and/or user-interaction-related information (e.g., like counts, favorite counts, bullet-comment counts, comment counts, etc.) of the subsequent video information and the descendant video information corresponding to the previous video information.
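A weighted combination of the play-related and interaction-related metrics above could serve as the popularity score. The function name and all weights below are illustrative assumptions, not values from the patent.

```python
def popularity(plays, completion_rate, likes, favorites, bullets, comments,
               w_play=1.0, w_complete=50.0, w_like=2.0, w_fav=3.0,
               w_bullet=1.5, w_comment=2.5):
    """Popularity score of a previous video computed from the aggregated
    play and interaction metrics of its subsequent and descendant videos.
    completion_rate is a fraction in [0, 1]; the other inputs are counts."""
    return (w_play * plays + w_complete * completion_rate + w_like * likes
            + w_fav * favorites + w_bullet * bullets + w_comment * comments)
```

In practice the metrics would first be summed over all subsequent and descendant videos of the previous video before being passed in.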
In some embodiments, the processing information is obtained during processing of the first raw video information and generating the previous video information. In some embodiments, the processing procedure information is obtained in a process in which a network device processes the first original video information according to processing request information sent by the second user and generates the previous video information, and the second user is a shooting user of the previous video information. In some embodiments, the processing procedure information is obtained in a process of processing the first original video information and generating the previous video information by a second user, where the second user is a shooting user of the previous video information, and a second user device corresponding to the second user uploads the processing procedure information to a network device.
In some embodiments, the processing procedure information is obtained in a process in which a network device processes the first original video information according to processing request information sent by the second user and generates the previous video information, and the second user is a shooting user of the previous video information. In some embodiments, the network device processes first original video information captured by a second user (i.e., a user who captured previous video information) according to processing request information sent by the second user and generates previous video information, and the processing procedure information is procedure data of the processing to generate the previous video information, the procedure data including one or more processing steps (e.g., filter, speed, clip, sound configuration, music addition, etc.), and the processing procedure information is recorded and stored in the network device during the processing to generate the previous video information.
In some embodiments, the processing procedure information is obtained in a process of processing the first original video information and generating the previous video information by a second user, where the second user is a shooting user of the previous video information, and a second user device corresponding to the second user uploads the processing procedure information to a network device. In some embodiments, the processing procedure information refers to procedure data of a second user corresponding to the previous video information (i.e., a shooting user of the previous video information) processing the original video information to generate the previous video information after shooting the original video information, wherein the procedure data includes one or more processing steps (e.g., filter, double speed, clip, sound configuration, music addition, etc.), and the processing procedure information is recorded during the process of generating the previous video information after shooting and processing and is uploaded to the network device.
In some embodiments, the processing information is stored in a second user device corresponding to a second user corresponding to the previous video information, and the second user device uploads the previous video information and the processing information to the network device in response to a publishing operation performed by the second user for the previous video information, so as to publish the previous video information in the network device and store the processing information. In some embodiments, the processing procedure information corresponding to the previous video information is stored locally at the second user equipment corresponding to the shooting user of the previous video information, and is uploaded to the network equipment along with the previous video information in response to the publishing action of the previous video information, so as to publish the previous video information in the network equipment, and store the corresponding processing procedure information in the network equipment.
In some embodiments, the processing procedure information includes at least one processing step information, and the second user equipment synchronously uploads each processing step information of the at least one processing step information to the network device to store the each processing step information in the network device. In some embodiments, the processing procedure information corresponding to the previous video information includes at least one processing step information, and after the execution of each processing step information is completed, the processing step information is synchronously uploaded to the network device to store each processing step information in the network device, and the network device obtains the processing procedure information corresponding to the previous video information according to each processing step information stored in the network device.
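The synchronous per-step upload might be implemented along the lines below. `StepRecorder` and the `upload` callback are hypothetical names; a real client would POST each record to the network device, which reassembles the processing procedure information from the ordered records.

```python
import json

class StepRecorder:
    """Records each completed processing step and uploads it immediately,
    so the network device can later reassemble the full processing
    procedure information from the stored, ordered step records."""
    def __init__(self, upload):
        self.upload = upload  # transport stub, e.g. an HTTP POST wrapper
        self.seq = 0

    def step_done(self, name, params):
        self.seq += 1  # preserves the processing sequence information
        record = {"order": self.seq, "step": name, "params": params}
        self.upload(json.dumps(record))  # synchronous upload after each step
        return record
```

Because every record carries its `order` field, the server can rebuild the exact processing sequence even if individual uploads arrive out of order.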
Fig. 2 shows a block diagram of a first user equipment for publishing video information according to an embodiment of the present application, which includes a first module 11, a second module 12, a third module 13, and a fourth module 14. The first module 11 is configured to, in response to a learning trigger operation performed by a first user for previous video information, restore the previous video information according to processing procedure information corresponding to the previous video information to obtain first original video information corresponding to the previous video information; the second module 12 is configured to obtain second original video information that is shot by the first user with reference to the first original video information; the third module 13 is configured to process the second original video information according to the processing procedure information to obtain subsequent video information; and the fourth module 14 is configured to publish the subsequent video information, wherein the subsequent video information includes identification information of the previous video information.
The first module 11 is configured to, in response to a learning trigger operation performed by a first user for previous video information, restore the previous video information according to processing procedure information corresponding to the previous video information, and obtain first original video information corresponding to the previous video information. In some embodiments, the previous video information is the video information of the video being learned, and includes, but is not limited to, video playing address information, video ID information, video name information, video profile information, video popularity information, video rating information, video actor information, video type information, video interaction information (bullet-comment information, like information, comment information), and the like. Preferably, the previous video is a short video, i.e., a video whose duration is less than or equal to a predetermined duration threshold (e.g., 5 minutes). In some embodiments, the first user selects the previous video information to be learned from a plurality of pieces of video information to be learned that have been published by a server of a target application (e.g., a video application), wherein each piece of video information to be learned includes identification information indicating that learning is allowed. In some embodiments, the server sends (e.g., pushes) the previous video information directly to the first user equipment according to the portrait information of the first user.
In some embodiments, the processing procedure information refers to procedure data of a second user corresponding to the previous video information (i.e., a shooting user of the previous video information) processing the original video information after shooting the original video information, where the procedure data includes one or more processing steps (e.g., filter, double speed, clip, sound configuration, music addition, etc.), and the previous video information can be restored according to the processing procedure information (e.g., according to the inverse processing procedure information corresponding to the processing procedure information), so as to completely restore the previous video information to an initial shooting state, and obtain the first original video information corresponding to the previous video information.
Module 12 is configured to obtain second original video information that is shot by the first user with reference to the first original video information. In some embodiments, the first user shoots the second original video information while comparing it against the first original video information. In some embodiments, the comparison may be performed synchronously with the shooting or asynchronously. In the synchronous manner, the first original video and the camera preview may be displayed on the same screen, specifically in a side-by-side manner, in a picture-in-picture manner, or by processing the first original video (e.g., rendering it semi-transparent or in grayscale) and overlaying the camera preview directly on it. In the asynchronous manner, the first original video may first be played once and then shot; after shooting is completed, the second original video may be compared with the first original video. If the first original video includes a plurality of segmented video clips (where the video segmentation operation may be performed by the shooting user of the previous video information, or by the server of the target application by recognizing the video content of the previous video information), each segmented video clip is first played once and then shot. After shooting is completed, the first user may compare the second original video with the first original video, either by comparing each second original segmented video clip individually with the corresponding first original segmented video clip after that segment is shot, or by comparing the complete second original video with the first original video as a whole after all shooting is completed.
Module 13 is configured to process the second original video information according to the processing procedure information to obtain subsequent video information. In some embodiments, the processing procedure information may be applied to the second original video information directly with one click to obtain the subsequent video information. In some embodiments, the second original video information may be processed directly according to one or more pieces of processing step information in the processing procedure information to obtain the subsequent video information. In some embodiments, one or more pieces of processing step information may be presented to the first user, at least one piece of target processing step information is selected by the first user from them, and the second original video information is processed according to the at least one piece of target processing step information to obtain the subsequent video information. In some embodiments, after the subsequent video information is obtained, its shooting and processing effect may be reviewed as a whole; the subsequent video information may be compared with the previous video information, or the second original video information may be compared and presented with the first original video information.
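As a non-limiting sketch, the "one-click" reuse of the recorded edit chain, and the user-selected subset of target steps, could look as follows; the step names are hypothetical:

```python
# Hypothetical sketch: apply the recorded processing steps to the newly shot
# footage, either all of them ("one-click") or only the steps the user selected.
APPLY_OPS = {
    "speed":  lambda v, p: {**v, "speed": v["speed"] * p["factor"]},
    "filter": lambda v, p: {**v, "filters": v["filters"] + [p["name"]]},
}

def apply_steps(original_video, processing_steps, selected=None):
    video = dict(original_video)
    for name, params in processing_steps:
        if selected is None or name in selected:  # None means apply everything
            video = APPLY_OPS[name](video, params)
    return video

shot = {"speed": 1.0, "filters": []}
steps = [("filter", {"name": "warm"}), ("speed", {"factor": 2.0})]
subsequent = apply_steps(shot, steps)           # full one-click apply
partial = apply_steps(shot, steps, {"filter"})  # user-selected target steps
```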
Module 14 is configured to publish the subsequent video information, where the subsequent video information includes identification information of the previous video information. In some embodiments, publishing may be performed directly after the subsequent video information is generated, or in response to a publishing trigger operation of the first user. In some embodiments, the subsequent video information may be published on a server of the target application (e.g., a video application); after publication succeeds, other users of the target application may browse and view it. In some embodiments, the first user may publish (e.g., share) the subsequent video information to at least one other user in the target application. In some embodiments, the first user may publish the subsequent video information to at least one friend of the first user in the target application. In some embodiments, the subsequent video information includes identification information (e.g., a video link address, a video name, a video ID, etc.) of its corresponding previous video information, and in response to a predetermined operation (e.g., a click operation) performed by another user on the identification information, that user may jump to the playing page of the corresponding previous video information, which facilitates sharing and promotion of the original video.
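The published record could, for instance, carry the identification information as a nested field; the field names below are hypothetical assumptions, not the embodiment's actual data format:

```python
# Hypothetical sketch: the subsequent video's publish payload embeds
# identification information of the previous video, so that a predetermined
# operation (e.g., a click) on it can jump to the previous video's playing page.
def build_publish_payload(subsequent_video_id, previous_video):
    return {
        "video_id": subsequent_video_id,
        "previous_video": {                      # identification information
            "video_id": previous_video["id"],
            "name": previous_video["name"],
            "play_url": previous_video["url"],   # target of the jump operation
        },
    }

payload = build_publish_payload(
    "v2", {"id": "v1", "name": "demo", "url": "https://example.com/v1"})
```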
In some embodiments, for the shooting user of an original video, allowing other users to learn the original video increases its reach, and the spread of the corresponding subsequent videos also spreads recognition of the original work, forming a closed benefit loop that facilitates sharing and promotion of the original video. For a video-learning user, the ability to shoot and process videos is improved, so that higher-quality videos are produced and video production efficiency is greatly increased. For the target application (e.g., a video application), more creators can become experienced hands, enriching the content ecology of the target application; furthermore, the combination of social interaction and video is promoted, users' willingness to publish videos in the target application is strengthened, and the construction of the target application's social user system is advanced.
In some embodiments, the apparatus is further configured to: in response to a selection operation performed by the first user on one or more published pieces of video information to be learned, obtain the previous video information selected by the first user from the one or more pieces of video information to be learned, where each piece of video information to be learned includes identification information indicating that learning is allowed. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are incorporated herein by reference.
In some embodiments, each piece of video information to be learned further includes learning difficulty information, and the learning difficulty information is determined according to the processing procedure information corresponding to each piece of video information to be learned. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
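One plausible, purely illustrative way to derive the learning difficulty information from the processing procedure information is to weight and sum the processing steps; the weights and thresholds below are assumptions, not part of the embodiment:

```python
# Hypothetical sketch: learning difficulty derived from the number and kind
# of processing steps. Weights and thresholds are illustrative assumptions.
STEP_WEIGHTS = {"filter": 1, "speed": 1, "music": 1, "clip": 2, "sound": 2}

def learning_difficulty(processing_steps):
    score = sum(STEP_WEIGHTS.get(name, 1) for name, _ in processing_steps)
    if score <= 2:
        return "easy"
    return "medium" if score <= 5 else "hard"

difficulty = learning_difficulty(
    [("clip", {}), ("filter", {}), ("music", {})])  # "medium" (score 4)
```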
In some embodiments, each of the video information to be learned further includes learning cost information. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In some embodiments, the processing procedure information includes at least one piece of processing step information and processing sequence information corresponding to the at least one piece of processing step information. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are incorporated herein by reference.
In some embodiments, the restoring the previous video information according to the processing procedure information corresponding to the previous video information to obtain the first original video information corresponding to the previous video information includes: restoring, according to the reverse-order information corresponding to the processing sequence information, the previous video information step by step according to the inverse processing step information corresponding to each piece of processing step information, to obtain the first original video information corresponding to the previous video information. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are incorporated herein by reference.
In some embodiments, module 12 is configured to: obtain second original video information shot by the first user with reference to the first original video information in a synchronous manner, where the camera preview of the second original video information is displayed on the same screen as the first original video information. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are incorporated herein by reference.
In some embodiments, the on-screen display modes include, but are not limited to:
1) parallel (side-by-side) display;
2) picture-in-picture display;
3) overlay display.
Here, the related on-screen display manner is the same as or similar to that of the embodiment shown in fig. 1, and therefore, the description thereof is omitted, and the related on-screen display manner is incorporated herein by reference.
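A non-limiting sketch of the three on-screen display modes, expressed as layout rectangles for the reference video and the camera preview; the concrete geometry is an illustrative assumption:

```python
# Hypothetical sketch: layout rectangles (x, y, w, h) for the reference video
# and the camera preview in each on-screen display mode. Geometry is assumed.
def compose_on_screen(mode, screen_w, screen_h):
    if mode == "parallel":              # side-by-side display
        half = screen_w // 2
        return {"reference": (0, 0, half, screen_h),
                "camera": (half, 0, half, screen_h)}
    if mode == "picture_in_picture":    # small reference over full camera view
        return {"reference": (0, 0, screen_w // 3, screen_h // 3),
                "camera": (0, 0, screen_w, screen_h)}
    if mode == "overlay":               # processed (e.g., semi-transparent)
        return {"reference": (0, 0, screen_w, screen_h),  # reference beneath
                "camera": (0, 0, screen_w, screen_h)}     # camera on top
    raise ValueError(f"unknown display mode: {mode}")

layout = compose_on_screen("parallel", 1080, 1920)
```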
In some embodiments, module 12 includes a module 121 (not shown). Module 121 is configured to play the first original video information and, after the playing is completed, obtain second original video information that is shot by the first user with reference to the first original video information in an asynchronous manner. Here, the specific implementation of module 121 is the same as or similar to the embodiment of step S121 in fig. 1, and therefore is not described again, and is incorporated herein by reference.
In some embodiments, the first original video information includes one or more pieces of first original video clip information, and the second original video information includes one or more pieces of second original video clip information; module 121 includes a module 1211 (not shown), a module 1212 (not shown), and a module 1213 (not shown). Module 1211 is configured to take, according to the video clip sequence corresponding to the one or more pieces of first original video clip information, the first original video clip information that is first in the sequence as target original video clip information. Module 1212 is configured to play the target original video clip information and, after the playing is completed, obtain the second original video clip information corresponding to the target original video clip information, shot by the first user with reference to the target original video clip information in an asynchronous manner. Module 1213 is configured to take the first original video clip information next after the target original video clip information as the new target original video clip information according to the video clip sequence, and to repeat module 1212 until the target original video clip information is the last first original video clip information in the sequence. Here, the specific implementations of modules 1211, 1212, and 1213 are the same as or similar to the embodiments of steps S1211, S1212, and S1213 in fig. 1, and therefore are not described again, and are incorporated herein by reference.
In some embodiments, module 1213 is configured to: in response to a comparison operation performed by the first user on the second original video clip information, present the second original video clip information and the first original video clip information in comparison; and in response to a confirmation operation performed by the first user on the comparison, if the confirmation operation indicates that the comparison of the second original video clip information succeeded, take the first original video clip information next after the target original video clip information as the new target original video clip information according to the video clip sequence, and repeat module 1212 until the target original video clip information is the last first original video clip information in the sequence. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are incorporated herein by reference.
In some embodiments, the apparatus is further configured to: if the confirmation operation indicates that the comparison of the second original video clip information failed, obtain again the second original video clip information corresponding to the target original video clip information, shot by the first user with reference to the target original video clip information in an asynchronous manner. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are incorporated herein by reference.
In some embodiments, the apparatus is further configured to: after the one or more pieces of second original video clip information are obtained, in response to a comparison operation performed by the first user on the one or more pieces of second original video clip information, present them in comparison; and in response to a confirmation operation performed by the first user on the comparison, if the confirmation operation indicates that the comparison failed for at least one piece of first original video clip information among the one or more pieces of first original video clip information, obtain again at least one piece of second original video clip information corresponding to the at least one piece of first original video clip information, shot by the first user with reference to it in an asynchronous manner. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are incorporated herein by reference.
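The segmented play-then-shoot flow, including re-shooting a clip whose comparison the user marks as failed, can be sketched as follows; `play`, `shoot`, and `compare` are hypothetical callbacks standing in for the device's actual playback, camera, and user-confirmation steps:

```python
# Hypothetical sketch of modules 1211-1213: play each reference clip in
# sequence, shoot the matching clip, and re-shoot any clip whose comparison
# the user confirms as failed.
def shoot_in_segments(reference_clips, play, shoot, compare):
    second_clips = []
    for ref in reference_clips:        # follow the video clip sequence
        play(ref)                      # play the target clip first
        clip = shoot(ref)              # then shoot with reference to it
        while not compare(clip, ref):  # comparison failed: re-shoot
            clip = shoot(ref)
        second_clips.append(clip)
    return second_clips

played = []
clips = shoot_in_segments(
    ["clip1", "clip2"],
    play=played.append,
    shoot=lambda ref: ref + "-shot",
    compare=lambda clip, ref: True)    # user confirms every comparison
# clips == ["clip1-shot", "clip2-shot"], played == ["clip1", "clip2"]
```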
In some embodiments, the processing procedure information includes at least one piece of processing step information; module 13 includes a module 131 (not shown). Module 131 is configured to process the second original video information according to the at least one piece of processing step information to obtain the subsequent video information corresponding to the second original video information. Here, the specific implementation of module 131 is the same as or similar to the embodiment of step S131 in fig. 1, and therefore is not described again, and is incorporated herein by reference.
In some embodiments, module 131 is configured to: present the at least one piece of processing step information; and in response to one or more pieces of target processing step information selected by the first user from the at least one piece of processing step information, process the second original video information according to the one or more pieces of target processing step information to obtain the subsequent video information corresponding to the second original video information. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are incorporated herein by reference.
In some embodiments, the processing procedure information further includes processing sequence information corresponding to the at least one piece of processing step information; module 131 is configured to: process the second original video information step by step according to the processing sequence information and each piece of processing step information, so as to obtain the subsequent video information corresponding to the second original video information. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are incorporated herein by reference.
In some embodiments, module 14 is configured to: in response to a publishing trigger operation of the first user, publish the subsequent video information to a friend of the first user, where the subsequent video information includes the identification information of the previous video information and does not include any presentation information used for revenue. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are incorporated herein by reference.
In some embodiments, the subsequent video information further includes identification information indicating that learning is allowed, and descendant video information obtained by learning the subsequent video information includes the identification information of the previous video information. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are incorporated herein by reference.
In some embodiments, the previous video information participates in revenue sharing of the subsequent video information and the descendant video information. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are incorporated herein by reference.
In some embodiments, the apparatus further includes a module 15 (not shown). Module 15 is configured to determine popularity information of the previous video information according to the subsequent video information and the descendant video information, where the popularity information is used for recommending the previous video information. Here, the specific implementation of module 15 is the same as or similar to the embodiment of step S15 in fig. 1, and therefore is not described again, and is incorporated herein by reference.
In some embodiments, module 15 is configured to: determine the popularity information of the previous video information according to learning-related information about the subsequent video information and the descendant video information having learned the previous video information. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are incorporated herein by reference.
In some embodiments, module 15 is configured to: determine the popularity information of the previous video information according to user-playing-related information and/or user-interaction-related information of the subsequent video information and the descendant video information. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are incorporated herein by reference.
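A purely illustrative formula for such popularity information, combining the learning count with the play and interaction counts of the subsequent and descendant videos; the weights are assumptions, not part of the embodiment:

```python
# Hypothetical sketch: popularity (heat) of the previous video from how many
# subsequent/descendant videos learned it plus their plays and interactions.
# The weights 3, 1, 2 are illustrative assumptions only.
def popularity(learning_count, plays, likes, comments):
    return 3 * learning_count + plays + 2 * (likes + comments)

# Learned by 4 later videos that together gathered 100 plays, 10 likes,
# and 5 comments:
heat = popularity(learning_count=4, plays=100, likes=10, comments=5)  # 142
```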
In some embodiments, the processing procedure information is obtained in the process of processing the first original video information and generating the previous video information. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are incorporated herein by reference.
In some embodiments, the processing procedure information is obtained in a process in which a network device processes the first original video information according to processing request information sent by the second user and generates the previous video information, and the second user is a shooting user of the previous video information. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In some embodiments, the processing procedure information is obtained in a process of processing the first original video information and generating the previous video information by a second user, where the second user is a shooting user of the previous video information, and a second user device corresponding to the second user uploads the processing procedure information to a network device. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described again, and are included herein by reference.
In some embodiments, the processing procedure information is stored in a second user device corresponding to the second user corresponding to the previous video information, and the second user device uploads the previous video information and the processing procedure information to the network device in response to a publishing operation performed by the second user on the previous video information, so as to publish the previous video information in the network device and store the processing procedure information. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are incorporated herein by reference.
In some embodiments, the processing procedure information includes at least one piece of processing step information, and the second user device synchronously uploads each piece of processing step information to the network device for storage there. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are incorporated herein by reference.
FIG. 3 illustrates an exemplary system that can be used to implement the various embodiments described in this application.
In some embodiments, as shown in FIG. 3, the system 300 can be implemented as any of the devices in the various embodiments described. In some embodiments, system 300 may include one or more computer-readable media (e.g., system memory or NVM/storage 320) having instructions and one or more processors (e.g., processor(s) 305) coupled with the one or more computer-readable media and configured to execute the instructions to implement modules to perform the actions described herein.
For one embodiment, system control module 310 may include any suitable interface controllers to provide any suitable interface to at least one of processor(s) 305 and/or any suitable device or component in communication with system control module 310.
The system control module 310 may include a memory controller module 330 to provide an interface to the system memory 315. Memory controller module 330 may be a hardware module, a software module, and/or a firmware module.
System memory 315 may be used, for example, to load and store data and/or instructions for system 300. For one embodiment, system memory 315 may include any suitable volatile memory, such as suitable DRAM. In some embodiments, the system memory 315 may include double-data-rate type-four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, system control module 310 may include one or more input/output (I/O) controllers to provide an interface to NVM/storage 320 and communication interface(s) 325.
For example, NVM/storage 320 may be used to store data and/or instructions. NVM/storage 320 may include any suitable non-volatile memory (e.g., flash memory) and/or may include any suitable non-volatile storage device(s) (e.g., one or more Hard Disk Drives (HDDs), one or more Compact Disc (CD) drives, and/or one or more Digital Versatile Disc (DVD) drives).
NVM/storage 320 may include storage resources that are physically part of the device on which system 300 is installed or may be accessed by the device and not necessarily part of the device. For example, NVM/storage 320 may be accessible over a network via communication interface(s) 325.
Communication interface(s) 325 may provide an interface for system 300 to communicate over one or more networks and/or with any other suitable device. System 300 may wirelessly communicate with one or more components of a wireless network according to any of one or more wireless network standards and/or protocols.
For one embodiment, at least one of the processor(s) 305 may be packaged together with logic for one or more controller(s) (e.g., memory controller module 330) of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be packaged together with logic for one or more controller(s) of the system control module 310 to form a System In Package (SiP). For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with logic for one or more controller(s) of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with logic for one or more controller(s) of the system control module 310 to form a system on a chip (SoC).
In various embodiments, system 300 may be, but is not limited to being: a server, a workstation, a desktop computing device, or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.). In various embodiments, system 300 may have more or fewer components and/or different architectures. For example, in some embodiments, system 300 includes one or more cameras, a keyboard, a liquid crystal display (LCD) screen (including a touch-screen display), a non-volatile memory port, multiple antennas, a graphics chip, an application-specific integrated circuit (ASIC), and speakers.
The present application also provides a computer readable storage medium having stored thereon computer code which, when executed, performs a method as in any one of the preceding.
The present application also provides a computer program product, which when executed by a computer device, performs the method of any of the preceding claims.
The present application further provides a computer device, comprising:
one or more processors;
a memory for storing one or more computer programs;
the one or more computer programs, when executed by the one or more processors, cause the one or more processors to implement the method of any preceding claim.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example, implemented using Application Specific Integrated Circuits (ASICs), general purpose computers or any other similar hardware devices. In one embodiment, the software programs of the present application may be executed by a processor to implement the steps or functions described above. As such, the software programs (including associated data structures) of the present application can be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Additionally, some of the steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
In addition, some of the present application may be implemented as a computer program product, such as computer program instructions, which when executed by a computer, may invoke or provide methods and/or techniques in accordance with the present application through the operation of the computer. Those skilled in the art will appreciate that the form in which the computer program instructions reside on a computer-readable medium includes, but is not limited to, source files, executable files, installation package files, and the like, and that the manner in which the computer program instructions are executed by a computer includes, but is not limited to: the computer directly executes the instruction, or the computer compiles the instruction and then executes the corresponding compiled program, or the computer reads and executes the instruction, or the computer reads and installs the instruction and then executes the corresponding installed program. Computer-readable media herein can be any available computer-readable storage media or communication media that can be accessed by a computer.
Communication media includes media by which communication signals, including, for example, computer readable instructions, data structures, program modules, or other data, are transmitted from one system to another. Communication media may include conductive transmission media such as cables and wires (e.g., fiber optics, coaxial, etc.) and wireless (non-conductive transmission) media capable of propagating energy waves such as acoustic, electromagnetic, RF, microwave, and infrared. Computer readable instructions, data structures, program modules, or other data may be embodied in a modulated data signal, for example, in a wireless medium such as a carrier wave or similar mechanism such as is embodied as part of spread spectrum techniques. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The modulation may be analog, digital or hybrid modulation techniques.
By way of example, and not limitation, computer-readable storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. For example, computer-readable storage media include, but are not limited to, volatile memory such as random access memory (RAM, DRAM, SRAM); and non-volatile memory such as flash memory, various read-only memories (ROM, PROM, EPROM, EEPROM), magnetic and ferromagnetic/ferroelectric memories (MRAM, feRAM); and magnetic and optical storage devices (hard disk, magnetic tape, CD, DVD); or other now known media or later developed that can store computer-readable information/data for use by a computer system.
An embodiment according to the present application comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform a method and/or a solution according to the aforementioned embodiments of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.
Various aspects of various embodiments are defined in the claims. These and other aspects of the various embodiments are specified in the following numbered clauses:
1. A method for releasing video information, applied to a first user device, wherein the method comprises:
responding to a learning trigger operation executed by a first user for previous video information, and restoring the previous video information according to processing process information corresponding to the previous video information to obtain first original video information corresponding to the previous video information;
acquiring second original video information shot by the first user with reference to the first original video information;
processing the second original video information according to the processing process information to obtain subsequent video information;
and releasing the subsequent video information, wherein the subsequent video information comprises the identification information of the prior video information.
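The four steps of clause 1 can be sketched as a minimal client-side flow. This is an illustrative model only: the patent prescribes no API, so the function names, the dict-based video representation, and the single "brighten" operation below are all hypothetical stand-ins.

```python
# Illustrative sketch of the clause-1 flow (restore, reshoot, reprocess, publish).
# Videos are modeled as dicts of numeric "frames" and processing steps as simple
# invertible operations; every name here is an assumption, not the claimed method.

def apply_steps(frames, steps):
    for op, arg in steps:            # e.g. ("brighten", 10)
        if op == "brighten":
            frames = [f + arg for f in frames]
    return frames

def invert_steps(frames, steps):
    for op, arg in reversed(steps):  # undo in reverse order
        if op == "brighten":
            frames = [f - arg for f in frames]
    return frames

def publish_learned_video(previous_video, steps, shoot_fn):
    first_original = invert_steps(previous_video["frames"], steps)   # step 1: restore
    second_original = shoot_fn(first_original)                       # step 2: reshoot
    processed = apply_steps(second_original, steps)                  # step 3: reprocess
    return {"frames": processed, "source_id": previous_video["id"]}  # step 4: publish + tag

edited = {"id": "prev-1", "frames": apply_steps([1, 2, 3], [("brighten", 10)])}
new_video = publish_learned_video(edited, [("brighten", 10)], shoot_fn=lambda ref: ref)
print(new_video)  # new footage carries the previous video's identifier
```

The `source_id` field plays the role of the "identification information of the previous video information" that the published result must carry.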
2. The method of clause 1, wherein the method further comprises:
in response to a selection operation performed by the first user on one or more published pieces of video information to be learned, obtaining the previous video information selected by the first user from the one or more pieces of video information to be learned, wherein each piece of video information to be learned includes identification information indicating that learning is allowed.
3. The method according to clause 1, wherein the processing procedure information includes at least one piece of processing step information and processing sequence information corresponding to the at least one piece of processing step information.
4. The method according to clause 3, wherein the restoring the previous video information according to the processing procedure information corresponding to the previous video information to obtain first original video information corresponding to the previous video information includes:
and according to the reverse order information corresponding to the processing order information, sequentially restoring the prior video information according to the reverse processing step information corresponding to each processing step information to obtain first original video information corresponding to the prior video information.
5. The method of clause 1, wherein the obtaining second original video information captured by the first user with reference to the first original video information comprises:
and acquiring second original video information shot by the first user by referring to the first original video information in a synchronous mode, wherein a shot picture of the second original video information and the first original video information are displayed on the same screen.
6. The method of clause 5, wherein the on-screen display comprises at least one of:
side-by-side display;
picture-in-picture display;
overlapping display.
7. The method of clause 1, wherein the obtaining second original video information captured by the first user with reference to the first original video information comprises:
and playing the first original video information, and after the playing is finished, acquiring second original video information shot by the first user by referring to the first original video information in an asynchronous mode.
8. The method of clause 7, wherein the first original video information comprises one or more first original video clip information and the second original video information comprises one or more second original video clip information;
wherein, the playing the first original video information, and after the playing is completed, obtaining second original video information shot by the first user by referring to the first original video information in an asynchronous manner, includes:
according to the video clip sequence corresponding to the one or more pieces of first original video clip information, taking the first original video clip information that is first in the sequence as the target original video clip information;
playing the target original video clip information, and after the playing is finished, acquiring second original video clip information corresponding to the target original video clip information, which is shot by the first user in an asynchronous manner with reference to the target original video clip information;
and according to the video clip sequence, taking the next piece of first original video clip information after the target original video clip information as the new target original video clip information, and repeating the playing and acquiring steps until the target original video clip information is the last first original video clip information in the sequence.
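The segment-by-segment loop of clause 8 can be sketched as follows. The `play` and `record` callables stand in for device I/O and are illustrative assumptions; the patent does not specify these interfaces.

```python
# Sketch of the clause-8 loop: play each reference segment in sequence order,
# then record the user's matching segment before moving to the next one.

def shoot_by_segments(first_original_segments, play, record):
    second_original_segments = []
    for target in first_original_segments:        # video clip sequence order
        play(target)                              # play target segment to completion
        second_original_segments.append(record(reference=target))
    return second_original_segments

played = []
result = shoot_by_segments(
    ["seg-a", "seg-b", "seg-c"],
    play=played.append,                           # pretend playback
    record=lambda reference: reference.upper(),   # pretend recording
)
print(result)  # one recorded segment per reference segment, in order
```

Each recorded segment corresponds one-to-one with the reference segment that was just played, which is what later allows the per-segment comparison of clauses 9 to 11.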
9. The method according to clause 8, wherein the step of taking the next piece of first original video clip information after the target original video clip information as the new target original video clip information according to the video clip sequence, and repeating the playing and acquiring until the last first original video clip information in the sequence is reached, includes:
in response to a comparison operation performed by the first user for the second original video clip information, presenting the second original video clip information in comparison with the first original video clip information; and in response to a confirmation operation performed by the first user for the comparison, if the confirmation operation indicates that the comparison of the second original video clip information succeeds, taking the next piece of first original video clip information after the target original video clip information as the new target original video clip information according to the video clip sequence, and repeating the playing and acquiring steps until the target original video clip information is the last first original video clip information in the sequence.
10. The method of clause 9, wherein the method further comprises:
and if the confirmation operation indicates that the comparison of the second original video clip information fails, re-acquiring second original video clip information corresponding to the target original video clip information, shot by the first user in an asynchronous manner with reference to the target original video clip information.
11. The method of clause 8, wherein the method further comprises:
if the one or more pieces of second original video clip information have been obtained, in response to a comparison operation performed by the first user for the one or more pieces of second original video clip information, presenting them in comparison with the corresponding first original video clip information; and in response to a confirmation operation performed by the first user for the comparison, if the confirmation operation indicates that the comparison fails for at least one piece of the first original video clip information, re-acquiring the second original video clip information corresponding to that first original video clip information, shot by the first user in an asynchronous manner with reference to it.
12. The method of clause 1, wherein the process information includes at least one process step information;
wherein, the processing the second original video information according to the processing procedure information to obtain the subsequent video information includes:
and processing the second original video information according to the at least one piece of processing step information to obtain the subsequent video information corresponding to the second original video.
13. The method according to clause 12, wherein the processing the second original video information according to the at least one processing step information to obtain the subsequent video information corresponding to the second original video comprises:
and presenting the at least one piece of processing step information, and in response to one or more pieces of target processing step information selected by the first user from the at least one piece of processing step information, processing the second original video information according to the one or more pieces of target processing step information to obtain the subsequent video information corresponding to the second original video information.
14. The method according to clause 12, wherein the processing procedure information further includes processing sequence information corresponding to the at least one processing step information;
wherein, the processing the second original video information according to the at least one processing step information to obtain the subsequent video information corresponding to the second original video includes:
and processing the second original video information in sequence according to the processing sequence information and the information of each processing step to obtain the subsequent video information corresponding to the second original video information.
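Clauses 13 and 14 together describe selecting a subset of the presented processing steps and applying them in their original sequence. A sketch under the same hypothetical step model (the operation names are assumptions, not claimed operations):

```python
# Sketch of clauses 13-14: the user may pick a subset of the presented
# processing steps; whichever steps are chosen are applied in the original
# processing sequence order.

def process(frames, steps, selected=None):
    # Filtering preserves the original order of `steps` (the processing
    # sequence information); `selected=None` means "apply every step".
    chosen = [s for s in steps if selected is None or s[0] in selected]
    for op, arg in chosen:
        if op == "brighten":
            frames = [f + arg for f in frames]
        elif op == "caption":
            frames = [f"{f}|{arg}" for f in frames]
    return frames

steps = [("brighten", 5), ("caption", "demo")]
print(process([1, 2], steps))                         # all steps, in order
print(process([1, 2], steps, selected={"brighten"}))  # user-selected subset
```

Filtering a Python list comprehension keeps the relative order of the surviving steps, which is exactly the property clause 14 requires.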
15. The method according to clause 1, wherein the subsequent video information further includes identification information indicating that learning is allowed, and descendant video information obtained by learning the subsequent video information includes the identification information of the previous video information.
16. The method of clause 15, wherein the prior video information participates in revenue sharing of the subsequent video information and the descendant video information.
17. The method of clause 16, wherein the method further comprises:
and determining the popularity information of the previous video information according to the subsequent video information and the descendant video information, wherein the popularity information is used for recommending the previous video information.
18. The method according to clause 17, wherein the determining popularity information of the previous video information according to the subsequent video information and the descendant video information, wherein the popularity information is used for recommending the previous video information, includes:
and determining the popularity information of the previous video information according to the learning related information of the previous video information learned by the subsequent video information and the descendant video information.
19. The method according to clause 17, wherein the determining popularity information of the previous video information according to the subsequent video information and the descendant video information, wherein the popularity information is used for recommending the previous video information, includes:
and determining the popularity information of the previous video information according to the user playing related information and/or the user interaction related information of the subsequent video information and the descendant video information.
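Clauses 17 to 19 derive a popularity score for the previous video from the play and interaction information of its subsequent and descendant videos. A minimal sketch; the field names and weights are illustrative assumptions, since the patent does not fix a scoring formula:

```python
# Sketch of clauses 17-19: score a previous video by aggregating the play
# and interaction counts of its subsequent/descendant videos.
# Weights and field names are hypothetical.

def popularity(descendants, play_weight=1.0, interact_weight=3.0):
    return sum(
        play_weight * d.get("plays", 0) + interact_weight * d.get("likes", 0)
        for d in descendants
    )

family = [{"plays": 100, "likes": 10}, {"plays": 50, "likes": 2}]
print(popularity(family))  # aggregate score used to rank the previous video
```

A recommender would then rank learnable previous videos by this score, so a video whose descendants are widely played and interacted with is surfaced more often.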
20. The method of clause 1, wherein the processing procedure information is obtained during the process of processing the first original video information and generating the previous video information.
21. The method according to clause 20, wherein the processing procedure information is obtained during a process in which a network device processes the first original video information according to processing request information sent by a second user and generates the previous video information, the second user being the shooting user of the previous video information.
22. The method according to clause 20, wherein the processing procedure information is obtained during a process in which a second user processes the first original video information and generates the previous video information, the second user is a shooting user of the previous video information, and second user equipment corresponding to the second user uploads the processing procedure information to network equipment.
23. The method of clause 22, wherein the process information is stored at the second user device, the second user device uploading the prior video information and the process information to the network device in response to a publishing operation performed by the second user for the prior video information to publish the prior video information in the network device and store the process information.
24. The method of clause 22, wherein the process information includes at least one process step information, the second user device synchronously uploading each of the at least one process step information to the network device for storage therein.
25. An apparatus for publishing video information, the apparatus comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform the method of any of clauses 1 to 24.
26. A computer readable medium storing instructions that, when executed, cause a system to perform the operations of the method of any of clauses 1 to 24.

Claims (26)

1. A method for publishing video information, applied to a first user equipment, wherein the method comprises the following steps:
in response to a learning trigger operation performed by a first user on previous video information, restoring the previous video information according to processing procedure information corresponding to the previous video information, to obtain first original video information corresponding to the previous video information, wherein the first original video information corresponds to the initial shooting state of the previous video information, and the processing procedure information is the process data by which the previous video information was generated from the first original video information after the shooting user of the previous video information obtained the first original video information;
acquiring second original video information shot by the first user with reference to the first original video information;
processing the second original video information according to the processing procedure information to obtain subsequent video information;
and publishing the subsequent video information, wherein the subsequent video information includes identification information of the previous video information.
2. The method of claim 1, wherein the method further comprises:
in response to a selection operation performed by the first user on one or more published pieces of video information to be learned, obtaining the previous video information selected by the first user from the one or more pieces of video information to be learned, wherein each piece of video information to be learned includes identification information indicating that learning is allowed.
3. The method of claim 1, wherein the processing procedure information includes at least one processing step information and processing sequence information corresponding to the at least one processing step information.
4. The method according to claim 3, wherein the restoring the previous video information according to the processing procedure information corresponding to the previous video information to obtain the first original video information corresponding to the previous video information includes:
and according to the reverse order information corresponding to the processing order information, sequentially restoring the prior video information according to the reverse processing step information corresponding to each processing step information to obtain first original video information corresponding to the prior video information.
5. The method of claim 1, wherein the obtaining second original video information captured by the first user with reference to the first original video information comprises:
and acquiring second original video information shot by the first user by referring to the first original video information in a synchronous mode, wherein a shot picture of the second original video information and the first original video information are displayed on the same screen.
6. The method of claim 5, wherein the on-screen display comprises at least one of:
side-by-side display;
picture-in-picture display;
overlapping display.
7. The method of claim 1, wherein the obtaining second original video information captured by the first user with reference to the first original video information comprises:
and playing the first original video information, and after the playing is finished, acquiring second original video information shot by the first user by referring to the first original video information in an asynchronous mode.
8. The method of claim 7, wherein the first original video information comprises one or more first original video clip information and the second original video information comprises one or more second original video clip information;
wherein, the playing the first original video information, and after the playing is completed, obtaining second original video information shot by the first user by referring to the first original video information in an asynchronous manner, includes:
according to the video clip sequence corresponding to the one or more pieces of first original video clip information, taking the first original video clip information that is first in the sequence as the target original video clip information;
playing the target original video clip information, and after the playing is finished, acquiring second original video clip information corresponding to the target original video clip information, which is shot by the first user in an asynchronous manner with reference to the target original video clip information;
and according to the video clip sequence, taking the next piece of first original video clip information after the target original video clip information as the new target original video clip information, and repeating the playing and acquiring steps until the target original video clip information is the last first original video clip information in the sequence.
9. The method of claim 8, wherein the step of taking the next piece of first original video clip information after the target original video clip information as the new target original video clip information according to the video clip sequence, and repeating the playing and acquiring until the last first original video clip information in the sequence is reached, comprises:
in response to a comparison operation performed by the first user for the second original video clip information, presenting the second original video clip information in comparison with the first original video clip information; and in response to a confirmation operation performed by the first user for the comparison, if the confirmation operation indicates that the comparison of the second original video clip information succeeds, taking the next piece of first original video clip information after the target original video clip information as the new target original video clip information according to the video clip sequence, and repeating the playing and acquiring steps until the target original video clip information is the last first original video clip information in the sequence.
10. The method of claim 9, wherein the method further comprises:
and if the confirmation operation indicates that the comparison of the second original video clip information fails, re-acquiring second original video clip information corresponding to the target original video clip information, shot by the first user in an asynchronous manner with reference to the target original video clip information.
11. The method of claim 8, wherein the method further comprises:
if the one or more pieces of second original video clip information have been obtained, in response to a comparison operation performed by the first user for the one or more pieces of second original video clip information, presenting them in comparison with the corresponding first original video clip information; and in response to a confirmation operation performed by the first user for the comparison, if the confirmation operation indicates that the comparison fails for at least one piece of the first original video clip information, re-acquiring the second original video clip information corresponding to that first original video clip information, shot by the first user in an asynchronous manner with reference to it.
12. The method of claim 1, wherein the process information includes at least one process step information;
wherein the processing the second original video information according to the processing procedure information to obtain the subsequent video information includes:
and processing the second original video information according to the at least one piece of processing step information to obtain the subsequent video information corresponding to the second original video.
13. The method of claim 12, wherein the processing the second original video information according to the at least one processing step information to obtain the subsequent video information corresponding to the second original video comprises:
and presenting the at least one piece of processing step information, and in response to one or more pieces of target processing step information selected by the first user from the at least one piece of processing step information, processing the second original video information according to the one or more pieces of target processing step information to obtain the subsequent video information corresponding to the second original video information.
14. The method of claim 12, wherein the processing procedure information further includes processing sequence information corresponding to the at least one processing step information;
wherein the processing the second original video information according to the at least one processing step information to obtain subsequent video information corresponding to the second original video comprises:
and processing the second original video information in sequence according to the processing sequence information and the information of each processing step to obtain the subsequent video information corresponding to the second original video information.
15. The method according to claim 1, wherein the subsequent video information further includes identification information indicating that learning is allowed, and descendant video information obtained by learning the subsequent video information includes the identification information of the previous video information.
16. The method of claim 15, wherein the previous video information participates in revenue sharing of the later video information and the descendant video information.
17. The method of claim 16, wherein the method further comprises:
and determining the popularity information of the previous video information according to the subsequent video information and the descendant video information, wherein the popularity information is used for recommending the previous video information.
18. The method of claim 17, wherein the determining popularity information of the previous video information according to the subsequent video information and the descendant video information, wherein the popularity information is used for recommending the previous video information, comprises:
and determining the popularity information of the previous video information according to the learning related information of the previous video information learned by the subsequent video information and the descendant video information.
19. The method of claim 17, wherein the determining popularity information of the previous video information according to the subsequent video information and the descendant video information, wherein the popularity information is used for recommending the previous video information, comprises:
and determining the popularity information of the previous video information according to the user playing related information and/or the user interaction related information of the subsequent video information and the descendant video information.
20. The method of claim 1, wherein the processing procedure information is obtained during the process of processing the first original video information and generating the previous video information.
21. The method according to claim 20, wherein the processing information is obtained during a process in which a network device processes the first original video information according to processing request information transmitted by a second user and generates the previous video information, the second user being a shooting user of the previous video information.
22. The method of claim 20, wherein the processing information is obtained during a process of processing the first original video information and generating the previous video information by a second user, the second user is a shooting user of the previous video information, and a second user device corresponding to the second user uploads the processing information to a network device.
23. The method of claim 22, wherein the process information is stored at the second user device, and the second user device uploads the previous video information and the process information to the network device in response to a publishing operation performed by the second user for the previous video information to publish the previous video information in the network device and store the process information.
24. The method of claim 22, wherein the process information includes at least one process step information, and the second user equipment synchronously uploads each of the at least one process step information to the network device for storage therein.
25. An apparatus for publishing video information, the apparatus comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform the method of any one of claims 1 to 24.
26. A computer-readable medium storing instructions that, when executed, cause a system to perform the operations of any of the methods of claims 1-24.
CN202011279513.2A 2020-11-16 2020-11-16 Method and equipment for releasing video information Active CN112423112B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011279513.2A CN112423112B (en) 2020-11-16 2020-11-16 Method and equipment for releasing video information

Publications (2)

Publication Number Publication Date
CN112423112A CN112423112A (en) 2021-02-26
CN112423112B true CN112423112B (en) 2023-03-21

Family

ID=74832273

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011279513.2A Active CN112423112B (en) 2020-11-16 2020-11-16 Method and equipment for releasing video information

Country Status (1)

Country Link
CN (1) CN112423112B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111866587A (en) * 2020-07-30 2020-10-30 口碑(上海)信息技术有限公司 Short video generation method and device

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9940970B2 (en) * 2012-06-29 2018-04-10 Provenance Asset Group Llc Video remixing system
CN107682654B (en) * 2017-09-30 2019-11-26 北京金山安全软件有限公司 Video recording method, shooting device, electronic equipment and medium
CN108566519B (en) * 2018-04-28 2022-04-12 腾讯科技(深圳)有限公司 Video production method, device, terminal and storage medium
CN110798752B (en) * 2018-08-03 2021-10-15 北京京东尚科信息技术有限公司 Method and system for generating video summary
CN110876036B (en) * 2018-08-31 2022-08-02 腾讯数码(天津)有限公司 Video generation method and related device
CN109547841B (en) * 2018-12-20 2020-02-07 北京微播视界科技有限公司 Short video data processing method and device and electronic equipment
CN110062163B (en) * 2019-04-22 2020-10-20 珠海格力电器股份有限公司 Multimedia data processing method and device
CN110855893A (en) * 2019-11-28 2020-02-28 维沃移动通信有限公司 Video shooting method and electronic equipment
CN111641861B (en) * 2020-05-27 2022-08-02 维沃移动通信有限公司 Video playing method and electronic equipment
CN111726536B (en) * 2020-07-03 2024-01-05 腾讯科技(深圳)有限公司 Video generation method, device, storage medium and computer equipment




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant