CN111314777B - Video generation method and device, computer storage medium and electronic equipment - Google Patents


Info

Publication number
CN111314777B
CN111314777B (application CN201911241580.2A)
Authority
CN
China
Prior art keywords
video
video stream
screen
screen capture
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911241580.2A
Other languages
Chinese (zh)
Other versions
CN111314777A (en)
Inventor
董英姿
杨兵
Current Assignee
Beijing Wodong Tianjun Information Technology Co Ltd
Original Assignee
Beijing Wodong Tianjun Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Wodong Tianjun Information Technology Co Ltd filed Critical Beijing Wodong Tianjun Information Technology Co Ltd
Priority to CN201911241580.2A priority Critical patent/CN111314777B/en
Publication of CN111314777A publication Critical patent/CN111314777A/en
Application granted granted Critical
Publication of CN111314777B publication Critical patent/CN111314777B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/435: Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H04N21/80: Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81: Monomedia components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The present disclosure relates to the technical field of image processing, and provides a video generation method, a video generation apparatus, a computer storage medium, and an electronic device. The video generation method includes: fusing a video content material and a video template into an initial video stream; step A: capturing the screen of the initial video stream according to a preset number of screen captures to generate screen capture images; step B: determining the number of processing threads for the screen capture images according to the playing parameters of the initial video stream and the number of screen captures; step C: processing the screen capture images in parallel based on the number of processing threads to synthesize them into a target video stream; adjusting the number of screen captures within a preset value range and cyclically executing steps A to C to obtain a plurality of target video streams; and determining the number of processing threads corresponding to the target video stream with the shortest synthesis time among the plurality of target video streams as the optimal thread number corresponding to the video template. The video generation method can improve the efficiency of video generation.

Description

Video generation method and device, computer storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a video generation method, a video generation apparatus, a computer storage medium, and an electronic device.
Background
With the rapid development of computer and Internet technologies, the field of image processing has also advanced rapidly, and how to quickly generate a video that meets user requirements from pictures has become a focus of attention for related developers.
At present, videos are generally generated based on Node-canvas, that is, video generation is a serial process, so generation takes a long time; for example, a 30 s video may take more than 60 s to generate. Thus, the video generation efficiency of the related method is low.
In view of the above, there is a need in the art to develop a new video generation method and apparatus.
It is to be noted that the information disclosed in the background section above is only used to enhance understanding of the background of the present disclosure.
Disclosure of Invention
The present disclosure is directed to a video generation method, a video generation apparatus, a computer storage medium, and an electronic device, so as to avoid the defect of low efficiency in the prior art at least to a certain extent.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to a first aspect of the present disclosure, there is provided a video generation method, including: fusing a video content material and a video template into an initial video stream; step A: capturing the screen of the initial video stream according to a preset number of screen captures to generate screen capture images; step B: determining the number of processing threads for the screen capture images according to the playing parameters of the initial video stream and the number of screen captures; step C: processing the screen capture images in parallel based on the number of processing threads to synthesize them into a target video stream; adjusting the number of screen captures within a preset value range, and cyclically executing step A, step B, and step C to obtain a plurality of target video streams; and determining the number of processing threads corresponding to the target video stream with the shortest synthesis time among the plurality of target video streams as the optimal thread number corresponding to the video template.
In an exemplary embodiment of the present disclosure, after determining, as an optimal number of threads corresponding to the video template, a number of processing threads corresponding to a target video stream whose composition time is the shortest among the plurality of target video streams, the method further includes: acquiring the target screen capturing times corresponding to the optimal thread number; performing screen capture on the initial video stream based on the target screen capture times to generate a target screen capture image; and carrying out parallel processing on the target screen capture image based on the optimal thread number so as to synthesize the target screen capture image into an optimal video stream.
In an exemplary embodiment of the present disclosure, the playing parameters include at least a playing duration and a video frame rate; the determining the number of processing threads for the screen capture images according to the playing parameters and the number of screen captures includes determining the number of processing threads based on the following formula:

ProcessNum = (Duration × FPS) / RunNum

where ProcessNum is the number of processing threads, Duration is the playing duration, FPS is the video frame rate, and RunNum is the number of screen captures.
In an exemplary embodiment of the present disclosure, the video content material includes at least picture material and text material; the fusing the video content material and the video template into the initial video stream comprises: filling the picture material to a first designated position in the video template; filling the text material to a second designated position in the video template; and fusing the video template filled with the picture material and the text material into the initial video stream.
In an exemplary embodiment of the present disclosure, the populating the picture material to the first specified location in the video template includes: acquiring the size information of the picture which can be contained in the first designated position; adjusting the size of the picture material according to the picture size information; and filling the adjusted picture material to the first designated position.
In an exemplary embodiment of the present disclosure, the filling the text material to a second designated position in the video template includes: acquiring the word number information that can be accommodated at the second designated position; segmenting the text material according to the word number information; and filling the segmented text material to the second designated position in the video template.
In an exemplary embodiment of the present disclosure, the method further comprises: and sending the target video stream to a front-end display device so that the front-end display device plays the target video stream.
According to a second aspect of the present disclosure, there is provided a video generating apparatus comprising: a fusion module configured to fuse the video content material and the video template into an initial video stream; a screen capture module configured to capture the screen of the initial video stream according to a preset number of screen captures to generate screen capture images; a first determining module configured to determine the number of processing threads for the screen capture images according to the playing parameters of the initial video stream and the number of screen captures; a synthesis module configured to process the screen capture images in parallel based on the number of processing threads to synthesize them into a target video stream; an adjusting module configured to adjust the number of screen captures within a preset value range and cyclically execute step A, step B, and step C to obtain a plurality of target video streams; and a second determining module configured to determine the number of processing threads corresponding to the target video stream with the shortest synthesis time among the plurality of target video streams as the optimal thread number corresponding to the video template.
According to a third aspect of the present disclosure, there is provided a computer storage medium having stored thereon a computer program which, when executed by a processor, implements the video generation method of the first aspect described above.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the video generation method of the first aspect described above via execution of the executable instructions.
As can be seen from the foregoing technical solutions, the video generation method, the video generation apparatus, the computer storage medium and the electronic device in the exemplary embodiments of the present disclosure have at least the following advantages and positive effects:
in the technical solutions provided in some embodiments of the present disclosure, on one hand, a video content material and a video template are fused into an initial video stream; in step A, the initial video stream is captured according to a preset number of screen captures to generate screen capture images; in step B, the number of processing threads for the screen capture images is determined according to the playing parameters of the initial video stream and the number of screen captures; and in step C, the screen capture images are processed in parallel based on the number of processing threads and synthesized into a target video stream. This resolves the technical problems of slow, time-consuming, and inefficient serial processing in the prior art, and improves the efficiency of video generation. On the other hand, the number of screen captures is adjusted within a preset value range, steps A, B, and C are executed cyclically to obtain a plurality of target video streams, and the number of processing threads corresponding to the target video stream with the shortest synthesis time among them is determined as the optimal thread number corresponding to the video template. The optimal thread number, which minimizes user waiting time, can thus be determined through repeated adjustment; in subsequent processing, the relevant processing threads or servers can be allocated directly according to the optimal thread number, so that processing threads are configured reasonably and video generation efficiency is further improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty.
Fig. 1 shows a schematic flow diagram of a video generation method in an exemplary embodiment of the present disclosure;
FIG. 2 shows a schematic flow diagram of a video generation method in another exemplary embodiment of the present disclosure;
FIG. 3 shows a flow diagram of a video generation method in yet another exemplary embodiment of the present disclosure;
FIG. 4 shows a flow diagram of a video generation method in yet another exemplary embodiment of the present disclosure;
FIG. 5 is a schematic flow chart diagram illustrating a video generation method in an exemplary embodiment of the present disclosure;
fig. 6 shows a schematic structural diagram of a video generation apparatus in an exemplary embodiment of the present disclosure;
FIG. 7 shows a schematic diagram of a structure of a computer storage medium in an exemplary embodiment of the disclosure;
fig. 8 shows a schematic structural diagram of an electronic device in an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and the like. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
The terms "a," "an," "the," and "said" are used in this specification to denote the presence of one or more elements/components/parts/etc.; the terms "comprising" and "having" are intended to be inclusive and mean that there may be additional elements/components/etc. other than the listed elements/components/etc.; the terms "first" and "second", etc. are used merely as labels, and are not limiting on the number of their objects.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities.
At present, videos are generally generated based on Node-canvas, that is, video generation is a serial process, so generation takes a long time; for example, a 30 s video may take more than 60 s to generate. Thus, the video generation efficiency of the related method is low.
In the embodiments of the present disclosure, a video generation method is provided first, which overcomes, at least to some extent, the drawback of the video generation method provided in the prior art that is relatively inefficient.
Fig. 1 shows a flowchart of a video generation method in an exemplary embodiment of the present disclosure, and an execution subject of the video generation method may be a server that generates a video.
Referring to fig. 1, a video generation method according to one embodiment of the present disclosure includes the steps of:
step S110, fusing the video content material and the video template into an initial video stream;
step S120 (step A), the initial video stream is subjected to screen capturing according to the preset screen capturing times, and a screen capturing image is generated;
step S130 (step B), determining the processing thread number of the screen capture image according to the playing parameters and the screen capture times of the initial video stream;
step S140 (step C) of performing parallel processing on the screen capture images based on the processing thread number to synthesize the screen capture images into a target video stream;
s150, carrying out numerical adjustment on the screen capturing times within a preset numerical range, and circularly executing the step A, the step B and the step C to obtain a plurality of target video streams;
step S160, determining the processing thread number corresponding to the target video stream with the shortest synthesis time among the plurality of target video streams as the optimal thread number corresponding to the video template.
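The loop across steps S120 to S160 can be sketched as follows. This is a hypothetical outline: `synthesizeAndTime` stands in for the real capture-and-synthesis pipeline and would return the measured synthesis time, and all names are illustrative rather than taken from the patent:

```javascript
// Apply the patent's formula: ProcessNum = Duration * FPS / RunNum.
function computeThreadNum(durationSec, fps, runNum) {
  return Math.round((durationSec * fps) / runNum);
}

// Try each candidate screen-capture count in the preset range, measure
// synthesis time via the caller-supplied synthesizeAndTime(runNum,
// threadNum) callback (steps A-C, timed), and return the thread count
// of the fastest run as the optimal thread number for the template.
function pickOptimalThreadNum(durationSec, fps, runNumCandidates, synthesizeAndTime) {
  let best = { threadNum: 0, elapsed: Infinity };
  for (const runNum of runNumCandidates) {
    const threadNum = computeThreadNum(durationSec, fps, runNum);
    const elapsed = synthesizeAndTime(runNum, threadNum);
    if (elapsed < best.elapsed) best = { threadNum, elapsed };
  }
  return best.threadNum;
}
```

The returned value can then be cached per template, so later videos built on the same template allocate threads directly without re-running the search.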
In the technical solution provided by the embodiment shown in fig. 1, on one hand, a video content material and a video template are fused into an initial video stream; in step A, the initial video stream is captured according to a preset number of screen captures to generate screen capture images; in step B, the number of processing threads for the screen capture images is determined according to the playing parameters of the initial video stream and the number of screen captures; and in step C, the screen capture images are processed in parallel based on the number of processing threads and synthesized into a target video stream. On the other hand, the number of screen captures is adjusted within a preset value range, steps A, B, and C are executed cyclically to obtain a plurality of target video streams, and the number of processing threads corresponding to the target video stream with the shortest synthesis time among them is determined as the optimal thread number corresponding to the video template. The optimal thread number, which minimizes user waiting time, can thus be determined through repeated adjustment; in subsequent processing, the relevant processing threads or servers can be allocated directly according to the optimal thread number, so that processing threads are configured reasonably and video generation efficiency is further improved.
The following describes the specific implementation of each step in fig. 1 in detail:
in step S110, the video content material and the video template are fused into an initial video stream.
In an exemplary embodiment of the present disclosure, video content material and a video template may be fused into an initial video stream. The initial video stream contains the video content material but has not yet been finally synthesized by the relevant video processing software, so it does not yet contain the corresponding special effects, filters, and the like, and its display effect is relatively poor.
In an exemplary embodiment of the present disclosure, the video content material may be the picture material, text material, and the like required to generate a video, and may be set according to the actual situation. The video template, similar to a web page template or website template, can simplify the video file used in the generation process and improve the speed of video generation.
In an exemplary embodiment of the present disclosure, referring to fig. 2, fig. 2 shows a flowchart of a video generation method in another exemplary embodiment of the present disclosure, specifically shows a flowchart of fusing a video content material and a video template into an initial video stream, including steps S201 to S203, and step S110 is explained below with reference to fig. 2.
In step S201, a picture material is filled to a first designated position in the video template.
In an exemplary embodiment of the present disclosure, the first designated location may be a location in the video template that is pre-marked to fill in the picture material. It should be noted that the first designated location may include a plurality of locations, and may be set according to actual situations, which belongs to the protection scope of the present disclosure.
In an exemplary embodiment of the present disclosure, referring to fig. 3, fig. 3 shows a flowchart of a video generation method in yet another exemplary embodiment of the present disclosure, specifically showing a flowchart of filling a picture material into a first designated position in a video template, including steps S301 to S303, where step S201 is explained below with reference to fig. 3.
In step S301, picture size information that can be accommodated by the first specified position is acquired.
In an exemplary embodiment of the present disclosure, the size information of the picture that can be accommodated at the first designated position may be acquired; specifically, the length, width, pixel dimensions, and the like may be acquired, and may be set according to the actual situation. For example, the acquired picture size that the first designated position can accommodate may be 20 mm × 30 mm.
In step S302, the size of the picture material is adjusted according to the picture size information.
In an exemplary embodiment of the present disclosure, after the picture size information is acquired, the size of the picture material may be adjusted according to the picture size information, for example, the size of the picture material may be adjusted to 20mm × 30mm, or the size of the picture material may be adjusted to any size smaller than 20mm × 30mm, which may be set according to actual situations, and belongs to the protection scope of the present disclosure.
In step S303, the adjusted picture material is filled to the first designated position.
In an exemplary embodiment of the present disclosure, after the picture material is adjusted, it may be filled to the first designated position, for example, by inserting it at that position. This avoids the technical problems of the picture material being too large or incompletely displayed, and optimizes the display effect of the picture material.
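The resize step in steps S301 to S303 can be sketched as a proportional scale-to-fit; the following is a minimal illustration, assuming the position's capacity is given as width/height bounds (function and parameter names are illustrative, not from the patent):

```javascript
// Scale a picture of size (w, h) down to fit within the slot bounds
// (maxW, maxH), preserving its aspect ratio. Pictures that already fit
// are left unchanged, since any size not exceeding the slot is allowed.
function fitToSlot(w, h, maxW, maxH) {
  const scale = Math.min(1, maxW / w, maxH / h);
  return { width: Math.floor(w * scale), height: Math.floor(h * scale) };
}
```

For example, a 40 × 30 picture destined for a 20 × 30 slot is scaled by 0.5 to 20 × 15, so it fits the slot without distortion.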
With continued reference to fig. 2, in step S202, the text material is filled into the second designated position in the video template.
In an exemplary embodiment of the present disclosure, the second designated location may be a location in the video template that is pre-marked to fill in text material. The second designated position may include a plurality of positions, and may be set according to actual conditions, which falls within the scope of the present disclosure.
In an exemplary embodiment of the present disclosure, referring to fig. 4, fig. 4 shows a flowchart of a video generation method in another exemplary embodiment of the present disclosure, specifically showing a flowchart of filling text material into a second designated position in a video template, including steps S401-S403, and step S202 is explained below with reference to fig. 4.
In step S401, word number information that can be accommodated at the second designated position is acquired.
In an exemplary embodiment of the present disclosure, the word number information that can be accommodated at the second designated position may be acquired, and for example, the word number information that can be accommodated at each second designated position may be 100 words, and specifically, may be set by itself according to actual situations, and belongs to the protection scope of the present disclosure.
In step S402, the text material is segmented according to the word count information.
In an exemplary embodiment of the present disclosure, after the word number information is obtained, the text material may be segmented accordingly; for example, the text material may be divided into segments of 100 words each, or into segments of fewer than 100 words. This may be set according to the actual situation and belongs to the protection scope of the present disclosure.
In step S403, the text material after the segmentation process is filled in a second designated position in the video template.
In an exemplary embodiment of the disclosure, after the text material is segmented, the segmented text material may be filled (e.g., inserted) in segments to the second designated positions in the video template. Therefore, the technical problem that the word number is lost when the word number of the character material is too large can be avoided, and the completeness of the character material is ensured.
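The segmentation in steps S401 to S403 amounts to splitting the text into chunks no larger than the slot capacity. A minimal sketch, assuming capacity is measured in characters (the 100-unit capacity and the function name are illustrative):

```javascript
// Split text into segments of at most maxLen characters, so each
// segment fits one designated position; the final segment may be shorter.
function segmentText(text, maxLen) {
  const segments = [];
  for (let i = 0; i < text.length; i += maxLen) {
    segments.push(text.slice(i, i + maxLen));
  }
  return segments;
}
```

Each resulting segment would then be inserted at its own second designated position in the template.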
With continued reference to fig. 2, in step S203, the video template after the picture material and the text material are filled is determined as the initial video stream.
In an exemplary embodiment of the present disclosure, after the above picture material and text material are filled in the above video template, the above video template may be determined as an initial video stream.
With continued reference to fig. 1, in step S120 (step a), the initial video stream is captured according to a preset number of screen captures, and a screen capture image is generated.
In the exemplary embodiment of the present disclosure, it should be noted that step S120 is step A.
In an exemplary embodiment of the present disclosure, the initial video stream may be subjected to screen capturing according to a preset number of screen captures to generate a plurality of screen capture images. Screen capturing is a computer application technique for grabbing the pictures or text currently displayed on a screen; through this technique, the frames of interest can be extracted from the stream for further processing. For example, the preset number of screen captures may be 100, and may be set according to the actual situation, which belongs to the protection scope of the present disclosure.
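One way to spread a preset number of captures across the stream is to capture at evenly spaced timestamps. The sketch below computes such a schedule; this is an illustrative scheduling, not a capture policy stated in the patent:

```javascript
// Evenly schedule runNum screen captures across a stream of
// durationSec seconds, returning the timestamp (in seconds) of each.
function captureTimestamps(durationSec, runNum) {
  const interval = durationSec / runNum;
  return Array.from({ length: runNum }, (_, i) => i * interval);
}
```

For the patent's example of a 60-second stream and 100 captures, this yields one capture every 0.6 seconds.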
In step S130 (step B), the number of processing threads for the screen capture images is determined according to the playing parameters of the initial video stream and the number of screen captures.
In the exemplary embodiment of the present disclosure, it should be noted that step S130 is step B.
In an exemplary embodiment of the present disclosure, the playback parameters may include a playing duration and a video frame rate. The frame rate is the rate, in frames per second, at which bitmap images appear consecutively on a display; in short, it is the number of pictures shown per second which, played in succession, form a moving video.
In an exemplary embodiment of the present disclosure, the value of the playing duration is determined by the file size of the initial video stream; for example, when the file size of the initial video stream is 100 MB, the corresponding playing duration may be 60 seconds. The video frame rate can be set according to the actual situation; generally, to ensure clear playback, the video frame rate can be set to 30 frames per second.
In an exemplary embodiment of the present disclosure, the number of processing threads for the screen capture images may be determined based on the following formula 1:

ProcessNum = (Duration × FPS) / RunNum    (formula 1)
where ProcessNum is the number of processing threads, Duration is the playing duration, FPS is the video frame rate, and RunNum is the number of screen captures. Illustratively, continuing the example above, when the playing duration Duration is 60 seconds, the video frame rate FPS is 30 frames per second, and the number of screen captures RunNum is 100:

ProcessNum = (60 × 30) / 100 = 18

That is, 18 processing threads are required to process the screen capture images.
It should be noted that the unit of the playing duration may be set according to the actual situation. For example, when the playing duration is in milliseconds, the number of processing threads for the screen capture images may be determined based on the following formula 2:
ProcessNum = (Duration / 1000 × FPS) / RunNum    (formula 2)
illustratively, when the play duration is 60000 ms, the video frame rate FPS is 30 frames, the number of screenshots RunNum is 100,
Figure BDA0002306405520000094
i.e. the number of required process threads is 18.
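The two formulas above can be sketched as a small helper; the rounding behavior (rounding up when the division is not exact) is an assumption, since the disclosure only shows exactly divisible examples:

```python
import math

def thread_count(duration, fps, run_num, unit="s"):
    """Compute ProcessNum from the play duration, frame rate, and screen capture count.

    unit: "s" applies formula 1 directly; "ms" applies formula 2 by
    converting the duration to seconds first.
    """
    if unit == "ms":
        duration = duration / 1000  # formula 2: milliseconds to seconds
    # Total frames to render divided by the number of screen captures;
    # ceiling is an assumed choice for non-divisible inputs.
    return math.ceil(duration * fps / run_num)

threads = thread_count(60, 30, 100)  # → 18, matching the worked example
```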
In step S140, the screen capture images are subjected to parallel processing based on the number of processing threads to synthesize the screen capture images into a target video stream.
In the exemplary embodiment of the present disclosure, it should be noted that step S140 is step C.
In an exemplary embodiment of the present disclosure, after the number of processing threads is determined, that number of threads (for example, 18 puppeteer threads) may be started to process the screen capture images in parallel and synthesize them into a target video stream. The target video stream is the video stream obtained after the initial video stream has been optimized (for example, by adding special effects and improving video fluency).
Parallel processing is a computing method in which two or more tasks are executed simultaneously in a computer system; it can work on different aspects of the same program at the same time. Theoretically, n parallel processes may execute up to n times faster than a single processor. Through parallel processing, the data processing work can be distributed across multiple processing threads, saving time when solving large and complex problems and improving processing speed.
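As a generic illustration of distributing per-screenshot work across a fixed pool of workers (the function `process_frame` and the worker count are illustrative placeholders, not part of the disclosure):

```python
from concurrent.futures import ThreadPoolExecutor

def process_frame(frame_id):
    # Placeholder for the per-screenshot work (decoding, adding effects,
    # encoding); here it just tags the frame so the order can be checked.
    return f"frame-{frame_id}-processed"

frame_ids = list(range(6))
# max_workers plays the role of the ProcessNum computed by formula 1.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(process_frame, frame_ids))  # order is preserved
```

`ThreadPoolExecutor.map` returns results in input order, which matters here because the screenshots must be composed into the video stream in sequence.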
Specifically, the screen capture images can be synthesized into the target video stream based on ffmpeg software (an open-source, free, cross-platform audio and video solution that provides a complete pipeline for recording, converting, and streaming audio and video), based on PS software (Photoshop, image processing software developed and distributed by Adobe Systems that mainly processes pixel-based digital images), or based on other video synthesis tools, which is not particularly limited in the present disclosure.
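As a hedged sketch, a typical ffmpeg invocation for composing numbered screenshots into a video could be assembled as follows; the flags shown are standard ffmpeg options, but the file names and sequence pattern are hypothetical:

```python
def build_ffmpeg_cmd(image_pattern, fps, output_path):
    # -framerate sets the input frame rate for the image sequence;
    # libx264 with yuv420p produces an H.264 MP4 that plays in common
    # browsers and players.
    return ["ffmpeg", "-framerate", str(fps), "-i", image_pattern,
            "-c:v", "libx264", "-pix_fmt", "yuv420p", output_path]

cmd = build_ffmpeg_cmd("shot_%04d.png", 30, "target.mp4")
# The command would then be run with e.g. subprocess.run(cmd, check=True).
```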
In step S150, the screen capture count is numerically adjusted within a preset numerical range, and steps A, B, and C are performed in a loop to obtain a plurality of target video streams.
In an exemplary embodiment of the present disclosure, the screen capture count may be adjusted within a preset range; for example, M-1 adjustments (M being a positive integer greater than 1) may be made on the basis of the current calculation, with steps S120 to S140 (i.e., steps A to C) performed in a loop, so as to calculate M-1 different processing thread counts from the M-1 different screen capture counts and thereby synthesize M-1 target video streams.
For example, the preset value range for adjusting the screen capture count may be 100 to 1000. If the screen capture count used in the first calculation is 100, then in each subsequent adjustment 50 may be added on the basis of 100, for example: 150, 200, 250, …, 1000, so as to obtain a plurality of corresponding target video streams according to the different screen capture counts.
Further, the screen capture counts corresponding to the two target video streams with the shorter synthesis times among the plurality of target video streams may first be screened out; for example, suppose these are 500 and 600. The preset value range for adjusting the screen capture count may then be updated to 500 to 600, and in each subsequent adjustment 10 may be added on the basis of 500, for example: 510, 520, 530, …, 600, so as to again obtain a plurality of corresponding target video streams according to the different screen capture counts.
In an exemplary embodiment of the present disclosure, it should be noted that the preset value range may be continuously updated according to the actual situation. For example, when the screen capture counts corresponding to the two target video streams with the shorter synthesis times are 520 and 530, the preset value range for adjusting the screen capture count may be updated to 520 to 530, and in each subsequent adjustment 1 may be added on the basis of 520, for example: 521, 522, …, 530, so as to further obtain a plurality of corresponding target video streams according to the different screen capture counts.
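The coarse-to-fine adjustment described above can be sketched as follows; the cost function standing in for the measured synthesis time is purely illustrative (a real run would time the actual video synthesis for each count):

```python
def best_two_counts(synthesis_time, lo, hi, step):
    """Return the two screen-capture counts in [lo, hi] (stepped by `step`)
    whose synthesis time is shortest, in ascending order."""
    counts = range(lo, hi + 1, step)
    return sorted(sorted(counts, key=synthesis_time)[:2])

# Toy cost curve with its minimum at a count of 525 (illustrative only).
cost = lambda n: abs(n - 525)

lo, hi = best_two_counts(cost, 100, 1000, 50)  # coarse pass → 500, 550
lo, hi = best_two_counts(cost, lo, hi, 10)     # refined pass → 520, 530
lo, hi = best_two_counts(cost, lo, hi, 1)      # final pass around the optimum
```

Each pass narrows the range to the two best candidates from the previous pass and shrinks the step, mirroring the 50 → 10 → 1 progression in the text.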
In step S160, the processing thread number corresponding to the target video stream having the shortest combination time among the plurality of target video streams is determined as the optimal thread number corresponding to the video template.
In an exemplary embodiment of the present disclosure, after the plurality of target video streams are obtained, the processing thread number corresponding to the target video stream with the shortest synthesis time among them may be determined as the optimal thread number corresponding to the video template. In this way, the optimal thread number that reduces the user's waiting time can be determined on the basis of multiple adjustments, so that in subsequent processing the number of processing threads or servers can be allocated directly according to the optimal thread number, the processing threads can be configured reasonably, and the video generation efficiency further improved.
For example, when the screen capture count is 100 and the processing thread number is 18, and the synthesis time of the corresponding target video stream is the shortest, the processing thread number 18 may be determined as the optimal thread number corresponding to the video template. Illustratively, the optimal thread number corresponding to the video template may also be recorded to prevent data loss.
In the exemplary embodiment of the present disclosure, after determining the optimal number of threads corresponding to the video template, refer to fig. 5, where fig. 5 shows a flowchart of a video generation method in an exemplary embodiment of the present disclosure, and specifically shows a flowchart of generating a video according to the optimal number of processing threads in a subsequent processing process, including steps S501 to S503, and the following explains a specific implementation manner with reference to fig. 5.
In step S501, a target screen capture frequency corresponding to the optimal thread count is obtained.
In an exemplary embodiment of the present disclosure, after the optimal thread number is determined, in an actual application process, for example, when the user selects the video template again, the optimal thread number may be directly obtained, and then, the target screen capturing times corresponding to the optimal thread number may be calculated. For example, when the optimal number of threads is determined to be 18, the corresponding target screen capturing times can be calculated to be 100 times according to the above formula 1 (or formula 2).
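Rearranging formula 1 recovers the target screen capture count from the optimal thread number. A minimal sketch (integer division is an assumption, matching the exactly divisible example in the text):

```python
def target_capture_count(duration_s, fps, thread_num):
    # Rearranging formula 1: RunNum = (DurationTime × FPS) / ProcessNum
    return duration_s * fps // thread_num

runs = target_capture_count(60, 30, 18)  # → 100, matching the example
```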
In step S502, the initial video stream is captured based on the target screen capturing times, and a target screen capturing image is generated.
In an exemplary embodiment of the present disclosure, when the target screen capturing times are obtained, the initial video stream may be captured according to the target screen capturing times to generate a target screen capturing image.
In step S503, the target screen capture images are subjected to parallel processing based on the optimal thread number to synthesize the target screen capture images into an optimal video stream.
In an exemplary embodiment of the present disclosure, after the target screen capture images are generated, they may be processed in parallel according to the optimal thread number; for example, 18 puppeteer threads may be started to synthesize the target screen capture images into an optimal video stream, i.e., the video stream with the shortest generation time and hence the shortest user waiting time.
In an exemplary embodiment of the present disclosure, after the optimal video stream is generated, for example, the optimal video stream may also be sent to a front-end display device (e.g., a mobile phone, a computer, a tablet computer, etc.) so that the front-end display device displays the optimal video stream, and thus, a user can preview the optimal video stream to know the display condition of the video in advance.
The present disclosure also provides a video generating apparatus, and fig. 6 shows a schematic structural diagram of the video generating apparatus in an exemplary embodiment of the present disclosure; as shown in fig. 6, the video generating apparatus 600 may include a fusion module 601, a screen capture module 602, a first determination module 603, a synthesis module 604, an adjustment module 605, and a second determination module 606. Wherein:
the fusion module 601 is configured to fuse the video content material and the video template into an initial video stream.
In an exemplary embodiment of the present disclosure, the video content material at least includes a picture material and a text material, and the fusion module is configured to fill the picture material to a first designated position in the video template; filling the text material to a second appointed position in the video template; and fusing the video template filled with the picture material and the text material into an initial video stream.
In an exemplary embodiment of the present disclosure, the fusion module is configured to obtain size information of a picture that can be accommodated by the first designated position; adjusting the size of the picture material according to the picture size information; and filling the adjusted picture material to a first designated position.
In an exemplary embodiment of the disclosure, the fusion module is configured to obtain the word count information of the number of words that can be accommodated at the second designated position; carry out segmentation processing on the text material according to the word count information; and fill the segmented text material into the second designated position in the video template.
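A minimal sketch of the segmentation step, assuming the slot's capacity is expressed as a maximum character count (the sample text is hypothetical):

```python
def segment_text(text, max_chars):
    """Split the text material into pieces that each fit the slot capacity."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

segments = segment_text("New season arrivals now in store", 10)
# Every segment is at most 10 characters and can be filled into the
# template's text slot in order.
```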
And a screen capture module 602, configured to capture a screen of the initial video stream according to the randomly set screen capture times, and generate a screen capture image.
In an exemplary embodiment of the disclosure, the screen capture module is configured to capture the initial video stream according to a randomly set number of screen captures, and generate a screen capture image.
The first determining module 603 is configured to determine the number of processing threads of the initial video stream according to the playing parameter and the screen capturing frequency of the initial video stream.
In an exemplary embodiment of the present disclosure, the play parameters include at least a play duration and a video frame rate; the first determination module is configured to determine the number of processing threads for the screen capture images based on the following formula:

ProcessNum = (DurationTime × FPS) / RunNum

wherein ProcessNum is the number of processing threads; DurationTime is the play duration; FPS is the video frame rate; and RunNum is the number of screen captures.
And a synthesizing module 604, configured to perform parallel processing on the screen capture images based on the processing thread number, so as to synthesize the screen capture images into the target video stream.
In an exemplary embodiment of the present disclosure, the composition module is configured to perform parallel processing on the screen capture images based on the number of processing threads to compose the screen capture images into the target video stream.
And an adjusting module 605, configured to perform numerical adjustment on the screen capture times within a preset numerical range, and perform step A, step B, and step C in a loop to obtain a plurality of target video streams.
In an exemplary embodiment of the disclosure, the adjusting module is configured to perform numerical adjustment on the screen capturing times within a preset numerical range, and perform the above step a, step B, and step C in a circulating manner to obtain a plurality of target video streams.
The second determining module 606 is configured to determine, as the optimal thread number corresponding to the video template, the processing thread number corresponding to the target video stream with the shortest synthesis time among the plurality of target video streams.
In an exemplary embodiment of the disclosure, the second determining module is configured to determine, as the optimal number of threads corresponding to the video template, a processing thread number corresponding to a target video stream with a shortest composition time among the plurality of target video streams.
In an exemplary embodiment of the disclosure, the second determining module is configured to obtain a target screen capturing frequency corresponding to the optimal thread number; performing screen capture on the initial video stream based on the target screen capture times to generate a target screen capture image; and carrying out parallel processing on the target screen capture images based on the optimal thread number so as to synthesize the target screen capture images into an optimal video stream.
In an exemplary embodiment of the disclosure, the second determining module is configured to send the target video stream to the front-end display device, so that the front-end display device performs a playing process on the target video stream.
The specific details of each module in the video generation apparatus have been described in detail in the corresponding video generation method, and therefore are not described herein again.
It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, such a division is not mandatory. Indeed, according to embodiments of the present disclosure, the features and functionality of two or more modules or units described above may be embodied in one module or unit. Conversely, the features and functions of one module or unit described above may be further divided so as to be embodied by a plurality of modules or units.
Moreover, although the steps of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that the steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a mobile terminal, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, there is also provided a computer storage medium capable of implementing the above method. On which a program product capable of implementing the above-described method of the present specification is stored. In some possible embodiments, various aspects of the disclosure may also be implemented in the form of a program product comprising program code for causing a terminal device to perform the steps according to various exemplary embodiments of the disclosure described in the "exemplary methods" section above of this specification, when the program product is run on the terminal device.
Referring to fig. 7, a program product 700 for implementing the above method according to an embodiment of the present disclosure is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
In addition, in an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided.
As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method, or program product. Accordingly, various aspects of the present disclosure may be embodied in the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," "module," or "system."
An electronic device 800 according to this embodiment of the disclosure is described below with reference to fig. 8. The electronic device 800 shown in fig. 8 is only an example and should not bring any limitations to the functionality and scope of use of the embodiments of the present disclosure.
As shown in fig. 8, electronic device 800 is in the form of a general purpose computing device. The components of the electronic device 800 may include, but are not limited to: the at least one processing unit 810, the at least one memory unit 820, and a bus 830 that couples the various system components including the memory unit 820 and the processing unit 810.
Wherein the storage unit stores program code that is executable by the processing unit 810 to cause the processing unit 810 to perform steps according to various exemplary embodiments of the present disclosure as described in the "exemplary methods" section above in this specification. For example, the processing unit 810 may perform the following as shown in fig. 1: step S110, fusing the video content material and the video template into an initial video stream; step S120, capturing the screen of the initial video stream according to the preset screen capturing times to generate a screen capturing image; step S130, determining the processing procedure number of the screen capture image according to the playing parameters of the initial video stream and the screen capture times; step S140, performing parallel processing on the screen capture images based on the processing thread number to synthesize the screen capture images into a target video stream; step S150, performing numerical adjustment on the screen capturing times within a preset numerical range, and circularly executing the steps 120-140 to obtain a plurality of target video streams; step S160, determining the processing thread number corresponding to the target video stream with the shortest synthesis time among the plurality of target video streams as the optimal thread number corresponding to the video template.
The storage unit 820 may include readable media in the form of volatile memory units such as a random access memory unit (RAM)8201 and/or a cache memory unit 8202, and may further include a read only memory unit (ROM) 8203.
The storage unit 820 may also include a program/utility 8204 having a set (at least one) of program modules 8205, such program modules 8205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 830 may be any of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 800 may also communicate with one or more external devices 900 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 800, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 800 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 850. Also, the electronic device 800 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via the network adapter 860. As shown, the network adapter 860 communicates with the other modules of the electronic device 800 via the bus 830. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 800, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
Furthermore, the above-described figures are merely schematic illustrations of processes included in methods according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (10)

1. A method of video generation, comprising:
fusing a video content material and a video template into an initial video stream;
step A: capturing the screen of the initial video stream according to the preset screen capturing times to generate a screen capturing image;
Step B: determining the processing thread number of the screen capture image according to the playing parameters of the initial video stream and the screen capture times;
Step C: performing parallel processing on the screen capture images based on the processing thread number to synthesize the screen capture images into a target video stream;
adjusting the screen capturing times within a preset numerical range, and circularly executing the step A, the step B and the step C to obtain a plurality of target video streams;
and determining the processing thread number corresponding to the target video stream with the shortest synthesis time in the plurality of target video streams as the optimal thread number corresponding to the video template.
2. The method according to claim 1, wherein after determining, as the optimal number of threads corresponding to the video template, the number of processing threads corresponding to the target video stream with the shortest composition time among the plurality of target video streams, the method further comprises:
acquiring the target screen capturing times corresponding to the optimal thread number;
performing screen capture on the initial video stream based on the target screen capture times to generate a target screen capture image;
and carrying out parallel processing on the target screen capture image based on the optimal thread number so as to synthesize the target screen capture image into an optimal video stream.
3. The method of claim 1, wherein the playback parameters include at least a playback duration and a video frame rate;
the determining the processing thread number of the screen capture image according to the playing parameters and the screen capture times comprises: determining the processing thread number of the screen capture image based on the following formula:

ProcessNum = (DurationTime × FPS) / RunNum

wherein, ProcessNum is the number of the processing threads; DurationTime is the play duration; FPS is the video frame rate; RunNum is the screen capture times.
4. The method according to claim 1 or 2, wherein the video content material comprises at least picture material and text material;
the fusing the video content material and the video template into the initial video stream comprises:
filling the picture material to a first designated position in the video template;
filling the text material to a second appointed position in the video template;
and fusing the video template filled with the picture material and the text material into the initial video stream.
5. The method of claim 4, wherein the populating the picture material to the first specified location in the video template includes:
acquiring the size information of the picture which can be contained in the first designated position;
adjusting the size of the picture material according to the picture size information;
and filling the adjusted picture material to the first designated position.
6. The method of claim 4, wherein the populating the text material to a second designated location in the video template comprises:
acquiring the number information of words which can be contained at the second appointed position;
according to the word number information, carrying out segmentation processing on the text material;
and filling the text material after the segmentation processing to the second appointed position in the video template.
7. The method of claim 1, further comprising:
and sending the target video stream to a front-end display device so that the front-end display device plays the target video stream.
8. A video generation apparatus, comprising:
the fusion module is used for fusing the video content material and the video template into an initial video stream;
the screen capturing module is used for capturing the screen of the initial video stream according to the preset screen capturing times to generate a screen capturing image;
the first determining module is used for determining the processing thread number of the screen capture image according to the playing parameters of the initial video stream and the screen capture times;
the synthesis module is used for carrying out parallel processing on the screen capture images based on the processing thread number so as to synthesize the screen capture images into a target video stream;
an adjusting module, configured to perform numerical adjustment on the screen capturing times within a preset numerical range, and perform step A, step B, and step C as recited in claim 1 in a loop to obtain a plurality of target video streams;
and the second determining module is used for determining the processing thread number corresponding to the target video stream with the shortest synthesis time in the plurality of target video streams as the optimal thread number corresponding to the video template.
9. A computer storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the video generation method of any of claims 1 to 7.
10. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the video generation method of any of claims 1-7 via execution of the executable instructions.
CN201911241580.2A 2019-12-06 2019-12-06 Video generation method and device, computer storage medium and electronic equipment Active CN111314777B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911241580.2A CN111314777B (en) 2019-12-06 2019-12-06 Video generation method and device, computer storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911241580.2A CN111314777B (en) 2019-12-06 2019-12-06 Video generation method and device, computer storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN111314777A CN111314777A (en) 2020-06-19
CN111314777B true CN111314777B (en) 2021-03-30

Family

ID=71159700

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911241580.2A Active CN111314777B (en) 2019-12-06 2019-12-06 Video generation method and device, computer storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111314777B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114667733A (en) * 2019-11-25 2022-06-24 Vid Scale, Inc. Method and apparatus for performing real-time VVC decoding

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103838779A (en) * 2012-11-27 2014-06-04 深圳市腾讯计算机***有限公司 Idle computing resource multiplexing type cloud transcoding method and system and distributed file device
WO2016057817A1 (en) * 2014-10-08 2016-04-14 Vid Scale, Inc. Optimization using multi-threaded parallel processing framework
CN106604144A (en) * 2015-10-16 2017-04-26 上海龙旗科技股份有限公司 Video processing method and device
CN107295285A (en) * 2017-08-11 2017-10-24 腾讯科技(深圳)有限公司 Processing method, processing unit and the storage medium of video data
CN109040618A (en) * 2018-09-05 2018-12-18 Oppo广东移动通信有限公司 Video generation method and device, storage medium, electronic equipment
CN109788339A (en) * 2019-01-31 2019-05-21 北京字节跳动网络技术有限公司 Video recording method, device, electronic device and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10297047B2 (en) * 2017-04-21 2019-05-21 Intel Corporation Interleaved multisample render targets for lossless compression



Similar Documents

Publication Publication Date Title
JP6438598B2 (en) Method and device for displaying information on a video image
US11943486B2 (en) Live video broadcast method, live broadcast device and storage medium
CN109640188B (en) Video preview method and device, electronic equipment and computer readable storage medium
CN112291627B (en) Video editing method and device, mobile terminal and storage medium
US20200021795A1 (en) Method and client for playing back panoramic video
US9462301B2 (en) Generating videos with multiple viewpoints
CN108427589B (en) Data processing method and electronic equipment
US11785195B2 (en) Method and apparatus for processing three-dimensional video, readable storage medium and electronic device
US20200404345A1 (en) Video system and video processing method, device and computer readable medium
CN111683260A (en) Program video generation method, system and storage medium based on virtual anchor
CN113115095B (en) Video processing method, video processing device, electronic equipment and storage medium
CN112019907A (en) Live broadcast picture distribution method, computer equipment and readable storage medium
CN111726688A (en) Method and device for self-adapting screen projection picture in network teaching
CN114598937B (en) Animation video generation and playing method and device
JP2020028096A (en) Image processing apparatus, control method of the same, and program
CN111314777B (en) Video generation method and device, computer storage medium and electronic equipment
CN114445600A (en) Method, device and equipment for displaying special effect prop and storage medium
CN113965665A (en) Method and equipment for determining virtual live broadcast image
KR20180027917A (en) Display apparatus and control method thereof
CN112153472A (en) Method and device for generating special picture effect, storage medium and electronic equipment
US20230412891A1 (en) Video processing method, electronic device and medium
CN114584709B (en) Method, device, equipment and storage medium for generating zooming special effects
EP3522525B1 (en) Method and apparatus for processing video playing
CN109640023B (en) Video recording method, device, server and storage medium
CN107707930B (en) Video processing method, device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant