CN113613066A - Real-time video special effect rendering method, system, device and storage medium - Google Patents

Real-time video special effect rendering method, system, device and storage medium

Info

Publication number
CN113613066A
Authority
CN
China
Prior art keywords
rendering
special effect
content
video
material content
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110884172.XA
Other languages
Chinese (zh)
Other versions
CN113613066B (en)
Inventor
陆赞信
王伟
曾荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
iMusic Culture and Technology Co Ltd
Original Assignee
iMusic Culture and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by iMusic Culture and Technology Co Ltd filed Critical iMusic Culture and Technology Co Ltd
Priority to CN202110884172.XA
Publication of CN113613066A
Application granted
Publication of CN113613066B
Active legal status
Anticipated expiration


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23412Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs for generating or manipulating the scene composition of objects, e.g. MPEG-4 objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/27Server based end-user applications
    • H04N21/274Storing end-user multimedia data in response to end-user request, e.g. network recorder
    • H04N21/2743Video hosting of uploaded data from client
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44012Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention provides a real-time video special effect rendering method, system, device and medium, wherein the method comprises the following steps: acquiring a first video material, and dividing it according to its rendering process to obtain a first material content and a second material content; obtaining a first priority of the first material content and a second priority of the second material content through rendering prediction, and determining a rendering queue according to the first priority and the second priority; performing thread distribution of graphics processing resources according to the rendering queue, performing slice special effect rendering on the first material content according to the thread distribution result to obtain first rendering content, and performing special effect rendering on the second material content to obtain second rendering content; and performing fusion rendering on the first rendering content and the second rendering content to obtain a special effect video. Through rendering prediction and slice rendering, the method improves video rendering efficiency, lowers the creation threshold and improves user experience, and can be widely applied in the technical field of image processing.

Description

Real-time video special effect rendering method, system, device and storage medium
Technical Field
The invention relates to the technical field of image processing, in particular to a method, a system, a device and a medium for rendering a real-time video special effect.
Background
With the popularization of smartphones, both the ways in which people entertain themselves daily and the content they consume have grown increasingly rich. Delivering content as video is a relatively new mode of internet content distribution, and with the spread of mobile terminals and the acceleration of networks, high-bandwidth content such as video has gradually won the favor of major platforms and users. Video production, however, carries a certain technical threshold. Taking the video rendering process as an example, the methods commonly used in the prior art are: custom-developing a rendering and compositing program for a fixed video special effect template, which requires a designer to design the video template in a professional design tool and then either export a template configuration file with a plug-in or fill in the configuration file manually.
However, the above methods have significant drawbacks. First, custom-developing a rendering and compositing program for each template demands a continuous investment of manpower and time in development and testing; the template production cycle is too long and the cost is hard to control. Exporting the template configuration file with a plug-in depends on, or requires developing, a plug-in for the corresponding design tool, consuming further development resources, while the video template design process still cannot be opened directly to users. Second, current professional design tools can neither design AI special effects nor preview them in real time, and they depend strongly on the terminal. Third, when users create content, manually filled-in configuration files invite human error and incur a large cost in labor time.
Disclosure of Invention
In view of the above, and to at least partially solve one of the above technical problems, an embodiment of the present invention aims to provide a real-time video special effect rendering method that is simple, convenient, intelligent, and capable of complex content creation such as AI special effects, and further provides a system, an apparatus and a computer-readable storage medium that correspondingly implement the method.
In a first aspect, a technical solution of the present application provides a method for rendering a real-time video special effect, which includes the steps of:
acquiring a first video material, and dividing according to a rendering process of the first video material to obtain a first material content and a second material content;
obtaining a first priority of the first material content and a second priority of the second material content through rendering prediction, and determining a rendering queue according to the first priority and the second priority;
performing thread distribution of graphics processing resources according to the rendering queue, performing slice special effect rendering on the first material content according to the thread distribution result to obtain first rendering content, and performing special effect rendering on the second material content to obtain second rendering content;
and performing fusion rendering on the first rendering content and the second rendering content, and outputting to obtain a special effect video.
In a possible embodiment of the present disclosure, the step of obtaining the first priority of the first material content and the second priority of the second material content by rendering prediction includes:
acquiring a first complexity of the average frame special effect of the first material content and a special effect continuous frame number of the first material content;
characterizing the first priority by the ratio of the first complexity multiplied by the special effect continuous frame number to the frame sequence of the first video material.
In a possible embodiment of the present disclosure, the step of obtaining the first priority of the first material content and the second priority of the second material content by rendering prediction further includes:
acquiring a second complexity of the second material content average frame special effect and a frame rate of the first video material;
the second priority is characterized by a ratio of the second complexity multiplied by the frame rate to a sequence of frames of the first video material.
In a feasible embodiment of the present application, the step of performing slice special effect rendering on the first material content according to the result of the thread allocation to obtain first rendering content includes:
and grouping the frame sequence according to the special effect continuous frame number, sequencing grouping results to obtain a slice sequence, and performing special effect rendering according to the slice sequence to obtain the first rendering content.
In a feasible embodiment of the present application, the step of performing slice special effect rendering on the first material content according to the result of the thread allocation to obtain first rendering content further includes:
acquiring a rendering task request of the first material content;
confirming that the request passes, and acquiring network performance parameters;
and determining the slicing frame number information of the first material content by combining the network performance parameters according to the preset tolerable delay time and the normal distribution peak value of the rendering delay.
In a possible embodiment of the present disclosure, the step of performing thread allocation of graphics processing resources according to the rendering queue includes:
distributing the first material content to a server-side graphic processing resource for special effect rendering;
and distributing the second material content to a local graphic processing resource and/or the server-side graphic processing resource for special effect rendering.
In a possible embodiment of the solution of the present application, the method further comprises the steps of:
integrating the first rendering content and the second rendering content to obtain a video special effect template and/or a special effect material;
and acquiring a second video material, and synthesizing the video special effect template and/or the special effect material with the second video material to obtain the special effect video.
In a second aspect, a technical solution of the present invention further provides a system for rendering a real-time video special effect, including:
the image interface module is used for acquiring a first video material;
the image rendering module is used for obtaining first material content and second material content according to the rendering process division of the first video material and performing thread distribution of graphics processing resources according to the rendering queue; and for performing special effect rendering on the second material content according to the thread distribution result to obtain second rendering content;
the AI module is used for carrying out slice special effect rendering on the first material content according to the thread distribution result to obtain first rendering content;
and the video output module is used for performing fusion rendering on the first rendering content and the second rendering content and outputting to obtain a special effect video.
In a third aspect, a technical solution of the present invention further provides a device for rendering a real-time video special effect, including:
at least one processor;
at least one memory for storing at least one program;
when the at least one program is executed by the at least one processor, the at least one processor is caused to execute a method for rendering a real-time video special effect according to the first aspect.
In a fourth aspect, the present invention also provides a storage medium, in which a processor-executable program is stored, and the processor-executable program is used for executing the method in the first aspect when being executed by a processor.
Advantages and benefits of the present invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention:
according to the technical scheme of the application, the video material to be rendered is divided into two parts according to the complexity of the rendering process, the priorities of the two rendering tasks are determined, and graphics processing resources are allocated according to the rendering queue formed from these priorities, so that resources are invoked more reasonably and video rendering efficiency is greatly improved; different rendering modes are adopted for different rendering tasks, making the rendering process more efficient, enabling more complex special effect rendering, and improving user experience; throughout the scheme the user only needs to upload the video material to be rendered, with no configuration files or parameters to fill in, which avoids manual mis-operation, lowers the creation threshold, and further improves user experience.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a flowchart illustrating steps of a method for rendering a real-time video special effect according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a rendering process at an Android, iOS, h5 end in the embodiment of the present invention;
FIG. 3 is a diagram illustrating an applet rendering process according to an embodiment of the present disclosure;
FIG. 4 is a diagram illustrating a source of material and a distribution manner of a video template according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a real-time video special effect rendering system according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention. The step numbers in the following embodiments are provided only for convenience of illustration, the order between the steps is not limited at all, and the execution order of each step in the embodiments can be adapted according to the understanding of those skilled in the art.
As described in the background art, the existing template-based video rendering methods suffer from high development cost, high creation cost for users, high research and learning cost, and the inability to support preview and fusion of more complex special effects. The technical scheme of the application therefore mainly aims to break the limitation that professional animation design tools cannot design, or preview in real time, complex special effects such as AI special effects, so that professional designers can design video templates with professional effects more conveniently, and ordinary users can quickly design special effect videos of professional quality.
Therefore, in a first aspect, as shown in fig. 1, the present application provides a method for rendering a real-time video special effect, which includes steps S100-S400:
s100, acquiring a first video material, and dividing according to a rendering process of the first video material to obtain a first material content and a second material content;
specifically, the embodiment can obtain the transcoded first video material, namely the video material to be rendered, through a data interface service of the intelligent terminal; and the video material contains the target special effect selected by the user. An application layer in the intelligent terminal in the embodiment divides a target special effect in a video material according to a rendering process according to a built-in graphic rendering tool, such as various Android/iOS SDKs or Javascript SDKs; exemplarily, in the embodiment, the AI special effect rendering has the characteristics of one-time acquisition and repeated processing of materials; and the basic special effect rendering is to continuously perform material acquisition and special effect processing, so that the AI special effect is used as the first material content, and the basic special effect is used as the second material content.
S200, obtaining a first priority of the first material content and a second priority of the second material content through rendering prediction, and determining a rendering queue according to the first priority and the second priority;
the rendering prediction in the embodiment refers to a process of calculating the rendering complexity and the rendering priority of a rendering task divided into two parts, which is executed at an application layer of an end terminal or a server. Specifically, after the materials of the AI effect and the basic effect are obtained by division in step S100, the embodiment determines the rendering time complexity of the basic effect and the AI effect and the priority of the rendering task through rendering prediction, and puts the rendering tasks into a rendering queue according to the order of the priorities from high to low, where the comparison of the priorities is performed by comparing the specific values of the priorities of the two effect types, and if the value is large, the priority is high. In addition, it should be noted that, in the embodiment, when the priorities of the basic effect and the AI effect are the same, a mode of rendering the basic effect and the AI effect in parallel may be adopted, and image processing resources may be allocated according to a ratio of the complexity of the basic effect and the complexity of the AI effect, so that optimal allocation of resource utilization is achieved.
S300, performing thread distribution of graphics processing resources according to the rendering queue, performing slice special effect rendering on the first material content according to the thread distribution result to obtain first rendering content, and performing special effect rendering on the second material content to obtain second rendering content;
specifically, after the image processing resources are allocated to the two special effects in step S200, the rendering material of the basic special effect and the rendering material of the AI special effect are distributed to corresponding rendering tools for performing the image special effect, where the rendering tools may be a CPU processor or a GPU processor of the intelligent terminal, or a CPU or a GPU processor of the server side. In the embodiment, the slice special effect rendering refers to grouping a material to be rendered, such as an AI special effect, into a frame sequence of a basic special effect to be rendered, wherein the frame number in the grouping is determined by the frame number lasting for the AI special effect to be rendered, and by performing slice rendering on the AI special effect, real-time preview of the AI special effect rendering can be realized, and a user can conveniently control and adjust the rendering process of a special effect video. Illustratively, when the priority of the AI special effect rendering process in the embodiment is greater than that of the basic special effect, the embodiment preferentially concentrates the CPU thread to upload AI special effect material resources to the GPU, the GPU renders the AI special effect in a slice special effect rendering mode, and then the basic special effect material resources are continuously and centrally uploaded to the GPU after the AI special effect rendering task is completed; before the AI current slice is processed, the centralized CPU thread downloads the resource data of which the basic special effect processing is finished to the CPU for continuous processing; when the AI slice is processed, the centralized CPU thread downloads the resource data processed by the current AI slice to the CPU for continuous processing. It should be noted that, since the CPU consumes resources generally in the embodiment, the CPU resource allocation may not be specially processed.
S400, performing fusion rendering on the first rendering content and the second rendering content, and outputting to obtain a special effect video;
for example, in the embodiment, the CPU process may integrate or fuse the rendered AI special effect video frames with the basic special effect video frames to obtain a complete special effect video, which the user may preview or cache.
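A toy sketch of such per-frame fusion, assuming frames are opaque objects and taking the compositing operation as a caller-supplied callback:

    def fuse(base_frames, ai_frames, blend):
        """base_frames: list of rendered basic-effect frames;
        ai_frames: {frame_index: rendered AI frame};
        blend(base, ai): hypothetical per-frame compositing callback."""
        out = []
        for i, base in enumerate(base_frames):
            out.append(blend(base, ai_frames[i]) if i in ai_frames else base)
        return out

    # e.g. with strings standing in for frames, just to show the flow:
    video = fuse(["f0", "f1", "f2"], {1: "ai1"}, blend=lambda b, a: b + "+" + a)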
In some alternative embodiments, the process in step S200 of obtaining the first priority of the first material content and the second priority of the second material content through rendering prediction may include steps S210 to S220:
S210, acquiring a first complexity of the average frame special effect of the first material content and a special effect continuous frame number of the first material content;
exemplarily, the first complexity of the average frame special effect of the first material content (the AI effect) is denoted O_AI, and the special effect continuous frame number of the first material content is m; the first complexity characterizes the image rendering resources occupied by an AI special effect video frame to be rendered.
S220, representing a first priority by the ratio of the product of the first complexity and the special effect continuous frame number to the frame sequence of the first video material;
illustratively, from the AI special effect average-frame time complexity O_AI obtained in step S210 and the AI special effect continuous frame number m, the AI special effect rendering priority P_AI is obtained as:
P_AI = (O_AI × m) / f_AI
wherein f_AI is the starting frame sequence of the AI effect.
In some alternative embodiments, the process in step S200 of obtaining the first priority of the first material content and the second priority of the second material content through rendering prediction may further include steps S230 to S240:
S230, acquiring a second complexity of the average frame special effect of the second material content and the frame rate of the first video material;
specifically, the second complexity of the average frame special effect of the second material content (the basic effect) is the per-frame basic special effect rendering time complexity. The embodiment first determines the time complexity O_n of each basic special effect layer in each video frame to be rendered; combined with the traversal time complexity O of the frame's pixels, the per-frame basic special effect rendering time complexity O_f is obtained as:
O_f = O × (O_1 + O_2 + … + O_n)
wherein i = 1, 2, 3, …, n indexes the layers. In addition, the embodiment obtains the frame rate of the first video material, that is, the frame rate of the video to be rendered, denoted r.
S240, representing a second priority by the ratio of the second complexity multiplied by the frame rate to the frame sequence of the first video material;
specifically, from the per-frame basic special effect rendering time complexity O_f determined in step S230, the frame rate r and the frame sequence f of the video to be rendered, the per-frame basic special effect rendering priority P_f is obtained as:
P_f = (O_f × r) / f
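The two priority formulas can be stated as plain functions. The symbols follow the description above (O_AI and O_f are the average per-frame complexities, m the AI effect's duration in frames, r the frame rate, f_AI and f the frame sequences); the functions are a direct transcription of the stated ratios, not a tuned model:

    def ai_priority(o_ai: float, m: int, f_ai: int) -> float:
        """P_AI = (O_AI * m) / f_AI -- rendering priority of the AI effect."""
        return (o_ai * m) / f_ai

    def base_priority(o_f: float, r: float, f: int) -> float:
        """P_f = (O_f * r) / f -- per-frame priority of the basic effect."""
        return (o_f * r) / f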
In some alternative embodiments, in step S300, the process of performing slice special effect rendering on the first material content according to the thread distribution result to obtain the first rendering content may include a more specific step S310:
S310, grouping the frame sequence according to the special effect continuous frame number, sequencing the grouping results to obtain a slice sequence, and performing special effect rendering according to the slice sequence to obtain first rendering content;
illustratively, in the embodiment, the current basic special effect rendering frame sequence is f_m, the time complexity of each basic special effect layer is O_n, the AI special effect starting frame sequence is f_0, the AI special effect average-frame time complexity is O_AI, and the AI special effect continuous frame number is m. With these parameters determined, the frames from f_m to f_0 + m are grouped, each group containing f_n = f_0 − f_m frames and the last group containing (f_0 − f_m + m) % f_n frames. Each group holds n elements, and the AI special effect complexity of each group is denoted O_group. The number of AI frames per group is f_AI2 = O_group / O_AI, and the last group contains m − Σ f_AI2 AI frames. Finally, the embodiment performs AI special effect rendering in slice order; as each slice is processed, a real-time notification is issued and the slice enters the image rendering task queue, waiting for fusion rendering of the basic effect and the AI effect.
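A sketch of the grouping, under the assumption (reconstructed from the description above) that frames f_m through f_0 + m − 1 are covered in groups of f_n = f_0 − f_m frames, with the remainder forming the last group:

    def make_slices(f_m: int, f_0: int, m: int):
        """Group frames f_m .. f_0 + m - 1 into slices of f_n = f_0 - f_m frames;
        the remainder forms the last, shorter group."""
        f_n = f_0 - f_m                        # frames per group
        end_frame = f_0 + m                    # one past the last frame to cover
        slices, start = [], f_m
        while start < end_frame:
            stop = min(start + f_n, end_frame)
            slices.append(range(start, stop))  # one slice = one group of frames
            start = stop
        return slices

    for s in make_slices(f_m=10, f_0=40, m=25):   # slices [10..39] and [40..64]
        pass                                      # render slice, notify on completion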
In some alternative embodiments, the process of performing slice special effect rendering on the first material content according to the thread distribution result to obtain the first rendering content may further include steps S300a to S300c:
S300a, acquiring a rendering task request of the first material content;
specifically, the intelligent terminal and the server in the embodiment adopt a persistent (long) connection so that real-time preview can rely on server-side special effect rendering, and a connection request must be sent before data is exchanged.
S300b, confirming that the request passes, and acquiring network performance parameters;
specifically, after the connection for the first advanced AI special effect request in the whole video rendering task succeeds, a test packet is sent from the server to the application end by way of network prediction; the test packet has length L and is received in time T, giving the simplified network performance V_l = L / T.
S300c, determining the slice frame number information of the first material content according to the preset tolerable delay time and the normal distribution peak value of the rendering delay, in combination with the network performance parameters;
specifically, in the embodiment, the tolerable delay time preset by the user is first obtained and denoted D (for example, D takes a value of 1 to 3 s), and the constant bitrate is determined as cbr, so that the data volume to be transmitted per second of video is V_cbr = cbr / 8. The embodiment further obtains the normal distribution peak of the rendering delay from timed statistics, denoted D_g; the number of frames a slice contains can then be determined as N = (1 − V_cbr / V_l) × 24 × (D − D_g), provided V_l > V_cbr. The application end sends this block frame-number information to the server, and the server processes and renders according to the rendering prediction and AI slicing methods above; finally, the embodiment renders the slice video block by block and returns it to the application end.
In some alternative embodiments, the process of performing thread allocation of graphics processing resources according to the rendering queue in step S300 may include steps S410 and S420:
distributing the first material content to a server-side graphic processing resource for special effect rendering;
and distributing the second material content to a local graphic processing resource and/or the server-side graphic processing resource for special effect rendering.
Specifically, the embodiment may upload the AI special effect task to the server side for rendering, and, according to the specific situation of the intelligent terminal, either complete the basic special effect rendering task locally or upload it to the server side as well.
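The allocation rule of steps S410 and S420 reduces to a small dispatch function; the job-target strings and the can_render_locally flag below are placeholders standing in for the embodiment's actual services:

    def dispatch(first_content, second_content, can_render_locally: bool):
        """Route the two rendering tasks: AI effects always go to the server GPU;
        basic effects stay local when the terminal can handle them, else they
        are sent to the server as well."""
        jobs = [("server_gpu", first_content)]
        target = "local_gpu" if can_render_locally else "server_gpu"
        jobs.append((target, second_content))
        return jobs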
Illustratively, as shown in fig. 2, when the intelligent terminal is an app end or an h5 end, the embodiment adopts an efficient real-time rendering mode that combines local and server-side rendering. The app end and the h5 end realize real-time terminal rendering of basic special effects through the OpenGL ES and WebGL technologies respectively, while advanced special effects are rendered in real time through efficient server-side rendering and transmission techniques. First, through the rendering prediction method, the resource ratio between image rendering and AI processing is predicted and allocated according to the image rendering layers, the basic special effect complexity and the AI special effect complexity, and it is determined when the two kinds of rendering are processed in parallel. The AI module of the application end divides AI special effects into basic AI special effects and advanced AI special effects according to their complexity level. Basic AI special effects are processed at the application end, with AI processing performed slice by slice according to the slice AI method; as each slice completes, the image rendering module is notified in real time and the slice enters the image rendering task queue. Advanced AI special effects invoke the image interface service over the long connection: the current network performance is calculated by the network prediction method, after which the application end divides the AI special effect processing information into blocks by the block prediction method and sends the processing information to the image interface service in block order. The image interface service notifies the AI module in the image rendering service to process; that module slices each block of processing information, performs AI special effect processing in slice order, and returns the results to the application end in time through the image interface service, where the returned results are passed to the image rendering module and enter the image rendering task queue. In addition, in some optional embodiments, rendering may adopt a template design mode: while parameters are being configured, rendering information is computed and analyzed in real time from the parameters by rendering pre-extraction, and once the configuration parameters are confirmed, the rendering area information is submitted; the image rendering module then renders only the pre-extracted rendering area.
As another example, the intelligent terminal may also be an applet. As shown in fig. 3, since applets do not support terminal rendering, the applet realizes real-time rendering through efficient server-side rendering and transmission. First, the applet end calculates the current network performance by the network prediction method, and the image interface service then notifies the image rendering service to process. The image rendering service allocates resources between basic special effect rendering and AI special effect rendering, and handles their priorities, through the rendering prediction method. The AI special effect module slices the AI special effect processing by the slice AI method; as each slice completes, the image rendering module is notified in real time and the slice enters the image rendering task queue. The image rendering module performs video pre-slicing according to the video frame sequence and the network performance, and the rendered and composited video slices are sent to the applet end in slice order. On receiving the slice with the current playback sequence number, the applet end plays its content in real time; slices with other sequence numbers are stored in a cache so that they can be played promptly when their turn comes.
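A sketch of the applet-side playback rule, with received standing in for slices arriving over the long connection, possibly out of order:

    def play_slices(received):
        """received: iterable of (sequence_number, slice_video), possibly out of order.
        Yields slices in playback order, caching the ones that arrive early."""
        cache, expected = {}, 0
        for seq, video in received:
            cache[seq] = video
            while expected in cache:       # play as soon as the next slice is ready
                yield cache.pop(expected)  # hand each ready slice to the player
                expected += 1

    list(play_slices([(1, "s1"), (0, "s0"), (2, "s2")]))  # -> ["s0", "s1", "s2"]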
In some feasible embodiments, the real-time video special effect rendering method may further integrate the first rendering content and the second rendering content to obtain a video special effect template and/or a special effect material, and synthesize the video special effect template and/or the special effect material and the video material to be rendered to obtain the special effect video.
In addition, as shown in fig. 4, the sources of a user's video design template material include, but are not limited to, the following three: first, designing particle special effects and sound special effects with a professional tool and uploading the special effect files to the embodiment system; second, selecting an existing special effect of the embodiment system, which may be provided by the system or published by a user; and third, selecting a video template published in the embodiment system (provided by the system or published by a user) for secondary design.
In a second aspect, a technical solution of the present application further provides a system for rendering a real-time video special effect, which mainly includes:
the image interface module is used for acquiring a first video material;
the image rendering module is used for obtaining first material content and second material content according to the rendering process division of the first video material and performing thread distribution of graphics processing resources according to the rendering queue; and for performing special effect rendering on the second material content according to the thread distribution result to obtain second rendering content;
the AI module is used for performing slice special effect rendering on the first material content according to the thread distribution result to obtain first rendering content;
and the video output module is used for performing fusion rendering on the first rendering content and the second rendering content and outputting to obtain the special effect video.
In addition, as shown in fig. 5, according to different intelligent terminals, the system according to the embodiment may further include an Android/iOS SDK, and the system may be further divided into an AI processing module and an OpenGLES-based image rendering module; or the system comprises a Javascript SDK and can be divided into an AI processing module and a WebGL-based image rendering module. It is understood that a data layer, which includes a database and a data interface service, may also be included in an embodiment system.
In a third aspect, a technical solution of the present application further provides a device for rendering a real-time video special effect, which includes at least one processor; at least one memory for storing at least one program; when the at least one program is executed by the at least one processor, the at least one processor is caused to execute a method of rendering a real-time video effect as in the first aspect.
An embodiment of the present invention further provides a storage medium storing a program, where the program is executed by a processor to implement the method in the first aspect.
From the above specific implementation process, it can be concluded that, compared with the prior art, the technical solution provided by the present invention has the following advantages or benefits:
1. the technical scheme of the application supports real-time preview; to improve rendering efficiency and guarantee real-time preview while a video template is designed and composited into a video, it provides the 'rendering prediction', 'slice AI' and 'block prediction' methods, each optimized according to rendering requirements and network performance;
2. the technical scheme of the application integrates the advantages of the app end, the browser end and the server end to perform multi-end integrated rendering, yielding higher rendering efficiency;
3. the technical scheme of the application supports user material publishing and sharing, video template publishing and sharing and secondary video template design.
In alternative embodiments, the functions/acts noted in the block diagrams may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments presented and described in the flow charts of the present invention are provided by way of example in order to provide a more thorough understanding of the technology. The disclosed methods are not limited to the operations and logic flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed and in which sub-operations described as part of larger operations are performed independently.
Furthermore, although the present invention is described in the context of functional modules, it should be understood that, unless otherwise stated to the contrary, one or more of the functions and/or features may be integrated in a single physical device and/or software module, or one or more of the functions and/or features may be implemented in a separate physical device or software module. It will also be appreciated that a detailed discussion of the actual implementation of each module is not necessary for an understanding of the present invention. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be understood within the ordinary skill of an engineer, given the nature, function, and internal relationship of the modules. Accordingly, those skilled in the art can, using ordinary skill, practice the invention as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative of and not intended to limit the scope of the invention, which is defined by the appended claims and their full scope of equivalents.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A method for rendering a real-time video special effect is characterized by comprising the following steps:
acquiring a first video material, and dividing according to a rendering process of the first video material to obtain a first material content and a second material content;
obtaining a first priority of the first material content and a second priority of the second material content through rendering prediction, and determining a rendering queue according to the first priority and the second priority;
performing thread distribution of graphics processing resources according to the rendering queue, performing slice special effect rendering on the first material content according to the thread distribution result to obtain first rendering content, and performing special effect rendering on the second material content to obtain second rendering content;
and performing fusion rendering on the first rendering content and the second rendering content, and outputting to obtain a special effect video.
2. The method for rendering the real-time video special effect according to claim 1, wherein the step of obtaining the first priority of the first material content and the second priority of the second material content through rendering prediction comprises:
acquiring a first complexity of the average frame special effect of the first material content and a special effect continuous frame number of the first material content;
characterizing the first priority by the ratio of the first complexity multiplied by the special effect continuous frame number to the frame sequence of the first video material.
3. The method for rendering the real-time video special effect according to claim 1, wherein the step of obtaining the first priority of the first material content and the second priority of the second material content through rendering prediction comprises:
acquiring a second complexity of the second material content average frame special effect and a frame rate of the first video material;
the second priority is characterized by a ratio of the second complexity multiplied by the frame rate to a sequence of frames of the first video material.
4. The method according to claim 2, wherein the step of performing slice special effect rendering on the first material content according to the result of the thread allocation to obtain a first rendered content comprises: and grouping the frame sequence according to the special effect continuous frame number, sequencing grouping results to obtain a slice sequence, and performing special effect rendering according to the slice sequence to obtain the first rendering content.
5. The method for rendering the real-time video special effect according to claim 2, wherein the step of performing slice special effect rendering on the first material content according to the result of the thread allocation to obtain the first rendering content further comprises:
acquiring a rendering task request of the first material content;
confirming that the request passes, and acquiring network performance parameters;
and determining the slicing frame number information of the first material content by combining the network performance parameters according to the preset tolerable delay time and the normal distribution peak value of the rendering delay.
6. A method for rendering real-time video special effects according to any one of claims 1 to 5, wherein the step of performing thread allocation of graphics processing resources according to the rendering queue comprises:
distributing the first material content to a server-side graphic processing resource for special effect rendering;
and distributing the second material content to a local graphic processing resource and/or the server-side graphic processing resource for special effect rendering.
7. A method for rendering real-time video effects according to any of claims 1 to 5, wherein the method further comprises the steps of:
integrating the first rendering content and the second rendering content to obtain a video special effect template and/or a special effect material;
and acquiring a second video material, and synthesizing the video special effect template and/or the special effect material with the second video material to obtain the special effect video.
8. A system for rendering a real-time video special effect, comprising:
the image interface module is used for acquiring a first video material;
the image rendering module is used for obtaining first material content and second material content according to the rendering process division of the first video material and performing thread distribution of graphics processing resources according to the rendering queue; and for performing special effect rendering on the second material content according to the thread distribution result to obtain second rendering content;
the AI module is used for carrying out slice special effect rendering on the first material content according to the thread distribution result to obtain first rendering content;
and the video output module is used for performing fusion rendering on the first rendering content and the second rendering content and outputting to obtain a special effect video.
9. An apparatus for rendering a real-time video special effect, comprising:
at least one processor;
at least one memory for storing at least one program;
when the at least one program is executed by the at least one processor, the at least one processor is caused to perform a method for rendering a real-time video special effect according to any one of claims 1 to 7.
10. A storage medium having stored therein a processor-executable program, the processor-executable program when executed by a processor being adapted to execute a method of rendering a real-time video effect according to any of claims 1-7.
CN202110884172.XA 2021-08-03 2021-08-03 Rendering method, system and device for real-time video special effect and storage medium Active CN113613066B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110884172.XA CN113613066B (en) 2021-08-03 2021-08-03 Rendering method, system and device for real-time video special effect and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110884172.XA CN113613066B (en) 2021-08-03 2021-08-03 Rendering method, system and device for real-time video special effect and storage medium

Publications (2)

Publication Number Publication Date
CN113613066A true CN113613066A (en) 2021-11-05
CN113613066B CN113613066B (en) 2023-03-28

Family

ID=78339153

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110884172.XA Active CN113613066B (en) 2021-08-03 2021-08-03 Rendering method, system and device for real-time video special effect and storage medium

Country Status (1)

Country Link
CN (1) CN113613066B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114205359A (en) * 2022-01-27 2022-03-18 腾讯科技(深圳)有限公司 Video rendering coordination method, device and equipment
CN114302229A (en) * 2021-12-30 2022-04-08 重庆杰夫与友文化创意有限公司 Method, system and storage medium for converting scene materials into videos
CN114827722A (en) * 2022-04-12 2022-07-29 咪咕文化科技有限公司 Video preview method, device, equipment and storage medium
CN115103226A (en) * 2022-06-16 2022-09-23 慧之安信息技术股份有限公司 Video stream transmission method based on KCP protocol

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100083336A1 (en) * 2008-09-29 2010-04-01 Samsung Electronics Co., Ltd. Method and apparatus for providing rich media service
CN102750297A (en) * 2011-11-11 2012-10-24 新奥特(北京)视频技术有限公司 Rendering and compositing method and system of cluster packaging
US20130038614A1 (en) * 2011-08-12 2013-02-14 Microsoft Corporation Managing multiple gpu-based rendering contexts
US20140098092A1 (en) * 2011-06-01 2014-04-10 Hitachi Medical Corporation Image display device, image display system, and image display method
US20140361977A1 (en) * 2013-06-07 2014-12-11 Sony Computer Entertainment Inc. Image rendering responsive to user actions in head mounted display
CN107203303A (en) * 2016-03-18 2017-09-26 阿里巴巴集团控股有限公司 A kind of interface display method and device
US20190079781A1 (en) * 2016-01-21 2019-03-14 Alibaba Group Holding Limited System, method, and apparatus for rendering interface elements
CN109936749A (en) * 2017-12-18 2019-06-25 腾讯科技(深圳)有限公司 Special efficacy implementation method, device and readable storage medium storing program for executing
CN110221822A (en) * 2019-05-29 2019-09-10 北京字节跳动网络技术有限公司 Merging method, device, electronic equipment and the computer readable storage medium of special efficacy
CN110717135A (en) * 2019-09-20 2020-01-21 广州坚和网络科技有限公司 Method for displaying hollow card information in list for displaying advertisement information
CN111510645A (en) * 2020-04-27 2020-08-07 北京字节跳动网络技术有限公司 Video processing method and device, computer readable medium and electronic equipment
CN111524445A (en) * 2019-02-01 2020-08-11 阿里巴巴集团控股有限公司 Map generation method and device, electronic equipment and readable storage medium
CN111586552A (en) * 2015-02-06 2020-08-25 杜比实验室特许公司 Hybrid priority-based rendering system and method for adaptive audio
US20210158597A1 (en) * 2019-11-22 2021-05-27 Sony Interactive Entertainment Inc. Systems and methods for adjusting one or more parameters of a gpu

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100083336A1 (en) * 2008-09-29 2010-04-01 Samsung Electronics Co., Ltd. Method and apparatus for providing rich media service
US20140098092A1 (en) * 2011-06-01 2014-04-10 Hitachi Medical Corporation Image display device, image display system, and image display method
US20130038614A1 (en) * 2011-08-12 2013-02-14 Microsoft Corporation Managing multiple gpu-based rendering contexts
CN102750297A (en) * 2011-11-11 2012-10-24 新奥特(北京)视频技术有限公司 Rendering and compositing method and system of cluster packaging
US20140361977A1 (en) * 2013-06-07 2014-12-11 Sony Computer Entertainment Inc. Image rendering responsive to user actions in head mounted display
CN105392538A (en) * 2013-06-07 2016-03-09 索尼电脑娱乐公司 Image rendering responsive to user actions in head mounted display
CN111586552A (en) * 2015-02-06 2020-08-25 杜比实验室特许公司 Hybrid priority-based rendering system and method for adaptive audio
US20190079781A1 (en) * 2016-01-21 2019-03-14 Alibaba Group Holding Limited System, method, and apparatus for rendering interface elements
CN107203303A (en) * 2016-03-18 2017-09-26 阿里巴巴集团控股有限公司 A kind of interface display method and device
CN109936749A (en) * 2017-12-18 2019-06-25 腾讯科技(深圳)有限公司 Special efficacy implementation method, device and readable storage medium storing program for executing
CN111524445A (en) * 2019-02-01 2020-08-11 阿里巴巴集团控股有限公司 Map generation method and device, electronic equipment and readable storage medium
CN110221822A (en) * 2019-05-29 2019-09-10 北京字节跳动网络技术有限公司 Merging method, device, electronic equipment and the computer readable storage medium of special efficacy
CN110717135A (en) * 2019-09-20 2020-01-21 广州坚和网络科技有限公司 Method for displaying hollow card information in list for displaying advertisement information
US20210158597A1 (en) * 2019-11-22 2021-05-27 Sony Interactive Entertainment Inc. Systems and methods for adjusting one or more parameters of a gpu
CN111510645A (en) * 2020-04-27 2020-08-07 北京字节跳动网络技术有限公司 Video processing method and device, computer readable medium and electronic equipment

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114302229A (en) * 2021-12-30 2022-04-08 重庆杰夫与友文化创意有限公司 Method, system and storage medium for converting scene materials into videos
CN114302229B (en) * 2021-12-30 2024-04-12 重庆杰夫与友文化创意有限公司 Method, system and storage medium for converting scene material into video
CN114205359A (en) * 2022-01-27 2022-03-18 腾讯科技(深圳)有限公司 Video rendering coordination method, device and equipment
CN114827722A (en) * 2022-04-12 2022-07-29 咪咕文化科技有限公司 Video preview method, device, equipment and storage medium
CN115103226A (en) * 2022-06-16 2022-09-23 慧之安信息技术股份有限公司 Video stream transmission method based on KCP protocol
CN115103226B (en) * 2022-06-16 2023-03-24 慧之安信息技术股份有限公司 Video stream transmission method based on KCP protocol

Also Published As

Publication number Publication date
CN113613066B (en) 2023-03-28

Similar Documents

Publication Publication Date Title
CN113613066B (en) Rendering method, system and device for real-time video special effect and storage medium
CN112184856B (en) Multimedia processing device supporting multi-layer special effect and animation mixing
CN106375789B (en) Medium living broadcast method and apparatus
CN106804003B (en) Video editing method and device based on ffmpeg
CN111294647B (en) Video processing method, device and equipment and storage medium
CN106020777B (en) A kind of data processing method, apparatus and system
CN106454402A (en) Transcoding task scheduling method and device
CN103096131B (en) A kind of live method for stream processing and device
CN107948715A (en) Live network broadcast method and device
CN115086752B (en) Recording method, system and storage medium for browser page content
CN113592066A (en) Hardware acceleration method, apparatus, device, computer program product and storage medium
CN113438285A (en) Smart campus comprehensive management platform model based on private cloud architecture
CN105262957A (en) Video image processing method and device
CN110415324A (en) A method of it is rendered based on cloud
CN106527891B (en) A kind of edit object selection method and relevant apparatus
CN108989898A (en) A kind of method, storage medium, equipment and system showing present double hit animation
CN113656717B (en) Webpage control rendering method, device, equipment and storage medium
CN116233088A (en) Real-time super-division video stream transmission optimization method based on end cloud cooperation
CN116847147A (en) Special effect video determining method and device, electronic equipment and storage medium
WO2013181756A1 (en) System and method for generating and disseminating digital video
CN112543373B (en) Transcoding control method and device and electronic equipment
CN115174948B (en) Scheduling method and device based on transcoding resources
CN116433818B (en) Cloud CPU and GPU parallel rendering method
CN113099273B (en) Video block replacement method, client, server, system and readable storage medium based on opportunity strategy
CN117041628B (en) Live picture rendering method, system, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant