CN114710704A - Method for editing three-dimensional graph and image in real time by using time line - Google Patents


Info

Publication number
CN114710704A
CN114710704A (application CN202210478449.3A; granted as CN114710704B)
Authority
CN
China
Prior art keywords: video, three-dimensional graphic, audio, editing
Legal status: Granted
Application number
CN202210478449.3A
Other languages
Chinese (zh)
Other versions
CN114710704B (en)
Inventor
唐兴波
李刚
Current Assignee
Aidipu Technology Co ltd
Original Assignee
Aidipu Technology Co ltd
Application filed by Aidipu Technology Co ltd
Priority to CN202210478449.3A
Publication of CN114710704A
Application granted; publication of CN114710704B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44016Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23424Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/8146Monomedia components thereof involving graphical data, e.g. 3D object, 2D graphics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8547Content authoring involving timestamps for synchronizing content

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Marketing (AREA)
  • Computer Security & Cryptography (AREA)
  • Architecture (AREA)
  • Business, Economics & Management (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention discloses a method for editing three-dimensional graphics and images in real time using a timeline, comprising the following steps. Step one: establish an editing scene and edit the three-dimensional graphic material. Step two: add video, audio, and three-dimensional graphic materials from different sources to the editing scene, and specify the track and in/out times of each added material; construct logical layers and place the added materials into them. Step three: construct a timeline from a plurality of tracks, thereby determining the aliasing (compositing) order of the video, audio, and three-dimensional graphic materials. Step four: alias and render the video, audio, and three-dimensional graphic materials in that order to generate the final video. With this method, the attributes of three-dimensional graphic materials are fine-tuned and modified within the context of nonlinear video and audio editing, with effects presented in real time, greatly improving the output quality and working efficiency of nonlinear video and audio editing software.

Description

Method for editing three-dimensional graphics and images in real time using a timeline
Technical Field
The invention relates to the field of computer graphics and image processing, and in particular to a method for editing three-dimensional graphics and images in real time using a timeline.
Background
Currently, when three-dimensional graphic material needs to be used in nonlinear video and audio editing software, the common technical approach is to convert the material into a video, a picture, or a picture sequence in an external tool beforehand, and then import the result into the nonlinear editing software for video and audio synthesis and output.
However, this common approach has the following disadvantages:
First, current nonlinear video and audio editing software cannot exploit the rich information inside three-dimensional graphic materials when processing them: the early graphics-to-image conversion of the pre-synthesis approach loses graphic information, and once the material has been converted to a video or picture sequence, its two-dimensional aliasing with other videos causes further data distortion on subsequent transformation;
Second, when the pre-converted video, picture, or picture sequence cannot meet creative requirements, the original material must be modified in the external system and converted again. This process is cumbersome and time-consuming and increases production cost;
Third, processing three-dimensional graphic materials in isolation loses the reference context of the other video and audio materials in the nonlinear editing system, reducing editing precision and efficiency.
By editing three-dimensional graphics and images in real time on a timeline, the invention solves these problems: the internal information of the three-dimensional graphic material can be used directly in synthesis with video and audio; the scenes and objects in the material can be aliased, linked, transitioned, and transformed together with the video and audio layers; and the material's attributes can be fine-tuned and modified within the context of nonlinear video and audio editing, with the effect displayed in real time.
Disclosure of Invention
In view of the above technical problems in the prior art, the invention provides a method for editing three-dimensional graphics and images in real time using a timeline. It addresses the loss of graphic information, the data distortion caused by two-dimensional aliasing and further transformation after three-dimensional graphic material is converted to video or picture sequences, the reduced editing precision and efficiency of processing such material in isolation, and the resulting cumbersome, time-consuming workflow and increased production cost.
The disclosed method for editing three-dimensional graphics and images in real time using a timeline comprises the following steps:
Step one: establish an editing scene and edit the three-dimensional graphic material; the editing scene is a three-dimensional scene used as a common container for all video, audio, and three-dimensional graphic materials;
Step two: add video, audio, and three-dimensional graphic materials from different sources to the editing scene, and specify the track and in/out times of each added material, where the in/out times define the time range during which the material appears in the final synthesized video file; construct logical layers and place the added materials into them, where the logical layers reflect the spatial hierarchy among the three-dimensional graphic materials, among the videos and audios, and between the two;
Step three: construct a timeline from a plurality of tracks; the timeline reflects the in/out times of the added materials and the hierarchy of the logical layers, thereby determining the aliasing order of the video, audio, and three-dimensional graphic materials; the aliasing order reflects the occlusion, coverage, and transparency relationships among different materials;
Step four: alias and render the video, audio, and three-dimensional graphic materials in the editing scene in the aliasing order to generate the final video result.
The added video, audio, and three-dimensional graphic materials are placed in several different logical layers; at each time point their contents are aliased and rendered in the order of these layers to obtain the image frame for that time point, and all image frames are rendered into the composite video in time order.
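The track, clip, and logical-layer model described in the steps above can be sketched in Python. This is a hypothetical illustration, not the patent's implementation; all class and field names are assumed:

```python
from dataclasses import dataclass, field

@dataclass
class Clip:
    """One video, audio, or 3-D graphic animation placed on a track."""
    name: str
    in_time: float   # in-point on the timeline, seconds
    out_time: float  # out-point on the timeline, seconds

@dataclass
class Track:
    """A timeline track; its index doubles as the logical-layer order."""
    layer: int
    clips: list = field(default_factory=list)

@dataclass
class Timeline:
    tracks: list = field(default_factory=list)

    def clips_at(self, t):
        """Clips active at time t, ordered by logical layer (aliasing order)."""
        active = []
        for track in sorted(self.tracks, key=lambda tr: tr.layer):
            active += [c for c in track.clips if c.in_time <= t < c.out_time]
        return active
```

Querying `clips_at` at any time point yields the materials to alias, bottom layer first, matching the layer-ordered rendering the text describes.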
As described above, different videos and audios are placed in different logical layers of the editing scene according to the aliasing order and their assigned tracks, reflecting the hierarchy among them; each video or audio clip has a definite in-point, out-point, and length.
The videos and audios are of two kinds: those contained within three-dimensional graphic materials, and external ones added through the nonlinear video and audio editing software. A video or audio contained in a three-dimensional graphic material has its own logical layer, namely the layer of the track on which that material sits.
A three-dimensional graphic material is presented as an animation. The animation reflects the parameter changes of each component of the material over a period of time, via a material parameter set stored in a keyframe parameter array, and has a corresponding in-point and out-point on the timeline.
Optionally, the animation is added to the timeline as an element with the same standing as video and audio, and edited there. A material parameter set for each frame between the material's in-point and out-point is obtained through the keyframe operation; rendering with this parameter set determines the material's image frame at the current time point, and this frame can be aliased with the frames obtained from the other logical layers at that time point, in aliasing order, to synthesize the final video together with the other steps.
The keyframe operation is as follows: for any time point between the material's in-point and out-point, interpolate using the keyframe array's parameters and the animation's interpolation mode to obtain the keyframe-dependent parameter set at that time point; together with the material's intrinsic parameter set, which is unaffected by the keyframe array, this forms the material parameter set at the current time point.
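The keyframe operation can be sketched as follows. This is a minimal illustration assuming a linear interpolation mode and scalar parameter values; the patent also admits other interpolation modes, and the function name is an assumption:

```python
def interpolate_keyframes(keyframes, t):
    """Interpolate a keyframe array [(time_code, value), ...] at time t.

    Outside the first/last keyframe the nearest value is held; between two
    keyframes the value is blended linearly -- a stand-in for the patent's
    'interpolation mode'.
    """
    keys = sorted(keyframes)
    if t <= keys[0][0]:
        return keys[0][1]
    if t >= keys[-1][0]:
        return keys[-1][1]
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)   # fraction of the way between keys
            return v0 + u * (v1 - v0)
```

Evaluating this for every exported parameter at a time point, then merging with the intrinsic (non-keyframed) parameters, yields the material parameter set for that frame.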
Optionally, the offset, clipping, and zooming operations on the animation of a three-dimensional graphic material are converted into moving, adding and deleting, and proportionally rescaling the keyframes of the animation on the timeline;
The offset operation moves the animation of the material as a whole along its track on the timeline interface to a new time point, advancing or delaying its appearance as a whole. The clipping operation cuts the line segment representing the material on its track, removing part of the animation's content so that its output duration shrinks while its playback speed is unchanged. The zooming operation drags the left or right edge of that segment, i.e. the material's in-point or out-point, lengthening or shortening the animation.
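The offset and zooming operations reduce to simple transforms of the keyframe time codes. A hypothetical sketch (function names and the `(time, value)` tuple shape are assumptions; clipping, which also edits keyframes, is more involved and is shown separately at the end of the document):

```python
def offset_animation(keyframes, delta):
    """Offset: advance (delta < 0) or delay (delta > 0) the whole animation.

    Every keyframe keeps its value; only its time code shifts, matching the
    description that all keyframe information remains otherwise unchanged.
    """
    return [(t + delta, v) for t, v in keyframes]

def zoom_animation(keyframes, factor):
    """Zoom: stretch (factor > 1) or compress (factor < 1) the animation.

    Keyframe times are rescaled proportionally about the first keyframe,
    lengthening or shortening the animation while keeping all values.
    """
    t0 = keyframes[0][0]
    return [(t0 + (t - t0) * factor, v) for t, v in keyframes]
```

Because these transforms act on the parametric keyframes rather than on rendered pixels, no graphic precision is lost, as the description emphasizes.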
In the invention, the internal parameters of a three-dimensional graphic material can be modified both by adjusting the parameters directly and by modifying exported attribute items through an external data source.
After the three-dimensional graphic material is added to the editing scene, editing, rendering, and final video synthesis in step four proceed as follows: add the animation of the material to the scene; adjust the hierarchy among the logical layers; perform offset, clipping, or zooming operations; set parameters; apply DVE and special-effect settings; render the material over time into image-frame sequences, and synthesize these frames with the frames of other videos, audios, or three-dimensional graphic materials into a video.
Optionally, rendering the three-dimensional graphic material in the sixth sub-step above comprises: applying parameters, i.e. verifying validity and performing the related operations; interpolating keyframes; computing and applying special effects; generating the image frame; and post-processing the image frame. Synthesizing the three-dimensional graphic material in the sixth sub-step comprises: setting the current time; acquiring the image frames of all logical layers at that time point; aliasing them into the final image frame; writing the frame into the result video as the content for that time point; and repeating these four steps until the video and audio end.
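The synthesis loop just described, i.e. advancing the current time, gathering each logical layer's frame, and aliasing them into the result, can be sketched as follows. This is a hypothetical skeleton: the `render` and `compose` callables stand in for the per-material renderers and the aliasing operation, and all names are assumptions:

```python
def synthesize(clips, fps, duration, render, compose):
    """Per-frame synthesis loop (step four).

    clips:   list of (layer, in_time, out_time, payload) tuples
    render:  payload, t -> frame       (stand-in for 3-D/video rendering)
    compose: bottom, top -> frame      (stand-in for aliasing two layers)
    """
    frames = []
    for i in range(int(duration * fps)):
        t = i / fps                       # set the current time
        frame = None
        for layer, t_in, t_out, payload in sorted(clips):   # layer order
            if t_in <= t < t_out:         # clip active at this time point?
                img = render(payload, t)  # frame of this logical layer
                frame = img if frame is None else compose(frame, img)
        frames.append(frame)              # video content for this time point
    return frames
```

With string "frames" for illustration, a 3-D graphic layer only appears in the output between its in-point and out-point, aliased over the layers beneath it.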
By editing three-dimensional graphics and images in real time on a timeline, the invention solves the above problems at their root: the internal information of the three-dimensional graphic material can be used directly for synthesis with video and audio; the scenes and objects in the material can be aliased, linked, transitioned, and transformed with the video and audio layers; and the material's attributes can be finely modified within the context of nonlinear video and audio editing, with effects presented in real time. The method greatly improves the output quality and working efficiency of nonlinear video and audio editing software.
Drawings
FIG. 1 is a flow chart illustrating a method for editing three-dimensional graphics and images in real time using a timeline according to an embodiment of the present invention;
FIG. 2 is a timeline diagram according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a track and an in-point and an out-point according to an embodiment of the invention;
FIG. 4 is an animated representation of three-dimensional graphical material according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating a frame of an image in an animation according to an embodiment of the invention;
FIG. 6 is a schematic illustration of three-dimensional graphic material operation according to an embodiment of the present invention;
FIG. 7 is a schematic view of a process for modifying parameters of three-dimensional graphic material according to an embodiment of the present invention;
FIG. 8 is a schematic view of a three-dimensional graphical material editing process according to an embodiment of the present invention;
FIG. 9a is a schematic diagram illustrating a three-dimensional graphics material rendering process according to an embodiment of the present invention;
fig. 9b is a schematic view of a three-dimensional graphics material composition flow according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the invention clearer, the invention is described in further detail below with reference to the accompanying drawings. The embodiments described are only examples, intended to explain the invention clearly, and do not limit its scope.
The main information recorded in a three-dimensional graphic material comprises scene information, object information, texture information, animation keyframe information, and exported-attribute-item information. The scene information includes scene coordinates, camera, and lighting parameters; the object information includes the object's mesh data, position/rotation/scale, and object-specific attribute parameters; the texture information includes texture type, texture coordinates, and the related texture files; the animation keyframe information includes the keyframe parameter array and interpolation mode, a keyframe being parameter data with a time code; the exported-attribute-item information includes the attribute parameter types and their value ranges.
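The material record enumerated above can be captured in a small data structure. This is a hypothetical sketch; the class and field names are illustrative, not the patent's:

```python
from dataclasses import dataclass

@dataclass
class Material3D:
    """Sketch of the recorded information of a 3-D graphic material."""
    scene: dict      # scene coordinates, camera, lighting parameters
    objects: list    # per-object mesh, position/rotation/scale, attributes
    textures: list   # texture type, texture coordinates, texture files
    keyframes: list  # keyframe parameter array: (time_code, parameters)
    interp_mode: str # interpolation mode used by the keyframe operation
    exports: dict    # exported attribute items: parameter type, value range
```

Keeping the keyframe array and exported attributes alongside the geometry is what later lets the editor adjust parameters before rasterization, rather than on a baked image.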
Nonlinear video and audio editing software is a tool for editing and combining different video, audio, image, and text materials into a video file.
Referring to fig. 1, fig. 1 is a flow chart illustrating a method for editing three-dimensional graphics and images in real time by using a timeline according to an embodiment of the present invention. The method mainly comprises four steps:
s101, establishing an editing scene, and editing the three-dimensional graphic material; the editing scene adopts a three-dimensional scene as a common container for audio and video and three-dimensional graphic materials;
s102, adding video and audio frequencies and three-dimensional graphic materials from different sources into the editing scene, and specifying the track and the input and output time of the added video and audio frequencies or three-dimensional graphic materials; constructing a logic layer, and placing the added video and audio or three-dimensional graphic materials into the logic layer;
s103, constructing a time line, wherein the time line is formed by constructing a plurality of tracks and reflects the input and output time of the added audio and video and three-dimensional graphic materials and the hierarchical relationship between the logic layers, so that the aliasing order of the audio and video and the three-dimensional graphic materials is determined;
and S104, sequentially aliasing and rendering the audio and video and the three-dimensional graphic materials in the edited scene according to the aliasing sequence to generate a final video result.
The steps of the above-described method are described in further detail below.
In step S101, an editing scene is established, and the three-dimensional graphic material is edited; and the editing scene adopts a three-dimensional scene as a common container for all audio and video and three-dimensional graphic materials.
Referring to fig. 2, fig. 2 is a timeline diagram according to an embodiment of the present invention. In step S102, adding videos and audios from different sources and three-dimensional graphics materials into the editing scene, and specifying a track and an in-out time in which the added videos and audios or three-dimensional graphics materials are located, where the in-out time represents a time range in which the videos and audios or three-dimensional graphics materials appear in a finally synthesized video file;
Logical layers are constructed, and the added videos, audios, and three-dimensional graphic materials are placed into them; the logical layers reflect the spatial hierarchy among the three-dimensional graphic materials, among the videos and audios, and between the two. The logical layers are virtual layers, and the contents of all logical layers in the three-dimensional scene are rendered sequentially in layer order.
The videos are placed in the editing scene, different videos in different logical layers according to the aliasing order, reflecting the hierarchy among them. Each video or audio clip has a definite length and start and stop points, i.e. an in-point and an out-point. As previously mentioned, a video is assigned a track when it joins the three-dimensional scene, and thus occupies the logical layer that the track represents.
Referring to fig. 3, fig. 3 is a schematic diagram of a track and an in-point and an out-point according to an embodiment of the invention. In this step, the assigned track of a video, audio, or three-dimensional graphic material can be modified during editing, changing its logical hierarchy relative to the other video, audio, and graphic materials; its start and stop times can likewise be modified, changing when the material appears in the composite video.
In step S103, a timeline is constructed from a plurality of tracks; it reflects the in/out times of the added video, audio, and three-dimensional graphic materials and the hierarchy of the logical layers, thereby determining their aliasing order. The aliasing order reflects the occlusion, coverage, and transparency relationships among different video, audio, or three-dimensional graphic materials.
The logical layers of the video, audio, and graphic materials on adjacent tracks are likewise adjacent; the videos, audios, and three-dimensional graphic materials are aliased and synthesized into the final result in the order of their logical layers.
In step S104, sequentially performing aliasing rendering on the audio/video and the three-dimensional graphics material in the edited scene according to the aliasing order, and generating a final video result.
The added video, audio, and three-dimensional graphic materials are placed in several different logical layers; at each time point their contents are aliased and rendered in the order of these layers to obtain the image frame for that time point, and all image frames are rendered into the composite video in time order.
As described above, different videos and audios are placed in different logical layers of the editing scene according to the aliasing order and their assigned tracks, reflecting the hierarchy among them; each video or audio clip has a definite in-point, out-point, and length.
The videos and audios are of two kinds: those contained within three-dimensional graphic materials, and external ones added through the nonlinear video and audio editing software. A video or audio contained in a three-dimensional graphic material has its own logical layer; because the containing material is assigned a specific track when added to the editing scene, the contained video or audio shares the logical layer of that track.
A three-dimensional graphic material is mainly presented as an animation. The animation reflects the parameter changes of each component of the material over a period of time, via a material parameter set stored in a keyframe parameter array, and has a corresponding in-point and out-point on the timeline.
Referring to fig. 4, fig. 4 is an animation diagram of three-dimensional graphic material according to an embodiment of the invention. In the invention, each animation of a three-dimensional graphic material, for example animation 1, animation 2, or animation 3, can be added to the editing scene. Adding an animation is typically done by dragging the icon representing it onto a time point on a track of the timeline user interface of the nonlinear video and audio editing software. The track onto which the animation is dropped represents the logical layer of the material; the drop time point is the in-point of the material segment; the time difference between the time codes of the first and last keyframes of the material's animation keyframe array is the length of the animation; and the in-point plus this length gives the out-point. The in-point and out-point reflect the actual output time and duration of the material's animation effect. The three-dimensional graphic material appears on the timeline as a line segment from the in-point to the out-point. The system creates timeline data for the material animation, recording the original material's parameter sets, in/out points, logical layer, and related information.
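The in-point, length, and out-point arithmetic described above can be stated directly. A minimal sketch, with assumed names, of what happens when an animation is dropped onto a track:

```python
def animation_span(key_times, drop_time):
    """Compute (in_point, length, out_point) for a dropped animation.

    Per the description: the drop time point becomes the in-point, the
    length is the time difference between the first and last keyframe
    time codes, and the out-point is in-point + length.
    """
    length = max(key_times) - min(key_times)
    in_point = drop_time
    out_point = in_point + length
    return in_point, length, out_point
```

The returned span is what the timeline draws as the material's line segment from in-point to out-point.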
As described above, a three-dimensional graphic material added as an animation is assigned a track when added to the editing scene, and thus occupies the logical layer that the track represents. By adding material animations to the editing scene, three-dimensional graphic materials are likewise arranged in different logical layers, so that they have the same standing and operation mode as videos and audios in the editing scene. The logical layers reflect the hierarchy among the three-dimensional graphic materials, among the videos and audios, and between the two.
The animation is added to the timeline as an element with the same standing as video and audio and edited there. The material parameter set of each frame between the material's in-point and out-point is obtained through the keyframe operation; rendering with this parameter set determines the material's image frame at the current time point, and this frame is aliased with the frames obtained from the other logical layers at that time point, in aliasing order, to synthesize the final video together with the other steps.
Referring to fig. 5, fig. 5 is a diagram illustrating a frame of image in an animation according to an embodiment of the invention. The keyframe operation is as follows: for any time point between the material's in-point and out-point, interpolate using the animation's keyframe array, interpolation mode, and related parameters to obtain the keyframe-dependent parameter set at that time point; together with the material's intrinsic parameter set, which is unaffected by the keyframe array, this forms the material parameter set at the current time point. Rendering with this parameter set determines the material's image frame at that time point; the frame can be aliased with the frames obtained from the other logical layers at the same time point according to the aliasing hierarchy, and the final video is synthesized together with the other steps.
Referring to fig. 6, fig. 6 is a schematic diagram illustrating the operation of three-dimensional graphic materials according to an embodiment of the present invention. Because the three-dimensional graphic material has the same status and operation mode as video and audio in the editing scene, it can be adjusted in the nonlinear video and audio editing software just like video. In particular, material parameters are adjusted before any image is produced, so no graphic precision is lost to rasterization and the original precision is preserved after adjustment. The offset, cropping, and scaling operations on the animation of the three-dimensional graphic material are converted into moving, adding, and deleting key frames within the animation, and into proportional time adjustments on the timeline;
wherein the offset operation is: on the timeline interface, the animation of the three-dimensional graphic material is moved as a whole along its track to a new time point, advancing or delaying the animation's appearance time as a whole. The in-point recorded in the timeline data associated with the animation is then updated to the new time point, and the out-point is computed from the in-point and the animation length. All key-frame information of the animation remains unchanged.
The cropping operation is as follows: on the timeline interface, the line segment representing the three-dimensional graphic material on its track is cropped, cutting off part of the animation's content so that its output time is reduced while its output speed is unchanged. If a cut point falls on a key frame, the key frames outside the cut range are deleted; otherwise a key frame is first added at the cut point by interpolation, and the key frames outside the range are then deleted. If the cropped range includes the in-point, the in-point is recalculated: the time point of the key frame at the cut point, whether an original key frame or one generated by this operation, becomes the new in-point. Cropping changes the animation length, so the out-point computed from the in-point and the length may also change. The cropping operation achieves the same accuracy as cropping video.
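The key-frame bookkeeping of the cropping operation can be sketched as follows. This is a simplified model assuming linear interpolation and scalar parameter values; `trim_animation` and `lerp_at` are illustrative names, not the patent's API:

```python
def lerp_at(keyframes, t):
    """Linear interpolation over sorted (time, value) key frames."""
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            return v0 + (t - t0) / (t1 - t0) * (v1 - v0)
    return keyframes[0][1] if t < keyframes[0][0] else keyframes[-1][1]

def trim_animation(keyframes, cut_in, cut_out):
    """Crop an animation to [cut_in, cut_out]: add interpolated key
    frames at the cut points when none exist there, drop key frames
    outside the kept range, and rebase times so cut_in becomes 0."""
    kfs = dict(keyframes)
    for cut in (cut_in, cut_out):
        if cut not in kfs:
            kfs[cut] = lerp_at(keyframes, cut)
    kept = sorted((t, v) for t, v in kfs.items() if cut_in <= t <= cut_out)
    return [(t - cut_in, v) for t, v in kept]
```

Because the cut-point key frame is produced by interpolation over parameters rather than by resampling pixels, a crop modeled this way loses no precision, which is consistent with the accuracy claim above.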
The zooming operation is as follows: and moving the left and right edges of the line segment representing the material on the track where the three-dimensional graphic material is positioned, namely the in point and the out point of the three-dimensional graphic material, on the interface of the timeline, so as to prolong or shorten the animation. Firstly, calculating the scaling ratio, namely the ratio of the original length to the scaled length, and then moving the original key frame in equal proportion according to the scaling ratio, namely modifying the time code of each key frame in proportion. The scaling operation causes the distance between each key frame in the animation key frame array to change in equal proportion, the animation length changes, and the entry point may also change. The zooming operation is obviously better than the video fast-slow playing effect.
Referring to fig. 7, fig. 7 is a schematic view illustrating a process of modifying parameters of three-dimensional graphic material according to an embodiment of the present invention. In the method of the invention, the internal parameters of the three-dimensional graphic material can be modified either by direct parameter adjustment or by modifying attribute export items through an external data source. The main flow of modifying the parameters is: first, the internal parameters are modified directly, or one or more internal parameters are modified by changing an export item; second, the validity of the parameters is verified; finally, the storage or retrieval of the values is confirmed.
Direct parameter adjustment is as follows: the parameters of a three-dimensional graphic material placed on a timeline track are recorded in the timeline data associated with that material's animation on the track. These parameter records are independent of one another, even when they come from the same material and animation, and can therefore be adjusted independently without affecting each other. The parameters are exposed to the end user for modification through a parameter adjustment page that can be called up from the timeline interface. After verification, the user's modifications are stored back into the relevant timeline data, overwriting the original values.
In addition, a specific example of modifying attribute export items through an external data source is as follows: an attribute export item corresponds to one parameter or one group of parameters of the three-dimensional graphic material, limits the parameter range, and carries a validation function. Modifying the parameters represented by an attribute export item through an external data source achieves the same effect as direct parameter adjustment. First, the user modifies the export-item data through an export-item modification interface called up from the timeline interface; the modification is then converted, according to the association relationship recorded in the export item, into a modification of one or more parameters in the timeline data. The subsequent steps are the same as for direct parameter modification. In both cases, the underlying engine verifies the validity of the modified parameter values before storing them back into the timeline data. If a modified value is illegal, for example a type mismatch or an out-of-range value, the operation is cancelled.
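The validity check that both modification paths share might look like the following sketch. The schema layout and function names are assumptions for illustration, not the patent's actual data structures:

```python
def apply_parameter_edit(timeline_data, clip_id, name, value, schema):
    """Verify a user-edited parameter against its declared type and
    range before overwriting the stored value in the timeline data;
    illegal data (type mismatch or out of range) cancels the edit."""
    spec = schema[name]
    if not isinstance(value, spec["type"]):
        return False  # type mismatch: operation cancelled
    low, high = spec.get("range", (float("-inf"), float("inf")))
    if not (low <= value <= high):
        return False  # out of range: operation cancelled
    timeline_data[clip_id][name] = value  # overwrite the original data
    return True
```

An export-item edit would reach this same check after being translated into one or more parameter edits via the association recorded in the export item.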
In the invention, the three-dimensional graphic material supports the DVE transition effect, i.e. the digital video effect transition. In nonlinear video and audio editing software, DVE is used to realize transitions between different videos that are adjacent in time, such as the preceding video fading out while the following video fades in. In the invention, the steps for realizing DVE for the three-dimensional graphic material are as follows:
(1) firstly, rendering according to a parameter set of a current time point to form a frame of image;
(2) processing the frame of image by adopting a DVE algorithm according to the type of the DVE;
(3) and aliasing the video frame processed by DVE with the video or three-dimensional graphic material of other logic layers to form a final effect.
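Steps (1)-(3) can be sketched as a simple cross-fade DVE acting on grayscale pixel lists. This is an illustrative model only; real DVE algorithms operate on full video frames and cover many transition types:

```python
def dve_fade(frame, progress):
    """Fade DVE: scale each (grayscale) pixel by the transition
    progress, which runs from 0 to 1 over the transition."""
    return [p * progress for p in frame]

def dve_transition_frame(outgoing, incoming, progress):
    """Steps (1)-(3): given the two frames rendered from their
    parameter sets, process each with the DVE algorithm (a cross-fade
    here) and alias them into the final frame for this time point."""
    faded_out = dve_fade(outgoing, 1 - progress)  # preceding video fades out
    faded_in = dve_fade(incoming, progress)       # following video fades in
    return [a + b for a, b in zip(faded_out, faded_in)]
```

Because the material's frame is rendered fresh from its parameter set in step (1), the DVE in step (2) always receives a full-precision image before the aliasing of step (3).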
Referring to fig. 8, fig. 8 is a schematic view illustrating a three-dimensional graphic material editing process according to an embodiment of the present invention. After the three-dimensional graphic material is added to the editing scene, the specific steps of editing, rendering, and finally synthesizing the video in step S104 are as follows: add the animation of the three-dimensional graphic material to the scene; adjust the hierarchy between the logical layers; perform offset, cropping, or scaling operations (optional); set parameters (optional); apply DVE and special-effect settings (optional); and render the three-dimensional graphic material over time to form an image frame sequence, which is rendered together with the image frames of the other videos, audios, or three-dimensional graphic materials into the composite video.
Referring to fig. 9a, fig. 9a is a schematic diagram illustrating a three-dimensional graphics material rendering process according to an embodiment of the invention. The rendering of the three-dimensional graphic material in step 6 comprises: parameter application, i.e. validity verification and the associated operations; key-frame interpolation; special-effect calculation and application; image frame generation; and image frame post-processing.
Referring to fig. 9b, fig. 9b is a schematic diagram of a three-dimensional graphics material synthesizing process according to an embodiment of the present invention. The synthesizing of the three-dimensional graphic material in step 6 comprises: setting the current time; acquiring the image frames of all logical layers at that time point; aliasing them to form the final image frame; writing the image frame into the result video as the video content of that time point; and repeating the first four steps until the video ends.
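The compositing loop of fig. 9b can be sketched as follows. Layers are modeled as callables returning per-pixel values at a time point, with `None` meaning transparent; all names are illustrative, and the aliasing shown is simple occlusion rather than a full blend:

```python
def synthesize_video(duration, fps, layers):
    """Fig. 9b loop: set the current time, fetch every logical layer's
    frame at that time, alias them in layer order (a pixel of None
    means transparent), and append the final frame to the result
    until the video ends. `layers` is ordered bottom to top."""
    frames = []
    t, step = 0.0, 1.0 / fps
    while t < duration:
        layer_frames = [layer(t) for layer in layers]
        final = layer_frames[0]
        for upper in layer_frames[1:]:
            final = [top if top is not None else base
                     for base, top in zip(final, upper)]
        frames.append(final)
        t += step
    return frames
```

In a real engine each layer frame would come from either decoded video, decoded audio, or the three-dimensional graphic material renderer of fig. 9a, but the per-time-point loop structure is the same.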
By editing three-dimensional graphics and images in real time with a timeline, the invention solves the problems described above: the internal information of the three-dimensional graphic material is used directly in synthesis with video and audio; the scenes and objects inside the three-dimensional graphic material can be mixed, linked, transitioned, and transformed with the video and audio layers; and the properties of the three-dimensional graphic material can be fine-tuned and modified within the context of nonlinear video and audio editing, with the effect displayed in real time. The method greatly improves both the output quality and the working efficiency of nonlinear video and audio editing software.
While the embodiments of the present invention have been described in detail, those skilled in the art will appreciate, in light of the present teachings, that many modifications and variations are possible without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A method for editing three-dimensional graphics and images in real time using a timeline, the method comprising:
step one: establishing an editing scene for editing the three-dimensional graphic material; the editing scene adopts a three-dimensional scene as a common container for all video, audio, and three-dimensional graphic materials;
step two: adding video and audio and three-dimensional graphic materials of different sources into the editing scene, and specifying the track and the in-out time of the added video and audio or three-dimensional graphic materials, wherein the in-out time represents the time range of the video and audio or three-dimensional graphic materials appearing in the finally synthesized video file; constructing a logic layer, and placing the added video and audio or three-dimensional graphic materials into the logic layer, wherein the logic layer reflects the spatial hierarchical relationship among the three-dimensional graphic materials, the video and audio and the spatial hierarchical relationship between the three-dimensional graphic materials and the video and audio;
step three: constructing a time line which is composed of a plurality of tracks and reflects the input and output time of the added audio and video and three-dimensional graphic materials and the hierarchical relationship between the logic layers, thereby determining the aliasing order of the audio and video and the three-dimensional graphic materials; the aliasing sequence reflects the shielding, covering and transmitting relations among different video and audio or three-dimensional graphic materials;
step four: sequentially performing aliasing rendering on the video and audio and the three-dimensional graphic material in the editing scene according to the aliasing sequence to generate a final video result;
the method is characterized in that the added video and audio and three-dimensional graphic materials are respectively arranged in a plurality of different logic layers, the contents of the added video and audio and three-dimensional graphic materials are respectively subjected to aliasing rendering at each time point according to the sequence of the plurality of different logic layers to obtain the image frames of the time point, and all the image frames are rendered into a composite video according to the time sequence.
2. The method of claim 1, wherein different videos and audios are placed in different logical layers of the editing scene according to the aliasing order and the assigned tracks to reflect the hierarchical relationship among the videos and audios, each video and audio having its own in-point, out-point, and length.
3. The method of claim 2, wherein the audio and video includes two types: the video and audio contained in the three-dimensional graphic material and the external video and audio added by the nonlinear video and audio editing software;
the video and audio contained in the three-dimensional graphic material has a logic layer where the video and audio are located, and the logic layer where the track of the three-dimensional graphic material is located is shared.
4. The method of claim 3, wherein the three-dimensional graphic material is presented as an animation that reflects, by means of the material parameter sets stored in the key frame parameter array, the changes in the parameters of the material's components over a period of time, with corresponding three-dimensional graphic material in-points and out-points on the timeline.
5. The method of claim 4, wherein the animation is added to the timeline for editing as an element equivalent to the video and audio; a material parameter set for each frame between the in-point and the out-point of the three-dimensional graphic material is obtained through key frame operation; rendering with the material parameter set determines the image of the three-dimensional graphic material at the current time point, and the image can be aliased, according to the aliasing order, with the images obtained from the other logical layers at the current time point to synthesize a final video together with the videos of the other steps.
6. The method of claim 5, wherein the key frame operation is: and carrying out interpolation operation by utilizing the contained parameters of the key frame array and the interpolation mode of the animation corresponding to any time point between the in-point and the out-point of the three-dimensional graphic material to obtain a material key frame related parameter set of the time point, and forming a material parameter set of the current time point together with a material inherent parameter set which is not influenced by the key frame array.
7. The method of claim 4, wherein the offset, cropping, and scaling operations performed on the animation of the three-dimensional graphic material are converted on the timeline into moving, adding, and deleting key frames in the animation and into proportional time adjustments;
the offset operation is: integrally moving the position of the animation of the three-dimensional graphic material along the track where the three-dimensional graphic material is located on the interface of the timeline to a new time point, so as to integrally advance or delay the appearance time of the animation;
the cropping operation is: cropping the line segment that represents the three-dimensional graphic material on its track on the interface of the timeline, so that part of the animation's content is cut off and the animation's output time is reduced while its output speed is unchanged;
the zooming operation is as follows: and moving the left and right edges of the line segment representing the material on the track where the three-dimensional graphic material is positioned, namely the in point and the out point of the three-dimensional graphic material, on the interface of the timeline, so as to prolong or shorten the animation.
8. The method of claim 1, wherein the modification of the internal parameters of the three-dimensional graphic material is divided into direct parameter adjustment and modification of attribute export items through an external data source.
9. The method according to claim 1, wherein after the three-dimensional graphic material is added to the editing scene, in the fourth step, the specific steps of editing, rendering and finally synthesizing the video are as follows:
step 1: adding animation of the three-dimensional graphic material into a scene;
step 2: adjusting the hierarchy between the logical layers;
and step 3: performing offset, clipping or scaling operations;
and 4, step 4: setting parameters;
and 5: carrying out DVE and special effect setting;
step 6: rendering the three-dimensional graphic material according to time to form image frame sequences, and synthesizing the image frame sequences with image frames of other videos and audios or the three-dimensional graphic material into a video.
10. The method of claim 9, wherein the step of rendering the three-dimensional graphical material in step 6 comprises:
step i: parameter application, namely performing validity verification and performing correlation operation;
step ii: interpolating a key frame;
step iii: calculating and applying a special effect;
step iv: generating an image frame;
step v: performing post-processing on the image frame;
the step 6 of synthesizing the three-dimensional graphic material comprises the following steps:
step I: setting the current time;
step II: acquiring image frames of all logic layers at the time point;
step III: aliasing forms a final image frame;
step IV: synthesizing the image frames into a result video as the video content of the time point;
step V: and repeating the steps I-IV until the video and audio ends.
CN202210478449.3A 2022-05-05 2022-05-05 Method for editing three-dimensional graph and image in real time by using time line Active CN114710704B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210478449.3A CN114710704B (en) 2022-05-05 2022-05-05 Method for editing three-dimensional graph and image in real time by using time line

Publications (2)

Publication Number Publication Date
CN114710704A true CN114710704A (en) 2022-07-05
CN114710704B CN114710704B (en) 2023-12-12

Family

ID=82177596

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210478449.3A Active CN114710704B (en) 2022-05-05 2022-05-05 Method for editing three-dimensional graph and image in real time by using time line

Country Status (1)

Country Link
CN (1) CN114710704B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001111950A (en) * 1999-10-06 2001-04-20 Nippon Hoso Kyokai <Nhk> Video image edit method and device
CN1392824A (en) * 2000-09-28 2003-01-22 索尼公司 Authoring system and method, and storage medium
CN1437137A (en) * 2002-02-06 2003-08-20 北京新奥特集团 Non-linear editing computer
US20120284625A1 (en) * 2011-05-03 2012-11-08 Danny Kalish System and Method For Generating Videos
CN106658053A (en) * 2016-09-26 2017-05-10 新奥特(北京)视频技术有限公司 Nonlinear program editing method and nonlinear program editing device
CN109035373A (en) * 2018-06-28 2018-12-18 北京市商汤科技开发有限公司 The generation of three-dimensional special efficacy program file packet and three-dimensional special efficacy generation method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
郝银华: "基于AVID MC6的3D立体编辑流程的初探", 影视制作, no. 06 *

Also Published As

Publication number Publication date
CN114710704B (en) 2023-12-12

Similar Documents

Publication Publication Date Title
US11079912B2 (en) Method and apparatus for enhancing digital video effects (DVE)
US7336264B2 (en) Method and system for editing or modifying 3D animations in a non-linear editing environment
US8674998B1 (en) Snapshot keyframing
US8253728B1 (en) Reconstituting 3D scenes for retakes
US11915342B2 (en) Systems and methods for creating a 2D film from immersive content
AU2007202098B2 (en) 2D/3D Post production integration platform
EP3246921B1 (en) Integrated media processing pipeline
US8363055B1 (en) Multiple time scales in computer graphics
US20050034076A1 (en) Combining clips of image data
CN114710704B (en) Method for editing three-dimensional graph and image in real time by using time line
Obert et al. iCheat: A representation for artistic control of indirect cinematic lighting
GB2246933A (en) Production of multi-layered video composite
WO2014111160A1 (en) Device and method for rendering of moving images and set of time coded data containers
Kelly Why Use After Effects?
Jones et al. Why Use After Effects?
Thompson Digital multimedia development processes and optimizing techniques

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant