CN111225232A - Video-based sticker animation engine, implementation method, server and medium

Video-based sticker animation engine, implementation method, server and medium

Info

Publication number
CN111225232A
Authority
CN
China
Prior art keywords
sticker
animation
time
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811408941.3A
Other languages
Chinese (zh)
Other versions
CN111225232B (en)
Inventor
周光金
周驿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN201811408941.3A priority Critical patent/CN111225232B/en
Publication of CN111225232A publication Critical patent/CN111225232A/en
Application granted granted Critical
Publication of CN111225232B publication Critical patent/CN111225232B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/816Monomedia components thereof involving special video data, e.g. 3D video

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the disclosure disclose a video-based sticker animation engine, an implementation method, a server and a medium. The sticker animation engine comprises: a sticker model, used for acquiring sticker description information input by a user; a sticker adapter, used for determining, according to the sticker description information, the mapping relation between the sticker animation time and the video time, the mapping relation between the sticker animation time and the animation state, and the corresponding animation state; and a sticker filter, used for adding the sticker to the target video frames read by the sticker filter according to the animation state of the sticker, the mapping relation between the animation time and the video time, and the mapping relation between the animation time and the animation state determined by the sticker adapter. The embodiments of the disclosure solve the problems of a complicated implementation process and high development cost of video sticker animation in the prior art, improve the convenience of adding a sticker animation to a video, and reduce development cost.

Description

Video-based sticker animation engine, implementation method, server and medium
Technical Field
The embodiments of the present disclosure relate to the field of internet technologies, and in particular to a video-based sticker animation engine, an implementation method, a server and a medium.
Background
The development of network technology has made video interaction applications very popular in people's daily lives.
For internet enterprises that offer video interaction applications, meeting user requirements and providing a satisfactory product experience are non-negligible key factors in maintaining competitiveness. To serve a wide user base, video interaction applications both provide users with various types of existing video resources and support real-time shooting of personalized videos by users. The personalized functions a video interaction application can offer its users are closely tied to the development progress of its back-end engineers.
However, how to conveniently add sticker animations to a video while reducing development cost remains a problem to be solved for engineers.
Summary
The embodiments of the present disclosure provide a video-based sticker animation engine, an implementation method, a server and a medium, so as to improve the convenience of adding a sticker animation to a video and reduce development cost.
In a first aspect, the disclosed embodiments provide a video-based sticker animation engine, comprising:
the sticker model is used for acquiring sticker description information input by a user;
the sticker adapter is used for determining the mapping relation between the sticker animation time and the video time, the mapping relation between the sticker animation time and the animation state and the corresponding animation state according to the sticker description information;
and the sticker filter is used for adding the sticker to the target video frames read by the sticker filter according to the animation state of the sticker, the mapping relation between the animation time and the video time, and the mapping relation between the animation time and the animation state determined by the sticker adapter.
Optionally, the sticker animation engine further includes:
and the sticker filter management chain is used for generating a corresponding number of sticker filters and the sticker adapter corresponding to each sticker model within each sticker filter according to the number of sticker models and the sticker description information corresponding to each sticker model.
Optionally, the sticker model includes a static sticker model and a dynamic sticker model, where the dynamic sticker model includes at least one function for describing a sticker animation;
correspondingly, the sticker description information includes static sticker description information and dynamic sticker description information.
Optionally, the function includes at least one of the following functions relating video time to animation time: an animation function based on a second-order Bezier curve, a bounce animation function, and a custom function.
Optionally, the sticker adapter is specifically configured to:
and determining, according to the sticker description information and by using interpolation, the mapping relation between the sticker animation time and the video time, the mapping relation between the sticker animation time and the animation state, and the corresponding animation state.
Optionally, the sticker filter includes:
the animation state management module is used for acquiring the animation state of the sticker, the mapping relation between animation time and video time and the mapping relation between the animation time and the animation state, which are determined by the corresponding sticker adapter;
the time stamp extraction module is used for acquiring the time stamp of the current video frame;
and the sticker adding module is used for determining, according to the timestamp of the current video frame and the mapping relation between the animation time and the video time, whether the current video frame is a target video frame to which a sticker needs to be added; if so, determining a corresponding target animation state according to the mapping relation between the animation time and the animation state, and adding the sticker corresponding to the target animation state to the current target video frame.
Optionally, the sticker animation engine further includes:
the static sticker merge filter is used for merging the static stickers corresponding to at least two static sticker models when at least two static sticker models exist;
correspondingly, the sticker filter management chain is further configured to: if the number of static stickers in the sticker models is at least two, generate a static sticker filter and a corresponding sticker adapter according to the merged static sticker obtained by the static sticker merge filter and the static sticker description information of the merged static sticker.
Optionally, the sticker filter management chain is further configured to provide an interface for adding, deleting, or modifying the current sticker model.
In a second aspect, an embodiment of the present disclosure further provides a method for implementing a video-based sticker animation, where the method includes:
acquiring sticker description information input by a user through a sticker model;
determining, through the sticker adapter according to the sticker description information, the mapping relation between the sticker animation time and the video time, the mapping relation between the sticker animation time and the animation state, and the corresponding animation state;
and adding, through the sticker filter, the sticker to the target video frames read by the sticker filter according to the animation state of the sticker, the mapping relation between the animation time and the video time, and the mapping relation between the animation time and the animation state determined by the sticker adapter.
Optionally, the method further includes:
and generating, through a sticker filter management chain, a corresponding number of sticker filters and the sticker adapter corresponding to each sticker model within each sticker filter, according to the number of sticker models and the sticker description information corresponding to each sticker model.
Optionally, the sticker model includes a static sticker model and a dynamic sticker model, where the dynamic sticker model includes at least one function for describing a sticker animation;
correspondingly, the sticker description information includes static sticker description information and dynamic sticker description information.
Optionally, the function includes at least one of the following functions relating video time to animation time: an animation function based on a second-order Bezier curve, a bounce animation function, and a custom function.
Optionally, the determining, by the sticker adapter, the mapping relationship between the animation time of the sticker and the video time, the mapping relationship between the animation time of the sticker and the animation state, and the corresponding animation state according to the sticker description information includes:
and determining, by the sticker adapter according to the sticker description information and by using interpolation, the mapping relation between the sticker animation time and the video time, the mapping relation between the sticker animation time and the animation state, and the corresponding animation state.
Optionally, the adding, through the sticker filter, of the sticker to the target video frames read by the sticker filter according to the animation state of the sticker, the mapping relation between animation time and video time, and the mapping relation between animation time and animation state determined by the sticker adapter includes:
acquiring the animation state of the sticker, the mapping relation between animation time and video time and the mapping relation between animation time and the animation state, which are determined by the corresponding sticker adapter, through a sticker filter;
acquiring a timestamp of a current video frame;
and determining whether the current video frame is a target video frame to which a sticker needs to be added according to the timestamp of the current video frame and the mapping relation between the animation time and the video time, if so, determining a corresponding target animation state according to the mapping relation between the animation time and the animation state, and adding the sticker corresponding to the target animation state to the current target video frame.
Optionally, the method further includes:
when at least two static sticker models exist, merging, through a static sticker merge filter, the static stickers corresponding to the at least two static sticker models;
and generating, by the sticker filter management chain, a static sticker filter and a sticker adapter corresponding to the static sticker filter according to the merged static sticker obtained by the static sticker merge filter and the static sticker description information of the merged static sticker.
Optionally, the method further includes:
and adding, deleting or modifying the current sticker model through an interface provided by a sticker filter management chain.
In a third aspect, an embodiment of the present disclosure further provides a server, including:
one or more processing devices;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processing devices, cause the one or more processing devices to implement the video-based sticker animation implementation method according to any one of the embodiments of the present disclosure.
In a fourth aspect, the embodiments of the present disclosure further provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processing device, implements a video-based sticker animation implementation method according to any embodiment of the present disclosure.
The embodiments of the present disclosure provide a video-based sticker animation engine, an implementation method, a server and a medium. During the production of a video sticker animation, a developer only needs to configure the sticker model and set the description information of the sticker. This solves the problems of a complicated implementation process and high development cost of video sticker animation in the prior art, improves the convenience of adding a sticker animation to a video, and reduces development cost.
Drawings
FIG. 1 is a schematic diagram of a video-based sticker animation engine according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of another video-based sticker animation engine provided by an embodiment of the present disclosure;
FIG. 3 is a schematic flow chart of a method for implementing a video-based sticker animation according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of the hardware structure of a server according to an embodiment of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the disclosure and are not limiting of the disclosure. It should be further noted that, for the convenience of description, only some of the structures relevant to the present disclosure are shown in the drawings, not all of them.
Fig. 1 is a schematic structural diagram of a video-based sticker animation engine according to an embodiment of the present disclosure, where the embodiment is applicable to a case of adding a sticker animation to a video, and the sticker animation engine may be implemented in a software and/or hardware manner and may be integrated on a server.
As shown in FIG. 1, a video-based sticker animation engine provided by an embodiment of the present disclosure may include a sticker model (Sticker Model) 110, a sticker adapter (Sticker Adapter) 120, and a sticker filter (Sticker Filter) 130, wherein:
a sticker model 110 for acquiring sticker description information input by a user;
the sticker adapter 120 is used for determining the mapping relation between the sticker animation time and the video time, the mapping relation between the sticker animation time and the animation state and the corresponding animation state according to the sticker description information;
and the sticker filter 130 is used for adding the sticker to the target video frame read by the sticker filter 130 according to the animation state of the sticker, the mapping relationship between the animation time and the video time and the mapping relationship between the animation time and the animation state determined by the sticker adapter 120.
A user can create a new sticker based on a pre-configured sticker model 110 through an external input device such as a keyboard, mouse or touch screen, and input the description information of the sticker into the new sticker model 110. The sticker description information describes the attribute information of the sticker in the video, including but not limited to the storage path from which the sticker is loaded, the position of the sticker on the video frame, the key time points of the sticker in the video, the type of the sticker animation, the duration of the sticker animation, and the size, color, transparency and rotation angle of the sticker, where the key time points include but are not limited to the appearance time, disappearance time, and transition times of state changes of the sticker. The sticker model 110 stores the sticker in a cache according to the information input by the user and transmits the sticker description information to the sticker adapter 120. The sticker adapter 120 extracts the time-related information from the sticker description information and, through logical calculation based on the type of the sticker animation, determines the mapping relation between the sticker animation time and the video time, the mapping relation between the sticker animation time and the animation state, and the animation state corresponding to the sticker, and then transmits the determined information to the sticker filter 130. The sticker filter 130 determines the video frames between the appearance and disappearance times of the sticker as target video frames, and adds the sticker to the target video frames in sequence. In this embodiment, the videos to which a sticker animation can be added include videos that have already been shot and videos being shot or recorded in real time; that is, a user can add a sticker animation during post-production of an existing video, or during real-time shooting or recording.
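By way of illustration only, the sticker description information above can be pictured as a plain data structure. The following Python sketch is an assumption for readability; every field name is hypothetical, since the patent only enumerates the kinds of attributes involved:

```python
from dataclasses import dataclass, field

@dataclass
class StickerDescription:
    resource_path: str                    # storage path from which the sticker is loaded
    position: tuple = (0, 0)              # position of the sticker on the video frame
    appear_time: float = 0.0              # key time point: appearance time (video seconds)
    disappear_time: float = 0.0           # key time point: disappearance time
    transition_times: list = field(default_factory=list)  # state-change transition points
    animation_type: str = "static"        # type of the sticker animation
    duration: float = 0.0                 # duration of the sticker animation
    size: float = 1.0                     # sticker size (scale factor)
    color: str = "#FFFFFF"                # sticker color
    opacity: float = 1.0                  # sticker transparency
    rotation: float = 0.0                 # sticker rotation angle in degrees
```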
For example, suppose a user wants to add a 6-second flame sticker animation to a 10-second video A, where the flame appears within the 2nd to 8th seconds of video playback, gradually grows during the first 3 seconds of the animation, and gradually shrinks during the last 3 seconds. The sticker description information entered into the sticker model 110 may include:
storage path of the flame sticker: xxx;
appearance time of the flame in video A: the 2nd second of the playing time;
disappearance time of the flame in video A: the 8th second of the playing time;
animation type of the sticker: the flame gradually grows during the first 3 seconds of the animation and gradually shrinks during the last 3 seconds.
After the sticker adapter 120 receives the sticker description information, through logical operation it can determine, on the one hand, that the flame sticker animation appears when video A is played to the 2nd second, disappears at the 8th second, and that the flame state transitions when the video is played to the 5th second; on the other hand, the sticker adapter 120 determines the overall changing state of the flame sticker animation: the mapping between the first 3 seconds of animation playback and the state in which the flame gradually grows, and the mapping between the last 3 seconds and the state in which the flame gradually shrinks. The sticker filter 130 then reads video A frame by frame and, according to the time mapping relation between the flame sticker animation and video A and the animation states determined by the sticker adapter 120, sequentially adds the flame sticker in the corresponding animation state to the target video frames, i.e., the video frames corresponding to the 2nd through 8th seconds of video A.
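Continuing the illustration with the hypothetical StickerDescription sketched earlier, the flame example and the two mappings the sticker adapter 120 derives from it could look as follows; the function bodies are assumptions consistent with the behavior described above, not the patent's actual implementation:

```python
flame = StickerDescription(
    resource_path="xxx",            # the path is left elided in the source text
    appear_time=2.0,                # appears at the 2nd second of video A
    disappear_time=8.0,             # disappears at the 8th second
    transition_times=[5.0],         # the flame state transitions at the 5th second
    animation_type="grow_then_shrink",
    duration=6.0,
)

# Mapping between video time and animation time: the animation clock starts
# when the sticker appears.
def video_to_anim(t_video: float) -> float:
    return t_video - flame.appear_time

# Mapping between animation time and animation state (here, the flame scale):
# the flame grows over the first 3 seconds and shrinks over the last 3.
def anim_state(t_anim: float) -> float:
    if 0.0 <= t_anim <= 3.0:
        return t_anim / 3.0
    if 3.0 < t_anim <= 6.0:
        return (6.0 - t_anim) / 3.0
    return 0.0
```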
In this embodiment, the sticker model 110 makes it convenient for a user to input personalized sticker description information; the sticker adapter 120 automatically performs calculations on the user input and quickly determines the mapping relations between the sticker and the video and the corresponding sticker states; and the sticker filter 130 automatically adds a sticker conforming to the sticker state to the target video frames according to the calculation results of the sticker adapter 120. Through the cooperation of the sticker model 110, the sticker adapter 120 and the sticker filter 130, the user only needs to attend to the sticker model and set the description information of the sticker during video sticker animation production, which greatly simplifies the production process and offers high convenience.
Optionally, the sticker model 110 includes a static sticker model and a dynamic sticker model, wherein the dynamic sticker model includes at least one function for describing a sticker animation; accordingly, the sticker description information includes static sticker description information and dynamic sticker description information.
The static sticker model is suitable for adding to a video a sticker whose state does not change as the video plays, for example one whose position in the video frame is fixed, i.e., a static sticker (Sticker); the dynamic sticker model is suitable for adding a sticker whose state changes as the video plays, i.e., a dynamic sticker (Animation Sticker). Before inputting the sticker description information, the user selects the corresponding sticker model according to the type of the sticker. For the dynamic sticker model, the sticker description information input by the user includes the parameters required by the function; for example, the appearance time, disappearance time, and transition times of state changes of the sticker in the video can serve as function parameters.
Further, the function for describing the sticker animation comprises at least one of the following functions relating video time to animation time: an animation function based on a second-order Bezier curve, a bounce animation function, and a custom function. Of course, the function is not limited to these types; any relation function that realizes a sticker animation effect falls within the functions described in this embodiment. By providing one or more functions in the dynamic sticker model, users' demand for producing diversified sticker animations can be met.
Exemplarily, the mapping relation between the playing time of the sticker animation and the playing time of the video is determined using a video-time-to-animation-time relation function (MediaTimingFunc), producing animation types with repeated or reciprocating changes; various ease-in/ease-out animation types of the sticker are produced using a second-order Bezier curve function (MediaTimingBezierCurveFunc); various bouncing animations of the sticker are produced using a bounce animation function (MediaTimingSpringFunc), such as a bouncing effect realized by changing attributes such as the displacement, size or transparency of the sticker in the video frame based on its initial state; the time displayed on a time sticker is modified in real time using a video-time-to-animation-time relation function together with the video timestamp, for example to add a real-time sticker (real-time animation sticker) to video frames during shooting; and custom sticker animation effects are realized using a custom function. In addition, different sticker animation effects can be combined. The foregoing is merely an example and is not to be construed as limiting the embodiments of the present disclosure.
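The following sketch illustrates what the three kinds of timing functions could look like in Python. The MediaTiming* names are taken from the text above, but the bodies are illustrative assumptions rather than the patent's actual implementations; each maps a normalized animation progress t in [0, 1] to an eased progress:

```python
import math

def media_timing_bezier_curve_func(t: float, p1: float = 0.8) -> float:
    """Second-order (quadratic) Bezier easing with endpoints 0 and 1 and a
    single control value p1 shaping the ease-in/ease-out."""
    return 2 * (1 - t) * t * p1 + t * t

def media_timing_spring_func(t: float, damping: float = 4.0,
                             frequency: float = 3.0) -> float:
    """Damped-oscillation ('bounce') easing that overshoots 1.0 and settles."""
    return 1 - math.exp(-damping * t) * math.cos(2 * math.pi * frequency * t)

def media_timing_custom_func(t: float) -> float:
    """A self-defined easing; a simple smoothstep serves as the example here."""
    return t * t * (3 - 2 * t)
```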
During the production of the video sticker animation, based on the sticker description information input by the user, the sticker adapter 120 calls the corresponding function according to the type of the sticker animation, determines the animation state corresponding to the sticker, and then transmits it to the sticker filter 130.
Optionally, the sticker adapter 120 is specifically configured to:
determine, according to the sticker description information and by using interpolation, the mapping relation between the sticker animation time and the video time, the mapping relation between the sticker animation time and the animation state, and the corresponding animation state. That is, according to the states of the sticker input by the user at the key time points, the sticker adapter 120 can determine the changing state of the sticker over continuous time by interpolation, thereby determining the complete animation state corresponding to the sticker, the time mapping relation between the sticker animation and the video, and so on. Further, the sticker adapter 120 may call any of the functions and determine the changing state of the sticker over continuous time using linear or non-linear interpolation.
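As a minimal sketch of this step, assuming the user's key-time-point states are held as keyframes and a linear strategy is chosen (a timing function could equally warp the interpolation factor for non-linear easing):

```python
def interpolate_state(keyframes, t):
    """keyframes: sorted list of (time, value) pairs given at key time points;
    returns the linearly interpolated value at animation time t."""
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            alpha = (t - t0) / (t1 - t0)   # a timing func could warp alpha here
            return v0 + alpha * (v1 - v0)
    return keyframes[-1][1]

# e.g. flame scale: grows to 1.0 by the 3rd second, back to 0.0 by the 6th
scale = interpolate_state([(0.0, 0.0), (3.0, 1.0), (6.0, 0.0)], 4.5)  # -> 0.5
```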
Optionally, the sticker filter 130 includes:
the animation state management module, used for acquiring the animation state of the sticker, the mapping relation between animation time and video time, and the mapping relation between animation time and animation state, which are determined by the corresponding sticker adapter 120;
the time stamp extraction module is used for acquiring the time stamp of the current video frame;
and the sticker adding module is used for determining whether the current video frame is a target video frame to which a sticker needs to be added according to the timestamp of the current video frame and the mapping relation between the animation time and the video time, if so, determining a corresponding target animation state according to the mapping relation between the animation time and the animation state, and adding the sticker corresponding to the target animation state to the current target video frame.
When the sticker filter 130 reads the video, it can read the video frames one by one and extract their timestamps frame by frame; when a time point related to the sticker animation coincides with the timestamp of the current video frame, the current video frame is a target video frame to which the sticker needs to be added.
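A hedged sketch of this per-frame check, reusing the hypothetical mappings from the flame example; the frame object and the rendering call are placeholders, since the patent does not specify a rendering API:

```python
def draw_sticker(frame, desc, state):
    """Placeholder for rendering the sticker onto the frame in the given state."""
    print(f"draw {desc.resource_path} at scale {state:.2f}")

def process_frame(frame, timestamp, desc, video_to_anim, anim_state):
    """Add the sticker only if the frame's timestamp falls within the
    sticker's appearance interval, i.e. the frame is a target video frame."""
    if desc.appear_time <= timestamp <= desc.disappear_time:
        t_anim = video_to_anim(timestamp)   # video time -> animation time
        state = anim_state(t_anim)          # animation time -> animation state
        draw_sticker(frame, desc, state)
    return frame
```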
According to the technical solution of this embodiment of the present disclosure, the sticker description information input by a user is first acquired through the sticker model, and the sticker is then added to the target video frames through the cooperation of the sticker adapter and the sticker filter, thereby realizing the production of a sticker animation in the video. During production, a developer only needs to configure the sticker model and set the description information of the sticker, which solves the problems of a complicated implementation process and high development cost of video sticker animation in the prior art, improves the convenience of adding a sticker animation to a video, and reduces development cost.
Fig. 2 is a schematic structural diagram of another video-based sticker animation engine provided by an embodiment of the present disclosure, which builds on, and can be combined with, the alternatives in the embodiment described above. As shown in FIG. 2, the video-based sticker animation engine provided by this embodiment of the present disclosure includes:
the sticker model 110 is used for acquiring the sticker description information input by the user.
And the sticker adapter 120 is used for determining the mapping relation between the sticker animation time and the video time, the mapping relation between the sticker animation time and the animation state and the corresponding animation state according to the sticker description information.
And the sticker filter 130 is used for adding the sticker to the target video frame read by the sticker filter 130 according to the animation state of the sticker, the mapping relationship between the animation time and the video time and the mapping relationship between the animation time and the animation state determined by the sticker adapter 120.
A sticker filter management chain (Sticker Filter List) 140, configured to generate a corresponding number of sticker filters 130, and the sticker adapter 120 corresponding to each sticker model 110 within each sticker filter 130, according to the number of sticker models 110 and the sticker description information corresponding to each sticker model 110.
Illustratively, if the user input is a dynamic sticker, the sticker model 110 is a dynamic sticker model, and the sticker filter management chain 140 generates a dynamic sticker filter and a sticker adapter 120 corresponding to the dynamic sticker model accordingly; if the user inputs a static sticker, the sticker model 110 is a static sticker model, and the sticker filter management chain 140 generates a static sticker filter and the sticker adapter 120 corresponding to the static sticker model accordingly.
The user can add multiple sticker models 110 as the design requires: a single static or dynamic sticker model, or a combination of static and dynamic sticker models. Each time a sticker model 110 is added, the sticker filter management chain 140 generates the corresponding sticker filter 130 and sticker adapter 120 according to the sticker description information of that sticker model 110; the sticker filters 130 finally form a rendering chain that renders the video and adds the stickers in sequence. Note that in a specific embodiment, the rendering operation may be performed each time an additional sticker model is added and its corresponding sticker filter 130 and sticker adapter 120 are generated; alternatively, after all sticker models 110 have been added, the corresponding sticker filters 130 form a rendering chain and the stickers are added to the video in the order in which the sticker models 110 were added.
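A minimal sketch of such a rendering chain, under assumed class shapes (the patent does not specify an API, and the adapter and filter bodies are reduced to their roles):

```python
class StickerAdapter:
    """Derives the time mappings and animation states from one description."""
    def __init__(self, description):
        self.description = description

class StickerFilter:
    """Adds one sticker to its target frames using its adapter's mappings."""
    def __init__(self, adapter):
        self.adapter = adapter

    def apply(self, frame, timestamp):
        d = self.adapter.description
        if d.appear_time <= timestamp <= d.disappear_time:
            pass  # render the sticker in the state computed for this timestamp
        return frame

class StickerFilterList:
    """One filter (with its adapter) per sticker model; each frame passes
    through the chain in the order the models were added."""
    def __init__(self):
        self.filters = []

    def add_model(self, description):
        self.filters.append(StickerFilter(StickerAdapter(description)))

    def render(self, frame, timestamp):
        for f in self.filters:
            frame = f.apply(frame, timestamp)
        return frame
```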
Continuing with FIG. 2, optionally, the sticker animation engine further comprises:
A static sticker merge filter (Static Sticker Merge Filter) 150, configured to merge the static stickers corresponding to at least two static sticker models when the user adds at least two static stickers to the video, i.e., when at least two static sticker models exist;
accordingly, the sticker filter management chain 140 is further configured to: if the number of static stickers in the sticker models 110 is at least two, i.e., at least two static sticker models are needed, generate a static sticker filter and a corresponding sticker adapter 120 based on the merged static sticker obtained by the static sticker merge filter 150 and the static sticker description information of the merged static sticker.
When a user adds multiple static stickers to a video, the static sticker merge filter 150 merges them, for example into one or more merged static stickers whose number is smaller than the number of original static stickers; the sticker filter management chain 140 then generates the corresponding static sticker filter and sticker adapter 120, and the merged static sticker is added to the target video frames. This saves the computing resources and time that adding static stickers to the video multiple times would consume, improving sticker efficiency and thus the production efficiency of the sticker animation.
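Purely as an illustration of the merge step (the patent names no imaging library, so Pillow here is an assumption), several static stickers could be composited once into a single overlay, so that each frame pays for one composite instead of one per sticker:

```python
from PIL import Image

def merge_static_stickers(placements, frame_size=(1280, 720)):
    """placements: list of (image_path, (x, y)) positions on a transparent
    canvas the size of the video frame; returns one merged RGBA overlay."""
    canvas = Image.new("RGBA", frame_size, (0, 0, 0, 0))
    for path, (x, y) in placements:
        sticker = Image.open(path).convert("RGBA")
        canvas.alpha_composite(sticker, dest=(x, y))
    return canvas
```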
Optionally, the sticker filter management chain 140 is also used to provide an interface for adding, deleting, or modifying the current sticker model 110.
According to the user's needs, the current sticker model 110 can be adjusted through the add, delete and modify interfaces provided by the sticker filter management chain 140, which increases the adjustability of the sticker model 110 and the flexibility of the sticker animation engine provided by this embodiment for different users.
According to the technical solution of this embodiment, the sticker filters and sticker adapters required during video sticker animation production are generated by the sticker filter management chain according to the number of sticker models and the sticker description information corresponding to each sticker model; meanwhile, when a user adds multiple static stickers to a video, the static stickers can be merged by the static sticker merge filter. This solves the problems of a complicated implementation process and high development cost of video sticker animation in the prior art, improves the convenience of adding a sticker animation to a video, reduces development cost, and improves the production efficiency of video sticker animation. In addition, the sticker filter management chain provides an interface for adding, deleting or modifying the current sticker model, which increases the flexibility of the sticker animation engine provided by this embodiment for different users.
Fig. 3 is a schematic flow chart of a method for implementing a video-based sticker animation according to an embodiment of the present disclosure; the embodiment is applicable to the case of adding a sticker animation to a video, and the method may be executed by the video-based sticker animation engine provided by the embodiments of the present disclosure. The sticker animation implementation method and the video-based sticker animation engine provided by the above embodiments belong to the same inventive concept; for details not described in this method embodiment, reference may be made to the description of the above embodiments.
As shown in fig. 3, a video-based sticker animation implementation method provided by an embodiment of the present disclosure may include:
s210, the paster description information input by the user is obtained through the paster model.
Optionally, the sticker model includes a static sticker model and a dynamic sticker model, where the dynamic sticker model includes at least one function for describing a sticker animation;
accordingly, the sticker description information includes static sticker description information and dynamic sticker description information.
Further, the function comprises at least one of the following functions relating video time to animation time: an animation function based on a second-order Bezier curve, a bounce animation function, and a custom function.
S220, determining, through the sticker adapter according to the sticker description information, the mapping relation between the sticker animation time and the video time, the mapping relation between the sticker animation time and the animation state, and the corresponding animation state.
Optionally, determining, by the sticker adapter, a mapping relationship between the animation time of the sticker and the video time, a mapping relationship between the animation time of the sticker and the animation state, and a corresponding animation state according to the sticker description information includes:
and determining, by the sticker adapter according to the sticker description information and by using interpolation, the mapping relation between the sticker animation time and the video time, the mapping relation between the sticker animation time and the animation state, and the corresponding animation state.
S230, adding, through the sticker filter, the sticker to the target video frames read by the sticker filter according to the animation state of the sticker, the mapping relation between the animation time and the video time, and the mapping relation between the animation time and the animation state determined by the sticker adapter.
Optionally, adding a sticker to a target video frame read by the sticker filter according to the animation state of the sticker, the mapping relationship between animation time and video time, and the mapping relationship between animation time and animation state determined by the sticker adapter, includes:
acquiring the animation state of the sticker, the mapping relation between animation time and video time and the mapping relation between animation time and the animation state, which are determined by the corresponding sticker adapter, through a sticker filter;
acquiring a timestamp of a current video frame;
and determining whether the current video frame is a target video frame to which the sticker needs to be added according to the timestamp of the current video frame and the mapping relation between the animation time and the video time, if so, determining a corresponding target animation state according to the mapping relation between the animation time and the animation state, and adding the sticker corresponding to the target animation state to the current target video frame.
On the basis of the above technical solution, optionally, the sticker animation implementation method further includes:
and generating, through a sticker filter management chain, a corresponding number of sticker filters and the sticker adapter corresponding to each sticker model within each sticker filter, according to the number of sticker models and the sticker description information corresponding to each sticker model.
Optionally, the sticker animation implementation method further includes:
when at least two static sticker models exist, merging, through a static sticker merge filter, the static stickers corresponding to the at least two static sticker models;
and generating, by the sticker filter management chain, a static sticker filter and a sticker adapter corresponding to the static sticker filter according to the merged static sticker obtained by the static sticker merge filter and the static sticker description information of the merged static sticker.
Optionally, the sticker animation implementation method further includes:
and adding, deleting or modifying the current sticker model through an interface provided by a sticker filter management chain.
According to the technical solution of this embodiment of the disclosure, the sticker description information input by a user is first acquired through the sticker model; the mapping relation between the sticker animation time and the video time, the mapping relation between the sticker animation time and the animation state, and the corresponding animation state are then determined through the sticker adapter according to the sticker description information; finally, the sticker is added, through the sticker filter, to the target video frames read by the sticker filter according to the information determined by the sticker adapter, thereby realizing the production of a sticker animation in the video. With the sticker animation implementation method provided by this embodiment, during video sticker animation production a developer only needs to configure the sticker model and set the description information of the sticker, which solves the problems of a complicated implementation process and high development cost of video sticker animation in the prior art, improves the convenience of adding a sticker animation to a video, and reduces development cost.
Fig. 4 is a schematic hardware structure diagram of a server according to an embodiment of the present disclosure. Referring now to FIG. 4, a block diagram of a server 800 suitable for use in implementing embodiments of the present disclosure is shown. The server in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The server shown in fig. 4 is only an example, and should not bring any limitation to the function and the scope of use of the embodiments of the present disclosure.
As shown in fig. 4, the server 800 may include one or more processing devices (e.g., central processing units, graphics processors, etc.) 801 and a storage device 808 for storing one or more programs. The processing device 801 may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 802 or a program loaded from the storage device 808 into a Random Access Memory (RAM) 803. The RAM 803 also stores various programs and data necessary for the operation of the server 800. The processing device 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Generally, the following devices may be connected to the I/O interface 805: input devices 806 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 807 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage 808 including, for example, magnetic tape, hard disk, etc.; and a communication device 809. The communication means 809 may allow the server 800 to perform wireless or wired communication with other devices to exchange data. While fig. 4 illustrates a server 800 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication means 809, or installed from the storage means 808, or installed from the ROM 802. The computer program, when executed by the processing apparatus 801, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the server; or may exist separately and not be assembled into the server.
The computer readable medium carries one or more programs which, when executed by the server, cause the server to: acquiring at least two internet protocol addresses; sending a node evaluation request comprising the at least two internet protocol addresses to node evaluation equipment, wherein the node evaluation equipment selects the internet protocol addresses from the at least two internet protocol addresses and returns the internet protocol addresses; receiving an internet protocol address returned by the node evaluation equipment; wherein the obtained internet protocol address indicates an edge node in the content distribution network.
Alternatively, the computer readable medium carries one or more programs which, when executed by the server, cause the server to: receiving a node evaluation request comprising at least two internet protocol addresses; selecting an internet protocol address from the at least two internet protocol addresses; returning the selected internet protocol address; wherein the received internet protocol address indicates an edge node in the content distribution network.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of a unit does not in some cases constitute a limitation of the unit itself, for example, the first retrieving unit may also be described as a "unit for retrieving at least two internet protocol addresses".
The foregoing description is merely a description of the preferred embodiments of the present disclosure and of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combination of the features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, a technical solution formed by replacing the above features with (but not limited to) features with similar functions disclosed in this disclosure.

Claims (18)

1. A video-based sticker animation engine, comprising:
the sticker model is used for acquiring sticker description information input by a user;
the sticker adapter is used for determining the mapping relation between the sticker animation time and the video time, the mapping relation between the sticker animation time and the animation state and the corresponding animation state according to the sticker description information;
and the sticker filter is used for adding the sticker to the target video frames read by the sticker filter according to the animation state of the sticker, the mapping relation between the animation time and the video time, and the mapping relation between the animation time and the animation state determined by the sticker adapter.
2. The sticker animation engine of claim 1, further comprising:
and the sticker filter management chain is used for generating a corresponding number of sticker filters and the sticker adapter corresponding to each sticker model within each sticker filter according to the number of sticker models and the sticker description information corresponding to each sticker model.
3. The sticker animation engine of claim 2, wherein the sticker model comprises a static sticker model and a dynamic sticker model, wherein the dynamic sticker model comprises at least one function for describing a sticker animation;
correspondingly, the sticker description information includes static sticker description information and dynamic sticker description information.
4. The sticker animation engine of claim 3, wherein the function comprises at least one of the following functions relating video time to animation time: an animation function based on a second-order Bezier curve, a bounce animation function, and a custom function.
5. The sticker animation engine of claim 1, wherein the sticker adapter is specifically configured to:
and determining, according to the sticker description information and by using interpolation, the mapping relation between the sticker animation time and the video time, the mapping relation between the sticker animation time and the animation state, and the corresponding animation state.
6. The sticker animation engine of claim 1, wherein the sticker filter comprises:
the animation state management module is used for acquiring the animation state of the sticker, the mapping relation between animation time and video time and the mapping relation between the animation time and the animation state, which are determined by the corresponding sticker adapter;
the time stamp extraction module is used for acquiring the time stamp of the current video frame;
and the sticker adding module is used for determining, according to the timestamp of the current video frame and the mapping relation between the animation time and the video time, whether the current video frame is a target video frame to which a sticker needs to be added; if so, determining a corresponding target animation state according to the mapping relation between the animation time and the animation state, and adding the sticker corresponding to the target animation state to the current target video frame.
7. The sticker animation engine of claim 3, further comprising:
the static sticker merge filter is used for merging the static stickers corresponding to at least two static sticker models when at least two static sticker models exist;
correspondingly, the sticker filter management chain is further configured to: if the number of static stickers in the sticker models is at least two, generate a static sticker filter and a sticker adapter corresponding to the static sticker filter according to the merged static sticker obtained by the static sticker merge filter and the static sticker description information of the merged static sticker.
8. The sticker animation engine of claim 2, wherein the sticker filter management chain is further configured to provide an interface to add, delete, or modify a current sticker model.
9. A video-based sticker animation implementation method is characterized by comprising the following steps:
acquiring sticker description information input by a user through a sticker model;
determining, through the sticker adapter according to the sticker description information, the mapping relation between the sticker animation time and the video time, the mapping relation between the sticker animation time and the animation state, and the corresponding animation state;
and adding, through the sticker filter, the sticker to the target video frames read by the sticker filter according to the animation state of the sticker, the mapping relation between the animation time and the video time, and the mapping relation between the animation time and the animation state determined by the sticker adapter.
10. The method of claim 9, further comprising:
and generating, through a sticker filter management chain, a corresponding number of sticker filters and the sticker adapter corresponding to each sticker model within each sticker filter, according to the number of sticker models and the sticker description information corresponding to each sticker model.
11. The method of claim 10, wherein the sticker model comprises a static sticker model and a dynamic sticker model, wherein the dynamic sticker model includes at least one function for describing a sticker animation;
correspondingly, the sticker description information includes static sticker description information and dynamic sticker description information.
12. The method of claim 11, wherein the function comprises at least one of the following functions relating video time to animation time: an animation function based on a second-order Bezier curve, a bounce animation function, and a custom function.
13. The method of claim 9, wherein determining, by the sticker adapter, the mapping of the sticker animation time to the video time, the mapping of the sticker animation time to the animation state, and the corresponding animation state from the sticker description information comprises:
and determining, through the sticker adapter and according to the sticker description information, the mapping relation between the sticker animation time and the video time, the mapping relation between the sticker animation time and the animation state, and the corresponding animation state by using an interpolation method.
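Claim 13 only says "an interpolation method"; a natural reading is keyframe interpolation, sketched here with linear interpolation over user-supplied (animation time, value) keyframes. The names and keyframe format are assumptions:

```python
import bisect

def make_interpolating_state(keyframes):
    """keyframes: sorted list of (animation_time, value). Returns a function
    mapping animation time to the linearly interpolated value."""
    times = [t for t, _ in keyframes]
    vals = [v for _, v in keyframes]
    def state_at(a):
        if a <= times[0]:
            return vals[0]
        if a >= times[-1]:
            return vals[-1]
        i = bisect.bisect_right(times, a)        # first keyframe after a
        t0, t1 = times[i - 1], times[i]
        w = (a - t0) / (t1 - t0)
        return vals[i - 1] * (1 - w) + vals[i] * w
    return state_at

scale_at = make_interpolating_state([(0.0, 1.0), (0.5, 1.4), (1.0, 1.0)])
print(scale_at(0.25))   # 1.2: halfway between the first two keyframes
```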
14. The method of claim 9, wherein the adding, through the sticker filter, the sticker to the target video frame read by the sticker filter according to the animation state of the sticker, the mapping relation between the animation time and the video time, and the mapping relation between the animation time and the animation state determined by the sticker adapter comprises:
acquiring, through the sticker filter, the animation state of the sticker, the mapping relation between the animation time and the video time, and the mapping relation between the animation time and the animation state, as determined by the corresponding sticker adapter;
acquiring a timestamp of a current video frame;
and determining whether the current video frame is a target video frame to which a sticker needs to be added according to the timestamp of the current video frame and the mapping relation between the animation time and the video time, if so, determining a corresponding target animation state according to the mapping relation between the animation time and the animation state, and adding the sticker corresponding to the target animation state to the current target video frame.
15. The method of claim 11, further comprising:
in the case where at least two static sticker models exist, merging, through a static sticker merging filter, the static stickers corresponding to the at least two static sticker models;
and generating, through the sticker filter management chain, a static sticker filter and a sticker adapter corresponding to the static sticker filter according to the merged static sticker obtained by the static sticker merging filter and the static sticker description information of the merged static sticker.
16. The method of claim 10, further comprising:
and adding, deleting, or modifying the current sticker model through the interface provided by the sticker filter management chain.
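A sketch of the management-chain interface from claims 10 and 16: the chain owns the sticker models and rebuilds one filter (paired with its adapter) per model whenever a model is added, deleted, or modified. All names are illustrative; build_adapter stands in for the adapter construction sketched under claims 9 and 13:

```python
class StickerFilterChain:
    def __init__(self, build_adapter):
        self.build_adapter = build_adapter    # description -> adapter (assumed factory)
        self.models = {}                      # model id -> sticker description
        self.filters = []

    def add(self, model_id, description):
        self.models[model_id] = description
        self._rebuild()

    def modify(self, model_id, description):
        self.models[model_id] = description
        self._rebuild()

    def delete(self, model_id):
        self.models.pop(model_id, None)
        self._rebuild()

    def _rebuild(self):
        # Per claim 10: a corresponding number of filters, one per model,
        # each built from that model's description.
        self.filters = [self.build_adapter(d) for d in self.models.values()]
```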
17. A server, comprising:
one or more processing devices;
a storage device for storing one or more programs which, when executed by the one or more processing devices, cause the one or more processing devices to implement the video-based sticker animation implementation method of any one of claims 9-16.
18. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processing device, carries out a video-based sticker animation implementation method as claimed in any one of claims 9 to 16.
CN201811408941.3A 2018-11-23 2018-11-23 Video-based sticker animation engine, realization method, server and medium Active CN111225232B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811408941.3A CN111225232B (en) 2018-11-23 2018-11-23 Video-based sticker animation engine, realization method, server and medium

Publications (2)

Publication Number Publication Date
CN111225232A (en) 2020-06-02
CN111225232B (en) 2021-10-29

Family

ID=70828670

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811408941.3A Active CN111225232B (en) 2018-11-23 2018-11-23 Video-based sticker animation engine, realization method, server and medium

Country Status (1)

Country Link
CN (1) CN111225232B (en)

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010055414A1 (en) * 2000-04-14 2001-12-27 Ico Thieme System and method for digitally editing a composite image, e.g. a card with the face of a user inserted therein and for surveillance purposes
US20140310739A1 (en) * 2012-03-14 2014-10-16 Flextronics Ap, Llc Simultaneous video streaming across multiple channels
US20180061186A1 (en) * 2013-03-01 2018-03-01 King Show Games, Inc. Method and apparatus for combining symbols in gaming devices
CN103853562A (en) * 2014-03-26 2014-06-11 北京奇艺世纪科技有限公司 Video frame rendering method and device
CN103928039A (en) * 2014-04-15 2014-07-16 北京奇艺世纪科技有限公司 Video compositing method and device
CN104469179A (en) * 2014-12-22 2015-03-25 杭州短趣网络传媒技术有限公司 Method for combining dynamic pictures into mobile phone video
CN105357451A (en) * 2015-12-04 2016-02-24 Tcl集团股份有限公司 Image processing method and apparatus based on filter special efficacies
CN106293393A (en) * 2016-08-01 2017-01-04 北京奇虎科技有限公司 Synthesis display packing based on electronics paster, device and terminal unit
CN106373170A (en) * 2016-08-31 2017-02-01 北京云图微动科技有限公司 Video making method and video making device
CN106730842A (en) * 2016-11-23 2017-05-31 网易(杭州)网络有限公司 A kind of game movie display methods and device
CN106846454A (en) * 2017-01-17 2017-06-13 网易(杭州)网络有限公司 Lens Flare method for drafting and device
CN107679497A (en) * 2017-10-11 2018-02-09 齐鲁工业大学 Video face textures effect processing method and generation system
CN107909629A (en) * 2017-11-06 2018-04-13 广东欧珀移动通信有限公司 Recommendation method, apparatus, storage medium and the terminal device of paster
CN108322802A (en) * 2017-12-29 2018-07-24 广州市百果园信息技术有限公司 Stick picture disposing method, computer readable storage medium and the terminal of video image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GEGE: "How to add a sticker image to a mobile phone video", HTTP://XUEXI.LEAWO.CN/SHIPINBIANJI/3311.HTML *
NIKI: "How to add emoticons to a video, and how to add sticker images to a video", HTTP://XUEXI.LEAWO.CN/SHIPINBIANJI/3465.HTML *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111899322A (en) * 2020-06-29 2020-11-06 腾讯科技(深圳)有限公司 Video processing method, animation rendering SDK, device and computer storage medium
CN111899322B (en) * 2020-06-29 2023-12-12 腾讯科技(深圳)有限公司 Video processing method, animation rendering SDK, equipment and computer storage medium
CN112770185A (en) * 2020-12-25 2021-05-07 北京达佳互联信息技术有限公司 Method and device for processing Sprite map, electronic equipment and storage medium
CN112770185B (en) * 2020-12-25 2023-01-20 北京达佳互联信息技术有限公司 Method and device for processing Sprite map, electronic equipment and storage medium
CN112929683A (en) * 2021-01-21 2021-06-08 广州虎牙科技有限公司 Video processing method and device, electronic equipment and storage medium
CN113163135A (en) * 2021-04-25 2021-07-23 北京字跳网络技术有限公司 Animation adding method, device, equipment and medium for video
CN113163135B (en) * 2021-04-25 2022-12-16 北京字跳网络技术有限公司 Animation adding method, device, equipment and medium for video
CN113645476A (en) * 2021-08-06 2021-11-12 广州博冠信息科技有限公司 Picture processing method and device, electronic equipment and storage medium
CN113645476B (en) * 2021-08-06 2023-10-03 广州博冠信息科技有限公司 Picture processing method and device, electronic equipment and storage medium
WO2024067319A1 (en) * 2022-09-27 2024-04-04 Lemon Inc. Method and system for creating stickers from user-generated content

Also Published As

Publication number Publication date
CN111225232B (en) 2021-10-29

Similar Documents

Publication Publication Date Title
CN111225232B (en) Video-based sticker animation engine, realization method, server and medium
CN112911379B (en) Video generation method, device, electronic equipment and storage medium
US20240184438A1 (en) Interactive content generation method and apparatus, and storage medium and electronic device
WO2020220773A1 (en) Method and apparatus for displaying picture preview information, electronic device and computer-readable storage medium
US20220310125A1 (en) Method and apparatus for video production, device and storage medium
US20240127856A1 (en) Audio processing method and apparatus, and electronic device and storage medium
CN111790148A (en) Information interaction method and device in game scene and computer readable medium
CN110022493B (en) Playing progress display method and device, electronic equipment and storage medium
WO2024131621A1 (en) Special effect generation method and apparatus, electronic device, and storage medium
CN110134905B (en) Page update display method, device, equipment and storage medium
CN112017261B (en) Label paper generation method, apparatus, electronic device and computer readable storage medium
CN112565870B (en) Content caching and reading method, client and storage medium
WO2023024803A1 (en) Dynamic cover generating method and apparatus, electronic device, medium, and program product
JP7473674B2 (en) Special effects processing method and device
CN111385638B (en) Video processing method and device
CN113747226A (en) Video display method and device, electronic equipment and program product
CN114827695B (en) Video recording method, device, electronic device and storage medium
CN111309685A (en) Method and server for online collaborative processing of model files
CN115442639B (en) Method, device, equipment and medium for generating special effect configuration file
US12033671B2 (en) Video generation method and apparatus, electronic device, and storage medium
CN113920220A (en) Image editing backspacing method and device
WO2024078409A1 (en) Image preview method and apparatus, and electronic device and storage medium
CN107800618B (en) Picture recommendation method and device, terminal and computer-readable storage medium
CN117692691A (en) Live broadcast room interaction method and device, electronic equipment and storage medium
WO2024123244A1 (en) Text video generation method and apparatus, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant