CN111464430B - Dynamic expression display method, dynamic expression creation method and device


Info

Publication number: CN111464430B (application CN202010273094.5A)
Authority: CN (China)
Prior art keywords: animation, dynamic, area, dynamic expression, expression
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN111464430A
Inventor: 汪倩怡
Current assignee: Tencent Technology (Shenzhen) Co., Ltd. (the listed assignee may be inaccurate)
Application filed by Tencent Technology (Shenzhen) Co., Ltd.
Priority to CN202010273094.5A
Publication of CN111464430A; application granted; publication of CN111464430B

Classifications

    • H04L 51/00, 51/04 (Transmission of digital information) - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail; real-time or near real-time messaging, e.g. instant messaging [IM]
    • G06T 13/00 (Image data processing or generation) - Animation
    • H04N 21/431, 21/4312 (Selective content distribution) - Generation of visual interfaces for content selection or interaction; content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/433, 21/4334 (Selective content distribution) - Content storage operation; recording operations
    • H04N 21/478, 21/4788 (Selective content distribution) - Supplemental services communicating with other users, e.g. chatting
    • H04N 21/81, 21/816 (Selective content distribution) - Monomedia components involving special video data, e.g. 3D video


Abstract

The application discloses a dynamic expression display method, a dynamic expression creation method, and corresponding apparatuses, belonging to the field of computer technology. The display method comprises: in response to a selection operation, triggered on a session interface, that selects a dynamic expression, displaying the dynamic main body diagram of the selected dynamic expression in the session interface as a session message; and playing the animation elements associated with the dynamic main body diagram in the session interface, where the playing area of the animation elements includes at least a first area outside the dynamic main body diagram. This breaks the display-size limitation of traditional dynamic expressions, enlarges their display range, and provides a new playing mechanism in which a dynamic expression can express richer content; it also improves the flexibility and fun of dynamic expression display and enhances the display effect.

Description

Dynamic expression display method, dynamic expression creation method and device
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a dynamic expression display method, a dynamic expression creation method, and an apparatus thereof.
Background
With the rapid growth of the internet, various online social applications have emerged for conducting social activities, such as instant messaging with an instant messaging application. When using a social application, users often send dynamic expressions as session messages in order to express themselves more vividly, and conversing through dynamic expressions greatly adds to the fun of communication between users.
A dynamic expression is generally a picture in GIF (Graphics Interchange Format) format. In the related art, a dynamic expression has a fixed display size: when displayed as a session message, it is played within a region of that fixed size, that is, the size of the playing region is limited.
Disclosure of Invention
The embodiments of the present application provide a dynamic expression display method, a dynamic expression creation method, and corresponding apparatuses, which expand the display range of dynamic expressions and enhance their display effect.
In one aspect, a dynamic expression display method is provided, the method including:
in response to a selection operation, triggered on a session interface, that selects a dynamic expression, displaying the dynamic main body diagram of the selected dynamic expression in the session interface as a session message; and
playing the animation element associated with the dynamic main body diagram in the session interface, where the playing area of the animation element includes at least a first area outside the dynamic main body diagram.
In one possible implementation, the playing area of the animation element further includes a second area, where the second area is part or all of the display area of the dynamic main body diagram.
In one possible implementation, playing the animation element associated with the dynamic main body diagram in the session interface includes:
the animation element gradually spans from one of the first area and the second area to the other as it plays; or
the animation element is played in the first area; or
the animation element is played in the first area and the second area.
In one possible implementation, presenting the dynamic main body diagram of the selected dynamic expression as a session message in the session interface, and playing the animation element associated with the dynamic main body diagram in the session interface, includes:
determining a reference position for animation drawing according to the play mode associated with the dynamic expression, and drawing and displaying, frame by frame starting from the reference position, each animation frame corresponding to the dynamic expression.
In one possible implementation, determining the reference position for animation drawing according to the play mode associated with the dynamic expression, and drawing and displaying frame by frame each animation frame corresponding to the dynamic expression starting from the reference position, includes:
determining the reference position according to the animation type corresponding to the play mode; and
drawing and displaying frame by frame each animation frame corresponding to the dynamic expression according to the animation attribute information corresponding to the play mode, where the animation attribute information includes at least one of a motion track, a size, a shape, a color, and an animated special effect of the animation element.
In one possible implementation, determining the reference position according to the animation type corresponding to the play mode includes:
if the animation type of the dynamic expression is a trigger animation, determining the trigger-source position of the animation element as the reference position; or
if the animation type of the dynamic expression is an atmosphere animation, determining the center position of the session interface, or the center position of the dialog box area in the session interface, as the reference position; or
if the animation type of the dynamic expression is a position animation, determining the center position of the playing area of the dynamic main body diagram as the reference position.
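As an illustration of the branching above, the following is a minimal Python sketch of reference-position selection; the names (AnimationType, reference_position, and the parameters) are assumptions made for illustration, not identifiers from the patent.

    from enum import Enum, auto

    class AnimationType(Enum):
        TRIGGER = auto()      # element set off by an action of the main body part
        ATMOSPHERE = auto()   # element adds an overall mood to the interface
        POSITION = auto()     # element keeps a spatial relation to the main body diagram

    def reference_position(anim_type, trigger_source, interface_center, body_center):
        """Choose the origin for frame-by-frame drawing according to the play mode."""
        if anim_type is AnimationType.TRIGGER:
            return trigger_source      # e.g. the hand making the gesture
        if anim_type is AnimationType.ATMOSPHERE:
            return interface_center    # or the center of the dialog box area
        return body_center             # position animation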
In one possible implementation, before presenting the dynamic main body diagram of the selected dynamic expression as a session message in the session interface, the method further includes:
displaying the dynamic expression in the input box area of the session interface, and triggering the sending of the dynamic expression when a confirmation operation confirming the sending is detected.
In one possible implementation manner, the transparency of the background area of the dynamic main body diagram is a transparent value, or the color of the background area of the dynamic main body diagram is the background color of the session interface.
In one possible implementation, the dynamic expression is associated with a first identifier, where the first identifier indicates that the associated dynamic expression is a cross-region dynamic expression.
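Purely as an illustration, the first identifier could be carried as a flag on the expression's metadata; the record below is a hypothetical sketch, not the patent's data format.

    from dataclasses import dataclass

    @dataclass
    class ExpressionMeta:
        expression_id: str
        cross_region: bool = False  # "first identifier": elements may leave the body area
        badge_shown: bool = True    # False models the implicit (hidden) identifier

    def show_badge(meta):
        """Whether to draw an explicit marker (e.g. a corner badge) for the expression."""
        return meta.cross_region and meta.badge_shown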
In one aspect, a dynamic expression creating method is provided, the method including:
in response to an expression creation operation, displaying a video recording interface; and
in response to a video recording operation triggered on the video recording interface, obtaining recorded video data, and storing the video data in association with an animation element as a dynamic expression, where the video data serves as the dynamic main body diagram of the dynamic expression, and the playing area of the animation element includes at least a first area outside the dynamic main body diagram.
In one possible implementation, the video recording interface includes a video recording area, and storing the video data in association with the animation element as a dynamic expression includes:
determining a reference position for animation drawing according to the play mode associated with the animation element, and compositing the sequence of video frames of the video data with the animation element, taking the reference position as the coordinate origin, to obtain the sequence of animation frames corresponding to the dynamic expression, where, during compositing, the display area of the animation element includes at least an area outside the video recording area.
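One way this compositing step could look, assuming the Pillow imaging library, is sketched below: a recorded frame and one animation element are pasted onto a canvas larger than the recording area, so the element can land outside it. The function name, the offset convention, and the use of Pillow are all assumptions made for illustration.

    from PIL import Image

    def compose_frame(video_frame, element, body_dest, ref, offset, canvas_size):
        """Composite one video frame and one animation element onto a transparent
        canvas larger than the recording area; the element offset is expressed
        relative to the reference position (the coordinate origin)."""
        canvas = Image.new("RGBA", canvas_size, (0, 0, 0, 0))
        canvas.alpha_composite(video_frame.convert("RGBA"), dest=body_dest)
        x = max(0, ref[0] + offset[0])  # Pillow requires a non-negative dest
        y = max(0, ref[1] + offset[1])
        canvas.alpha_composite(element.convert("RGBA"), dest=(x, y))
        return canvas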
In one possible implementation, the display area of the animation element further includes a part or all of the video recording area.
In one possible implementation, determining the reference position for animation drawing according to the play mode associated with the animation element includes:
if the animation element is a trigger animation, determining the position of the trigger source that sets off the animation element in the video data as the reference position; or
if the animation type of the animation element is an atmosphere animation, determining the center position of the video recording interface as the reference position; or
if the animation type is a position animation, determining the center position of the video recording area as the reference position.
In one possible implementation, the method further includes:
in response to a follow-shoot operation (an operation to imitate an expression) directed at a target dynamic expression displayed in the dialog box area of a session interface, extracting the animation element from the target dynamic expression, where the target dynamic expression is associated with a first identifier indicating that the associated dynamic expression is a cross-region dynamic expression; or
in response to an operation of selecting an animation material template or an animated icon, extracting the animation element from the selected animation material template, or determining the selected animated icon as the animation element, where both the selected animation material template and the animated icon are associated with a second identifier indicating that the corresponding animation element can be displayed outside the video recording area when the dynamic expression is composited.
In one possible implementation, compositing the sequence of video frames of the video data with the animation element to obtain the sequence of animation frames corresponding to the dynamic expression includes:
determining the background area of each video frame in the video data;
adjusting the transparency of the background area of each video frame to a transparent value, or adjusting the color of the background area of each video frame to a predetermined color; and
compositing the adjusted video frames with the animation element to obtain the sequence of animation frames corresponding to the dynamic expression.
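As one naive way to realize the adjustment above, pixels close to a sampled background color can be made fully transparent. The Pillow sketch below assumes a near-uniform background and an arbitrary tolerance; a real implementation would more likely use segmentation or matting.

    from PIL import Image

    def strip_background(frame, bg_color, tolerance=40):
        """Set alpha to 0 for pixels whose color is close to bg_color."""
        rgba = frame.convert("RGBA")
        pixels = rgba.load()
        width, height = rgba.size
        for y in range(height):
            for x in range(width):
                r, g, b, a = pixels[x, y]
                distance = abs(r - bg_color[0]) + abs(g - bg_color[1]) + abs(b - bg_color[2])
                if distance <= tolerance * 3:
                    pixels[x, y] = (r, g, b, 0)  # fully transparent
        return rgba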
In one aspect, a dynamic expression display device is provided, the device comprising:
a response module, configured to respond to a selection operation, triggered on a session interface, that selects a dynamic expression; and
a display module, configured to display the dynamic main body diagram of the selected dynamic expression in the session interface as a session message, and to play the animation element associated with the dynamic main body diagram in the session interface, where the playing area of the animation element includes at least a first area outside the dynamic main body diagram.
In one possible implementation, the playing area of the animation element further includes a second area, where the second area is part or all of the display area of the dynamic main body diagram.
In one possible implementation, the display module is configured to:
the animation element gradually spans from one of the first area and the second area to the other as it plays; or
the animation element is played in the first area; or
the animation element is played in the first area and the second area.
In one possible implementation, the display module is configured to:
determine a reference position for animation drawing according to the play mode associated with the dynamic expression, and draw and display, frame by frame starting from the reference position, each animation frame corresponding to the dynamic expression.
In one possible implementation, the display module is configured to:
determining the reference position according to the animation type corresponding to the playing mode; and
and drawing and displaying each animation frame corresponding to the dynamic expression frame by frame according to the animation attribute information corresponding to the playing mode, wherein the animation attribute information comprises at least one of a motion track, a size, a shape, a color and an animation special effect of the animation element.
In one possible implementation, the display module is configured to:
if the animation type of the dynamic expression is a trigger animation, determine the trigger-source position of the animation element as the reference position; or
if the animation type of the dynamic expression is an atmosphere animation, determine the center position of the session interface, or the center position of the dialog box area in the session interface, as the reference position; or
if the animation type of the dynamic expression is a position animation, determine the center position of the playing area of the dynamic main body diagram as the reference position.
In one possible implementation, the apparatus further includes a confirmation module configured to:
before the display module displays the dynamic main body diagram of the selected dynamic expression in the session interface as a session message, display the dynamic expression in the input box area of the session interface, and trigger the sending of the dynamic expression when a confirmation operation confirming the sending is detected.
In one possible implementation manner, the transparency of the background area of the dynamic main body diagram is a transparent value, or the color of the background area of the dynamic main body diagram is the background color of the session interface.
In one possible implementation, the dynamic expression is associated with a first identifier, where the first identifier indicates that the associated dynamic expression is a cross-region dynamic expression.
In one aspect, there is provided a dynamic expression creating apparatus, the apparatus including:
a display module, configured to display a video recording interface in response to an expression creation operation; and
a creation module, configured to obtain recorded video data in response to a video recording operation triggered on the video recording interface, and to store the video data in association with an animation element as a dynamic expression, where the video data serves as the dynamic main body diagram of the dynamic expression, and the playing area of the animation element includes at least a first area outside the dynamic main body diagram.
In one possible implementation, the video recording interface includes a video recording area, and the creation module is configured to:
determine a reference position for animation drawing according to the play mode associated with the animation element, and composite the sequence of video frames of the video data with the animation element, taking the reference position as the coordinate origin, to obtain the sequence of animation frames corresponding to the dynamic expression, where, during compositing, the display area of the animation element includes at least an area outside the video recording area.
In one possible implementation, the display area of the animation element further includes a part or all of the video recording area.
In one possible implementation, the creation module is configured to:
if the animation element is a trigger animation, determine the position of the trigger source that sets off the animation element in the video data as the reference position; or
if the animation type of the animation element is an atmosphere animation, determine the center position of the video recording interface as the reference position; or
if the animation type is a position animation, determine the center position of the video recording area as the reference position.
In one possible implementation, the apparatus further includes a determining module configured to:
in response to a follow-shoot operation (an operation to imitate an expression) directed at a target dynamic expression displayed in the dialog box area of a session interface, extract the animation element from the target dynamic expression, where the target dynamic expression is displayed in association with a first identifier indicating that the animation elements of the associated dynamic expression can be displayed outside the display area of its dynamic main body diagram; or
in response to an operation of selecting an animation material template or an animated icon, extract the animation element from the selected animation material template, or determine the selected animated icon as the animation element, where both the selected animation material template and the animated icon are displayed in association with a second identifier indicating that the corresponding animation element can be displayed outside the video recording area when the dynamic expression is composited.
In one possible implementation, the creation module is configured to:
determine the background area of each video frame in the video data;
adjust the transparency of the background area of each video frame to a transparent value, or adjust the color of the background area of each video frame to a predetermined color; and
composite the adjusted video frames with the animation element to obtain the sequence of animation frames corresponding to the dynamic expression.
In one aspect, a computing device is provided that includes a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the steps included in the dynamic expression presentation method described in the various possible implementations described above when the computer program is executed.
In one aspect, a computing device is provided that includes a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the steps included in the dynamic expression creation method described in the various possible implementations described above when the computer program is executed.
In one aspect, a storage medium is provided, where the storage medium stores computer executable instructions for causing a computer to perform steps included in a dynamic expression presentation method described in the various possible implementations described above.
In one aspect, a storage medium is provided, where the storage medium stores computer executable instructions for causing a computer to perform the steps included in the dynamic expression creating method described in the above various possible implementations.
In one aspect, there is provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the steps comprised in the dynamic expression presentation method described in the various possible implementations described above.
In one aspect, there is provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the steps comprised in the dynamic expression creation method described in the various possible implementations described above.
In the embodiments of the present application, after a selection operation that selects a dynamic expression is detected on a session interface, the selected dynamic expression can be displayed in the session interface as a session message in response to that operation. Specifically, the playing area of the animation element of the dynamic expression includes at least an area (referred to here as the first area) outside the dynamic main body diagram of the dynamic expression; that is, while the dynamic expression is displayed, its animation element can be played at least outside the playing area of the dynamic main body diagram. The playing area of the animation element is therefore no longer confined to the fixed-size display area conventionally allotted to a dynamic expression, but can make use of the display space beyond that fixed-size area. This breaks the display-size limitation of traditional dynamic expressions, enlarges the display range of the dynamic expression, and provides a new playing mechanism in which a dynamic expression can express richer content; it also improves the flexibility and fun of dynamic expression display and enhances the display effect.
Drawings
To explain the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of a dynamic expression;
FIG. 2a is a schematic diagram of a session interface in an embodiment of the present application;
FIG. 2b is another schematic illustration of a session interface in an embodiment of the present application;
FIG. 3 is a schematic diagram of an application scenario applicable to the embodiments of the present application;
FIG. 4 is a flowchart of a dynamic expression display method in an embodiment of the present application;
FIG. 5a is a schematic diagram of a dynamic expression selection performed by triggering in an embodiment of the present application;
FIG. 5b is another schematic diagram of a dynamic expression selection by triggering in an embodiment of the present application;
FIG. 6 is a schematic diagram of a comparison of a traditional dynamic expression and a trans-regional dynamic expression in an embodiment of the present application;
FIG. 7 is a schematic diagram of coordinates of a drawing trigger animation in an embodiment of the present application;
FIG. 8 is a schematic diagram of coordinates of a drawing atmosphere animation in an embodiment of the present application;
FIG. 9 is a schematic diagram of coordinates of a drawing position animation in an embodiment of the present application;
FIG. 10 is a flowchart of a dynamic expression creation method in an embodiment of the present application;
FIG. 11a is a schematic diagram of performing an expression creation operation in an embodiment of the present application;
FIG. 11b is another schematic diagram of performing an expression creation operation in an embodiment of the present application;
FIG. 12 is a schematic diagram of a reverse video recording operation according to an embodiment of the present application;
FIG. 13a is a schematic diagram of an animation drawn during a dynamic expression creation process in an embodiment of the present application;
FIG. 13b is another schematic drawing of an animation during dynamic expression creation in an embodiment of the present application;
FIG. 13c is another schematic drawing of an animation during dynamic expression creation in an embodiment of the present application;
FIG. 14a is a block diagram of a dynamic expression display device in an embodiment of the present application;
FIG. 14b is another block diagram of a dynamic expression display device in an embodiment of the present application;
FIG. 15a is a block diagram of a dynamic expression creating apparatus in an embodiment of the present application;
FIG. 15b is another block diagram of the dynamic expression creating apparatus in an embodiment of the present application;
FIG. 16 is a schematic structural diagram of a computing device in an embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the present application clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art without inventive effort fall within the protection scope of the present application. Provided there is no conflict, the embodiments of this application and the features in the embodiments may be combined with one another arbitrarily. Also, although a logical order is depicted in the flowcharts, in some cases the steps shown or described may be performed in an order different from the one presented here.
The terms "first" and "second" in the description, the claims, and the drawings of the present application are used to distinguish different objects, not to describe a particular order. Furthermore, the term "include" and any variants thereof are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device that comprises a list of steps or units is not limited to the listed steps or units, but may include steps or units not listed or inherent to it. The term "plurality" in the present application means at least two, for example two, three, or more, and the embodiments of the present application are not limited in this respect.
In addition, the term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B both exist, or B exists alone. Unless otherwise specified, the character "/" herein generally indicates an "or" relationship between the associated objects.
Some technical terms referred to herein are described below to facilitate understanding by those skilled in the art.
1. Instant messaging application: a class of applications providing instant messaging, i.e., a service for sending and receiving internet messages and the like in real time, allowing two or more people to exchange text messages, files, voice, and video over a network in real time. Instant messaging has developed into a comprehensive information platform integrating communication, information, entertainment, search, e-commerce, office collaboration, enterprise customer service, and the like.
2. Dynamic expression: an expression with an animation effect. An expression is an image that conveys meaning; it can reflect the inner activity, emotion, or specific semantics of the user who sends it, and expressions include static expressions and dynamic expressions. Typically, a static expression is a single static picture, which may be in PNG (Portable Network Graphics) file format, while a dynamic expression is an animation synthesized from multiple frames, which may be in GIF file format.
A dynamic expression can comprise a dynamic main body diagram and animation elements. The dynamic main body diagram is the main body part of the dynamic expression, for example the avatar of the user who records it or a cartoon figure. An animation element is an element that presents an animated special effect within the dynamic expression; it can carry the special effect of the whole expression, and as an auxiliary element it helps present the dynamic expression better. An animation element is a dynamic image with an animated effect, such as hearts, balloons, water drops, five-pointed stars, or text of various sizes and colors. For example, in the dynamic expression shown in fig. 1, a boy makes a "finger heart" gesture, and as he completes the gesture a heart-shaped image pops up, gradually changing in size and position. The boy image can thus be understood as the dynamic main body diagram of the dynamic expression, corresponding to its dynamic main body part, while the heart-shaped image is an animation element of the dynamic expression.
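For illustration, a dynamic expression of this shape could be modeled roughly as below; the field names are hypothetical and not taken from the patent.

    from dataclasses import dataclass, field

    @dataclass
    class AnimationElement:
        kind: str     # e.g. "heart", "balloon", "water_drop", "star", "text"
        frames: list  # per-frame images or draw commands for the element
        track: list   # per-frame (dx, dy) offsets from the reference position

    @dataclass
    class DynamicExpression:
        body_frames: list                             # the dynamic main body diagram
        elements: list = field(default_factory=list)  # associated animation elements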
It should be noted that, since a dynamic expression is itself an animation comprising multiple frames, the main body part corresponding to the dynamic main body diagram generally changes over time, for example in motion, gesture, or facial expression.
3. Session interface: also called, for example, a chat interface, an interface for presenting session messages in an instant messaging application; it includes private-chat session interfaces between two users and group-chat session interfaces among more than two users. A session interface generally includes a dialog box area and an input box area: the dialog box area displays the session messages the user has successfully sent and received, which may include text messages, voice messages, and expression messages, and the input box area receives the session messages the user inputs. One possible session interface is shown in fig. 2a, and another in fig. 2b.
As noted above, the display of a dynamic expression in the related art is constrained to a specified size. Take the first dynamic expression from the top in fig. 2a: the size of its playing area in the session interface is always fixed, and the dynamic main body diagram of the child raising a right hand and the animation element of the two-character cheer "加油" ("go!", literally "add oil") can only be displayed within the prescribed intrinsic area. Such a display manner is rigid and monotonous, offers poor flexibility, and may fail to convey the animated effect the dynamic expression was meant to express.
Through analysis, the inventor of the present application found that the main reason the playing manner of dynamic expressions is monotonous in the related art is the size limit on their playing area, which freezes the playing manner. The inventor therefore considered breaking this size limit so that a dynamic expression can be displayed outside the fixed-size playing area of the related art. For example, the dynamic main body diagram can still be displayed in the existing way, inside the prescribed fixed-size area, while the animation elements of the dynamic expression are displayed outside that area. The animation elements can then play beyond the playing area of the dynamic main body diagram, crossing the fixed-size area and merging with the display space outside the intrinsic display area. This provides a brand-new playing mechanism for dynamic expressions, expands their display range, and improves the flexibility and fun of expression-based display.
In order to better understand the technical solution provided by the embodiments of the present application, a few simple descriptions are provided below for application scenarios applicable to the technical solution provided by the embodiments of the present application, and it should be noted that the application scenarios described below are only used to illustrate the embodiments of the present application and are not limiting. In specific implementation, the technical scheme provided by the embodiment of the application can be flexibly applied according to actual needs.
Referring to fig. 3, fig. 3 shows an application scenario applicable to the embodiments of the present application. The scenario includes a terminal device 301, a terminal device 302, and a server 303; both terminal devices can communicate with the server 303. Clients of an instant messaging application are installed on terminal device 301 and terminal device 302, and the server 303 is the background service device providing services for that application. User 1, on terminal device 301, can exchange instant messages with user 2, on terminal device 302, including text, voice, and video communication, and the two users can also send each other expression messages, such as dynamic and static expressions. Specifically, user 1 and user 2 can each send dynamic expressions to the other using the dynamic expression display method provided by this application, and can also create dynamic expressions using the dynamic expression creation method provided by this application, improving the quality of interaction. For example, user 1 creates a dynamic expression on terminal device 301 and chooses to send it to user 2; upon detecting the send operation triggered by user 1, terminal device 301 uploads the relevant information of the dynamic expression to the server 303, which forwards it to terminal device 302; terminal device 302 then displays the dynamic expression according to the received information, so that user 2 can view it and interact further with user 1.
The server 303 may be an independent physical server, a server cluster or a distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN (Content Delivery Network), big data, and artificial intelligence platforms. Terminal device 301 and terminal device 302 may be, but are not limited to, smartphones, tablet computers, notebook computers, desktop computers, smart speakers, smart watches, and the like. A terminal device and the server may be connected directly or indirectly through wired or wireless communication, which is not limited here.
To further explain the technical solutions provided by the embodiments of the present application, details are described below with reference to the accompanying drawings and specific embodiments. Although the embodiments provide the operational steps of the methods shown below, the methods may include more or fewer steps on a routine or non-inventive basis. For steps with no logically necessary causal relationship, the execution order is not limited to that provided by the embodiments; the methods may be executed sequentially or in parallel according to the order shown in the embodiments or the drawings.
The embodiments of the present application provide a dynamic expression display method, which can be executed by a device capable of playing dynamic expressions, for example the terminal device 301 or the terminal device 302 in fig. 3, or the server 303 in fig. 3. The method is shown in fig. 4, and the flowchart in fig. 4 is described below.
Step 401: detect a selection operation, triggered on the session interface, that selects a dynamic expression.
During message interaction through the instant messaging application, when a user wishes to send a dynamic expression to other users, the user can perform a trigger operation on the session interface to choose a dynamic expression; specifically, a dynamic expression is chosen by triggering a selection operation, and the device detects this selection operation.
In one possible implementation, the user performs the selection in an expression input panel, a container storing thumbnails of the stored expressions (both dynamic and static). The user can add an expression displayed in the dialog box area to the expression input panel, create a new expression in it, or delete an expression stored in it. Referring to fig. 5a, the user selects a dynamic expression by clicking its thumbnail in the expression input panel; the click triggers the dynamic expression corresponding to the selected thumbnail to be displayed in the session interface, specifically in the dialog box area. The user can thus pick a suitable dynamic expression from the panel's expression container as needed and send it to other users, with a large selection space.
In another possible implementation, the user directly selects a dynamic expression already displayed in the dialog box area of the session interface. For example, referring to fig. 5b, the user performs a designated operation, such as a knuckle tap or a double click, on a dynamic expression already shown in the dialog box area, thereby selecting it and triggering its sending. That is, the user can directly reuse a dynamic expression currently on display, which speeds up selection and makes it efficient; and because the dynamic effect of that expression is already being presented in the dialog box area, the user can quickly pick an expression of interest, making the selection targeted and effective.
Step 402: in response to the selection operation, display the dynamic main body diagram of the selected dynamic expression in the session interface as a session message, and play the animation element associated with the dynamic main body diagram in the session interface, where the playing area of the animation element includes at least a first area outside the dynamic main body diagram.
After detecting the selection operation performed by the user, the device can trigger display of the selected dynamic expression in the dialog box area of the session interface in the form of a session message, that is, send it to other users. Specifically, the device sends the information corresponding to the dynamic expression selected by the user to the background server, which forwards it to the other devices, thereby transmitting the expression message.
Alternatively, after the selection operation is detected, the device need not immediately display the dynamic expression as a session message. Instead, it can first display the dynamic expression in the input box area of the session interface; if the user really wants to send the selected expression, the user performs a confirmation operation, and only when this confirmation is detected does the device trigger the sending. In other words, after selecting a dynamic expression, the user can let it stay briefly in the input box area and fully consider whether it really needs to be sent. In the related art, a dynamic expression is sent directly upon selection, which easily causes misoperation and affects the accuracy and effectiveness of message sending; the staging mechanism of this embodiment, which parks the expression in the input box area before sending, improves the accuracy of expression sending and can also reduce the amount of data transmitted.
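The stage-then-confirm flow just described can be summarized by a small sketch; Composer, select, and confirm are hypothetical names, and send_fn stands in for the client's transmission code.

    class Composer:
        """Park a selected expression in the input box until the user confirms."""

        def __init__(self, send_fn):
            self.staged = None
            self.send_fn = send_fn

        def select(self, expression):
            self.staged = expression       # shown in the input box, not yet sent

        def confirm(self):
            if self.staged is not None:
                self.send_fn(self.staged)  # transmit only after confirmation
                self.staged = None

Sending only inside confirm() is what prevents the accidental sends described above.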
In the embodiments of the present application, when a dynamic expression is played, its dynamic main body diagram occupies a certain display area, and the playing area of the animation elements it includes covers at least an area outside that display area; the area outside the dynamic main body diagram occupied by the animation elements is referred to as the first area. That is, the animation elements can be displayed at least beyond the display area of the dynamic main body diagram.
Referring to fig. 6, the left diagram shows the playing effect of a traditional dynamic expression: the dynamic main body diagram of the "finger heart" dynamic expression (the boy making the gesture) and all its animation elements (the solid heart he triggers, the mesh-style hearts, and the irregular five-pointed stars) are displayed inside a fixed-size display area, the intrinsic display area, marked in the left diagram with a dashed rectangle. The display of the dynamic expression as a whole is confined, and it is difficult to present the animated effects of the elements fully. The right diagram in fig. 6 shows the cross-region playing effect of this embodiment: compared with the traditional playing manner, animation elements such as the solid heart, the mesh-style hearts, and the irregular stars are partly displayed inside the intrinsic display area and partly outside it. As the animation effect continues to play, the elements can gradually cross from the intrinsic display area to the outside, spreading from inside out until they reach most or all of the dialog box area and approach a full-screen playing effect; as shown in the right diagram, the mesh-style heart elements spread onto the bubble of the previous message. This cross-region dynamic display provides a brand-new mechanism for presenting dynamic expressions, enriching their display effect and adding flexibility and entertainment.
In the embodiments of the present application, a cross-region dynamic expression is associated with a first identifier indicating that the associated dynamic expression is a cross-region dynamic expression, i.e., that its animation elements can be displayed outside the display area of its dynamic main body diagram. The first identifier thus represents the special "cross-region playing" attribute of the dynamic expression, so that when a dynamic expression associated with the first identifier is displayed, the cross-region playing effect can be achieved.
In a specific implementation, the first identifier can be displayed in association with the dynamic expression, whether on its thumbnail in the expression input panel or on the dynamic expression already displayed in the dialog box area. If these dynamic expressions can display animation elements across regions, i.e., they are cross-region dynamic expressions, a specific first identifier can be marked on the corresponding thumbnails or on the expressions themselves. For example, in fig. 5a the first identifier is a black triangle: the upper-left corners of the second and third dynamic expressions in the expression input panel are marked with the black triangle, and so are the upper-left corners of these two dynamic expressions in the dialog box area. Explicitly marking the special dynamic expressions whose animation elements can play across regions distinguishes them from conventional dynamic expressions, strengthens the prompt, and makes selection convenient for users. The triangle is only one possible form of the explicit first identifier; other forms are possible, and the embodiments of the present application are not limited in this respect.
In another possible implementation, the first identifier is not displayed but exists implicitly, implicitly representing the cross-region playing attribute of the dynamic expression. For example, a cross-region dynamic expression does not display the first identifier, yet the cross-region playing effect is visible once it is displayed (e.g., after a user click); in addition, the first identifier can be shown when the cursor points at such a dynamic expression, i.e., some trigger can turn the implicit identifier into an explicit one. With an implicitly present first identifier, the suddenly appearing cross-region playing effect can surprise and delight the user, enhancing the user experience.
In the embodiments of the present application, the transparency of the background area of the dynamic main body diagram in the dynamic expression may be set to a transparent value, or the color of the background area may be set to the background color of the session interface. This can be done by a background-removal operation on the dynamic main body diagram, for example when the dynamic expression is created, or by removing the background of each animation frame while drawing in real time. In this way, when the dynamic expression is displayed in the session interface as a session message, it blends more naturally into the whole session interface, improving the display effect.
In a specific implementation, call the playing area of the dynamic main body diagram the intrinsic display area, and call the part of the animation element's playing area outside the dynamic main body diagram the first area. The first area can be any area other than the intrinsic display area: all of it, or a part of it, such as the area to the left of, below, or above the intrinsic display area; the embodiments of the present application do not limit the size or shape of the first area. The playing area of the animation element includes at least the first area, i.e., the animation element is at least displayed outside the intrinsic display area of the dynamic main body diagram. Since the motion track, shape, color, and special effects of an animation element generally change dynamically, the playing area of the animation element may specifically fall into the following cases.
In case 1, the animation element is only played in the first area, so that during the whole playing process of the dynamic expression, no intersection exists between the animation element and the display area of the dynamic main diagram, and the animation element and the display area of the dynamic main diagram are independent from each other, for example, the animation element is directly shown from the first area at the beginning. In this way, the dynamic main body diagram is displayed in the inherent display area, and the animation elements are displayed outside the inherent display area, so that the animation elements are mutually revealed, the dynamic main body diagram is complement to each other, and the display effect of the dynamic expression is enhanced.
In case 2, the playing area of the animation element includes a second area in addition to the first area, and the second area is a partial area or all areas of the display area of the dynamic main map (i.e., the foregoing inherent display area), that is, the animation element may be displayed by combining the first area and the second area.
In case 2, one implementation manner is that the animation element gradually spans from one area to the other area in the first area and the second area, that is, during the playing process of the animation element, the animation element gradually spans from one area to the other area, for example, the first area gradually spans to the second area, or the second area gradually spans to the first area, so as to realize a spreading and spanning animation effect between the two areas, thereby enhancing the display effect of the dynamic expression.
In case 2, another implementation is that the animation element is played in the first area and the second area respectively. In practice, the animation elements may include multiple types or multiple animation icons at the same time: different animation elements may be displayed independently in the first area and the second area at different times, or the same animation element may be displayed in the two areas in turn, for example played in the second area first and then in the first area.
According to different areas occupied by the animation elements during playing, different display schemes can be adopted to effectively display the animation elements, and the display effect of dynamic expressions is enhanced.
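To make these cases concrete, the following Kotlin sketch models them as a small enumeration that a renderer could dispatch on; all names here are illustrative assumptions rather than anything defined by the embodiment.

```kotlin
// Hypothetical model of the three play-area cases described above.
enum class PlayAreaCase {
    FIRST_AREA_ONLY, // case 1: element plays only outside the inherent display area
    SPANNING,        // case 2a: element gradually crosses between the two areas
    BOTH_AREAS       // case 2b: element plays in both areas, e.g. in turn
}

data class AnimationElementConfig(val name: String, val case: PlayAreaCase)

fun describe(cfg: AnimationElementConfig): String = when (cfg.case) {
    PlayAreaCase.FIRST_AREA_ONLY -> "${cfg.name} plays only outside the inherent display area"
    PlayAreaCase.SPANNING -> "${cfg.name} gradually spans between the first and second areas"
    PlayAreaCase.BOTH_AREAS -> "${cfg.name} plays in both areas"
}

fun main() {
    println(describe(AnimationElementConfig("solid heart", PlayAreaCase.SPANNING)))
}
```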
In the related art, playing a dynamic expression essentially means playing a prerecorded video animation, which can be understood as purely static playing: the traditional dynamic expression is displayed in a display area of fixed size, and even if its position in the session interface changes, for example when the sent dynamic expression is pushed toward the top of the screen by messages sent later, the static playing mode does not change. In the embodiment of the application, since the animation elements of the dynamic expression are displayed at least in the area outside the dynamic main body diagram, the animation elements are drawn while being played. Specifically, the reference position for animation drawing is determined according to the play mode associated with the dynamic expression, and each animation frame corresponding to the dynamic expression is drawn and displayed frame by frame from that reference position; that is, the animation frames are drawn in real time to obtain a sequence of animation frames, which are displayed in turn as they are produced. In this way, as the position of the dynamic expression in the session interface changes, the positions of the dynamic main body diagram and the animation elements in the coordinate system corresponding to the play mode can be recalculated in real time, and the animation frames drawn and displayed according to the dynamically computed coordinates. The display effect of the dynamic expression thus changes synchronously, and various special effects of the animation elements are realized, for example an animation effect of visibly diffusing outward from the inherent display area of the dynamic main body diagram.
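The draw-while-playing loop described above can be pictured with a minimal Kotlin sketch; `Point`, `FrameRenderer`, and both function names are assumptions for illustration, since the embodiment does not define a concrete API.

```kotlin
// Minimal sketch of the draw-while-playing idea: the reference position is
// recomputed on every frame so the animation follows the expression as its
// position in the session interface changes.
data class Point(val x: Float, val y: Float)

interface FrameRenderer {
    // Recomputed per frame, e.g. after new messages push the expression upward.
    fun referencePosition(frameIndex: Int): Point
    // Draws one animation frame relative to the given origin.
    fun drawFrame(frameIndex: Int, origin: Point)
}

fun playDynamicExpression(renderer: FrameRenderer, frameCount: Int) {
    for (i in 0 until frameCount) {
        val origin = renderer.referencePosition(i) // track position changes
        renderer.drawFrame(i, origin)              // draw and display in real time
    }
}
```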
Different animation effects of dynamic expressions correspond to different play modes, and the play mode of a dynamic expression describes its overall animation effect. For example, the play mode may comprise two reference factors: one is the animation type, and the other is the animation attribute information.
The animation type refers to the type of the dynamic expression and, according to the animation effect, can be classified into trigger-type animation, atmosphere-type animation, and position-type animation. A trigger-type animation means that the animation element is triggered by a certain action of the dynamic main body part, such as a heart-shaped element triggered by a 'heart-comparing' gesture, a balloon element triggered by a 'mouth-beeping' action, or a series of heart elements triggered by a 'blink' action. An atmosphere-type animation adds an overall atmosphere effect to the whole dynamic expression, such as red heart-shaped elements diffusing over the whole dialog box area, or raindrop elements extending into the middle of the dialog box area. A position-type animation refers to an animation element having a certain positional relationship with the dynamic main body diagram, such as a dashed purple heart shape circumscribing the display area of the dynamic main body diagram, or a circumscribed circle concentric with that display area.
The animation attribute information describes the animation effect of the animation element and may include one or more of the motion trajectory, size, shape, color, and special effect of the animation element, and possibly other description information. The animation attribute information is configured when the dynamic expression is created, and when the dynamic expression is later played, a series of animation frames is dynamically drawn and played according to this pre-configured information.
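As a rough illustration, the animation attribute information could be modeled as a plain data class; the field names, types, and defaults below are assumptions, not fields specified by the embodiment.

```kotlin
// Hypothetical container for the attribute information listed above,
// configured at creation time and read back at play time.
data class AnimationAttributes(
    val trajectory: List<Pair<Float, Float>> = emptyList(), // motion trajectory key points
    val sizeScale: Float = 1.0f,                            // relative element size
    val shape: String = "heart",                            // shape identifier
    val colorArgb: Long = 0xFFFF0000,                       // color (here: opaque red)
    val specialEffect: String? = null                       // optional special effect name
)
```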
Because their play modes differ, different dynamic expressions may also use different coordinate systems when drawn in real time. Specifically, the reference position can be determined according to the animation type corresponding to the play mode of the dynamic expression and used as the coordinate origin of the drawing coordinate system; in other words, the reference coordinate system and its origin are determined by the animation type. Each animation frame is then drawn and displayed frame by frame from the determined reference position according to the animation attribute information corresponding to the play mode. In this way, the drawing position of each animation frame is tied to the animation type and animation attribute information of the dynamic expression, so that dynamic expressions of various animation types can be drawn in real time in a reasonable manner, the differences between the various dynamic expressions are reflected, and the playing effect is enhanced.
When the animation type of the dynamic expression is a trigger-type animation, for example a heart-shaped animation element triggered by the user through a 'heart-comparing' gesture, the trigger source position of the animation element can be determined as the reference position. The trigger source position is the original position where the animation element is triggered: for a heart shape triggered by the 'heart-comparing' gesture it is the position of the gesture, and for an animation element triggered by a 'mouth-beeping' action it is the position of the mouth. Referring to fig. 7, at time T1 the dynamic expression is displayed near the middle of the session interface; at time T2, after T1, a new session message appears and the display position of the dynamic expression moves up slightly. When the image is drawn at T1 and T2, the trigger source position serves as the coordinate origin in each case, and the reference positions for drawing at T1 and T2 are (x1, y1) and (x1', y1') respectively, which differ. The drawing coordinates thus change dynamically with the display position of the dynamic expression to achieve real-time playing: the solid heart of the trigger-type animation gradually rises from T1 to T2 and grows larger and larger, producing an animation effect of gradually rising and enlarging.
When the animation type of the dynamic expression is an atmosphere-type animation, for example the full-screen animation in fig. 8 in which a mesh heart shape and an irregular five-pointed star spread over most of the screen, the center position of the entire session interface (including the dialog box area and the input box area) may be determined as the reference position for drawing, or alternatively the center position of the dialog box area in the session interface. Since an atmosphere-type animation is generally used to render the overall atmosphere and is generally delivered over a large area, the entire session interface can serve as the reference coordinate system for accurate drawing. As shown in fig. 8, between time T1 and the later time T2 both the size and the position of the mesh heart shape change, while the reference position of the drawing coordinates remains unchanged at (x2, y2).
When the animation type of the dynamic expression is a position-type animation, in which the animation elements have a certain positional relationship with the dynamic main body diagram, such as the dashed-line heart shape circumscribing the dynamic main body diagram shown in fig. 9, the center position of the playing area of the dynamic main body diagram may be determined as the reference position for frame drawing. A position-type animation may be displayed at a fixed size in a fixed place, blink at intervals, or be hidden after a certain period of time. From time T1 to time T2 in fig. 9, the dashed heart shape blinks, and as the display position of the dynamic expression moves up, the coordinates of the reference position change from (x3, y3) to (x3', y3'); that is, the drawing coordinates change dynamically. In addition, when drawing a position-type animation, the four vertex coordinates of the display area of the dynamic main body diagram can also be taken into account; changes in the display position of the dynamic main body diagram are reflected in changes of these four vertex coordinates, so the animation frames can be drawn more accurately.
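The reference-position rule for the three animation types can be summarized in a short Kotlin sketch; the names are assumptions chosen for illustration.

```kotlin
// Illustrative mapping from animation type to display-time reference position.
data class Point(val x: Float, val y: Float)

enum class AnimationType { TRIGGER, ATMOSPHERE, POSITION }

fun displayReferencePosition(
    type: AnimationType,
    triggerSource: Point,   // e.g. where the "heart-comparing" gesture appears
    interfaceCenter: Point, // center of the session interface (or dialog box area)
    bodyMapCenter: Point    // center of the dynamic main body diagram's play area
): Point = when (type) {
    AnimationType.TRIGGER -> triggerSource
    AnimationType.ATMOSPHERE -> interfaceCenter
    AnimationType.POSITION -> bodyMapCenter
}
```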
The foregoing describes how the 'cross-regional dynamic expression' in the embodiment of the present application is displayed. Before such a dynamic expression can be used, however, it must first be created. Based on the same inventive concept, the embodiment of the present application therefore further provides a dynamic expression creating method, which describes the process of creating the 'cross-regional dynamic expression'. The method is shown in fig. 10, and the flow of fig. 10 is described as follows.
Step 1001: in response to an expression creating operation, displaying a video recording interface.
When the user wants to create a dynamic expression, the user performs an expression creating operation; triggered by this operation, the device displays the video recording interface, in which video data is collected and the dynamic expression is synthesized.
For example, as shown in the left diagram of fig. 11a, the user performs the expression creating operation by clicking the '+' mark in the expression input panel, where the '+' mark is an identifier indicating creation of a dynamic expression; in response, the device displays a video recording interface as shown in the right diagram of fig. 11a.
For another example, as shown in the left diagram of fig. 11b, the user may perform an expression creating operation on a dynamic expression already displayed in the dialog box area of the session interface, for example a click followed by a long press; after this operation, the device displays a video recording interface as shown in the right diagram of fig. 11b. In the manner shown in fig. 11b, the user performs a follow-up operation directly on a dynamic expression already displayed in the dialog box area, so that during creation the animation elements of the targeted dynamic expression can be extracted directly for the new dynamic expression. The dynamic expression being followed may be one sent by the user himself or one sent by another user participating in the session. In this way, the user can quickly pick a dynamic expression with a favorite animation effect from the dialog box area and imitate it by follow-shooting, which enhances the interest.
In addition, the video recording interface triggered and displayed by the expression creating operation may include, as shown in fig. 11a or 11b, a video recording area, animation material templates, and a shooting button, where the video recording area is the video viewfinder in which the device collects video data.
Step 1002: in response to a video recording operation triggered on the video recording interface, obtaining recorded video data, and storing the video data and the animation elements in association as a dynamic expression, wherein the video data is stored as the dynamic main body diagram of the dynamic expression, and the playing area of the animation elements includes at least a first area outside the dynamic main body diagram.
In the video recording interface, the user can perform a video recording operation, as shown in the left diagram of fig. 12, for example clicking the shooting button or long-pressing it. The video data in the video recording area is then collected, and the collected video data and predetermined animation elements are combined into a dynamic expression. During synthesis, the collected video data serves as the data of the dynamic main body diagram of the dynamic expression, and the playing area of the animation elements includes at least a first area outside the dynamic main body diagram. That is, the dynamic expression created in the embodiment of the application can achieve the 'cross-area playing' effect shown in fig. 6, which enlarges the display range of the dynamic expression, in particular the display range of the animation elements, provides a brand-new dynamic display scheme, and enhances the display effect of the dynamic expression.
As shown in the right diagram of fig. 12, the synthesized animation elements are a solid heart, several mesh hearts, and irregular five-pointed stars. The video data and the animation elements may be synthesized such that the animation elements to be synthesized are displayed at least outside the video recording area of the video data; that is, during synthesis the display area of the animation elements includes at least an area other than the video recording area. In one possible implementation, the animation element is displayed only outside the video recording area, i.e., its display area and the video recording area are disjoint. In another possible implementation, the display area of the animation element further includes part or all of the video recording area, i.e., the animation element may be displayed both inside and outside the frame of the video recording area, for example gradually spanning from inside the video recording area to outside it, or vice versa. Here 'spanning' may mean that the number of animation elements is unchanged and only their positions move, or that the number of animation elements gradually increases to achieve a diffusion-and-spreading animation effect.
Conventionally, when a dynamic expression is synthesized, the animation elements to be synthesized can only be displayed inside the video recording area, whereas in the embodiment of the application the animation elements can be displayed at least outside the video recording area, in order to support the 'trans-regional play' animation effect of the dynamic expression. Therefore, during synthesis, the reference position for animation drawing can be determined according to the play mode associated with the animation elements to be synthesized, and the sequence video frames of the video data and the animation elements are then synthesized with the determined reference position as the coordinate origin, yielding the sequence animation frames corresponding to the dynamic expression; saving the drawn sequence animation frames saves the dynamic expression. That is, in composing the 'trans-regional dynamic expression', since the animation elements are displayed at least outside the video recording area, the drawing coordinate system and its origin can be selected dynamically according to the play mode of the animation elements to be synthesized, and the animation frames drawn in real time according to the desired display positions and effects, so as to meet the animation effect requirements of the animation elements.
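A hypothetical compositing sketch along these lines is shown below; all types and names are assumptions, and a real implementation would rasterize the element into each frame rather than merely record its position.

```kotlin
// Each recorded video frame is merged with the animation element, whose
// coordinates are taken relative to a reference origin chosen from the
// element's play mode (and may fall outside the video recording area).
data class Point(val x: Float, val y: Float)
class VideoFrame                                     // one recorded frame
class ComposedFrame(val base: VideoFrame, val elementAt: Point)

fun composeSequence(
    videoFrames: List<VideoFrame>,
    origin: Point,                                   // reference position / origin
    elementOffsetAt: (frameIndex: Int) -> Point      // element motion per frame
): List<ComposedFrame> =
    videoFrames.mapIndexed { i, frame ->
        val off = elementOffsetAt(i)
        // Coordinates are relative to `origin`, not clipped to the frame bounds,
        // so the element can land outside the recording area.
        ComposedFrame(frame, Point(origin.x + off.x, origin.y + off.y))
    }
```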
As mentioned above, dynamic expressions may have different animation types, which in particular refer to different animation effects of the animation elements. In the embodiment of the application, the animation types of animation elements mainly fall into three classes: trigger-type animation, atmosphere-type animation, and position-type animation. The coordinate drawing of these three types of animation elements is described below.
For a trigger-type animation element, such as the solid heart shape shown in fig. 13a, the trigger source position of the element within the video data in the video recording area may be determined as the reference position for animation drawing and used as the coordinate origin. Since a trigger-type animation moves relative to its trigger source position, drawing the motion position of the element in real time with the trigger source as origin accurately captures the element's motion, for example gradually rising from the trigger source position to outside the video recording area, thereby ensuring the animation effect of the animation element.
For an atmosphere-type animation element, such as the mesh heart shape and irregular five-pointed star shown in fig. 13b, which is generally displayed over a large area, for example full screen, the center position of the whole video recording interface may be determined as the reference position for animation drawing and used as the coordinate origin. Since an atmosphere-type animation moves relative to the whole interface, drawing the element in real time about the center of the video recording interface accurately represents the change of its motion position and ensures the animation effect.
For a position-type animation element, such as the dashed heart shape surrounding the video recording area shown in fig. 13c, which moves relative to the video recording area, the center position of the video recording area may be determined as the reference position for animation drawing and used as the coordinate origin, and the four vertex coordinates of the video recording area may be considered at the same time, so that the motion position of the position-type animation is drawn accurately and the animation effect is ensured.
After the sequence animation frames are drawn, they can be saved in the GIF picture format, yielding the dynamic expression. When the related data are synthesized and saved, the reference position (i.e., the coordinate origin used during animation drawing) must also be saved, so that this coordinate information can be extracted later to display the motion position of the animation elements accurately when the dynamic expression is played. For example, for a trigger-type dynamic expression, the trigger-type sequence animation frames and the coordinates of the trigger source must be saved; for an atmosphere-type dynamic expression, only the atmosphere-type sequence animation frames must be saved (since the drawing coordinate position of the atmosphere type is fixed, its coordinate origin need not be saved); for a position-type dynamic expression, the position-type sequence animation frames, the center position of the dynamic main body diagram (which may be understood as the center of the video recording area), and the four corner vertex coordinates of the dynamic main body diagram (which may be understood as the four vertices of the video recording area) must be saved.
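The per-type data to be persisted can be sketched as a small sealed hierarchy; the class and field names are illustrative assumptions.

```kotlin
// Hypothetical model of what is saved alongside the GIF sequence per type.
data class Point(val x: Float, val y: Float)

sealed class SavedExpression {
    abstract val gifFrames: List<ByteArray>

    data class Trigger(
        override val gifFrames: List<ByteArray>,
        val triggerSource: Point            // trigger-source coordinates
    ) : SavedExpression()

    data class Atmosphere(
        override val gifFrames: List<ByteArray>
    ) : SavedExpression()                   // fixed origin, nothing extra saved

    data class Position(
        override val gifFrames: List<ByteArray>,
        val bodyCenter: Point,              // center of the recording area
        val cornerVertices: List<Point>     // four corner vertex coordinates
    ) : SavedExpression()
}
```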
In a specific implementation, the same dynamic expression may use one or more types of animation elements as required: for example, only trigger-type animation elements, or trigger-type animation elements together with atmosphere-type animation, and so on. That is, the 'cross-region dynamic expression' can freely match and combine various types of animation elements, offering high flexibility, realizing diversified dynamic effects, and meeting the differentiated requirements of users.
In addition, the animation elements to be synthesized may be selected before the dynamic expression is synthesized. One way is to select an animation material template directly: in response to the operation of selecting a template, the device extracts the animation elements from it; obtaining animation elements via material templates is more efficient and generally yields a better combined effect. Another way is to select one or more animation icons directly and combine them into animation elements: in response to the operation of selecting one or more animation icons, the device determines the selected icons as the animation elements. Both the animation material templates and the selectable animation icons are associated with a second identifier, which is, for example, displayed in association with them and indicates that the corresponding animation elements can be displayed outside the video recording area to synthesize a dynamic expression.
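The two selection paths might be modeled as follows; the types and the template-lookup helper are purely hypothetical.

```kotlin
// Hypothetical model of the two element-selection paths described above.
sealed class ElementSource {
    data class MaterialTemplate(val templateId: String) : ElementSource()
    data class Icons(val iconIds: List<String>) : ElementSource()
}

// Placeholder lookup of the elements bundled in a material template.
fun loadTemplateElements(templateId: String): List<String> =
    listOf("$templateId:heart", "$templateId:star")

fun resolveAnimationElements(source: ElementSource): List<String> =
    when (source) {
        is ElementSource.MaterialTemplate -> loadTemplateElements(source.templateId)
        is ElementSource.Icons -> source.iconIds // selected icons used directly
    }
```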
There is also a third way of selecting animation elements: in response to a follow-up operation on a target dynamic expression displayed in the dialog box area of the session interface, the animation elements to be synthesized are extracted from that target dynamic expression; that is, the animation elements of the target dynamic expression are used directly as the animation elements to be synthesized. For example, as shown in the foregoing fig. 11b, when the user performs the expression creating operation on the target dynamic expression, that operation can at the same time be understood as a follow-up operation, and the animation elements are then extracted from the followed target dynamic expression and used in the newly created dynamic expression. This obtains the animation elements quickly and lets the user imitate and reference favorite animation elements directly from the session messages, meeting the user's needs. The aforementioned first identifier is associated with the followed target dynamic expression, for example displayed in direct association with it; the first identifier is, for example, the black triangle mark in fig. 11b, or may be another mark.
In addition, during synthesis of the dynamic expression, the human body contour information in each video frame of the video data can be determined, and the background area of each video frame determined from that contour information; the transparency of each background area is then adjusted to a fully transparent value, or its color adjusted to a predetermined color such as the background color of the current session interface, and finally the adjusted video frames are synthesized with the animation elements to obtain the dynamic expression. That is, the background area of the dynamic main body diagram can be removed during synthesis, so that when the dynamic expression is later displayed it merges with the session interface, abruptness is eliminated, and the display effect is enhanced.
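A minimal per-frame background-removal sketch is given below; the `insideBody` predicate stands in for the human-contour detection, which the embodiment leaves unspecified.

```kotlin
// Pixels outside the body contour get zero alpha, making the background
// transparent so the expression blends into the session interface.
fun removeBackground(
    pixels: IntArray,                         // ARGB pixels of one video frame
    width: Int,
    height: Int,
    insideBody: (x: Int, y: Int) -> Boolean   // from contour detection
): IntArray {
    val out = pixels.copyOf()
    for (y in 0 until height) {
        for (x in 0 until width) {
            if (!insideBody(x, y)) {
                // Clear the alpha byte: the background becomes transparent.
                out[y * width + x] = out[y * width + x] and 0x00FFFFFF
            }
        }
    }
    return out
}
```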
In the embodiment of the application, when the dynamic expression is played, its animation elements can be played across the display area of the dynamic main body diagram and into the display space outside the inherently sized area of the dynamic main body diagram, breaking through the display-size limitation of traditional dynamic expressions. This provides a brand-new playing mechanism for dynamic expressions, enlarges their display range so that they can express richer content, improves the flexibility and interest of their display, and enhances the display effect. In addition, during playing, an adapted reference coordinate system is selected for each dynamic expression to draw the animation frames in real time, ensuring that the display of the animation elements is not limited by position and realizing the trans-regional animation display effect.
Based on the same inventive concept, the embodiments of the present application provide a dynamic expression display device, which may be a hardware structure, a software module, or a hardware structure plus a software module. The dynamic expression presentation means is, for example, the terminal device 301 or the terminal device 302 in fig. 3 described above, or may be a functional means provided in the terminal device 301 or the terminal device 302. Referring to fig. 14a, the dynamic expression display device in the embodiment of the present application includes a response module 1401 and a display module 1402, wherein:
A response module 1401, configured to respond to a selection operation of selecting a dynamic expression triggered on a session interface;
the display module 1402 is configured to display the dynamic main body diagram of the selected dynamic expression as a session message in the session interface, and play an animation element associated with the dynamic main body diagram in the session interface, where a play area of the animation element includes at least a first area outside the dynamic main body diagram.
In one possible implementation, the playing area of the animation element further includes a second area, where the second area is a part or all of the display area of the dynamic body diagram.
In one possible implementation, the display module 1402 is configured to:
gradually spanning the animation element from one of the first area and the second area to the other for playing; or
the animation element is played in the first area; or
the animation element is played in the first area and the second area.
In one possible implementation, the display module 1402 is configured to:
and determining a reference position of animation drawing according to the play mode associated with the dynamic expression, and drawing and displaying each animation frame corresponding to the dynamic expression frame by frame with the reference position.
In one possible implementation, the display module 1402 is configured to:
determining a reference position according to the animation type corresponding to the playing mode; and
and drawing and displaying each animation frame corresponding to the dynamic expression frame by frame with a reference position according to the animation attribute information corresponding to the playing mode, wherein the animation attribute information comprises at least one of a motion track, a size, a shape, a color and an animation special effect of an animation element.
In one possible implementation, the display module 1402 is configured to:
if the animation type of the dynamic expression is a trigger-type animation, determining the trigger source position of the animation element as the reference position; or
if the animation type of the dynamic expression is an atmosphere-type animation, determining the center position of the session interface as the reference position, or determining the center position of a dialog box area in the session interface as the reference position; or
if the animation type of the dynamic expression is a position-type animation, determining the center position of the playing area of the dynamic main body diagram as the reference position.
In a possible implementation manner, referring to fig. 14b, the dynamic expression display apparatus in this embodiment of the present application further includes a confirmation module 1403, configured to:
Before the display module 1402 displays the dynamic main body diagram of the selected dynamic expression as a session message in the session interface, display the dynamic expression in an input box area in the session interface, and trigger sending of the dynamic expression when a confirmation operation for sending the dynamic expression is detected.
In one possible implementation, the transparency of the background area of the dynamic body map is a transparent value, or the color of the background area of the dynamic body map is the background color of the session interface.
In one possible implementation, the dynamic expression is associated with a first identifier, where the first identifier is used to indicate that the associated dynamic expression is a trans-regional dynamic expression.
For all relevant content of the steps involved in the foregoing embodiments of the dynamic expression display method, reference may be made to the functional description of the corresponding functional modules of the dynamic expression display device in the embodiments of the present application, which is not repeated here.
Based on the same inventive concept, the embodiments of the present application provide a dynamic expression creating apparatus, which may be a hardware structure, a software module, or a hardware structure plus a software module. The dynamic expression creating means is, for example, the terminal device 301 or the terminal device 302 in fig. 3 described above, or may be a functional means provided in the terminal device 301 or the terminal device 302. Referring to fig. 15a, the dynamic expression creating apparatus in the embodiment of the present application includes a display module 1501 and a creating module 1502, wherein:
The display module 1501 is configured to respond to the operation of creating the expression and display the video recording interface;
the creating module 1502 is configured to respond to a video recording operation triggered at a video recording interface, obtain recorded video data, and store the video data and an animation element in association as a dynamic expression, where the video data is stored as a dynamic main diagram of the dynamic expression, and a playing area of the animation element at least includes a first area outside the dynamic main diagram.
In one possible implementation, the video recording interface includes a video recording area, and the creating module 1502 is configured to:
determining a reference position of animation drawing according to a play mode associated with the animation elements, and synthesizing a sequence video frame of video data and the animation elements by taking the reference position as a coordinate origin point to obtain a sequence animation frame corresponding to the dynamic expression; wherein, in the synthesizing process, the display area of the animation element at least comprises an area outside the video recording area.
In one possible implementation, the display area of the animation element further includes a portion or all of the video recording area.
In one possible implementation, the creation module 1502 is configured to:
if the animation type of the animation element is a trigger-type animation, determining the trigger source position of the animation element in the video data as the reference position;
if the animation type of the animation element is an atmosphere-type animation, determining the center position of the video recording interface as the reference position;
if the animation type of the animation element is a position-type animation, determining the center position of the video recording area as the reference position.
In a possible implementation manner, referring to fig. 15b, the dynamic expression creating apparatus in the embodiment of the present application further includes a determining module 1503, configured to:
responding to a follow-up operation on a target dynamic expression displayed in the dialog box area of the session interface, and extracting the animation elements from the target dynamic expression, wherein the target dynamic expression is associated with a first identifier, and the first identifier is used for indicating that the associated dynamic expression is a trans-regional dynamic expression; or
and responding to the operation of selecting the animation material template or the animation icon, extracting the animation element from the selected animation material template, or determining the selected animation icon as the animation element, wherein the selected animation material template and the animation icon are both associated with a second identifier, and the second identifier is used for indicating that the corresponding animation element can be displayed outside the video recording area so as to synthesize the dynamic expression.
In one possible implementation, the creation module 1502 is configured to:
Determining a background area of each video frame in the video data;
adjusting the transparency of the background area of each video frame to a transparent value, or adjusting the color of the background area of each video frame to a predetermined color;
and synthesizing the regulated video frames and the animation elements to obtain the sequence animation frames corresponding to the dynamic expressions.
For all relevant content of the steps involved in the foregoing embodiments of the dynamic expression creating method, reference may be made to the functional description of the corresponding functional modules of the dynamic expression creating device in the embodiments of the present application, which is not repeated here.
The division of modules in the embodiments of the present application is schematic and is merely a division by logical function; in actual implementation there may be other division manners. In addition, the functional modules in the embodiments of the present application may be integrated in one processor, may exist alone physically, or two or more modules may be integrated in one module. The integrated modules may be implemented in the form of hardware or in the form of software functional modules.
Based on the same inventive concept, embodiments of the present application also provide a computing device, which may perform the steps of the methods shown in fig. 4 and 10, for example, the terminal device 301 or the terminal device 302 in fig. 3, or may also be the server 303 in fig. 3. Referring to fig. 16, the computing device in the embodiment of the present application includes at least one processor 1601, and a memory 1602 connected to the at least one processor, where the embodiment of the present application is not limited to a specific connection medium between the processor 1601 and the memory 1602, for example, the processor 1601 and the memory 1602 may be connected by a bus, and the bus may be divided into an address bus, a data bus, a control bus, and so on.
In the embodiment of the present application, the memory 1602 stores instructions executable by the at least one processor 1601, and by executing the instructions stored in the memory 1602, the at least one processor 1601 may perform the steps included in the foregoing dynamic expression display method or dynamic expression creating method.
The processor 1601 may be a general purpose processor such as a Central Processing Unit (CPU), digital signal processor (Digital Signal Processor, DSP), application specific integrated circuit (Application Specific Integrated Circuit, ASIC), field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in embodiments of the present application. The general purpose processor may be a microprocessor or any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present application may be embodied directly in a hardware processor for execution, or in a combination of hardware and software modules in the processor for execution.
The memory 1602 is a non-volatile computer-readable storage medium that can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The memory may include at least one type of storage medium, for example flash memory, hard disk, multimedia card, card memory, random access memory (Random Access Memory, RAM), static random access memory (Static Random Access Memory, SRAM), programmable read-only memory (Programmable Read Only Memory, PROM), read-only memory (Read-Only Memory, ROM), electrically erasable programmable read-only memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), magnetic memory, magnetic disk, or optical disk. The memory may also be, but is not limited to, any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 1602 in the embodiment of the present application may also be a circuit or any other device capable of implementing a storage function, for storing program instructions and/or data.
The processor 1601 is the control center of the computing device, which connects the various parts of the entire computing device by using various interfaces and lines, and monitors the computing device as a whole and performs its various functions and processes its data by running or executing the instructions stored in the memory 1602 and invoking the data stored in the memory 1602. Alternatively, the processor 1601 may include one or more processing units, and may integrate an application processor, which mainly handles the operating system, user interfaces, application programs, and the like, with a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor may also not be integrated into the processor 1601. In some embodiments, the processor 1601 and the memory 1602 may be implemented on the same chip; in some embodiments, they may also be implemented separately on their own chips.
Further, the computing device in the embodiments of the present application may further include an input unit 1603, a display unit 1604, a radio frequency unit 1605, an audio circuit 1606, a speaker 1607, a microphone 1608, a wireless fidelity (Wireless Fidelity, wiFi) module 1609, a bluetooth module 1610, a power supply 1611, an external interface 1612, a headset jack 1613, and the like. It will be appreciated by those skilled in the art that FIG. 16 is merely an example of a computing device and is not limiting of the computing device, and that a computing device may include more or fewer components than shown, or may combine certain components, or different components.
The input unit 1603 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the computing device. For example, the input unit 1603 may include a touch screen 1614 and other input devices 1615. The touch screen 1614 may collect the user's touch operations on or near it (for example, the user's manipulation on or near the touch screen 1614 using a finger, a knuckle, a stylus, or any other suitable object); that is, the touch screen 1614 may be used to detect touch pressure as well as the touch input position and touch input area, and to drive the corresponding connection device according to a preset program. The touch screen 1614 may detect the user's touch operation on it, convert the touch operation into a touch signal and send it to the processor 1601, which may be understood as sending the touch information of the touch operation to the processor 1601, and may receive and execute commands sent by the processor 1601. The touch information may include at least one of pressure magnitude information and pressure duration information. The touch screen 1614 may provide an input interface and an output interface between the computing device and the user. In addition, the touch screen 1614 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. The input unit 1603 may also include other input devices 1615 in addition to the touch screen 1614, for example, but not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick.
The display unit 1604 may be used to display information entered by a user or information provided to a user as well as various menus of a computing device. Further, the touch screen 1614 may cover the display unit 1604, and when the touch screen 1614 detects a touch operation thereon or nearby, the touch screen 1614 transmits pressure information of the touch operation to the processor 1601 for determination. In the embodiment of the application, the touch screen 1614 and the display unit 1604 may be integrated into one component to implement the input, output and display functions of the computing device. For convenience of description, the embodiment of the present application is schematically illustrated by taking the touch screen 1614 as an example to represent a functional set of the touch screen 1614 and the display unit 1604, and of course, in some embodiments, the touch screen 1614 and the display unit 1604 may also be two independent components.
When the display unit 1604 and the touch pad are overlapped with each other in the form of layers to form the touch screen 1614, the display unit 1604 may be used as an input device and an output device, and when used as an output device, may be used to display images, for example, to realize playback of various videos. The display unit 1604 may include at least one of a liquid crystal display (Liquid Crystal Display, LCD), a thin film transistor liquid crystal display (Thin Film Transistor Liquid Crystal Display, TFT-LCD), an organic light emitting diode (Organic Light Emitting Diode, OLED) display, an active matrix organic light emitting diode (Active Matrix Organic Light Emitting Diode, AMOLED) display, an In-Plane Switching (IPS) display, a flexible display, a 3D display, and the like. Some of these displays may be configured to be transparent to allow a user to view from the outside, which may be referred to as a transparent display, and the computing device may include two or more display units (or other display means) depending on the particular desired implementation, e.g., the computing device may include an external display unit (not shown in fig. 16) and an internal display unit (not shown in fig. 16).
The radio frequency unit 1605 may be used to receive and transmit information or signals during a call. Typically, the radio frequency circuitry includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (Low Noise Amplifier, LNA), a duplexer, and the like. In addition, the radio frequency unit 1605 may also communicate with network devices and other devices via wireless communications.
The audio circuit 1606, the speaker 1607, and the microphone 1608 can provide an audio interface between the user and the computing device. The audio circuit 1606 may transmit the electrical signal converted from received audio data to the speaker 1607, which converts it into a sound signal for output. Conversely, the microphone 1608 converts collected sound signals into electrical signals, which the audio circuit 1606 receives and converts into audio data; the audio data is processed by the processor 1601 and then sent, for example, to another electronic device via the radio frequency unit 1605, or output to the memory 1602 for further processing. The audio circuit may also include a headset jack 1613 for providing a connection interface between the audio circuit and headphones.
WiFi belongs to a short-distance wireless transmission technology, and the computing device can help a user to send and receive emails, browse webpages, access streaming media and the like through the WiFi module 1609, so that wireless broadband Internet access is provided for the user. Although fig. 16 shows a WiFi module 1609, it is to be understood that it is not a necessary component of a computing device and can be omitted entirely as desired within the scope of not changing the essence of the invention.
Bluetooth is a short-range wireless communication technology. Bluetooth technology can effectively simplify communication between mobile computing devices such as palmtop computers, notebook computers, and mobile phones, and can also simplify communication between these devices and the Internet. Through the Bluetooth module 1610, data transmission between the computing device and the Internet becomes quicker and more efficient, paving the way for wireless communication. Bluetooth technology is an open scheme that enables wireless transmission of voice and data. Although fig. 16 shows the Bluetooth module 1610, it is to be understood that it is not a necessary component of the computing device and may be omitted entirely as needed within the scope of not changing the essence of the invention.
The computing device may also include a power supply 1611 (such as a battery) for receiving external power or powering the various components within the computing device. Preferably, the power supply 1611 may be logically connected to the processor 1601 by a power management system, so as to perform functions of managing charging, discharging, and power consumption management by the power management system.
The computing device may also include an external interface 1612, which external interface 1612 may include a standard Micro USB interface, may include a multi-pin connector, may be used to connect the computing device to communicate with other devices, and may also be used to connect a charger to charge the computing device.
Although not shown, the computing device in the embodiments of the present application may further include other possible functional modules such as a camera, a flash, and so on, which are not described herein.
Based on the same inventive concept, the embodiments of the present application also provide a storage medium, which may be a computer-readable storage medium, having stored therein computer instructions that, when run on a computer, cause the computer to perform the steps of the dynamic expression presentation method as described above.
Based on the same inventive concept, the embodiments of the present application also provide a storage medium, which may be a computer-readable storage medium, having stored therein computer instructions that, when run on a computer, cause the computer to perform the steps of the dynamic expression creating method as described above.
Based on the same inventive concept, the embodiments of the present application further provide a chip system, where the chip system includes a processor and may further include a memory, to implement the steps of the dynamic expression display method or the steps of the dynamic expression creation method. The chip system may be formed of a chip or may include a chip and other discrete devices.
In some possible implementations, various aspects of the dynamic expression presentation method provided by the embodiments of the present application may also be implemented in the form of a program product including program code for causing a computer to perform the steps in the dynamic expression presentation method according to the various exemplary embodiments of the present application as described above, when the program product is run on the computer.
In some possible implementations, various aspects of the dynamic expression creating method provided by the embodiments of the present application may also be implemented in the form of a program product including program code for causing a computer to perform the steps in the dynamic expression creating method according to various exemplary embodiments of the present application as described above, when the program product is run on the computer.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, magnetic disk storage, optical storage, and the like) having computer-usable program code embodied therein.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the spirit or scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims and the equivalents thereof, the present application is intended to cover such modifications and variations.

Claims (14)

1. A dynamic expression display method, characterized in that the method comprises:
responding to a selection operation of selecting a dynamic expression triggered on a session interface, and displaying a dynamic main body diagram of the selected dynamic expression in the session interface as a session message; and
playing at least one animation element associated with the dynamic main body diagram in the session interface, wherein the playing area of the animation element at least comprises a first area outside the dynamic main body diagram;
the dynamic expression is associated with a first identifier, the first identifier is used for marking that the dynamic expression can be played in the area where the dynamic main body diagram is located and in the first area, and the first identifier is presented in the session interface simultaneously with the dynamic expression or presented when the dynamic expression is triggered; the animation type of the dynamic expression comprises at least one of a trigger-type animation, an atmosphere-type animation and a position-type animation, wherein animation elements of the at least one animation element corresponding to different animation types have different reference positions during playing; the trigger-type animation characterizes that playing of the corresponding animation element is triggered by a target action appearing in the dynamic main body diagram; the atmosphere-type animation characterizes that the corresponding animation element adds an atmosphere effect to the dynamic expression; and the position-type animation characterizes that the corresponding animation element has a positional relation with the dynamic main body diagram.
2. The method of claim 1, wherein the play area of the animation element further comprises a second area, the second area being a part or all of the display area of the dynamic body map.
3. The method of claim 2, wherein playing the animation element associated with the dynamic body diagram in the session interface comprises:
the animation element gradually spans from one of the first area and the second area to the other for playing; or
the animation element is played in the first area; or
the animation element is played in the first area and the second area.
4. The method of claim 1, wherein displaying the dynamic main body diagram of the selected dynamic expression as a session message in the session interface, and playing the animation element associated with the dynamic main body diagram in the session interface, comprises:
and determining a reference position for animation drawing according to a play mode associated with the dynamic expression, and starting drawing frame by frame and displaying each animation frame corresponding to the dynamic expression by using the reference position.
5. The method of claim 4, wherein determining a reference position for animation drawing according to the play mode associated with the dynamic expression, and starting drawing and displaying each animation frame corresponding to the dynamic expression frame by frame with the reference position, comprises:
determining the reference position according to the animation type corresponding to the playing mode; and
and drawing and displaying each animation frame corresponding to the dynamic expression frame by frame with the reference position according to the animation attribute information corresponding to the playing mode, wherein the animation attribute information comprises at least one of a motion track, a size, a shape, a color and an animation special effect of the animation element.
6. The method of claim 5, wherein determining the reference position according to the animation type corresponding to the play mode comprises:
if the animation type of the dynamic expression is the trigger-type animation, determining the trigger source position of the animation element as the reference position; or
if the animation type of the dynamic expression is the atmosphere-type animation, determining the center position of the session interface as the reference position, or determining the center position of a dialog box area in the session interface as the reference position; or
and if the animation type of the dynamic expression is the position-type animation, determining the center position of the playing area of the dynamic main body diagram as the reference position.
7. The method of claim 1, wherein before displaying the dynamic main body diagram of the selected dynamic expression as a session message in the session interface, the method further comprises:
and displaying the dynamic expression in an input box area in the session interface, and triggering to send the dynamic expression when a confirmation operation for determining to send the dynamic expression is detected.
8. The method of claim 1, wherein the background area of the dynamic body map is set to a transparency value, or the color of the background area of the dynamic body map is the background color of the session interface.
9. A method of creating a dynamic expression, the method comprising:
in response to an expression creating operation, displaying a video recording interface;
in response to a video recording operation triggered on the video recording interface, obtaining recorded video data, and storing the video data in association with an animation element as a dynamic expression, wherein the video data is stored as the dynamic body map of the dynamic expression, and the play area of the animation element comprises at least a first area outside the dynamic body map;
wherein the dynamic expression is associated with a first identifier, the first identifier being used to mark that the dynamic expression can be played both in the area where the dynamic body map is located and in the first area, and the first identifier being presented either simultaneously with the dynamic expression or when the dynamic expression is triggered in a session interface displaying the dynamic expression; the animation type of the dynamic expression comprises at least one of a trigger-type animation, an atmosphere-type animation, and a position-type animation, wherein animation elements of the at least one animation element corresponding to different animation types are played from different reference positions; the trigger-type animation characterizes that playing of the corresponding animation element is triggered by a target action appearing in the dynamic body map; the atmosphere-type animation characterizes that the corresponding animation element adds an atmosphere effect to the dynamic expression; and the position-type animation characterizes that the corresponding animation element has a positional relation with the dynamic body map.
10. The method of claim 9, wherein the video recording interface includes a video recording area, and wherein storing the video data in association with the animation element as a dynamic expression comprises:
determining a reference position for animation drawing according to a play mode associated with the animation element, and compositing the sequence of video frames of the video data with the animation element, taking the reference position as the coordinate origin, to obtain a sequence of animation frames corresponding to the dynamic expression; wherein, during compositing, the display area of the animation element comprises at least an area outside the video recording area.
11. The method of claim 10, wherein determining the reference position for animation drawing according to the play mode associated with the animation element comprises (a compositing sketch follows this claim):
if the animation type of the animation element is the trigger-type animation, determining the trigger source position that triggers the animation element in the video data as the reference position;
if the animation type of the animation element is the atmosphere-type animation, determining the center position of the video recording interface as the reference position; and
if the animation type of the animation element is the position-type animation, determining the center position of the video recording area as the reference position.
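A sketch of the compositing step in claims 10 and 11, using Pillow purely for illustration; the patent does not name a library, and the one-to-one frame pairing and canvas size below are assumptions.

```python
from PIL import Image

def compose_expression(video_frames, element_frames, reference, canvas_size):
    """Composite each recorded video frame with the matching animation-element frame.

    The reference position acts as the coordinate origin for the element, so
    the element may extend outside the video recording area. Frames are RGBA
    images; reference is an (x, y) pair of ints.
    """
    composed = []
    for video, element in zip(video_frames, element_frames):
        canvas = Image.new("RGBA", canvas_size, (0, 0, 0, 0))  # transparent canvas
        canvas.paste(video, (0, 0))                # recording area at the top-left
        canvas.paste(element, reference, element)  # element anchored at the reference
        composed.append(canvas)
    return composed
```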
12. The method of claim 9, wherein the method further comprises (a selection sketch follows this claim):
in response to a follow operation on a target dynamic expression displayed in a dialog box area of a session interface, extracting the animation element from the target dynamic expression; or
in response to an operation of selecting an animation material template or an animation icon, extracting the animation element from the selected animation material template, or determining the selected animation icon as the animation element, wherein the selected animation material template and animation icon are both associated with a second identifier, and the second identifier indicates that the corresponding animation element can be displayed outside the video recording area when compositing the dynamic expression.
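A sketch of the two element-selection paths in this claim; the parameter and attribute names are invented for illustration and carry no weight from the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MaterialTemplate:
    element: str
    second_identifier: bool  # True: the element may render outside the recording area

def pick_animation_element(target_expression=None,
                           template: Optional[MaterialTemplate] = None,
                           icon: Optional[str] = None):
    """Resolve the animation element for a new dynamic expression."""
    if target_expression is not None:
        # follow operation: reuse an element of an expression already on screen
        return target_expression.elements[0]
    if template is not None and template.second_identifier:
        return template.element
    # otherwise the selected animation icon itself becomes the element
    return icon
```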
13. A dynamic expression display device, the device comprising:
a response module, configured to respond to a selection operation, triggered on a session interface, of selecting a dynamic expression; and
a display module, configured to display the dynamic body map of the selected dynamic expression as a session message in the session interface and to play at least one animation element associated with the dynamic body map in the session interface, wherein the play area of the animation element comprises at least a first area outside the dynamic body map;
wherein the dynamic expression is associated with a first identifier, the first identifier being used to mark that the dynamic expression can be played both in the area where the dynamic body map is located and in the first area, and the first identifier being presented in the session interface either simultaneously with the dynamic expression or when the dynamic expression is triggered; the animation type of the dynamic expression comprises at least one of a trigger-type animation, an atmosphere-type animation, and a position-type animation, wherein animation elements of the at least one animation element corresponding to different animation types are played from different reference positions; the trigger-type animation characterizes that playing of the corresponding animation element is triggered by a target action appearing in the dynamic body map; the atmosphere-type animation characterizes that the corresponding animation element adds an atmosphere effect to the dynamic expression; and the position-type animation characterizes that the corresponding animation element has a positional relation with the dynamic body map.
14. A dynamic expression creating apparatus, characterized in that the apparatus comprises:
a display module, configured to display a video recording interface in response to an expression creating operation; and
a creation module, configured to obtain recorded video data in response to a video recording operation triggered on the video recording interface, and to store the video data in association with an animation element as a dynamic expression, wherein the video data is stored as the dynamic body map of the dynamic expression, and the play area of the animation element comprises at least a first area outside the dynamic body map;
wherein the dynamic expression is associated with a first identifier, the first identifier being used to mark that the dynamic expression can be played both in the area where the dynamic body map is located and in the first area, and the first identifier being presented either simultaneously with the dynamic expression or when the dynamic expression is triggered in a session interface displaying the dynamic expression; the animation type of the dynamic expression comprises at least one of a trigger-type animation, an atmosphere-type animation, and a position-type animation, wherein animation elements of the at least one animation element corresponding to different animation types are played from different reference positions; the trigger-type animation characterizes that playing of the corresponding animation element is triggered by a target action appearing in the dynamic body map; the atmosphere-type animation characterizes that the corresponding animation element adds an atmosphere effect to the dynamic expression; and the position-type animation characterizes that the corresponding animation element has a positional relation with the dynamic body map.
CN202010273094.5A 2020-04-09 2020-04-09 Dynamic expression display method, dynamic expression creation method and device Active CN111464430B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010273094.5A CN111464430B (en) 2020-04-09 2020-04-09 Dynamic expression display method, dynamic expression creation method and device

Publications (2)

Publication Number Publication Date
CN111464430A CN111464430A (en) 2020-07-28
CN111464430B (en) 2023-07-04

Family

ID=71683722

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010273094.5A Active CN111464430B (en) 2020-04-09 2020-04-09 Dynamic expression display method, dynamic expression creation method and device

Country Status (1)

Country Link
CN (1) CN111464430B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112748974B (en) * 2020-08-05 2024-04-16 腾讯科技(深圳)有限公司 Information display method, device, equipment and storage medium based on session
CN112000252B (en) * 2020-08-14 2022-07-22 广州市百果园信息技术有限公司 Virtual article sending and displaying method, device, equipment and storage medium
CN112328140B (en) * 2020-11-02 2022-02-25 广州华多网络科技有限公司 Image input method, device, equipment and medium thereof
CN112506393B (en) * 2021-02-07 2021-05-18 北京聚通达科技股份有限公司 Icon display method and device and storage medium
CN113438149A (en) * 2021-07-20 2021-09-24 网易(杭州)网络有限公司 Expression sending method and device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170018289A1 (en) * 2015-07-15 2017-01-19 String Theory, Inc. Emoji as facetracking video masks
CN106357506A (en) * 2016-08-30 2017-01-25 北京北信源软件股份有限公司 Treatment method for expression flow message in instant communication
CN106534875A (en) * 2016-11-09 2017-03-22 广州华多网络科技有限公司 Barrage display control method and device and terminal
CN109388297B (en) * 2017-08-10 2021-10-22 腾讯科技(深圳)有限公司 Expression display method and device, computer readable storage medium and terminal
CN110213638B (en) * 2019-06-05 2021-10-08 北京达佳互联信息技术有限公司 Animation display method, device, terminal and storage medium
CN110428485A (en) * 2019-07-31 2019-11-08 网易(杭州)网络有限公司 2 D animation edit methods and device, electronic equipment, storage medium
CN110475150B (en) * 2019-09-11 2021-10-08 广州方硅信息技术有限公司 Rendering method and device for special effect of virtual gift and live broadcast system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104932853A (en) * 2015-05-25 2015-09-23 深圳市明日空间信息技术有限公司 Dynamic expression play method and device
CN108055191A (en) * 2017-11-17 2018-05-18 深圳市金立通信设备有限公司 Information processing method, terminal and computer readable storage medium
CN109120866A (en) * 2018-09-27 2019-01-01 腾讯科技(深圳)有限公司 Dynamic expression generation method, device, computer readable storage medium and computer equipment
CN109787890A (en) * 2019-03-01 2019-05-21 北京达佳互联信息技术有限公司 Instant communicating method, device and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Shunya Osawa; Guifang Duan; Masataka Seo; Takanori Igarashi; Yen-Wei Chen. Reconstruction of 3D dynamic expressions from single facial image. 2013 IEEE International Conference on Image Processing, 2014, full text. *
Xu Liangfeng; Wang Jiayong; Cui Jingnan; Hu Min; Zhang Keke. Dynamic expression recognition based on dynamic time warping and active appearance model. Journal of Electronics & Information Technology, Vol. 40, No. 2, full text. *

Also Published As

Publication number Publication date
CN111464430A (en) 2020-07-28

Similar Documents

Publication Publication Date Title
CN111464430B (en) Dynamic expression display method, dynamic expression creation method and device
US20190221045A1 (en) Interaction method between user terminals, terminal, server, system, and storage medium
US10904482B2 (en) Method and apparatus for generating video file, and storage medium
TWI592021B (en) Method, device, and terminal for generating video
US10553003B2 (en) Interactive method and apparatus based on web picture
US9542949B2 (en) Satisfying specified intent(s) based on multimodal request(s)
WO2019029406A1 (en) Emoji displaying method and apparatus, computer readable storage medium, and terminal
WO2018157812A1 (en) Method and apparatus for implementing video branch selection and playback
CN108900407B (en) Method and device for managing session record and storage medium
US9699291B2 (en) Phonepad
CN113485617A (en) Animation display method and device, electronic equipment and storage medium
WO2022068721A1 (en) Screen capture method and apparatus, and electronic device
WO2021254113A1 (en) Control method for three-dimensional interface and terminal
CN113332720A (en) Game map display method and device, computer equipment and storage medium
CN108710521A (en) A kind of note generation method and terminal device
CN114338572B (en) Information processing method, related device and storage medium
CN115379113A (en) Shooting processing method, device, equipment and storage medium
KR20140089069A (en) user terminal device for generating playable object and method thereof
CN112783386A (en) Page jump method, device, storage medium and computer equipment
CN113362802A (en) Voice generation method and device and electronic equipment
WO2018219040A1 (en) Display method and device, and storage medium
US9384013B2 (en) Launch surface control
US11972173B2 (en) Providing change in presence sounds within virtual working environment
US20240012558A1 (en) User interface providing reply state transition
US20240220198A1 (en) Providing change in presence sounds within virtual working environment

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code (country code: HK; legal event code: DE; document number: 40025824)
SE01 Entry into force of request for substantive examination
GR01 Patent grant