CN111182361A - Communication terminal and video previewing method - Google Patents

Communication terminal and video previewing method

Info

Publication number
CN111182361A
CN111182361A (application CN202010031801.XA)
Authority
CN
China
Prior art keywords
animation
elements
subset
effect
effects
Legal status
Granted
Application number
CN202010031801.XA
Other languages
Chinese (zh)
Other versions
CN111182361B (en
Inventor
康凯
彭迎
孙喜洲
Current Assignee
Hisense Mobile Communications Technology Co Ltd
Original Assignee
Hisense Mobile Communications Technology Co Ltd
Application filed by Hisense Mobile Communications Technology Co Ltd filed Critical Hisense Mobile Communications Technology Co Ltd
Priority to CN202010031801.XA
Publication of CN111182361A
Application granted
Publication of CN111182361B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/8166Monomedia components thereof involving executable data, e.g. software
    • H04N21/8173End-user applications, e.g. Web browser, game
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)
  • Telephone Function (AREA)

Abstract

The application provides a communication terminal and a video previewing method for improving the execution efficiency of video preview while achieving complex animation effects. The communication terminal includes a memory, a display and a processor. The memory stores the animation sets, the animation subsets and the animation effects of the animation elements. The processor determines the Seek time corresponding to the video preview frame that currently needs to be displayed and selects the animation set containing the Seek time. If the animation effect of the selected animation set is a special animation effect, the processor determines the animation subset corresponding to the Seek time from the animation set, where a special animation effect means that the animation effects of the animation elements in two adjacent animation subsets differ and/or the animation effects of the animation elements within the same animation subset differ. The processor then determines the video preview frame from the animation elements in that animation subset and their animation effects and notifies the display to show the video preview frame. A complex animation effect is thus achieved while the execution efficiency of video preview is improved.

Description

Communication terminal and video previewing method
Technical Field
The present application relates to the field of computer technologies, and in particular, to a communication terminal and a video preview method.
Background
Video has become part of users' everyday entertainment. With the development of technology, video-production applications have gradually emerged, and users can edit videos and preview the results through these applications.
At present, when previewing a video edited by a user, the video can only be edited according to a fixed template, so the video displayed during preview has a single effect. To display richer and more complex videos, a complex animation is currently set by superimposing animations and defining multiple views, but acquiring the preview picture of the video in this way is inefficient.
Disclosure of Invention
The embodiments of the application provide a communication terminal and a video previewing method for improving the execution efficiency of video preview while achieving a complex animation effect.
The embodiment of the application provides the following specific technical scheme:
in a first aspect, the present application provides a communication terminal for video preview, including: a processor, a memory, and a display; wherein:
the memory is used for storing animation effects of the animation set, the animation subset and the animation elements;
the processor is used for determining the Seek time corresponding to the video preview frame that currently needs to be displayed; selecting the animation set containing the Seek time from a plurality of animation sets, where the animation sets are divided according to the animation duration of at least one group of animation elements and the animation effects of the animation elements within that duration; if the animation effect of the selected animation set is a special animation effect, determining the animation subset corresponding to the Seek time from the selected animation set, where a special animation effect means that the animation effects of the animation elements in two adjacent animation subsets of the animation set differ and/or the animation effects of the animation elements within the same animation subset differ; and determining the video preview frame according to the animation elements in the animation subset and their animation effects, and notifying the display to show it;
the display is used for displaying the video preview frame.
The communication terminal determines the Seek time corresponding to the video preview frame to be displayed and selects the animation set containing the Seek time from the plurality of animation sets. When the animation effect of that animation set is determined to be a special animation effect, the terminal determines the animation subset corresponding to the Seek time from the selected animation set and determines the video preview frame according to the animation elements in the animation subset and their animation effects; a special animation effect means that the animation effects of the animation elements in two adjacent animation subsets of the animation set differ and/or the animation effects of the animation elements within the same animation subset differ. Multiple animation effects can therefore be achieved within one animation, making the displayed animation richer and realizing a complex animation effect. Moreover, by obtaining the animation subset corresponding to the Seek time and generating the video preview frame from the animation elements in that subset and their animation effects, the execution efficiency is improved.
In one possible implementation, the processor divides the animation set by:
dividing the preset total animation time into a plurality of time periods according to the preset total animation time and the animation time of at least one group of animation elements, wherein each time period corresponds to at least one group of animation elements;
and composing animation subsets included by at least one group of animation elements corresponding to each time period into an animation set.
The communication terminal thus provides the rule for dividing animation sets: the preset total animation duration is divided into a plurality of time periods according to the animation durations of the groups of animation elements, and each time period corresponds to an animation set composed of the animation subsets of at least one group of animation elements. Because the preset total animation duration is divided into a plurality of animation sets, the animation set corresponding to the Seek time can later be selected from them, and selecting the animation subset corresponding to the Seek time within that set is efficient.
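As an illustration of this division rule, the following is a minimal Java sketch (all names such as AnimationElementGroup, AnimationSet and AnimationSetDivider are hypothetical and not taken from the patent): the preset total animation duration is split into consecutive time periods, one animation set per group of animation elements, with each group's share proportional to its animation duration.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical data model for one group of animation elements (e.g. one passage of text).
class AnimationElementGroup {
    final List<String> elements;      // e.g. individual characters
    final long durationMs;            // animation duration assigned to this group
    AnimationElementGroup(List<String> elements, long durationMs) {
        this.elements = elements;
        this.durationMs = durationMs;
    }
}

// Hypothetical animation set: one time period of the total animation.
class AnimationSet {
    final long startMs;               // inclusive start of the time period
    final long endMs;                 // exclusive end of the time period
    final AnimationElementGroup group;
    AnimationSet(long startMs, long endMs, AnimationElementGroup group) {
        this.startMs = startMs;
        this.endMs = endMs;
        this.group = group;
    }
    boolean containsSeek(long seekMs) { return seekMs >= startMs && seekMs < endMs; }
}

final class AnimationSetDivider {
    // Divide the preset total animation duration into consecutive time periods,
    // one per group of animation elements, scaled so the periods fill the total duration.
    static List<AnimationSet> divide(long totalDurationMs, List<AnimationElementGroup> groups) {
        long sum = 0;
        for (AnimationElementGroup g : groups) sum += g.durationMs;
        List<AnimationSet> sets = new ArrayList<>();
        long start = 0;
        for (int i = 0; i < groups.size(); i++) {
            AnimationElementGroup g = groups.get(i);
            long end = (i == groups.size() - 1)
                    ? totalDurationMs                                // last period absorbs rounding
                    : start + totalDurationMs * g.durationMs / sum;  // proportional share
            sets.add(new AnimationSet(start, end, g));
            start = end;
        }
        return sets;
    }
}
```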
In one possible implementation, the processor is further configured to:
after an animation set corresponding to the determined Seek time is selected from a plurality of animation sets, if the animation effect corresponding to the selected animation set is a single animation effect, determining an animation subset of the Seek time, and displaying animation elements corresponding to the animation subset;
the single animation effect is that the animation effects of the animation elements in the animation subsets are the same, and the animation effects of the animation elements in the two adjacent animation subsets are the same.
In the communication terminal, when the animation effect of the animation set corresponding to the Seek time is determined to be a single animation effect, the animation elements corresponding to the Seek time in the animation set are directly read and displayed; a single animation effect means that the animation effects of the animation elements in the animation set are all the same.
In one possible implementation, the processor is specifically configured to:
after the animation subset corresponding to the Seek time is to be determined from the selected animation set, and before the video preview frame is determined according to the animation elements in the animation subset and their animation effects, iterate over the animation subsets of the animation set to obtain the animation subset corresponding to the Seek time;
if that animation subset is a parallel-type animation subset, iteratively obtain each animation element in the subset and its corresponding animation effect;
and display the animation elements simultaneously, each according to its corresponding animation effect, to generate the video preview frame.
In the communication terminal, an animation set includes a plurality of animation subsets, and an animation subset may include a plurality of animation elements with different animation effects. To display the animation elements corresponding to the Seek time in the video frame with their respective animation effects, the animation subset corresponding to the Seek time is first selected from the animation set. When that subset is a parallel-type animation subset, the animation elements in the subset and their corresponding animation effects are obtained one by one, and the obtained animation elements are displayed simultaneously, each with its own animation effect, to generate the video preview frame. In this way several animation effects can be shown in one video frame, making the animation richer.
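The sketch below illustrates, under stated assumptions, how such a parallel-type subset could be rendered on Android: every element of the subset is drawn onto the same canvas with its own effect interpolated at the Seek time, so one frame shows several effects at once. The class names (AnimationElementEffect, ParallelSubsetRenderer) and the linear position/alpha interpolation are illustrative assumptions, not the patent's implementation.

```java
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import java.util.List;

// Hypothetical per-element effect: start/end position and alpha over the subset duration.
class AnimationElementEffect {
    final String text;
    final float startX, startY, endX, endY;
    final int startAlpha, endAlpha;
    AnimationElementEffect(String text, float sx, float sy, float ex, float ey, int sa, int ea) {
        this.text = text; startX = sx; startY = sy; endX = ex; endY = ey;
        startAlpha = sa; endAlpha = ea;
    }
}

final class ParallelSubsetRenderer {
    // Render one video preview frame: all elements of the parallel-type subset are drawn
    // simultaneously, each with its own effect interpolated at the Seek time.
    static Bitmap renderParallelSubset(List<AnimationElementEffect> subset,
                                       long subsetStartMs, long subsetEndMs, long seekMs,
                                       int width, int height) {
        long span = Math.max(1L, subsetEndMs - subsetStartMs);
        float t = (seekMs - subsetStartMs) / (float) span;
        t = Math.max(0f, Math.min(1f, t));                 // clamp progress to [0, 1]

        Bitmap frame = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        Canvas canvas = new Canvas(frame);
        canvas.drawColor(Color.BLACK);                     // assumed background

        Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
        paint.setColor(Color.WHITE);
        paint.setTextSize(64f);

        for (AnimationElementEffect e : subset) {
            float x = e.startX + (e.endX - e.startX) * t;  // per-element translation
            float y = e.startY + (e.endY - e.startY) * t;
            int alpha = (int) (e.startAlpha + (e.endAlpha - e.startAlpha) * t);
            paint.setAlpha(alpha);                         // per-element fade/flicker effect
            canvas.drawText(e.text, x, y, paint);
        }
        return frame;                                      // this bitmap is the video preview frame
    }
}
```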
In one possible implementation, the processor is specifically configured to:
after the animation elements in the animation subset and the animation effects of all the animation elements are obtained, the animation elements in the animation subset and the animation effects of all the animation elements are cached in a memory, so that the animation elements corresponding to the Seek time and the animation effects of all the animation elements are read from the memory when the video preview frame is determined.
In the communication terminal, the obtained animation elements and their animation effects are cached in the memory, so that when the video preview frame is determined, the animation elements corresponding to the Seek time and their animation effects are read from the memory and the elements are displayed with those effects. In the scenario where the video is only previewed and the complete video is not yet saved, the data can be read directly from the cache without repeating the steps that obtained it, which saves time and improves execution efficiency.
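A minimal caching sketch for this step, assuming the cache is keyed by the index of the animation subset (the key choice, the ResolvedSubset type and the 64-entry cap are assumptions for illustration):

```java
import android.util.LruCache;
import java.util.List;

// Hypothetical cache entry: the resolved animation elements of a subset and their effects.
class ResolvedSubset {
    final List<String> elements;
    final List<String> effects;   // e.g. "fade-in", "bold", "rotate-45"
    ResolvedSubset(List<String> elements, List<String> effects) {
        this.elements = elements;
        this.effects = effects;
    }
}

final class SubsetCache {
    // Keyed by the index of the animation subset; 64 entries is an arbitrary cap.
    private final LruCache<Integer, ResolvedSubset> cache = new LruCache<>(64);

    ResolvedSubset getOrResolve(int subsetIndex,
                                java.util.function.Supplier<ResolvedSubset> resolver) {
        ResolvedSubset cached = cache.get(subsetIndex);
        if (cached != null) {
            return cached;                           // preview-only path: read straight from memory
        }
        ResolvedSubset resolved = resolver.get();    // iterate elements/effects once
        cache.put(subsetIndex, resolved);
        return resolved;
    }
}
```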
In a second aspect, the present application provides a method for video preview, where the method is applied in a communication terminal, and the method includes:
determining the Seek time corresponding to the video preview frame which needs to be displayed currently;
selecting an animation set containing Seek time from a plurality of animation sets, wherein the animation set is divided according to the animation duration of at least one group of animation elements and the animation effect of the animation elements in the animation duration;
if the animation effect of the selected animation set is a special animation effect, determining an animation subset corresponding to the Seek time from the selected animation set, wherein the special animation effect is that the animation effects of animation elements in two adjacent animation subsets in the animation set are different and/or the animation effects of animation elements in the same animation subset are different;
and determining and displaying the video preview frame according to the animation elements in the animation subset and the animation effect of each animation element.
In one possible implementation, the animation set is partitioned by:
dividing the preset total animation time into a plurality of time periods according to the preset total animation time and the animation time of at least one group of animation elements, wherein each time period corresponds to at least one group of animation elements;
and composing animation subsets included by at least one group of animation elements corresponding to each time period into an animation set.
In a possible implementation manner, after an animation set corresponding to the determined Seek time is selected from a plurality of animation sets, if an animation effect corresponding to the selected animation set is a single animation effect, an animation subset of the Seek time is determined, and animation elements corresponding to the animation subset are displayed;
the single animation effect is that the animation effects of the animation elements in the animation subsets are the same, and the animation effects of the animation elements in the two adjacent animation subsets are the same.
In a possible implementation, after the animation subset corresponding to the Seek time is determined from the selected animation set, and before the video preview frame is determined according to the animation elements in the animation subset and their animation effects, the animation subset corresponding to the Seek time is obtained by iterating over the animation subsets of the animation set;
if the animation subset is a parallel animation subset, iteratively obtaining each animation element and the corresponding animation effect in the animation subset;
and when the video preview frame is determined according to the animation elements in the animation subset and the animation effect of each animation element, the animation elements are displayed simultaneously according to their corresponding animation effects to generate the video preview frame.
In one possible implementation manner, after the animation elements in the animation subset and the animation effects of the animation elements are determined and before the video preview frame is determined, the animation elements in the animation subset and the animation effects of the animation elements are cached in the memory, so that the animation elements corresponding to the Seek time and the animation effects of the animation elements are read from the memory when the video preview frame is determined.
In a third aspect, the present application provides an apparatus for video preview, the apparatus comprising: the device comprises a first determining module, a selecting module, a second determining module and a third determining module; wherein:
the first determining module is used for determining the Seek time corresponding to the video preview frame which needs to be displayed currently;
the selection module is used for selecting an animation set containing Seek time from the animation sets, wherein the animation set is divided according to the animation duration of at least one group of animation elements and the animation effect of the animation elements in the animation duration;
the second determining module is used for determining the animation subsets corresponding to the Seek time from the selected animation set if the animation effect of the selected animation set is a special animation effect, wherein the special animation effect is that the animation effects of the animation elements in two adjacent animation subsets in the animation set are different and/or the animation effects of the animation elements in the same animation subset are different;
and the third determining module is used for determining and displaying the video preview frame according to the animation elements in the animation subset and the animation effect of each animation element.
In a fourth aspect, the present application further provides a computer storage medium, where the computer storage medium stores computer-executable instructions, and the computer-executable instructions, when executed by a processor, implement the method for video preview provided in the embodiments of the present application.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a schematic structural diagram of a mobile terminal provided in the present application;
fig. 2 is a block diagram of a software structure of a mobile terminal according to the present application;
fig. 3 is a schematic view of a user interface of a mobile terminal according to the present application;
fig. 4 is a flowchart of a method for video preview according to an embodiment of the present application;
FIG. 5 is a diagram of a display interface for a video preview of a parallel-type animation set according to an embodiment of the present application;
FIG. 6 is a diagram of a display interface for a video preview of a sequence-type animation set according to an embodiment of the present application;
FIG. 7 is a flowchart of an overall method for video preview according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a communication terminal for video preview according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a video preview device according to an embodiment of the present application.
Detailed Description
In order to make the purpose, technical solutions and advantages of the present application clearer, the technical solutions in the embodiments of the present application will be described below completely and in detail with reference to the accompanying drawings of the embodiments. It is obvious that the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
The technical solutions in the embodiments of the present application will be described in detail and clearly with reference to the accompanying drawings. In the description of the embodiments herein, "/" means "or" unless otherwise specified; for example, A/B may mean A or B. "And/or" in the text merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. In addition, in the description of the embodiments of the present application, "a plurality" means two or more.
In the following, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of embodiments of the application, unless stated otherwise, "plurality" means two or more.
Fig. 1 shows a schematic configuration of a communication terminal 100.
The following describes an embodiment specifically taking the communication terminal 100 as an example. It should be understood that the communication terminal 100 shown in fig. 1 is only an example, and the communication terminal 100 may have more or less components than those shown in fig. 1, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
A block diagram of a hardware configuration of a communication terminal 100 according to an exemplary embodiment is exemplarily shown in fig. 1. As shown in fig. 1, the communication terminal 100 includes: a Radio Frequency (RF) circuit 110, a memory 120, a display unit 130, a camera 140, a sensor 150, an audio circuit 160, a Wireless Fidelity (Wi-Fi) module 170, a processor 180, a bluetooth module 181, and a power supply 190.
The RF circuit 110 may be used for receiving and transmitting signals during information transmission and reception or during a call, and may receive downlink data of a base station and then send the downlink data to the processor 180 for processing; the uplink data may be transmitted to the base station. Typically, the RF circuitry includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
The memory 120 may be used to store software programs and data. The processor 180 executes various functions of the communication terminal 100 and data processing by executing software programs or data stored in the memory 120. The memory 120 may include high speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. The memory 120 stores an operating system that enables the communication terminal 100 to operate. The memory 120 may store an operating system and various application programs, and may also store codes for performing the methods of the embodiments of the present application.
The display unit 130 may be used to receive input numeric or character information and generate signal input related to user settings and function control of the communication terminal 100, and particularly, the display unit 130 may include a touch screen 131 disposed on the front surface of the communication terminal 100 and may collect touch operations of a user thereon or nearby, such as clicking a button, dragging a scroll box, and the like.
The display unit 130 may also be used to display a Graphical User Interface (GUI) of information input by or provided to the user and various menus of the communication terminal 100. Specifically, the display unit 130 may include a display screen 132 disposed on the front surface of the communication terminal 100. The display screen 132 may be configured in the form of a liquid crystal display, a light emitting diode, or the like. The display unit 130 may be used to display various graphical user interfaces in the present application.
The touch screen 131 may cover the display screen 132, or the touch screen 131 and the display screen 132 may be integrated to implement the input and output functions of the communication terminal 100, and after the integration, the touch screen may be referred to as a touch display screen for short. In the present application, the display unit 130 may display the application programs and the corresponding operation steps.
The camera 140 may be used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing elements convert the light signals into electrical signals which are then passed to the processor 180 for conversion into digital image signals.
The communication terminal 100 may further comprise at least one sensor 150, such as an acceleration sensor 151, a distance sensor 152, a fingerprint sensor 153, a temperature sensor 154. The communication terminal 100 may also be configured with other sensors such as a gyroscope, barometer, hygrometer, thermometer, infrared sensor, optical sensor, motion sensor, and the like.
The audio circuitry 160, speaker 161, microphone 162 may provide an audio interface between a user and the communication terminal 100. The audio circuit 160 may transmit the electrical signal converted from the received audio data to the speaker 161, and convert the electrical signal into a sound signal for output by the speaker 161. The communication terminal 100 may also be provided with a volume button for adjusting the volume of the sound signal. On the other hand, the microphone 162 converts the collected sound signal into an electrical signal, converts the electrical signal into audio data after being received by the audio circuit 160, and then outputs the audio data to the RF circuit 110 to be transmitted to, for example, another communication terminal, or outputs the audio data to the memory 120 for further processing. In this application, the microphone 162 may capture the voice of the user.
Wi-Fi belongs to a short-distance wireless transmission technology, and the communication terminal 100 may help a user to send and receive e-mails, browse webpages, access streaming media, and the like through the Wi-Fi module 170, which provides a wireless broadband internet access for the user.
The processor 180 is a control center of the communication terminal 100, connects various parts of the entire communication terminal using various interfaces and lines, and performs various functions of the communication terminal 100 and processes data by running or executing software programs stored in the memory 120 and calling data stored in the memory 120. In some embodiments, processor 180 may include one or more processing units; the processor 180 may also integrate an application processor, which mainly handles operating systems, user interfaces, applications, etc., and a baseband processor, which mainly handles wireless communications. It will be appreciated that the baseband processor described above may not be integrated into the processor 180. In the present application, the processor 180 may run an operating system, an application program, a user interface display, a touch response, and a processing method according to the embodiments of the present application. In addition, the processor 180 is coupled with the input unit 130 and the display unit 140.
And the bluetooth module 181 is configured to perform information interaction with other bluetooth devices having a bluetooth module through a bluetooth protocol. For example, the communication terminal 100 may establish a bluetooth connection with a wearable electronic device (e.g., a smart watch) having a bluetooth module via the bluetooth module 181, so as to perform data interaction.
The communication terminal 100 also includes a power supply 190 (such as a battery) to power the various components. The power supply may be logically connected to the processor 180 through a power management system to manage charging, discharging, power consumption, etc. through the power management system. The communication terminal 100 may also be configured with power buttons for powering the communication terminal on and off, and for locking the screen.
Fig. 2 is a block diagram of a software configuration of the communication terminal 100 according to the embodiment of the present invention.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer from top to bottom.
The application layer may include a series of application packages.
As shown in fig. 2, the application package may include applications such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 2, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide a communication function of the communication terminal 100. Such as management of call status (including on, off, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables the application to display notification information in the status bar, can be used to convey notification-type messages, can disappear automatically after a short dwell, and does not require user interaction. Such as a notification manager used to inform download completion, message alerts, etc. The notification manager may also be a notification that appears in the form of a chart or scroll bar text at the top status bar of the system, such as a notification of a background running application, or a notification that appears on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is given, the communication terminal vibrates, and an indicator light flashes.
The Android Runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing an Android system.
The core library comprises two parts: one part consists of the functional interfaces that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files, and is used for performing functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface managers (surface managers), Media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., OpenGL ES), 2D graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports a variety of commonly used audio, video format playback and recording, and still image files, among others. The media library may support a variety of audio-video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. It contains at least a display driver, a camera driver, an audio driver and a sensor driver.
The following exemplifies the workflow of the software and hardware of the communication terminal 100 in connection with capturing a photographing scene.
When the touch screen 131 receives a touch operation, a corresponding hardware interrupt is issued to the kernel layer. The kernel layer processes the touch operation into a raw input event (including information such as the touch coordinates and the time stamp of the touch operation). The raw input events are stored at the kernel layer. The application framework layer acquires the raw input event from the kernel layer and identifies the control corresponding to the input event. Taking a touch click operation whose corresponding control is the camera application icon as an example, the camera application calls an interface of the application framework layer to start the camera application, which in turn starts the camera driver by calling the kernel layer and captures a still image or a video through the camera 140.
The communication terminal 100 in the embodiment of the present application may be a mobile phone, a tablet computer, a wearable device, a notebook computer, a television, and the like.
Fig. 3 is a schematic diagram for illustrating a user interface on a communication terminal (e.g., communication terminal 100 of fig. 1). In some implementations, a user can open a corresponding application by touching an application icon on the user interface, or can open a corresponding folder by touching a folder icon on the user interface.
In the display interface of fig. 3, a user can edit a short video by touching an application icon on the user interface, such as the "word saying" or "trembling" applications. While editing, the user can clip a certain video segment as a short video or make a short video from scratch. When a short video is produced, a text animation is usually created, the text is displayed in video form, and a display background can be set. In current text-animation video preview and playback, however, the display mode is single: vertical or horizontal superposition is usually adopted, and only a rotation effect may appear for multi-language sentences. This application therefore focuses on complex text animation, so that during preview and playback the animation can have more display effects than simple superposition, such as 3D rotation and flashing. However, to realize such a complex animation effect with current approaches, animations have to be superimposed and multiple views defined during video preview, and the execution efficiency is low.
Therefore, the present application provides a video preview method for improving the execution efficiency under the complex animation effect.
In the method, after the Seek time corresponding to the video preview frame that currently needs to be displayed is determined, the animation set corresponding to the Seek time is selected from a plurality of animation sets, and the animation subset corresponding to the Seek time is determined from the selected animation set. The video preview frame is determined according to the animation elements in that animation subset and their animation effects, and the video preview frame is displayed to generate the preview video. The animation elements are obtained and drawn onto a single canvas, and the animation is rendered frame by frame without defining a plurality of views, thereby improving the execution efficiency.
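To illustrate the "single canvas instead of multiple views" idea, here is a hedged sketch of a custom Android view (PreviewView and its Element type are illustrative, not the patent's classes): all animation elements of the current Seek time are drawn in one onDraw pass, so no per-element views need to be defined.

```java
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.view.View;
import java.util.ArrayList;
import java.util.List;

// Hypothetical single-view renderer: one View draws every animation element
// for the current Seek time, instead of stacking one view per element.
public class PreviewView extends View {
    // Illustrative element description: text plus its position for the current frame.
    public static class Element {
        public final String text;
        public final float x, y;
        public Element(String text, float x, float y) { this.text = text; this.x = x; this.y = y; }
    }

    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    private final List<Element> currentFrameElements = new ArrayList<>();

    public PreviewView(Context context) {
        super(context);
        paint.setTextSize(64f);
    }

    // Called whenever the Seek time changes: replace the element list and redraw one frame.
    public void showFrame(List<Element> elements) {
        currentFrameElements.clear();
        currentFrameElements.addAll(elements);
        invalidate();   // triggers a single onDraw pass
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        for (Element e : currentFrameElements) {
            canvas.drawText(e.text, e.x, e.y, paint);   // all elements share one canvas
        }
    }
}
```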
To further illustrate the technical solutions provided by the embodiments of the present application, the following detailed description is made with reference to the accompanying drawings and the detailed description. Although the embodiments of the present application provide method steps as shown in the following embodiments or figures, more or fewer steps may be included in the method based on conventional or non-inventive efforts. In steps where no necessary causal relationship exists logically, the order of execution of the steps is not limited to that provided by the embodiments of the present application.
In the process of video editing, the method mainly comprises the steps of making, previewing, storing and playing and the like. The video production process is that a user selects a background, animation elements displayed in a video, animation effects of the animation elements, appearance modes of background pictures and the like according to the needs of the user; the video previewing process is to generate video frames according to various parameters set in the manufacturing process and continuously display the generated video frames; the storage and playing are to store various parameters set in the production process in a memory after confirming that the preview is finished and the video content does not need to be modified, and to play the video according to the stored parameters in the playing process.
As shown in fig. 4, a flowchart of a method for video preview provided in an embodiment of the present application includes the following steps:
step 400, determining the Seek time corresponding to the video preview frame which needs to be displayed currently.
When the video is previewed, it can be previewed sequentially, or a video preview frame corresponding to the time point selected by the user's progress-bar adjustment can be displayed in response to the adjustment instruction. The Seek time is therefore either each successive time point in the case of sequential preview, or the time point corresponding to the position to which the user adjusted the progress bar.
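As a small sketch of how the Seek time could be obtained in both cases (the frame-rate parameter and the progress mapping are assumptions for illustration, not values fixed by the patent):

```java
// Hypothetical helpers for obtaining the Seek time.
final class SeekTimeUtil {
    // Sequential preview: the Seek time advances frame by frame at an assumed frame rate.
    static long seekForFrame(int frameIndex, int framesPerSecond) {
        return frameIndex * 1000L / framesPerSecond;
    }

    // Progress-bar adjustment: map the bar position to a time point within the total duration.
    static long seekForProgress(int progress, int maxProgress, long totalDurationMs) {
        return totalDurationMs * progress / maxProgress;
    }
}
```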
After determining the Seek time, displaying a video preview frame of the Seek time on the display.
In the application, the video preview frame is generated according to parameters set by a user during video production; wherein, the parameters set by the user include: animation elements such as Chinese characters, letters, numbers and the like, animation effects of all the animation elements, duration of total animation and the like.
Therefore, when determining the video preview frame, it is necessary to determine animation elements corresponding to the Seek time and animation effects of the animation elements.
Step 410, selecting an animation set containing Seek time from a plurality of animation sets, wherein the animation set is divided according to animation duration of at least one group of animation elements and animation effect of the animation elements in the animation duration.
In the present application, an animation set is divided by:
dividing the preset total animation time into a plurality of time periods according to the preset total animation time and the animation time of at least one group of animation elements, wherein each time period corresponds to at least one group of animation elements;
and composing animation subsets included by at least one group of animation elements corresponding to each time period into an animation set.
In the application, the total animation time length is determined according to the time length or the number of pictures corresponding to the video clip selected by the user;
specifically, the duration corresponding to the selected video clip is taken as the total animation duration; or determining the total animation time length according to the number of the pictures, the display time of each picture and the switching interval between each picture.
For example, if the video clip selected by the user is 1 minute 20 seconds long, the total animation duration is 1 minute 20 seconds. If the user determines the total animation duration from the number of pictures, for example 7 pictures with a preset display time of 3 seconds per picture and a switching interval of 1 second, the total animation duration is 28 seconds.
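One way to read the picture-based example above is that each picture contributes its display time plus one switching interval, i.e. 7 × (3 s + 1 s) = 28 s; a one-line sketch of that reading (the method name is illustrative):

```java
final class TotalDurationUtil {
    // Total animation duration when built from pictures: each picture gets its display
    // time plus one switching interval (7 * (3 + 1) = 28 seconds in the example above).
    static long totalDurationMs(int pictureCount, long displayMsPerPicture, long intervalMs) {
        return pictureCount * (displayMsPerPicture + intervalMs);
    }
}
```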
In the application, a user can set at least one group of animation elements by means of voice input and/or handwriting input; assuming that a group of animation elements is a passage of text, each character is an animation element.
Therefore, when a user inputs several passages of text, several groups of animation elements exist correspondingly; for example, a user can input the first passage by voice and the second passage by handwriting.
After the total animation duration and the animation elements set by the user are determined, it is required to ensure that the animation elements set by the user are displayed in the total animation duration. It is therefore necessary to assign a display time duration to each set of animation elements and to divide the total animation time duration into a plurality of time segments, each time segment corresponding to at least one set of animation elements.
When the total animation duration is divided into a plurality of time periods, it can be divided according to the number of animation elements in each group, so the lengths of the time periods may differ. It can also be divided according to the animation speed level that the user sets for each group of animation elements together with the total animation duration: for example, if the user sets the animation speed of the first group to fast and that of the second group to slow, and the two groups contain the same number of animation elements, the animation duration of the first group is shorter than that of the second group.
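A hedged sketch of the speed-level variant described above (the numeric speed factors are invented for illustration): a group's share of the total duration is proportional to its element count divided by its speed factor, so a "fast" group with the same number of elements receives a shorter time period than a "slow" one.

```java
final class SpeedWeightedSplit {
    // Illustrative speed factors: higher value = faster animation = shorter time period.
    static final double FAST = 2.0, NORMAL = 1.0, SLOW = 0.5;

    // Returns per-group durations in ms that sum to totalDurationMs.
    static long[] split(long totalDurationMs, int[] elementCounts, double[] speedFactors) {
        double[] weights = new double[elementCounts.length];
        double weightSum = 0;
        for (int i = 0; i < elementCounts.length; i++) {
            weights[i] = elementCounts[i] / speedFactors[i];
            weightSum += weights[i];
        }
        long[] durations = new long[elementCounts.length];
        long assigned = 0;
        for (int i = 0; i < elementCounts.length; i++) {
            durations[i] = (i == elementCounts.length - 1)
                    ? totalDurationMs - assigned                              // absorb rounding
                    : Math.round(totalDurationMs * weights[i] / weightSum);
            assigned += durations[i];
        }
        return durations;
    }
}
```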
An animation set may contain a plurality of animation subsets, and an animation subset is a single character, a phrase, or several consecutive characters within a passage of text. The animation subsets are determined by identifying phrases in the animation elements corresponding to the animation set, or according to the user's settings;
in the present application, each time period is a duration of one animation set, and each time period contains a subset of animations in the animation set.
In step 420, if the animation effect of the selected animation set is a special animation effect, determining an animation subset corresponding to the Seek time from the selected animation set, where the special animation effect is that animation effects of animation elements in two adjacent animation subsets in the animation set are different and/or animation effects between animation elements in the same animation subset are different.
In the application, because a plurality of animation sets exist and are arranged in chronological order, the animation elements in the animation sets are read and displayed in that order. Therefore, when selecting an animation set, it is determined whether the animation set contains the Seek time; if it does, the animation effect of the animation set is determined, and after that the animation subset is read from the animation set to determine the animation elements in the subset and their display effects.
The animation effect of the animation set is a single animation effect, the single animation effect is that the animation effects of the animation elements in the animation subsets are the same, and the animation effects of the animation elements in two adjacent animation subsets are the same; or the animation effect of the animation set is a special animation effect, and the special animation effect is that the animation effects of the animation elements in two adjacent animation subsets in the animation set are different and/or the animation effects of the animation elements in the same animation subset are different.
In the application, if the animation effect of the animation set is determined to be a single animation effect, the animation subset corresponding to the Seek time is determined in the animation set, the animation subset in the animation set is executed, the animation elements of the animation subset are displayed, and the animation effect of the animation elements is a single animation effect such as vertical superposition, transverse superposition and the like;
If the animation effect of the animation set is determined to be a special animation effect, then, because a special animation effect means that the animation effects of two adjacent animation subsets differ and/or the animation effects of the animation elements within the same animation subset differ, such animation sets are divided into parallel-type and sequence-type animation sets. In certain special cases the animation subsets of a sequence-type animation set may be parallel-type animation subsets, while the animation subsets in a parallel-type animation set are always parallel-type animation subsets;
The animation elements in a parallel-type animation set start and stop at the same time, and the same animation subset contains several animation effects. For example, suppose the animation elements in a parallel-type animation set are the three characters of "大家好" ("Hello everyone"), the animation duration is 3 seconds, each second is one video frame, and each video frame is an animation subset, so the three characters are the animation elements of each frame. In the first frame, "大" appears from the upper-left corner of the screen and moves toward the center; "家" is located at the center of the screen; "好" appears from the lower-right corner and moves toward the center. "大" moves while flickering between large and small sizes, and "家" and "好" move in bold type. FIG. 5 shows the video preview effect of this parallel-type animation set.
The animation elements in a sequence-type animation set start in sequence, and the animation subsets in a sequence-type animation set include both sequence-type and parallel-type animation subsets: the animation elements in a sequence-type animation subset use the same animation effect, while the animation elements in a parallel-type animation subset use different animation effects. Suppose the animation elements in the animation set are the characters of "欢迎大家" ("Welcome, everyone"): the first subset is "欢", the second subset is "迎", and the third subset is "大家"; the first subset is tilted 45 degrees to the left, the second subset is tilted 45 degrees to the right, and "大家" in the third subset is shown in bold and offset up and down. FIG. 6 shows the video preview effect of this sequence-type animation set.
Therefore, after the animation effect of the animation set is determined to be a special animation effect, the animation subsets of the animation set are obtained iteratively and each is checked to determine whether it is a parallel-type or a sequence-type animation subset. If it is a parallel-type animation subset, the animation elements corresponding to the Seek time in the subset and the animation effect of each element are obtained iteratively so that the video frame can be displayed; if it is a sequence-type animation subset, the animation elements in the subset are obtained and the animation effect of any one element of the subset is determined, since they all share it.
And step 430, determining a video preview frame according to the animation elements in the animation subset and the animation effect of each animation element, and displaying.
In the application, the display effect of each animation element is set by a user, so that after the animation elements are determined, the animation effect of the animation elements can be determined, and the animation elements and the animation effect corresponding to the animation elements are determined.
As shown in fig. 7, a flowchart of an overall method for video preview provided in an embodiment of the present application specifically includes the following steps:
step 700, determining the Seek time corresponding to the video preview frame which needs to be displayed currently;
step 701, acquiring animation sets corresponding to Seek time from a plurality of animation sets;
step 702, determining the animation effect of the animation set; if the animation effect of the animation set is a single animation effect, executing step 703, and if it is a special animation effect, executing step 704;
step 703, acquiring an animation subset corresponding to the Seek time in the animation set, and displaying animation elements in the animation subset;
step 704, determining whether the special animation effect is the animation effect of a parallel-type animation set or of a sequence-type animation set; if it is the sequence type, executing step 705, and if it is the parallel type, executing step 707;
step 705, iteratively obtaining an animation subset in the animation set, and determining an animation subset corresponding to the Seek time;
step 706, determining and displaying a video preview frame according to the animation elements in the animation subset and the display effect of the animation elements;
step 707, iteratively acquiring an animation subset in the animation set, and determining an animation subset corresponding to the Seek time;
step 708, acquiring the display effect of each animation element in the animation subset, determining a video preview frame according to the animation elements and their display effects, and displaying it.
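Putting the flowchart steps together, the following is a condensed, non-authoritative dispatch sketch; the interfaces, enum values and method names are assumptions introduced for illustration and do not reproduce the patent's implementation.

```java
import java.util.Collections;
import java.util.List;

final class PreviewFrameDispatcher {
    enum SetEffect { SINGLE, SPECIAL }
    enum SubsetType { SEQUENCE, PARALLEL }

    // Minimal stand-ins for the data resolved during video production (illustrative only).
    interface AnimSubset {
        boolean containsSeek(long seekMs);
        SubsetType type();
        List<String> elements();
        List<String> effects();      // one effect per element (parallel-type subsets)
        String sharedEffect();       // single effect shared by all elements (sequence-type subsets)
    }
    interface AnimSet {
        boolean containsSeek(long seekMs);
        SetEffect effect();
        List<AnimSubset> subsets();
    }
    interface FrameSink { void show(List<String> elements, List<String> effects); }

    // Condensed form of flowchart steps 700-708: select the animation set containing the
    // Seek time, branch on its effect type, locate the subset, and hand the elements and
    // their effects to the display.
    static void dispatch(long seekMs, List<AnimSet> sets, FrameSink sink) {
        for (AnimSet set : sets) {
            if (!set.containsSeek(seekMs)) continue;                  // steps 700-701
            for (AnimSubset subset : set.subsets()) {                 // steps 703/705/707
                if (!subset.containsSeek(seekMs)) continue;
                if (set.effect() == SetEffect.SINGLE
                        || subset.type() == SubsetType.SEQUENCE) {    // steps 703 and 705-706
                    List<String> same = Collections.nCopies(
                            subset.elements().size(), subset.sharedEffect());
                    sink.show(subset.elements(), same);
                } else {                                              // steps 707-708
                    sink.show(subset.elements(), subset.effects());
                }
                return;                                               // one preview frame per call
            }
            return;
        }
    }
}
```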
It should be noted that, in the present application, for sequential preview, the animation in the video preview frames already previewed before the Seek time needs to be positioned at its completed state; similarly, when the user adjusts the preview time point, the animation elements before the preview time point are positioned at their completed state to prevent the animation from deforming.
Based on the same inventive concept, the embodiment of the present application further provides a communication terminal for video preview, and as the communication terminal corresponds to the communication terminal corresponding to the video preview method in the embodiment of the present application, and the principle of the communication terminal for solving the problem is similar to the principle of the method, the implementation of the communication terminal can refer to the implementation of the method, and repeated details are not repeated.
As shown in fig. 8, a communication terminal 800 for video preview according to an embodiment of the present invention, the communication terminal 800 includes: a processor 801, a memory 802, and a display 803; wherein:
the memory 802 is used for storing animation effects of animation sets, animation subsets and animation elements;
the processor 801 is configured to determine a Seek time corresponding to a video preview frame that needs to be displayed currently; selecting an animation set containing Seek time from a plurality of animation sets, wherein the animation set is divided according to the animation duration of at least one group of animation elements and the animation effect of the animation elements in the animation duration; if the animation effect of the selected animation set is a special animation effect, determining an animation subset corresponding to the Seek time from the selected animation set, wherein the special animation effect is that the animation effects of animation elements in two adjacent animation subsets in the animation set are different and/or the animation effects of animation elements in the same animation subset are different; determining a video preview frame according to the animation elements in the animation subset and the animation effect of each animation element, and informing the display 803 to display the video preview frame;
the display 803 is used to display a video preview frame.
In one possible implementation, the processor 801 divides the animation set in the following manner (a minimal sketch of this division follows):
dividing the preset total animation time into a plurality of time periods according to the preset total animation time and the animation duration of at least one group of animation elements, wherein each time period corresponds to at least one group of animation elements;
and composing the animation subsets formed by the at least one group of animation elements corresponding to each time period into an animation set.
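A minimal sketch of this division, assuming each group of animation elements claims one consecutive time period of the preset total duration; AnimationSetDivider, ElementGroup and Period are illustrative names.

```java
import java.util.ArrayList;
import java.util.List;

// AnimationSetDivider, ElementGroup and Period are illustrative assumptions.
final class AnimationSetDivider {

    record ElementGroup(String name, long durationMs) {}

    record Period(long startMs, long endMs, ElementGroup group) {}

    /** Cuts the preset total duration into consecutive periods, one per element group. */
    static List<Period> divide(long totalDurationMs, List<ElementGroup> groups) {
        List<Period> periods = new ArrayList<>();
        long cursor = 0;
        for (ElementGroup g : groups) {
            // Each group claims the next period; the last period is clipped to the total duration.
            long end = Math.min(cursor + g.durationMs(), totalDurationMs);
            periods.add(new Period(cursor, end, g));
            cursor = end;
        }
        return periods;   // each period's group(s) later form one animation subset of the set
    }
}
```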
In one possible implementation, the processor 801 is further configured to:
after an animation set corresponding to the determined Seek time is selected from a plurality of animation sets, if the animation effect corresponding to the selected animation set is a single animation effect, determining an animation subset of the Seek time, and displaying animation elements corresponding to the animation subset;
the single animation effect is that the animation effects of the animation elements in the animation subsets are the same, and the animation effects of the animation elements in the two adjacent animation subsets are the same.
In one possible implementation, the processor 801 is specifically configured to:
after determining the animation subset corresponding to the Seek time from the selected animation set, and before determining the video preview frame according to the animation elements in the animation subset and the animation effects of the animation elements, iteratively obtain, among the animation subsets of the animation set, the animation subset corresponding to the Seek time;
if the animation subset is a parallel animation subset, iteratively obtaining each animation element and the corresponding animation effect in the animation subset;
and simultaneously displaying the animation elements according to the animation effect corresponding to the animation elements in the animation subset to generate a video preview frame.
In one possible implementation, the processor 801 is specifically configured to:
after the animation elements in the animation subset and the animation effects of the animation elements are obtained, cache the animation elements and their animation effects in a memory, so that when the video preview frame is determined, the animation elements corresponding to the Seek time and the animation effects of the animation elements can be read from the memory.
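The sketch below illustrates such an in-memory cache, keyed here by the subset's start time; SubsetCache, ElementEffect and the choice of key are assumptions made for illustration, not details given in the patent.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// SubsetCache, ElementEffect and the subset-start-time key are illustrative assumptions.
final class SubsetCache {

    record ElementEffect(String element, String effect) {}

    private final Map<Long, List<ElementEffect>> bySubsetStart = new ConcurrentHashMap<>();

    /** Stores the resolved elements and effects of one animation subset. */
    void put(long subsetStartMs, List<ElementEffect> resolved) {
        bySubsetStart.put(subsetStartMs, List.copyOf(resolved));
    }

    /** Returns the cached elements and effects, or null if this subset has not been resolved yet. */
    List<ElementEffect> get(long subsetStartMs) {
        return bySubsetStart.get(subsetStartMs);
    }
}
```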
As shown in fig. 9, an apparatus 900 for video preview according to an embodiment of the present invention includes a first determining module 901, a selecting module 902, a second determining module 903, and a third determining module 904; wherein:
the first determining module 901 is configured to determine a Seek time corresponding to a video preview frame that needs to be displayed currently;
a selecting module 902, configured to select an animation set including a Seek time from a plurality of animation sets, where the animation set is divided according to animation durations of at least one group of animation elements and animation effects of the animation elements within the animation durations;
the second determining module 903 is configured to determine, if the animation effect of the selected animation set is a special animation effect, an animation subset corresponding to the Seek time from the selected animation set, where the special animation effect is that animation effects of animation elements in two adjacent animation subsets in the animation set are different and/or animation effects of animation elements in the same animation subset are different;
the third determining module 904 is configured to determine and display the video preview frame according to the animation elements in the animation subset and the animation effects of the animation elements.
In one possible implementation, the selecting module 902 divides the animation set by:
dividing the preset total animation time into a plurality of time periods according to the preset total animation time and the animation time of at least one group of animation elements, wherein each time period corresponds to at least one group of animation elements;
and composing the animation subsets formed by the at least one group of animation elements corresponding to each time period into an animation set.
In a possible implementation manner, the second determining module 903 is further configured to determine an animation subset of the Seek time and display an animation element corresponding to the animation subset if the animation effect corresponding to the selected animation set is a single animation effect;
the single animation effect is that the animation effects of the animation elements in the animation subsets are the same, and the animation effects of the animation elements in the two adjacent animation subsets are the same.
In a possible implementation manner, the second determining module 903 is further configured to iteratively obtain, in an animation subset of the animation set, an animation subset corresponding to the Seek time;
if the animation subset is a parallel animation subset, iteratively obtaining each animation element and the corresponding animation effect in the animation subset;
and simultaneously displaying the animation elements according to the animation effect corresponding to the animation elements in the animation subset to generate a video preview frame.
In a possible implementation manner, the third determining module 904 is further configured to cache the animation elements in the animation subset and the animation effects of the animation elements in the memory, so that when the video preview frame is determined, the animation elements corresponding to the Seek time and the animation effects of the animation elements are read from the memory.
An embodiment of the present application further provides a computer-readable non-volatile storage medium including program code which, when run on a computing terminal, causes the computing terminal to execute the steps of the video preview method of the present application.
The present application is described above with reference to block diagrams and/or flowchart illustrations of methods, apparatus (systems) and/or computer program products according to embodiments of the application. It will be understood that one block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, and/or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer and/or other programmable data processing apparatus, create means for implementing the functions/acts specified in the block diagrams and/or flowchart block or blocks.
Accordingly, the subject application may also be embodied in hardware and/or in software (including firmware, resident software, micro-code, etc.). Furthermore, the present application may take the form of a computer program product on a computer-usable or computer-readable storage medium having computer-usable or computer-readable program code embodied in the medium for use by or in connection with an instruction execution system. In the context of this application, a computer-usable or computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or communications terminal.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A communication terminal for video preview, the communication terminal comprising: a memory, a display and a processor; wherein:
the memory is used for storing animation effects of the animation set, the animation subset and the animation elements;
the processor is used for determining the Seek time corresponding to the video preview frame which needs to be displayed currently; selecting an animation set containing the Seek time from a plurality of animation sets, wherein the animation set is divided according to the animation duration of at least one group of animation elements and the animation effect of the animation elements in the animation duration; if the animation effect of the selected animation set is a special animation effect, determining an animation subset corresponding to the Seek time from the selected animation set, wherein the special animation effect is that the animation effects of animation elements in two adjacent animation subsets in the animation set are different and/or the animation effects of animation elements in the same animation subset are different; determining a video preview frame according to the animation elements in the animation subset and the animation effect of each animation element, and informing the display to display;
the display is used for displaying the video preview frame.
2. The communication terminal of claim 1, wherein the processor divides the animation set by:
dividing the preset total animation time into a plurality of time periods according to the preset total animation time and the animation time of at least one group of animation elements, wherein each time period corresponds to at least one group of animation elements;
and composing the animation subsets formed by the at least one group of animation elements corresponding to each time period into an animation set.
3. The communication terminal of claim 1, wherein the processor is further configured to:
after an animation set corresponding to the determined Seek time is selected from a plurality of animation sets, if the animation effect corresponding to the selected animation set is a single animation effect, determining an animation subset of the Seek time, and displaying animation elements corresponding to the animation subset;
the single animation effect is that the animation effects of the animation elements in the animation subsets are the same, and the animation effects of the animation elements in the two adjacent animation subsets are the same.
4. The communication terminal of claim 1, wherein the processor is specifically configured to:
after determining the animation subset corresponding to the Seek time from the selected animation set, iterating to obtain the animation subset corresponding to the Seek time in the animation subset of the animation set before determining the video preview frame according to the animation elements in the animation subset and the animation effect of each animation element;
if the animation subset is a parallel animation subset, iteratively obtaining each animation element and the corresponding animation effect in the animation subset;
and simultaneously displaying the animation elements according to the animation effect corresponding to the animation elements in the animation subset, and generating the video preview frame.
5. The communication terminal of claim 1, wherein the processor is further configured to:
and caching the obtained animation elements in the animation subset and the animation effects of the animation elements in a memory, so that the animation elements corresponding to the Seek time and the animation effects of the animation elements are read from the memory when a video preview frame is determined.
6. A method for video preview, applied to a communication terminal, the method comprising:
determining the Seek time corresponding to the video preview frame which needs to be displayed currently;
selecting an animation set containing the Seek time from a plurality of animation sets, wherein the animation set is divided according to the animation duration of at least one group of animation elements and the animation effect of the animation elements in the animation duration;
if the animation effect of the selected animation set is a special animation effect, determining an animation subset corresponding to the Seek time from the selected animation set, wherein the special animation effect is that the animation effects of animation elements in two adjacent animation subsets in the animation set are different and/or the animation effects of animation elements in the same animation subset are different;
and determining and displaying the video preview frame according to the animation elements in the animation subset and the animation effect of each animation element.
7. The method of claim 6, wherein the animation set is divided by:
dividing the preset total animation time into a plurality of time periods according to the preset total animation time and the animation time of at least one group of animation elements, wherein each time period corresponds to at least one group of animation elements;
and composing the animation subsets formed by the at least one group of animation elements corresponding to each time period into an animation set.
8. The method of claim 6, wherein after selecting the animation set corresponding to the determined Seek time from the plurality of animation sets, further comprising:
if the animation effect corresponding to the selected animation set is a single animation effect, determining an animation subset of the Seek time, and displaying an animation element corresponding to the animation subset;
the single animation effect is that the animation effects of the animation elements in the animation subsets are the same, and the animation effects of the animation elements in the two adjacent animation subsets are the same.
9. The method of claim 6, wherein after determining the subset of animations corresponding to the Seek time from the selected animation set, before determining the video preview frame according to the animation elements in the subset of animations and the animation effect of each animation element, further comprising:
in the animation subset of the animation set, iteratively acquiring the animation subset corresponding to the Seek time;
if the animation subset is a parallel animation subset, iteratively obtaining each animation element and the corresponding animation effect in the animation subset;
determining a video preview frame according to the animation elements in the animation subset and the animation effect of each animation element, comprising:
and simultaneously displaying the animation elements according to the animation effect corresponding to the animation elements in the animation subset, and generating the video preview frame.
10. The method of claim 9, wherein the determining a video preview frame according to the animation elements in the animation subset and the animation effect of each animation element comprises:
and caching the animation elements in the animation subset and the animation effects of all the animation elements in a memory so as to read the animation elements corresponding to the Seek time and the animation effects of all the animation elements from the memory when the video preview frame is determined.
CN202010031801.XA 2020-01-13 2020-01-13 Communication terminal and video previewing method Active CN111182361B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010031801.XA CN111182361B (en) 2020-01-13 2020-01-13 Communication terminal and video previewing method

Publications (2)

Publication Number Publication Date
CN111182361A true CN111182361A (en) 2020-05-19
CN111182361B CN111182361B (en) 2022-06-17

Family

ID=70652737

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010031801.XA Active CN111182361B (en) 2020-01-13 2020-01-13 Communication terminal and video previewing method

Country Status (1)

Country Link
CN (1) CN111182361B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030095135A1 (en) * 2001-05-02 2003-05-22 Kaasila Sampo J. Methods, systems, and programming for computer display of images, text, and/or digital content
US7830384B1 (en) * 2005-04-27 2010-11-09 Image Metrics Limited Animating graphical objects using input video
US20150318020A1 (en) * 2014-05-02 2015-11-05 FreshTake Media, Inc. Interactive real-time video editor and recorder
US10083537B1 (en) * 2016-02-04 2018-09-25 Gopro, Inc. Systems and methods for adding a moving visual element to a video
WO2018174945A1 (en) * 2017-03-22 2018-09-27 Google Llc Caller preview data and call messages based on caller preview data
CN107943964A (en) * 2017-11-27 2018-04-20 腾讯音乐娱乐科技(深圳)有限公司 Lyric display method, device and computer-readable recording medium
CN108337547A (en) * 2017-11-27 2018-07-27 腾讯科技(深圳)有限公司 A kind of word cartoon implementing method, device, terminal and storage medium
CN110662090A (en) * 2018-06-29 2020-01-07 腾讯科技(深圳)有限公司 Video processing method and system
CN109788335A (en) * 2019-03-06 2019-05-21 珠海天燕科技有限公司 Video caption generation method and device
CN109803180A (en) * 2019-03-08 2019-05-24 腾讯科技(深圳)有限公司 Video preview drawing generating method, device, computer equipment and storage medium
CN110213638A (en) * 2019-06-05 2019-09-06 北京达佳互联信息技术有限公司 Cartoon display method, device, terminal and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU XU (刘旭): "Research on the Application of Visual Effects Preview Software in the Field of Motion Graphics" (视效预览软件在动态视觉领域的应用研究), 《新媒体研究》 *

Similar Documents

Publication Publication Date Title
CN107087101B (en) Apparatus and method for providing dynamic panorama function
CN111240546B (en) Split screen processing method and communication terminal
CN111367456A (en) Communication terminal and display method in multi-window mode
CN111225108A (en) Communication terminal and card display method of negative screen interface
CN112114733B (en) Screen capturing and recording method, mobile terminal and computer storage medium
CN111597000A (en) Small window management method and terminal
CN111597004B (en) Terminal and user interface display method in application
CN111857531A (en) Mobile terminal and file display method thereof
CN111176766A (en) Communication terminal and component display method
CN113709026B (en) Method, device, storage medium and program product for processing instant communication message
CN114374813A (en) Multimedia resource management method, recorder and server
CN111031377B (en) Mobile terminal and video production method
CN113055585B (en) Thumbnail display method of shooting interface and mobile terminal
CN111182361B (en) Communication terminal and video previewing method
CN111324255B (en) Application processing method based on double-screen terminal and communication terminal
CN113079332B (en) Mobile terminal and screen recording method thereof
CN111163220B (en) Display method, communication terminal and computer storage medium
CN114979533A (en) Video recording method, device and terminal
CN114594894A (en) Interface element marking method, terminal device and storage medium
CN113507614A (en) Video playing progress adjusting method and display equipment
CN113157092A (en) Visualization method, terminal device and storage medium
CN113760164A (en) Display device and response method of control operation thereof
CN112328135A (en) Mobile terminal and application interface display method thereof
CN112114883A (en) Terminal awakening method, terminal and computer storage medium
CN111381801B (en) Audio playing method based on double-screen terminal and communication terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 266071 Shandong city of Qingdao province Jiangxi City Road No. 11

Patentee after: Qingdao Hisense Mobile Communication Technology Co.,Ltd.

Address before: 266071 Shandong city of Qingdao province Jiangxi City Road No. 11

Patentee before: HISENSE MOBILE COMMUNICATIONS TECHNOLOGY Co.,Ltd.
