CN113316016A - Video processing method and device, storage medium and mobile terminal - Google Patents


Info

Publication number
CN113316016A
CN113316016A
Authority
CN
China
Prior art keywords
video
image
dynamic
video image
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110592685.3A
Other languages
Chinese (zh)
Inventor
张永兴
许玉新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TCL Communication Ningbo Ltd
Original Assignee
TCL Communication Ningbo Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TCL Communication Ningbo Ltd
Priority to CN202110592685.3A
Publication of CN113316016A
Legal status: Pending


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/432Content retrieval operation from a local storage medium, e.g. hard-disk
    • H04N21/4325Content retrieval operation from a local storage medium, e.g. hard-disk by playing back content from the storage medium
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

An embodiment of the present application provides a video processing method and apparatus, a storage medium, and a mobile terminal. The video processing method comprises: sequentially acquiring multiple frames of target video images during video shooting; and, when shooting ends, synthesizing the target video images into a dynamic image that serves as the video's preview image. Because the preview image is a dynamic image formed from the shot video itself, a user looking for a particular video can judge its approximate content simply by browsing the dynamic image, without clicking the preview to play the video, and can therefore locate the desired video in a short time. This shortens video query time and improves video query efficiency.

Description

Video processing method and device, storage medium and mobile terminal
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a video processing method and apparatus, a storage medium, and a mobile terminal.
Background
With the development of electronic and computer technology, electronic terminals that shoot and process videos and pictures have become widespread, and people can shoot and play videos anytime and anywhere.
When a video is shot, a preview image (for example, the first frame captured from the video) is usually saved in the album for the user to browse, and clicking the preview plays the video. When many videos have been shot, a user who wants to find a particular one can only roughly guess each video's content from its preview image, so the user typically has to click and play several videos before finding the right one. Video queries therefore take a long time and are inefficient.
Disclosure of Invention
The embodiment of the application provides a video processing method, a video processing device, a storage medium and a mobile terminal, which can shorten video query time and improve video query efficiency.
The embodiment of the application provides a video processing method, which comprises the following steps:
sequentially acquiring multi-frame target video images in the video shooting process;
and when the video shooting is finished, synthesizing the multi-frame target video images into a dynamic image, and taking the dynamic image as a preview image of the video.
Wherein sequentially acquiring the multiple frames of target video images comprises:
each time a frame of video image is shot, judging whether the video image satisfies a preset condition;
and if so, taking the video image as the target video image.
Wherein, the judging whether the video image meets the preset condition comprises:
detecting whether the video image has preset image elements;
if the video image has preset image elements, judging whether the image elements in the video image are the same as the image elements in the newly acquired target video image, if so, determining that the video image does not meet preset conditions, and if not, determining that the video image meets the preset conditions;
and if the video image does not have the preset image elements, determining that the video image does not meet the preset condition.
Wherein synthesizing the multiple frames of target video images into the dynamic image comprises:
sequencing the multi-frame target video images according to the acquisition time sequence to form an image sequence;
and converting the image sequence into the Graphics Interchange Format (GIF) to form the dynamic image.
Wherein the method further comprises:
establishing a jump link of the dynamic graph and a video file corresponding to the video;
and responding to the touch operation of the dynamic graph, jumping to the video file according to the jump link, and playing the video corresponding to the video file.
An embodiment of the present application further provides a video processing apparatus, where the apparatus includes:
the acquisition module is used for sequentially acquiring multi-frame target video images in the video shooting process;
and the synthesis module is used for synthesizing the multi-frame target video images into a dynamic image when the video shooting is finished, and taking the dynamic image as a preview image of the video.
Wherein the obtaining module is specifically configured to:
judging whether the video image meets a preset condition or not when shooting one frame of video image;
and if so, taking the video image as the target video image.
Wherein the synthesis module is specifically configured to:
sequencing the multi-frame target video images according to the acquisition time sequence to form an image sequence;
and converting the image sequence into the Graphics Interchange Format (GIF) to form the dynamic image.
The embodiment of the application also provides a computer readable storage medium, wherein a plurality of instructions are stored in the storage medium, and the instructions are suitable for being loaded by a processor to execute any one of the video processing methods.
The embodiment of the application further provides a mobile terminal, which comprises a processor and a memory, wherein the processor is electrically connected with the memory, the memory is used for storing instructions and data, and the processor is used for executing the steps in any one of the video processing methods.
An embodiment of the present application provides a video processing method and apparatus, a storage medium, and a mobile terminal: multiple frames of target video images are sequentially acquired during video shooting and, when shooting ends, are synthesized into a dynamic image used as the video's preview image. Because the preview image is a dynamic image formed from the shot video itself, a user looking for a particular video can judge its approximate content simply by browsing the dynamic image, without clicking the preview to play the video, and can therefore locate the desired video in a short time. This shortens video query time and improves video query efficiency.
Drawings
The technical solution and other advantages of the present application will become apparent from the detailed description of the embodiments of the present application with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of a video processing method according to an embodiment of the present application.
Fig. 2 is a schematic view of a video processing scene according to an embodiment of the present application.
Fig. 3 is another schematic flow chart of a video processing method according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present application.
Fig. 5 is a schematic structural diagram of another video processing apparatus according to an embodiment of the present disclosure.
Fig. 6 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application.
Fig. 7 is another schematic structural diagram of a mobile terminal according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application provides a video processing method, a video processing device, a storage medium and a mobile terminal.
As shown in fig. 1, fig. 1 is a schematic flowchart of a video processing method provided in an embodiment of the present application. The method is applied to a mobile terminal, which may be a smartphone, an iPad, or another device with a shooting function. The specific flow may be as follows:
s101, in the video shooting process, multiple frames of target video images are sequentially acquired.
A frame of video image is a single image in the video. During shooting, the current scene can be detected with an AI (Artificial Intelligence) scene detection function to obtain the multiple frames of target video images. Specifically, the AI scene detection function is implemented with a scene detection model: during training, the model is trained on a number of preset scenes, and it then detects the shooting scene. The preset scenes include people, flowers and plants, beaches, sky, sea, fruit, food, buildings, snow scenes, rainy weather, and pets. If the current scene is detected to be one of the preset scenes during shooting, the video image of the current scene is automatically captured as a target video image.
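The capture flow described above can be pictured as a loop over shot frames. This is only an illustrative sketch, not the patent's implementation; `detect_scene` is a hypothetical stand-in for the trained scene detection model:

```python
# Preset scene categories named by the patent (labels are illustrative).
PRESET_SCENES = {"person", "flowers", "beach", "sky", "sea", "fruit",
                 "food", "building", "snow", "rain", "pet"}

def collect_target_frames(frames, detect_scene):
    """frames: iterable of captured video frames (any image type).
    detect_scene: callable returning a scene label for a frame —
    a stand-in for the AI scene detection model.
    Returns the frames whose detected scene is one of the presets."""
    targets = []
    for frame in frames:
        if detect_scene(frame) in PRESET_SCENES:
            # The frame's scene matches a preset: capture it as a target.
            targets.append(frame)
    return targets
```

A frame whose scene (e.g. a car) is not among the presets is simply skipped.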
Further, the step S101 may specifically include:
each time a frame of video image is shot, judging whether the video image satisfies a preset condition;
and if so, taking the video image as a target video image.
In one embodiment, each time a frame of video image is shot, it is detected whether the video image contains a preset image element. If it does, it is judged whether the image elements in the video image are the same as those in the most recently acquired target video image: if they are the same, the video image is determined not to satisfy the preset condition; if they differ, it is determined to satisfy it. If the video image contains no preset image element, it is determined not to satisfy the preset condition.
The most recently acquired target video image is, among the target video images already acquired, the one whose acquisition time is closest to that of the current video image. Before the first target video image is acquired, a currently shot video image that is detected to contain a preset image element can be directly determined to satisfy the preset condition, with no comparison against a previous target video image needed.
Specifically, since recognition of the preset scenes above is determined from the image elements in the shot frame, the preset image elements may include people, flowers and plants, beaches, sky, sea, fruit, food, buildings, snow scenes, rainy weather, and pets. If, before the first target video image is acquired, the image elements in a shot video image are flowers and plants, the video image is determined to satisfy the preset condition and is taken as the target video image. If the image elements in a later shot video image are again flowers and plants, the video image is determined not to satisfy the preset condition, because its image elements are the same as those (flowers and plants) of the most recently acquired target video image. If a shot video image contains a car, which is not a preset image element, the video image is determined not to satisfy the preset condition.
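The element-based selection rule can be sketched as a small Python function. This is an illustrative reading of the patent's logic, not an implementation from it; the function name and the set-based representation of detected elements are assumptions:

```python
def meets_preset_condition(detected, last_target, preset):
    """Decide whether a freshly shot frame becomes a target video image.

    detected:    set of image elements found in the current frame
    last_target: elements of the most recently acquired target frame,
                 or None if no target frame has been acquired yet
    preset:      the preset image-element categories (people, flowers
                 and plants, beach, sky, sea, fruit, food, buildings,
                 snow, rain, pets)
    """
    elements = detected & preset
    if not elements:
        # No preset image element present: not a target frame.
        return False
    if last_target is None:
        # First candidate frame: accept directly, nothing to compare.
        return True
    # Accept only when the elements differ from the latest target frame.
    return elements != (last_target & preset)
```

With the worked example above: a first frame containing flowers is accepted, a second flowers frame is rejected as a duplicate, and a car frame is rejected because cars are not a preset element.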
In another embodiment, each time a frame of video image is shot, it is detected whether the acquisition time interval between the video image and the most recently acquired target video image equals a preset duration. If so, the video image is determined to satisfy the preset condition; if not, it is determined not to satisfy it.
For example, the preset time period is set to 1 s. If the acquisition time interval between the shot video image and the newly acquired target video image is equal to 1s, determining that the video image meets the preset condition; and if the acquisition time interval between the shot video image and the newly acquired target video image is equal to 3s, determining that the video image does not meet the preset condition.
In another embodiment, each time a frame of video image is shot, it is detected whether the acquisition time interval between the video image and the most recently acquired target video image equals a preset duration. If it does, it is further judged whether the number of target video images acquired so far is less than a preset value: if so, the video image is determined to satisfy the preset condition, and if not, it is determined not to satisfy it. If the interval does not equal the preset duration, the video image is determined not to satisfy the preset condition.
For example, suppose the preset duration is 2 s and the preset value is 4. If the interval between the shot video image and the most recently acquired target video image equals 2 s and three target video images have been acquired, the video image satisfies the preset condition. If the interval equals 2 s but five target video images have already been acquired, it does not. If the interval equals 3 s, it does not satisfy the preset condition either.
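The interval-plus-count variant can likewise be sketched in Python. The patent states the interval test as the elapsed time being *equal* to the preset duration, and the sketch follows that wording literally; all names and default values are illustrative:

```python
def meets_interval_condition(elapsed, target_count,
                             interval=2.0, max_targets=4):
    """elapsed:      seconds since the most recently acquired target frame
    target_count: number of target frames acquired so far
    Returns True when the current frame should become a target frame."""
    if elapsed != interval:
        # Interval not equal to the preset duration: condition not met.
        return False
    # Additionally cap the number of target frames at the preset value.
    return target_count < max_targets
```

This reproduces the worked example: at a 2 s interval with 3 targets the condition holds, with 5 targets it does not, and at a 3 s interval it never holds.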
And S102, when the video shooting is finished, synthesizing the multi-frame target video images into a dynamic image, and taking the dynamic image as a preview of the video.
In the prior art, after a video is shot, a preview image (for example, the first frame captured from the video) is usually saved in the gallery for the user to browse, and clicking the preview plays the video. For example, as shown in fig. 2, after shooting ends, the captured video is saved in the gallery, a still image 2004 (the first frame of the video) is displayed as its preview in the gallery interface 2003, and the user can click the still image 2004 to play the video.
When many videos have been shot, a user who wants to find a particular one can only roughly guess each video's content from its preview image, and typically has to click and play several videos before finding the right one, which makes video queries slow and inefficient. In this embodiment, all the acquired target video images are instead combined into one dynamic image that serves as the video's preview. The user can browse the dynamic image directly to determine the video's approximate content and find the desired video in a short time, which shortens video query time and improves query efficiency.
Further, the step S102 may specifically include:
the method comprises the steps of sequencing multiple frames of target video images according to the acquisition time sequence to form an image sequence, and converting the image sequence into a graph exchange format to form a dynamic graph.
GIF (Graphics Interchange Format) is a dynamic image format in which multiple still pictures are played rapidly and continuously according to a set rule to form an animation, commonly called a GIF animation. Because the image sequence orders all target video images by acquisition time, the GIF converted from it displays the target video images in that order. A user who wants to find a particular video among many can infer its rough content by browsing the GIF animation, which improves video query efficiency.
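As one possible realization of this step (the patent does not name a library), the Pillow imaging library can sort captured frames by acquisition time and encode them as an animated GIF. The function name and the `(capture_time, image)` pairing are assumptions:

```python
from PIL import Image  # Pillow, a widely used Python imaging library

def frames_to_gif(frames, gif_path, frame_ms=500):
    """Sort (capture_time, image) pairs by capture time and save the
    resulting image sequence as an animated GIF preview."""
    ordered = [img for _, img in sorted(frames, key=lambda f: f[0])]
    first, rest = ordered[0], ordered[1:]
    first.save(gif_path, save_all=True, append_images=rest,
               duration=frame_ms, loop=0)  # loop=0: repeat forever
```

`save_all=True` with `append_images` writes every frame into one file, which the gallery can then display as the video's animated preview.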
For example, as shown in fig. 2, a video is shot in the video shooting interface 2001 of the mobile terminal. Clicking the shooting end button 2002 ends the current shooting, the video is saved in the gallery, and the resulting GIF image 2005 is displayed as the video's preview in the gallery interface 2003, so that the user can browse the GIF image 2005 directly in the gallery interface 2003 to estimate the video's rough content.
Specifically, after the dynamic image is set as the video's preview, a jump link between the dynamic image and the video file corresponding to the video is established; in response to a touch operation on the dynamic image, the terminal jumps to the video file according to the jump link and plays the corresponding video.
The jump link comprises a URL (Uniform Resource Locator) or a video file path, generated after shooting finishes. When a touch operation on the dynamic image is detected, the terminal automatically jumps to the corresponding video file according to the URL or file path. For example, as shown in fig. 2, when the user clicks the GIF image 2005, the terminal jumps to the video file according to its file path and opens the video playing interface 2006 to play the corresponding video.
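A minimal sketch of the jump link, assuming a simple in-memory registry (the class and method names are hypothetical — a real gallery would persist this mapping alongside its media database):

```python
class PreviewLinks:
    """Maps a GIF preview to the path of its source video file."""

    def __init__(self):
        self._links = {}

    def register(self, gif_path, video_path):
        # Called once shooting ends and the GIF preview is generated.
        self._links[gif_path] = video_path

    def resolve(self, gif_path):
        # Called on a touch of the preview: returns the video file to
        # jump to and play, or None if no link was registered.
        return self._links.get(gif_path)
```

Touching the preview then amounts to `resolve(gif_path)` followed by handing the returned path to the video player.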
As shown in fig. 3, fig. 3 is another schematic flow chart of the video processing method according to the embodiment of the present application, and the specific flow may be as follows:
s201, in the shooting process of the video, judging whether the video image meets a preset condition or not when shooting one frame of video image; and if so, taking the video image as a target video image.
If a shot video image contains a preset image element, it is judged whether the image elements in the video image are the same as those in the most recently acquired target video image; if they differ, the video image is determined to satisfy the preset condition.
For example, the preset image elements include people, flowers and plants, beaches, sky, sea, fruit, food, buildings, snow scenes, rainy weather, and pets. As shown in fig. 2, a video is shot in the video shooting interface 2001 of the mobile terminal. The image element in the currently shot video image B is flowers and plants, while the image element in video image A (the most recently acquired target video image) is a pet. Since the image elements differ, video image B is determined to satisfy the preset condition and is taken as a target video image.
S202, when video shooting is finished, sequencing multiple frames of target video images according to the acquisition time sequence to form an image sequence, converting the image sequence into a graph exchange format to form a dynamic graph, and taking the dynamic graph as a preview of a video.
For example, clicking the shooting end button 2002 ends the current shooting and the video is saved in the gallery. Since video image A was acquired before video image B, an image sequence is formed in the order A, B; the sequence is converted into the GIF image 2005, which is taken as the video's preview and displayed on the gallery interface 2003.
S203, establishing a jump link of the dynamic graph and a video file corresponding to the video.
For example, after a video is saved, a video file path corresponding to the video is automatically generated.
And S204, responding to the touch operation of the dynamic graph, jumping to the video file according to the jump link, and playing the video corresponding to the video file.
For example, when it is detected that the user clicks the GIF graph 2005, the user automatically jumps to the corresponding video file according to the video file path, and opens the video playing interface 2006 to play the video corresponding to the video file.
In the video processing method above, multiple frames of target video images are sequentially acquired during video shooting and, when shooting ends, are synthesized into a dynamic image used as the video's preview image. Because the preview image is a dynamic image formed from the shot video itself, a user looking for a particular video can judge its approximate content simply by browsing the dynamic image, without clicking the preview to play the video, and can therefore locate the desired video in a short time. This shortens video query time and improves video query efficiency.
Based on the method described in the foregoing embodiments, this embodiment provides a further description from the perspective of a video processing apparatus. The apparatus may be implemented as an independent entity or integrated in a mobile terminal, which may be a smartphone, an iPad, or another device with a shooting function.
Referring to fig. 4, fig. 4 specifically illustrates a video processing apparatus according to an embodiment of the present application, where the video processing apparatus may include: an acquisition module 10 and a synthesis module 20, wherein:
(1) acquisition module 10
The acquiring module 10 is configured to sequentially acquire multiple frames of target video images in a video shooting process.
The obtaining module 10 specifically includes:
a judging unit 11, configured to judge whether a video image satisfies a preset condition every time a frame of video image is shot; and if so, taking the video image as a target video image.
Specifically, the determining unit 11 is specifically configured to:
detecting whether a video image has preset image elements;
if the video image has the preset image elements, judging whether the image elements in the video image are the same as the image elements in the newly acquired target video image, if so, determining that the video image does not meet the preset condition, and if not, determining that the video image meets the preset condition;
and if the video image does not have the preset image elements, determining that the video image does not meet the preset condition.
(2) Synthesis module 20
And the synthesizing module 20 is configured to synthesize the multi-frame target video images into a dynamic image when the video shooting is finished, and use the dynamic image as a preview of the video.
The synthesis module 20 is specifically configured to:
sequencing the multi-frame target video images according to the acquisition time sequence to form an image sequence;
the image sequence is converted into a graphics interchange format to form a dynamic graph.
As shown in fig. 5, fig. 5 is a schematic structural diagram of another video processing apparatus according to an embodiment of the present application, and the apparatus further includes a skip module 30.
The skip module 30 is configured to establish a skip link of a video file corresponding to the dynamic image and the video, skip to the video file according to the skip link in response to a touch operation on the dynamic image, and play the video corresponding to the video file.
In view of the above, the video processing apparatus provided in the present application sequentially acquires multiple frames of target video images during video shooting and, when shooting ends, synthesizes them into a dynamic image used as the video's preview image. Because the preview image is a dynamic image formed from the shot video itself, a user looking for a particular video can judge its approximate content simply by browsing the dynamic image, without clicking the preview to play the video, and can therefore locate the desired video in a short time. This shortens video query time and improves video query efficiency.
Correspondingly, the embodiment of the invention also provides a video processing system, which comprises any one of the video processing devices provided by the embodiment of the invention, and the video processing device can be integrated in a mobile terminal.
In operation, multiple frames of target video images are sequentially acquired during video shooting; when shooting ends, the target video images are synthesized into a dynamic image, which is used as the video's preview image.
The specific implementation of each device can be referred to the previous embodiment, and is not described herein again.
Since the video processing system may include any video processing apparatus provided in the embodiments of the present invention, it can achieve the beneficial effects of any such apparatus; for details, see the foregoing embodiments, which are not repeated here.
In addition, the embodiment of the application also provides a mobile terminal, and the mobile terminal can be equipment such as a smart phone. As shown in fig. 6, the mobile terminal 200 includes a processor 201, a memory 202. The processor 201 is electrically connected to the memory 202.
The processor 201 is the control center of the mobile terminal 200. It connects the various parts of the terminal through various interfaces and lines, and performs the terminal's functions and processes its data by running or loading application programs stored in the memory 202 and calling data stored in the memory 202, thereby monitoring the mobile terminal as a whole.
In this embodiment, the processor 201 in the mobile terminal 200 loads instructions corresponding to the processes of one or more application programs into the memory 202, and runs the application programs stored in the memory 202, thereby implementing the following functions:
sequentially acquiring multi-frame target video images in the video shooting process;
and when the video shooting is finished, synthesizing the multi-frame target video images into a dynamic image, and taking the dynamic image as a preview image of the video.
Fig. 7 is a block diagram showing a specific structure of a mobile terminal according to an embodiment of the present invention, where the mobile terminal may be used to implement the video processing method provided in the foregoing embodiment. The mobile terminal 300 may be a smart phone or a tablet computer.
The RF circuit 310 is used for receiving and transmitting electromagnetic waves and converting between electromagnetic waves and electrical signals, thereby communicating with a communication network or other devices. The RF circuit 310 may include various existing circuit elements for performing these functions, such as an antenna, a radio frequency transceiver, a digital signal processor, an encryption/decryption chip, a Subscriber Identity Module (SIM) card, memory, and so forth. The RF circuit 310 may communicate with various networks, such as the internet, an intranet, or a wireless network, or may communicate with other devices over a wireless network. The wireless network may comprise a cellular telephone network, a wireless local area network, or a metropolitan area network. The wireless network may use various communication standards, protocols, and technologies, including, but not limited to, Global System for Mobile Communication (GSM), Enhanced Data GSM Environment (EDGE), Wideband Code Division Multiple Access (WCDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, and/or IEEE 802.11n), Voice over Internet Protocol (VoIP), Worldwide Interoperability for Microwave Access (WiMAX), other suitable protocols for e-mail, instant messaging, and Short Message Service (SMS), and any other suitable communication protocol, even including protocols that have not yet been developed.
The memory 320 may be used to store software programs and modules, and the processor 380 executes various functional applications and data processing by operating the software programs and modules stored in the memory 320. The memory 320 may include high speed random access memory and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 320 may further include memory located remotely from the processor 380, which may be connected to the mobile terminal 300 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input unit 330 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. In particular, the input unit 330 may include a touch-sensitive surface 331 as well as other input devices 332. The touch-sensitive surface 331, also referred to as a touch screen or touch pad, may collect touch operations performed by the user on or near it (e.g., operations performed using a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a predetermined program. Alternatively, the touch-sensitive surface 331 may comprise two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, and sends the coordinates to the processor 380, and can also receive and execute commands sent by the processor 380. In addition, the touch-sensitive surface 331 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave types. The input unit 330 may comprise other input devices 332 in addition to the touch-sensitive surface 331. In particular, other input devices 332 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, and the like.
The display unit 340 may be used to display information input by or provided to the user and various graphical user interfaces of the mobile terminal 300, which may be made up of graphics, text, icons, video, and any combination thereof. The Display unit 340 may include a Display panel 341, and optionally, the Display panel 341 may be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like. Further, touch-sensitive surface 331 may overlay display panel 341, and when touch-sensitive surface 331 detects a touch operation thereon or thereabout, communicate to processor 380 to determine the type of touch event, and processor 380 then provides a corresponding visual output on display panel 341 in accordance with the type of touch event. Although in FIG. 7, touch-sensitive surface 331 and display panel 341 are implemented as two separate components for input and output functions, in some embodiments, touch-sensitive surface 331 and display panel 341 may be integrated for input and output functions.
The mobile terminal 300 may also include at least one sensor 350, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that may adjust the brightness of the display panel 341 according to the brightness of ambient light, and a proximity sensor that may turn off the display panel 341 and/or the backlight when the mobile terminal 300 is moved to the ear. As one of the motion sensors, the gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when the mobile phone is stationary, and can be used for applications of recognizing the posture of the mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which may be further configured on the mobile terminal 300, detailed descriptions thereof are omitted.
The audio circuit 360, speaker 361, and microphone 362 may provide an audio interface between the user and the mobile terminal 300. The audio circuit 360 may transmit the electrical signal converted from received audio data to the speaker 361, where it is converted into a sound signal and output; conversely, the microphone 362 converts a collected sound signal into an electrical signal, which is received by the audio circuit 360 and converted into audio data. The audio data is then output to the processor 380 for processing and transmitted via the RF circuit 310 to, for example, another terminal, or output to the memory 320 for further processing. The audio circuit 360 may also include an earphone jack to provide communication between a peripheral headset and the mobile terminal 300.
The mobile terminal 300, which may assist the user in e-mail, web browsing, streaming media access, etc., through the transmission module 370 (e.g., a Wi-Fi module), provides the user with wireless broadband internet access. Although fig. 7 shows the transmission module 370, it is understood that it does not belong to the essential constitution of the mobile terminal 300 and may be omitted entirely within the scope not changing the essence of the invention as needed.
The processor 380 is a control center of the mobile terminal 300, connects various parts of the entire mobile phone using various interfaces and lines, and performs various functions of the mobile terminal 300 and processes data by operating or executing software programs and/or modules stored in the memory 320 and calling data stored in the memory 320, thereby integrally monitoring the mobile phone. Optionally, processor 380 may include one or more processing cores; in some embodiments, processor 380 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 380.
The mobile terminal 300 also includes a power supply 390 (e.g., a battery) that provides power to the various components; in some embodiments, the power supply may be logically coupled to the processor 380 via a power management system, which manages charging, discharging, and power consumption. The power supply 390 may also include one or more of a DC or AC power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
Although not shown, the mobile terminal 300 may further include a camera (e.g., a front camera, a rear camera), a bluetooth module, etc., which will not be described herein. Specifically, in this embodiment, the display unit of the mobile terminal is a touch screen display, the mobile terminal further includes a memory, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs include instructions for:
sequentially acquiring multiple frames of target video images during video shooting;
and when the video shooting is finished, synthesizing the multiple frames of target video images into a dynamic image, and using the dynamic image as a preview image of the video.
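The acquisition step can use a selection rule of the kind recited in the claims below: keep a captured frame only when it contains a preset image element, and only when that element differs from the element in the most recently kept target frame. A minimal Python sketch of that rule follows; element detection is simulated by attaching a label to each frame, and `select_target_frames` and the labels are hypothetical illustration, not the patent's actual implementation:

```python
from typing import List, Optional, Tuple

# Each frame is (frame_id, element_label); element_label is None when no
# preset image element is detected in the frame. Detection is simulated here.
Frame = Tuple[str, Optional[str]]

def select_target_frames(frames: List[Frame]) -> List[Frame]:
    """Keep a frame only if it contains a preset image element that differs
    from the element of the most recently selected target frame."""
    targets: List[Frame] = []
    last_element: Optional[str] = None
    for frame_id, element in frames:
        if element is None:
            continue  # no preset element: preset condition not satisfied
        if element == last_element:
            continue  # same element as the newest target frame: skip
        targets.append((frame_id, element))
        last_element = element
    return targets

shots = [("f1", "cat"), ("f2", "cat"), ("f3", None), ("f4", "dog"), ("f5", "cat")]
print(select_target_frames(shots))  # [('f1', 'cat'), ('f4', 'dog'), ('f5', 'cat')]
```

This keeps the composed dynamic image short and non-repetitive: consecutive frames showing the same element collapse into one representative frame.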
In specific implementation, the above modules may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and specific implementation of the above modules may refer to the foregoing method embodiments, which are not described herein again.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor. To this end, embodiments of the present invention provide a storage medium, in which a plurality of instructions are stored, and the instructions can be loaded by a processor to execute the steps in any one of the video processing methods provided by the embodiments of the present invention.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the storage medium can execute the steps in any video processing method provided in the embodiments of the present invention, beneficial effects that can be achieved by any video processing method provided in the embodiments of the present invention can be achieved, which are detailed in the foregoing embodiments and will not be described herein again.
For the implementation of the above operations, refer to the foregoing embodiments; details are not repeated here.
In summary, although the present application has been described with reference to preferred embodiments, those embodiments are not intended to limit the application. Those skilled in the art can make various changes and modifications without departing from the spirit and scope of the present application; therefore, the protection scope of the present application shall be determined by the appended claims.

Claims (10)

1. A method of video processing, the method comprising:
sequentially acquiring multiple frames of target video images during video shooting;
and when the video shooting is finished, synthesizing the multiple frames of target video images into a dynamic image, and using the dynamic image as a preview image of the video.
2. The video processing method according to claim 1, wherein said sequentially acquiring multiple frames of target video images comprises:
each time one frame of video image is shot, determining whether the video image satisfies a preset condition;
and if so, using the video image as the target video image.
3. The video processing method according to claim 2, wherein the determining whether the video image satisfies a preset condition comprises:
detecting whether the video image contains a preset image element;
if the video image contains the preset image element, determining whether the image element in the video image is the same as the image element in the most recently acquired target video image; if so, determining that the video image does not satisfy the preset condition; if not, determining that the video image satisfies the preset condition;
and if the video image does not contain the preset image element, determining that the video image does not satisfy the preset condition.
4. The video processing method according to claim 1, wherein the synthesizing the plurality of frames of target video images into a dynamic image comprises:
sorting the multiple frames of target video images in order of acquisition time to form an image sequence;
and converting the image sequence into the Graphics Interchange Format (GIF) to form the dynamic image.
5. The video processing method of claim 1, wherein the method further comprises:
establishing a jump link between the dynamic image and the video file corresponding to the video;
and in response to a touch operation on the dynamic image, jumping to the video file according to the jump link and playing the video corresponding to the video file.
6. A video processing apparatus, characterized in that the apparatus comprises:
an acquisition module, used for sequentially acquiring multiple frames of target video images during video shooting;
and a synthesis module, used for synthesizing the multiple frames of target video images into a dynamic image when the video shooting is finished, and using the dynamic image as a preview image of the video.
7. The video processing apparatus according to claim 6, wherein the obtaining module is specifically configured to:
each time one frame of video image is shot, determining whether the video image satisfies a preset condition;
and if so, using the video image as the target video image.
8. The video processing apparatus according to claim 6, wherein the composition module is specifically configured to:
sorting the multiple frames of target video images in order of acquisition time to form an image sequence;
and converting the image sequence into the Graphics Interchange Format (GIF) to form the dynamic image.
9. A computer-readable storage medium having stored thereon a plurality of instructions adapted to be loaded by a processor to perform the video processing method of any of claims 1 to 5.
10. A mobile terminal comprising a processor and a memory, the processor being electrically connected to the memory, the memory being configured to store instructions and data, the processor being configured to perform the steps of the video processing method according to any one of claims 1 to 5.
CN202110592685.3A 2021-05-28 2021-05-28 Video processing method and device, storage medium and mobile terminal Pending CN113316016A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110592685.3A CN113316016A (en) 2021-05-28 2021-05-28 Video processing method and device, storage medium and mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110592685.3A CN113316016A (en) 2021-05-28 2021-05-28 Video processing method and device, storage medium and mobile terminal

Publications (1)

Publication Number Publication Date
CN113316016A (en) 2021-08-27

Family

ID=77375958

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110592685.3A Pending CN113316016A (en) 2021-05-28 2021-05-28 Video processing method and device, storage medium and mobile terminal

Country Status (1)

Country Link
CN (1) CN113316016A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102801942A (en) * 2012-07-23 2012-11-28 北京小米科技有限责任公司 Method and device for recording video and generating GIF (Graphic Interchange Format) dynamic graph
CN105721620A (en) * 2016-05-09 2016-06-29 百度在线网络技术(北京)有限公司 Video information push method and device as well as video information display method and device
CN105744292A (en) * 2016-02-02 2016-07-06 广东欧珀移动通信有限公司 Video data processing method and device
CN106572380A (en) * 2016-10-19 2017-04-19 上海传英信息技术有限公司 User terminal and video dynamic thumbnail generating method
CN106792272A (en) * 2016-11-28 2017-05-31 维沃移动通信有限公司 The generation method and mobile terminal of a kind of video thumbnails
CN108307239A (en) * 2018-01-10 2018-07-20 北京奇虎科技有限公司 A kind of video content recommendation method and apparatus
CN108718417A (en) * 2018-05-28 2018-10-30 广州虎牙信息科技有限公司 Generation method, device, server and the storage medium of direct broadcasting room preview icon
CN109756767A (en) * 2017-11-06 2019-05-14 腾讯科技(深圳)有限公司 Preview data playback method, device and storage medium
WO2020047691A1 (en) * 2018-09-03 2020-03-12 深圳市大疆创新科技有限公司 Method, device, and mobile platform for generating dynamic image and storage medium
CN111163358A (en) * 2020-01-07 2020-05-15 广州虎牙科技有限公司 GIF image generation method, device, server and storage medium


Similar Documents

Publication Publication Date Title
CN108495029B (en) Photographing method and mobile terminal
CN111050076B (en) Shooting processing method and electronic equipment
CN110365907B (en) Photographing method and device and electronic equipment
CN109688322B (en) Method and device for generating high dynamic range image and mobile terminal
CN107241552B (en) Image acquisition method, device, storage medium and terminal
CN108924414B (en) Shooting method and terminal equipment
CN109618218B (en) Video processing method and mobile terminal
CN109922294B (en) Video processing method and mobile terminal
CN108718389B (en) Shooting mode selection method and mobile terminal
CN111405180A (en) Photographing method, photographing device, storage medium and mobile terminal
CN111182236A (en) Image synthesis method and device, storage medium and terminal equipment
JP6862564B2 (en) Methods, devices and non-volatile computer-readable media for image composition
CN108924035B (en) File sharing method and terminal
CN108549660B (en) Information pushing method and device
CN111131612B (en) Screen color temperature control method and device, storage medium and mobile terminal
CN111158815B (en) Dynamic wallpaper blurring method, terminal and computer readable storage medium
CN110505660B (en) Network rate adjusting method and terminal equipment
CN109561255B (en) Terminal photographing method and device and storage medium
CN111182211A (en) Shooting method, image processing method and electronic equipment
CN108243489B (en) Photographing control method and mobile terminal
CN105513098B (en) Image processing method and device
CN107734269B (en) Image processing method and mobile terminal
CN111028192B (en) Image synthesis method and electronic equipment
CN110086999B (en) Image information feedback method and terminal equipment
CN108076287B (en) Image processing method, mobile terminal and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210827