CN110022445B - Content output method and terminal equipment


Info

Publication number
CN110022445B
Authority: CN (China)
Prior art keywords: image, screen, target, input, video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910143224.0A
Other languages: Chinese (zh)
Other versions: CN110022445A (en)
Inventor
张繁
Current Assignee
Vivo Software Technology Co Ltd
Original Assignee
Vivo Software Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Software Technology Co Ltd
Priority to CN201910143224.0A
Publication of CN110022445A
Application granted
Publication of CN110022445B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/222: Studio circuitry; studio devices; studio equipment
    • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; cameras specially adapted for the electronic generation of special effects
    • H04N 5/265: Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The embodiment of the invention discloses a content output method and terminal equipment, relates to the technical field of communication, and aims to solve the problem that the conversion between pictures and videos in the prior art is not flexible enough. The method may be applied to a terminal device including a first screen and a second screen, and includes: receiving a first input of a user, wherein the first input is a selection input of a first image displayed on the first screen; and outputting target content through the second screen in response to the first input. The first image is a picture and the target content is a target video clip comprising the first image; or the first image is a frame image in a first video, the target content is the first image, and the first video is a video output through the first screen. The method can be applied to scenarios in which pictures and videos are converted into each other.

Description

Content output method and terminal equipment
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to a content output method and terminal equipment.
Background
With the development of communication technology, pictures and videos have become two important media resources in terminal devices.
At present, pictures and videos can be converted into each other. For example, for pictures in an album of the terminal device, a user may select a plurality of pictures from the album and then tap a confirm button to synthesize the pictures into a video, after which the user may trigger the terminal device to play the video; for a video being played by the terminal device, a user can trigger the terminal device to obtain a picture related to the video through a screen-capture operation on the screen.
However, for a video composed of multiple pictures, if the user is not satisfied with the video, the user may need to select pictures again and compose the video again; for a picture obtained by screen capture, if the user wants to view the picture, the user may need to stop the video being played, or wait until the video finishes playing, before viewing the picture. That is, the above conversion methods between pictures and videos are not flexible.
Disclosure of Invention
The embodiment of the invention provides a content output method and terminal equipment, and aims to solve the problem that the conversion mode of pictures and videos in the prior art is not flexible enough.
In order to solve the above technical problem, the embodiment of the present invention is implemented as follows:
in a first aspect, an embodiment of the present invention provides a content output method. The method may be applied to a terminal device including a first screen and a second screen. The method comprises the following steps: receiving a first input of a user, wherein the first input is a selection input of a first image displayed on a first screen; outputting the target content through a second screen in response to the first input; the first image is a picture, and the target content is a target video clip comprising the first image; or, the first image is a frame image in a first video, the target content is the first image, and the first video is a video output through a first screen.
In a second aspect, an embodiment of the present invention provides a terminal device. The terminal device includes a first screen and a second screen. The terminal equipment comprises a receiving module and an output module. The receiving module is used for receiving a first input of a user, wherein the first input is a selection input of a first image displayed on a first screen; an output module for outputting the target content through the second screen in response to the first input received by the receiving module; the first image is a picture, and the target content is a target video clip comprising the first image; or, the first image is a frame image in a first video, the target content is the first image, and the first video is a video output through a first screen.
In a third aspect, an embodiment of the present invention provides a terminal device, which includes a processor, a memory, and a computer program stored on the memory and operable on the processor, where the computer program, when executed by the processor, implements the steps of the content output method provided in the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the content output method provided in the first aspect.
In an embodiment of the present invention, a first input of a user (the first input being a selection input of a first image displayed on a first screen) may be received, and target content may be output through a second screen in response to the first input (where the first image is a picture and the target content is a target video clip including the first image, or the first image is one frame image in a first video, the target content is the first image, and the first video is a video output through the first screen). With this scheme, on one hand, in the case where the first image displayed on the first screen is a picture, the second screen can update and display the video synthesized from the first image in real time, so that the user can watch the generated video in the process of selecting pictures; on the other hand, in the case where the first image displayed on the first screen is one frame image in the first video, the second screen can display the first image, so that the user can view a picture captured from the video in the course of viewing the video. Therefore, the method and the device provide higher flexibility in converting between pictures and videos.
Drawings
Fig. 1 is a schematic structural diagram of an android operating system according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a content output method according to an embodiment of the present invention;
fig. 3 is one of schematic diagrams of a terminal device displaying a first image and target content according to an embodiment of the present invention;
fig. 4 is a second schematic diagram of the terminal device displaying the first image and the target content according to the embodiment of the present invention;
fig. 5 is a second schematic diagram of a content output method according to an embodiment of the present invention;
fig. 6 is a schematic diagram of a display setting option of a terminal device according to an embodiment of the present invention;
fig. 7 is a third schematic diagram of a content output method according to an embodiment of the present invention;
fig. 8 is a third schematic diagram illustrating a terminal device displaying a first image and a target content according to an embodiment of the present invention;
fig. 9 is a schematic diagram of a terminal device displaying a second image and target content according to an embodiment of the present invention;
fig. 10 is a schematic diagram of a terminal device displaying a first image, a target object and target content according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of a terminal device according to an embodiment of the present invention;
fig. 12 is a second schematic structural diagram of a terminal device according to an embodiment of the present invention;
fig. 13 is a third schematic structural diagram of a terminal device according to an embodiment of the present invention;
fig. 14 is a hardware schematic diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The term "and/or" herein describes an association relationship of associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The symbol "/" herein denotes an "or" relationship between the associated objects; for example, A/B denotes A or B.
The terms "first" and "second," and the like, in the description and in the claims of the present invention are used for distinguishing between different objects and not for describing a particular order of the objects. For example, the first screen and the second screen, etc. are for distinguishing different screens, not for describing a specific order of screens.
In the embodiments of the present invention, words such as "exemplary" or "for example" are used to mean serving as an example, illustration, or description. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention is not to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the words "exemplary" or "for example" is intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present invention, unless otherwise specified, "a plurality" means two or more, for example, a plurality of elements means two or more elements, and the like.
The embodiment of the invention provides a content output method and terminal equipment, which can receive a first input of a user (the first input being a selection input of a first image displayed on a first screen) and output target content through a second screen in response to the first input (where the first image is a picture and the target content is a target video clip including the first image, or the first image is one frame image in a first video, the target content is the first image, and the first video is a video output through the first screen). With this scheme, on one hand, in the case where the first image displayed on the first screen is a picture, the second screen can update and display the video synthesized from the first image in real time, so that the user can watch the generated video in the process of selecting pictures; on the other hand, in the case where the first image displayed on the first screen is one frame image in the first video, the second screen can display the first image, so that the user can view a picture captured from the video in the course of viewing the video. Therefore, the method and the device provide higher flexibility in converting between pictures and videos.
The terminal device in the embodiment of the present invention may be a terminal device having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present invention are not limited in particular.
Taking an android operating system as an example, a software environment to which the content output method provided by the embodiment of the invention is applied is introduced.
Fig. 1 is a schematic diagram of an architecture of an android operating system according to an embodiment of the present invention. In fig. 1, the architecture of the android operating system includes 4 layers, which are respectively: an application layer, an application framework layer, a system runtime layer, and a kernel layer (specifically, a Linux kernel layer).
The application program layer comprises various application programs (including system application programs and third-party application programs) in an android operating system.
The application framework layer is a framework of the application, and a developer can develop some applications based on the application framework layer under the condition of complying with the development principle of the framework of the application.
The system runtime layer includes libraries (also called system libraries) and android operating system runtime environments. The library mainly provides various resources required by the android operating system. The android operating system running environment is used for providing a software environment for the android operating system.
The kernel layer is an operating system layer of an android operating system and belongs to the bottommost layer of an android operating system software layer. The kernel layer provides kernel system services and hardware-related drivers for the android operating system based on the Linux kernel.
Taking an android operating system as an example, in the embodiment of the present invention, a developer may develop a software program for implementing the content output method provided in the embodiment of the present invention based on the system architecture of the android operating system shown in fig. 1, so that the content output method may operate based on the android operating system shown in fig. 1. That is, the processor or the terminal device may implement the content output method provided by the embodiment of the present invention by running the software program in the android operating system.
The terminal device in the embodiment of the invention can be a mobile terminal device and can also be a non-mobile terminal device. For example, the mobile terminal device may be a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile terminal device may be a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiment of the present invention is not particularly limited.
The execution main body of the content output method provided in the embodiment of the present invention may be the terminal device, or may also be a functional module and/or a functional entity capable of implementing the content output method in the terminal device, which may be determined specifically according to actual use requirements, and the embodiment of the present invention is not limited. The following takes a terminal device as an example to exemplarily explain a content output method provided by the embodiment of the present invention.
As shown in fig. 2, an embodiment of the present invention provides a content output method. The method may be applied to a terminal device including a first screen and a second screen. The method may include steps 101 and 102 described below.
Step 101, a terminal device receives a first input of a user.
The first input may be a selection input of a first image displayed on the first screen.
Optionally, in the embodiment of the present invention, the first screen and the second screen of the terminal device may be two independent screens, and the first screen and the second screen may be connected by a shaft or a hinge; alternatively, the screen of the terminal device may be a flexible screen, and the flexible screen may be folded into at least two screens, such as a first screen and a second screen. The method can be determined according to actual use requirements, and the embodiment of the invention is not limited.
Optionally, in an embodiment of the present invention, the first input may be at least one of a touch input, a gravity input, a voice input, a key input, and the like. For example, the touch input may be a press input or a slide input of the user on the first image displayed on the first screen; the gravity input can be that the user shakes the terminal device in a specific direction or shakes the terminal device for a specific number of times, etc.; the voice input may be a user voice input "select first image"; the key input may be a single-click input, a double-click input, a combined key input or the like of a user on a physical key of the terminal device.
Optionally, in an embodiment of the present invention, the first image may be (1) or (2) below:
(1) the first image may be a picture.
Optionally, in this embodiment of the present invention, the first image may be a picture selected by the user from a plurality of candidate pictures.
Illustratively, the terminal device displays a plurality of candidate pictures on the first screen, and the candidate pictures include the first image. If the user makes a first input to a first image of the candidate pictures, the terminal device may receive the first input and perform step 102 described below.
For another example, assume that the terminal device displays the first image full-screen on the first screen, and the first image is one of a plurality of candidate pictures. If the user makes a first input on the first image, the terminal device may receive the first input and perform step 102 described below; if the user makes another input on the first image, the terminal device may update the first image displayed on the first screen to the next picture after the first image.
(2) The first image may be a frame image in a first video, and the first video may be a video output through a first screen.
Optionally, in this embodiment of the present invention, the first video may include a plurality of frames of images, and the first image may be one frame of image selected by the user from the plurality of frames of images.
Illustratively, in the case where the terminal device plays the first video through the first screen, if the user wants to acquire a video frame currently being displayed on the first screen, the user may make a first input on the video frame, so that the terminal device may receive the first input and perform step 102 described below.
And 102, the terminal equipment responds to the first input and outputs the target content through the second screen.
In an embodiment of the present invention, the target content may be (1) or (2) below:
(1) the first image is a picture, and the target content may be a target video clip including the first image.
Optionally, in this embodiment of the present invention, the target video segment may be a video segment obtained by synthesizing the first image with the target object, and the target object may be an image selected before the first input is received or a video segment generated before the first input is received.
Optionally, in an embodiment of the present invention, a possible implementation manner is that the terminal device may automatically play the target video segment through the second screen according to the set first play speed. Another possible implementation is that the terminal device may update the first video frame image currently displayed on the second screen to the second video frame image in response to an input by the user. These two possible implementations will be described in the following embodiments, and will not be described herein.
Further, the terminal device may display the target object on the second screen before receiving the first input.
Illustratively, take the case where the target object is an image that has been selected before the first input is received. As shown in (a) of fig. 3, before receiving the first input, the terminal device may display the first image on the first screen 01 and the image that the user has selected on the second screen 02; as shown in (b) of fig. 3, after receiving the first input, the terminal device may synthesize the selected image and the first image into a target video clip in response to the first input, and play the target video clip including the first image through the second screen 02.
Illustratively, take the case where the target object is a video clip that has been generated before the first input is received. As shown in (a) of fig. 3, before receiving the first input, the terminal device may display the first image on the first screen 01 and play the generated video clip through the second screen 02; as shown in (b) of fig. 3, after receiving the first input, the terminal device may synthesize the generated video clip and the first image into a target video clip in response to the first input, and play the target video clip through the second screen 02.
It should be noted that, for the case that the target object is a video clip that has been generated before the first input is received, the terminal device may play the generated video clip cyclically on the second screen before the first input is received (or, after the playing of the generated video clip is completed, the terminal device may display the last frame image of the target video clip by default on the second screen); after receiving the first input, the terminal device may play the newly generated target video clip from the first frame on the second screen (or, the terminal device may play the target video clip from the video frame corresponding to the first image). The method can be determined according to actual use requirements, and the embodiment of the invention is not limited.
(2) The first image is a frame image in a first video, the target content may be the first image, and the first video may be a video output through a first screen.
Illustratively, as shown in (a) of fig. 4, the terminal device may play a first video through the first screen 01, and the second screen 02 need not display any content. As shown in (b) of fig. 4, if the user wants to acquire the video frame (i.e., the first image) currently being displayed on the first screen 01, the user may make a first input on the first image, so that the terminal device may display the first image (i.e., the target content) on the second screen in response to the first input, and continue to display the first image (or continue to play the next frame image after the first image) on the first screen. It is understood that if the user makes an image selection for the first video that continues to be played, the terminal device may update the image displayed on the second screen, i.e., display the image newly selected by the user on the second screen.
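The flow of steps 101 and 102 can be sketched as a minimal simulation. The class and method names below are hypothetical illustrations, not taken from the patent: a selection input on a picture appends it to the clip shown on the second screen, while a selection input on a video frame mirrors that frame to the second screen.

```python
# Hypothetical sketch of steps 101-102; names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Screen:
    content: object = None  # what the screen currently shows

@dataclass
class Terminal:
    first_screen: Screen = field(default_factory=Screen)
    second_screen: Screen = field(default_factory=Screen)
    selected_images: list = field(default_factory=list)

    def on_first_input(self, first_image, is_video_frame: bool):
        """Step 102: output the target content through the second screen."""
        if is_video_frame:
            # Case (2): the first image is a frame of the first video;
            # the second screen displays that frame.
            self.second_screen.content = first_image
        else:
            # Case (1): the first image is a picture; it is synthesized with
            # the previously selected pictures into the target video clip.
            self.selected_images.append(first_image)
            self.second_screen.content = list(self.selected_images)
        return self.second_screen.content
```

For example, two picture selections followed by a video-frame selection would leave the second screen showing first a two-frame clip and then the selected frame.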
The embodiment of the invention provides a content output method. On one hand, in the case where the first image displayed on the first screen is a picture, the second screen can update and display the video synthesized from the first image in real time, so that the user can watch the generated video in the process of selecting pictures; on the other hand, in the case where the first image displayed on the first screen is one frame image in the first video, the second screen can display the first image, so that the user can view a picture captured from the video in the course of viewing the video. Therefore, the method and the device provide higher flexibility in converting between pictures and videos.
Optionally, in this embodiment of the present invention, when the target content is the target video segment, the step 102 may be specifically implemented by any one of two optional implementations described below.
First alternative implementation:
Referring to fig. 2, as shown in fig. 5, before "outputting the target content through the second screen" in step 102, the content output method provided by the embodiment of the present invention may further include step 103 as described below. Further, "outputting the target content through the second screen" in step 102 may be specifically realized by step 102A described below.
And 103, the terminal equipment responds to the first input and synthesizes the first image and the target object into a target video clip.
The target object may be an image or a generated video clip that has been selected before the first input is received.
Optionally, in the embodiment of the present invention, when the target object is an image that has been selected before the first input is received, the terminal device may synthesize the first image and the target object into the target video clip according to a selection order of the images or a similarity degree of the images.
For example, assuming that the target object is image 1 and the first image is image 2, the terminal device may sequentially synthesize image 1 and image 2 into the target video clip in the order of selection of the images.
Optionally, in this embodiment of the present invention, in a case that the target object is a video segment that has been generated before the first input is received, the terminal device may add the first image on the basis of the generated video segment, that is, the first image is used as a last frame image of the target video segment.
For example, assuming that the target object is a video clip 1 and the first image is an image 2, the terminal device may add the image 2 on the basis of the video clip 1, that is, the image 2 is taken as the last frame image of the target video clip.
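Both cases of step 103 can be modeled the same way: the target object contributes the earlier frames, and the first image is appended as the last frame. A trivial sketch, with frame lists standing in for actual video data and a hypothetical helper name:

```python
# Hypothetical helper for step 103: the target object is either the list of
# previously selected images or the frames of an already generated video
# clip; either way the first image becomes the last frame of the target clip.
def compose_target_clip(target_object, first_image):
    return list(target_object) + [first_image]
```

For instance, `compose_target_clip(["image 1"], "image 2")` yields `["image 1", "image 2"]`, matching the selection-order example above.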
And 102A, the terminal equipment plays the target video clip through the second screen according to the set first playing speed.
Optionally, in this embodiment of the present invention, the first play speed may be predefined in the terminal device, or may be preset by the user.
Illustratively, take the case where the first play speed is preset by the user. As shown in fig. 6, the user may trigger the terminal device to display a setting interface on the second screen 02, where the setting interface may include setting options of "video playing interval 0.5S", "video playing interval 1S", "video playing interval 2S", and "video playing interval 3S". If the user selects "video playing interval 1S", the terminal device may determine that the first play speed corresponds to a playing interval of 1S, and play the target video clip through the second screen at that speed.
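The effect of the selected playing interval can be sketched as a simple schedule. This is an illustrative model only (the function name and frame representation are assumptions): frame i of the target clip is shown i × interval seconds after playback starts.

```python
# Illustrative model of step 102A: map the user-selected playing interval
# (e.g. 0.5 s, 1 s, 2 s or 3 s) to display times for the clip's frames.
def schedule_playback(frames, interval_s):
    # Frame i is displayed at time i * interval_s after playback starts.
    return [(i * interval_s, frame) for i, frame in enumerate(frames)]
```

With a 1 s interval, a three-frame clip is displayed at times 0 s, 1 s, and 2 s.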
According to the content output method provided by the embodiment of the invention, after the first image and the target object are synthesized into the target video clip, the terminal device can automatically play the target video clip according to the set play speed, so that a user can watch the video automatically played by the terminal device in the process of selecting the picture.
Second alternative implementation:
Referring to fig. 2, as shown in fig. 7, before "outputting the target content through the second screen" in step 102, the content output method provided by the embodiment of the present invention may further include steps 103 and 104 described below. Further, "outputting the target content through the second screen" in step 102 may specifically be realized by step 102B described below.
And 103, the terminal equipment responds to the first input and synthesizes the first image and the target object into a target video clip.
For the specific description of step 103, reference may be made to the related description in the above embodiments, which is not repeated herein.
And 104, receiving a second input of the user by the terminal equipment.
And 102B, the terminal equipment responds to the second input and updates the first video frame image currently displayed on the second screen into a second video frame image.
The first video frame image and the second video frame image may be two adjacent frame images in the target video segment.
Optionally, in an embodiment of the present invention, the second input may be at least one of a touch input, a gravity input, a voice input, a key input, and the like.
For example, it is assumed that the target video segment sequentially includes video frame A, video frame B, video frame C, video frame D, and video frame E in playing order. Assuming that the video frame currently displayed on the second screen is video frame C, if the user clicks the "switch video frame" control 03 as shown in fig. 8, the terminal device may update video frame C to video frame D, i.e., update the first video frame image to the second video frame image, in response to the received click input (i.e., the second input). It will be appreciated that if the user continues to click the "switch video frame" control 03, the terminal device may switch video frame D to video frame E.
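The frame-by-frame behavior of steps 104 and 102B can be sketched as a small controller. Names are hypothetical, and staying on the last frame after the end of the clip is an assumption, since the patent does not specify what happens past the final frame.

```python
# Hypothetical controller for the "switch video frame" control: each second
# input advances the second screen to the next frame of the target clip.
class FrameSwitcher:
    def __init__(self, frames, start_index=0):
        self.frames = frames
        self.index = start_index  # frame currently shown on the second screen

    @property
    def current(self):
        return self.frames[self.index]

    def switch(self):
        # Update the first video frame image to the adjacent second one;
        # remaining on the last frame at the end is an assumed behavior.
        if self.index < len(self.frames) - 1:
            self.index += 1
        return self.current
```

In the example above, a switcher positioned on video frame C advances to D on the first click and to E on the second.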
According to the content output method provided by the embodiment of the invention, after the first image and the target object are synthesized into the target video clip, the terminal equipment can respond to the input of the user and trigger the terminal equipment to play the target video clip frame by frame, so that the flexibility of watching the video frame of the target video clip by the user is improved.
Optionally, in the embodiment of the present invention, after the step 103, the content output method provided in the embodiment of the present invention may further include the following step 105.
And 105, displaying prompt information by the terminal equipment.
The prompt information can be used for indicating that the terminal equipment generates a video according to the picture selected by the user.
For example, after the terminal device synthesizes the first image and the target object into the target video clip, the terminal device may display a prompt message "the terminal device has generated a video according to the picture selected by you" on the first screen or the second screen, so that the user may know that the terminal device has generated the video and trigger the terminal device to play the video (or directly watch the video automatically played by the terminal device).
According to the content output method provided by the embodiment of the invention, the user can be prompted that the terminal device has generated a video according to the pictures selected by the user, so that the user can trigger the terminal device to play the video according to the prompt information, or directly watch the video automatically played by the terminal device.
Optionally, in the embodiment of the present invention, in a case that the target content is the target video segment, after the step 102, the content output method provided in the embodiment of the present invention may further include the following step 106 to step 107, or may further include the following step 106 to step 108.
And 106, the terminal equipment receives a third input of the user.
And step 107, the terminal device responds to the third input, extracts at least two second images from the target video clip, and displays the at least two second images on the target screen.
And step 108, the terminal equipment responds to the third input and displays at least one third image on the target screen.
The target screen may be the first screen or the second screen. The at least two second images may include the first image. The similarity between each third image and an image of the at least two second images is greater than or equal to a first value.
Optionally, in an embodiment of the present invention, the third input may be at least one of a touch input, a gravity input, a voice input, a key input, and the like.
Optionally, in this embodiment of the present invention, the third image may be a still picture, a moving picture, or the like.
Optionally, in this embodiment of the present invention, the third image may be an image stored in the terminal device, or an image acquired by the terminal device from a server.
Further, in the embodiment of the present invention, in a case that at least two second images are displayed on the target screen, the user may adjust the arrangement order of the at least two second images and trigger the terminal device to regenerate the video clip according to the adjusted order.
Further, in the embodiment of the present invention, in a case that at least two second images are displayed on the target screen, the user may replace one second image with one third image and trigger the terminal device to regenerate the video clip from the replacement third image and the remaining second images.
For example, as shown in fig. 9, the user may click a "one-click videos" control 04 displayed on the second screen 02, so that the terminal device, in response to receiving a click input (i.e., a third input) on the control 04, extracts images from the target video clip, displays the extracted "picture 1", "picture 2", "picture 3", "picture 4", "picture 5", "picture 6", "picture 7", and "picture 8" in the first region of the first screen 01, and displays "similar picture a", "similar picture b", and "similar picture c", whose similarity to the extracted pictures is greater than or equal to the first numerical value, in the second region of the first screen 01. Further, if the user selects "similar picture b" and "picture 1" on the first screen 01, the terminal device may replace "picture 1" with "similar picture b" and regenerate the video clip from "similar picture b", "picture 2", "picture 3", …, "picture 8".
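The extract, replace, and regenerate flow of this example can be sketched as follows, under stated assumptions: the function names are hypothetical, frames and pictures are represented as plain values, and the similarity measure is supplied by the caller (a real device might use, e.g., a perceptual hash or a feature embedding).

```python
def extract_second_images(clip_frames, count=8):
    """Extract up to `count` evenly spaced frames from the target video
    clip; these play the role of "picture 1" ... "picture 8" above."""
    if not clip_frames:
        return []
    step = max(1, len(clip_frames) // count)
    return clip_frames[::step][:count]


def similar_candidates(library, extracted, similarity, first_value=0.8):
    """Return stored images whose similarity to at least one extracted
    frame is greater than or equal to the first value (the second-region
    "similar picture" list)."""
    return [img for img in library
            if any(similarity(img, frame) >= first_value
                   for frame in extracted)]


def replace_and_regenerate(second_images, old, new):
    """Replace one second image with a chosen similar image; the result
    is the frame sequence from which the video clip is regenerated."""
    return [new if img == old else img for img in second_images]
```

For instance, `replace_and_regenerate(pictures, "picture 1", "similar picture b")` yields the frame list for the regenerated clip of the example above.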
The content output method provided by the embodiment of the invention can, in response to a user input, extract at least two second images from the target video clip, so that the user can edit the at least two second images and trigger the terminal device to regenerate the video clip, thereby improving the user's satisfaction with the generated video clip.
Optionally, in this embodiment of the present invention, when the target content is the target video segment, the step 102 may be specifically implemented by the following step 102C.
And 102C, the terminal equipment responds to the first input, outputs the target object through the first area of the second screen and outputs the target video clip through the second area of the second screen.
The target object may be an image or a generated video clip that has been selected before the first input is received. The second area may be a partial area in the first area, or the second area may be an area on the second screen other than the first area.
Optionally, in this embodiment of the present invention, the first area may be the area where a main interface of the second screen is located, and the second area may be the area where a picture-in-picture interface of the second screen is located. The picture-in-picture interface may be superimposed on the main interface.
For example, as shown in fig. 10, the first area may be the area where a video preview plane (the main interface) of the second screen is located, and the second area 05 may be the area where a picture-in-picture interface of the second screen is located, i.e., a partial area of the first area. Before the terminal device receives the first input of the user on the first screen 01, the terminal device may output the target object in the first area of the second screen 02; after the terminal device receives the first input, it may keep outputting the target object in the first area of the second screen 02 while outputting the target video clip through the second area 05 of the second screen 02.
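The placement of the second region inside the first region can be sketched as follows; the corner position, scale, and margin are illustrative assumptions, not values given by the embodiment.

```python
def pip_region(screen_w, screen_h, scale=0.25, margin=16):
    """Place the second (picture-in-picture) region as a sub-rectangle of
    the first (main-interface) region, here in the bottom-right corner.

    Returns (x, y, w, h) in screen pixels. The scale and margin are
    hypothetical defaults chosen for illustration only.
    """
    w, h = int(screen_w * scale), int(screen_h * scale)
    x = screen_w - w - margin   # right-aligned inside the first region
    y = screen_h - h - margin   # bottom-aligned inside the first region
    return x, y, w, h


# For a 1080 x 1920 second screen, the PiP window occupies a quarter-size
# rectangle inset from the bottom-right corner of the main interface.
region = pip_region(1080, 1920)
```

Because the returned rectangle lies entirely within the screen bounds, the second area is a partial area of the first area, matching the superimposed layout described above.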
Further, after the step 102C, the content output method provided by the embodiment of the present invention may further include the following steps 109 and 110.
And step 109, the terminal equipment receives a fourth input of the user.
Step 110, the terminal device responds to the fourth input, and executes at least one of the following items: processing video frames in the target video clip; and outputting the target video clip in a full screen mode through the second screen.
Optionally, in an embodiment of the present invention, the fourth input may be at least one of a touch input, a gravity input, a voice input, a key input, and the like.
Optionally, in the embodiment of the present invention, processing the video frame in the target video segment includes at least one of the following: deleting the video frames in the target video clip, adjusting the sequence of the video frames in the target video clip, beautifying one or more video frames in the target video clip, setting a playing special effect for the video frames in the target video clip, and setting a playing time interval for the video frames in the target video clip.
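A few of the frame-processing operations listed above can be sketched as plain list transformations. The function names are hypothetical and frames are treated as opaque values; beautification and play special effects, which depend on actual image processing, are omitted.

```python
def delete_frames(frames, indices):
    """Delete the video frames at the given indices from the clip."""
    drop = set(indices)
    return [f for i, f in enumerate(frames) if i not in drop]


def reorder_frames(frames, order):
    """Adjust the sequence of the video frames; `order` lists the source
    index for each position in the new sequence."""
    return [frames[i] for i in order]


def set_play_intervals(frames, seconds):
    """Set a playing time interval by pairing each frame with a display
    duration in seconds."""
    return [(f, seconds) for f in frames]
```

A real editor would apply these to decoded frame buffers and re-encode the clip afterwards.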
Optionally, in this embodiment of the present invention, the step 110 may be implemented in any one of the following manners:
In the first mode, if the user determines that the video frames in the target video clip do not need to be processed, the user can trigger the terminal device, through the fourth input, to output the target video clip in a full screen mode through the second screen.
In the second mode, if the user determines that the video frames in the target video clip need to be processed, the user can trigger the terminal device, through the fourth input, to process the video frames in the target video clip and output the processed target video clip in a full screen mode through the second screen.
In the third mode, if the user determines that the video frames in the target video clip need to be processed, the user can trigger the terminal device, through the fourth input, to process the video frames in the target video clip. After the terminal device completes processing of the video frames, the user can trigger the terminal device, through a fifth input, to output the target video clip in a full screen mode through the second screen.
According to the content output method provided by the embodiment of the invention, the target object can be output through the first area of the second screen, and the target video clip can be output through the second area of the second screen. Therefore, the user can preview the target video clip in the second area and determine whether to process the video frames in the target video clip, which improves the efficiency of editing the video clip.
Optionally, in the embodiment of the present invention, when the target content is the first image, after the step 101, the content output method provided in the embodiment of the present invention may further include the following step 111.
Step 111, the terminal device responds to the first input and deletes the first image from the first video.
It should be noted that, in the embodiment of the present invention, the execution sequence of step 102 and step 111 is not specifically limited. That is, the terminal device may perform step 102 first and then perform step 111; step 111 may be performed first, and then step 102 may be performed; step 102 and step 111 may also be performed simultaneously. The method can be determined according to actual use requirements.
Optionally, in this embodiment of the present invention, the first video may be stored in a first storage area of the terminal device.
Illustratively, assume that the first video includes video frame 1, video frame 2, …, and video frame 10 in playing order. Assuming that the first image selected by the user is video frame 8, the terminal device may combine video frame 8 with the target object and delete video frame 8 of the first video from the first storage area of the terminal device.
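This combine-and-delete behavior can be sketched as follows, assuming the first video and the target object are held as simple frame lists; `move_frame_to_clip` is a hypothetical name for illustration only.

```python
def move_frame_to_clip(first_video, target_object, frame_index):
    """Combine the selected frame with the target object into a new clip
    and delete that frame from the first video (the first storage area)."""
    frame = first_video.pop(frame_index)  # step 111: delete from first video
    return target_object + [frame]        # synthesize into the target clip


# Example mirroring the description: frames 1..10, user selects frame 8.
first_video = [f"video frame {i}" for i in range(1, 11)]
clip = move_frame_to_clip(first_video, ["target object"], 7)
```

Whether the deletion happens before, after, or concurrently with the synthesis (step 102 vs. step 111) is unconstrained, consistent with the note above.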
According to the content output method provided by the embodiment of the invention, the image selected by the user can be deleted from the first storage area in response to the input of the user, so that the first video can be edited.
As shown in fig. 11, an embodiment of the present invention provides a terminal device 1100. The terminal device may include a first screen and a second screen. The terminal device may include a receiving module 1101 and an output module 1102. A receiving module 1101, which may be configured to receive a first input of a user, where the first input may be a selection input of a first image displayed on a first screen; an output module 1102 may be configured to output the target content through the second screen in response to the first input received by the receiving module 1101. The first image is a picture, and the target content can be a target video clip comprising the first image; or, the first image is a frame image in a first video, the target content may be the first image, and the first video may be a video output through a first screen.
Optionally, in the embodiment of the present invention, the target content is a target video segment. With reference to fig. 11, as shown in fig. 12, the terminal device provided in the embodiment of the present invention may further include a synthesis module 1103. The synthesis module 1103 may be configured to synthesize the first image and a target object, which may be an image selected before the first input is received or a generated video clip, into the target video clip before the output module 1102 outputs the target content through the second screen; the output module 1102 may be specifically configured to play the target video segment synthesized by the synthesis module 1103 through the second screen according to the set first play speed.
Optionally, in the embodiment of the present invention, the target content is a target video segment. As shown in fig. 13, the terminal device provided in the embodiment of the present invention may further include a synthesis module 1103. A synthesizing module 1103, which may be configured to synthesize the first image and a target object, which may be an image selected before the first input is received or a generated video clip, into a target video clip before the output module 1102 outputs the target content through the second screen; the receiving module 1101 may be further configured to receive a second input from the user; the output module 1102 may be specifically configured to update a first video frame image currently displayed on a second screen to a second video frame image in response to the second input received by the receiving module 1101, where the first video frame image and the second video frame image may be two adjacent frame images in the target video segment.
Optionally, in the embodiment of the present invention, the target content is a target video segment. The receiving module 1101 may be further configured to receive a third input of the user after the output module 1102 outputs the target content through the second screen; the output module 1102 may be further configured to extract at least two second images from the target video clip in response to the third input received by the receiving module 1101, and display the at least two second images on the target screen, where the at least two second images may include the first image, and the target screen may be the first screen or the second screen.
Optionally, in this embodiment of the present invention, the output module 1102 may be further configured to display at least one third image on the target screen in response to a third input received by the receiving module 1101, where a similarity between each third image and an image in at least two second images may be greater than or equal to the first numerical value.
Optionally, in the embodiment of the present invention, the target content is a target video segment. The output module 1102 may be specifically configured to output the target object through the first area of the second screen, and output the target video clip through the second area of the second screen. Wherein the target object may be an image or a video clip that has been selected or generated prior to receiving the first input; the second area may be a partial area of the first area, or the second area may be an area on the second screen other than the first area.
Further, the receiving module 1101 may be further configured to receive a fourth input from the user; the output module 1102 may be further configured to, in response to the fourth input received by the receiving module 1101, perform at least one of: processing video frames in the target video clip; and outputting the target video clip in a full screen mode through the second screen.
Optionally, in this embodiment of the present invention, the target content is a first image. Referring to fig. 11, as shown in fig. 13, the terminal device may further include a processing module 1104. The processing module 1104 may be configured to delete the first image from the first video in response to the first input received by the receiving module 1101.
The terminal device provided by the embodiment of the present invention can implement each process implemented by the terminal device in the above method embodiments, and is not described here again to avoid repetition.
The embodiment of the invention provides a terminal device. On the one hand, in a case that the first image displayed on the first screen of the terminal device is a picture, the second screen of the terminal device can update and display, in real time, the video synthesized from the first image, so that the user can watch the generated video in the process of selecting pictures; on the other hand, in a case that the first image displayed on the first screen is a frame image in the first video, the second screen can display the first image, so that the user can view a picture captured from the video in the course of watching the video. Therefore, the terminal device enables more flexible conversion between images and videos.
Fig. 14 is a schematic diagram of a hardware structure of a terminal device for implementing various embodiments of the present invention. As shown in fig. 14, the terminal device 200 includes, but is not limited to: radio frequency unit 201, network module 202, audio output unit 203, input unit 204, sensor 205, display unit 206, user input unit 207, interface unit 208, memory 209, processor 210, and power supply 211. Those skilled in the art will appreciate that the terminal device configuration shown in fig. 14 is not intended to be limiting, and that terminal devices may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal device, a wearable device, a pedometer, and the like.
The user input unit 207 may be configured to receive a first input of a user, where the first input is a selection input of a first image displayed on a first screen; the processor 210 may be configured to control the display unit 206 to output the target content through the second screen in response to the first input received by the user input unit 207. The first image is a picture, and the target content is a target video clip comprising the first image; or, the first image is a frame image in a first video, the target content is the first image, and the first video is a video output through a first screen.
The embodiment of the invention provides a terminal device. On the one hand, in a case that the first image displayed on the first screen of the terminal device is a picture, the second screen of the terminal device can update and display, in real time, the video synthesized from the first image, so that the user can watch the generated video in the process of selecting pictures; on the other hand, in a case that the first image displayed on the first screen is a frame image in the first video, the second screen can display the first image, so that the user can view a picture captured from the video in the course of watching the video. Therefore, the terminal device enables more flexible conversion between images and videos.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 201 may be used for receiving and sending signals during a message transmission and reception process or a call process; specifically, it receives downlink data from a base station and then sends the received downlink data to the processor 210 for processing, and transmits uplink data to the base station. In general, the radio frequency unit 201 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 201 can also communicate with a network and other devices through a wireless communication system.
The terminal device provides the user with wireless broadband internet access through the network module 202, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 203 may convert audio data received by the radio frequency unit 201 or the network module 202 or stored in the memory 209 into an audio signal and output as sound. Also, the audio output unit 203 may also provide audio output related to a specific function performed by the terminal apparatus 200 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 203 includes a speaker, a buzzer, a receiver, and the like.
The input unit 204 is used to receive an audio or video signal. The input unit 204 may include a Graphics Processing Unit (GPU) 2041 and a microphone 2042, and the graphics processor 2041 processes image data of a still picture or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 206. The image frames processed by the graphics processor 2041 may be stored in the memory 209 (or other storage medium) or transmitted via the radio frequency unit 201 or the network module 202. The microphone 2042 may receive sound and may be capable of processing such sound into audio data. In the case of a phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station and output via the radio frequency unit 201.
The terminal device 200 further comprises at least one sensor 205, such as light sensors, motion sensors and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 2061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 2061 and/or the backlight when the terminal apparatus 200 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the terminal device posture (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration identification related functions (such as pedometer, tapping), and the like; the sensors 205 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 206 is used to display information input by the user or information provided to the user. The Display unit 206 may include a Display panel 2061, and the Display panel 2061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 207 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the terminal device. Specifically, the user input unit 207 includes a touch panel 2071 and other input devices 2072. The touch panel 2071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., an operation by the user on or near the touch panel 2071 using a finger, a stylus, or any other suitable object or attachment). The touch panel 2071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects a signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 210, and receives and executes commands sent by the processor 210. In addition, the touch panel 2071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. The user input unit 207 may include other input devices 2072 in addition to the touch panel 2071. In particular, the other input devices 2072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not further described herein.
Further, the touch panel 2071 may be overlaid on the display panel 2061. When the touch panel 2071 detects a touch operation on or near it, the touch operation is transmitted to the processor 210 to determine the type of the touch event, and then the processor 210 provides a corresponding visual output on the display panel 2061 according to the type of the touch event. Although the touch panel 2071 and the display panel 2061 are shown as two separate components in fig. 14, in some embodiments, the touch panel 2071 and the display panel 2061 may be integrated to implement the input and output functions of the terminal device, which is not limited herein.
The interface unit 208 is an interface for connecting an external device to the terminal apparatus 200. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 208 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal apparatus 200 or may be used to transmit data between the terminal apparatus 200 and the external device.
The memory 209 may be used to store software programs as well as various data. The memory 209 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 209 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 210 is a control center of the terminal device, connects various parts of the entire terminal device by using various interfaces and lines, and performs various functions of the terminal device and processes data by running or executing software programs and/or modules stored in the memory 209 and calling data stored in the memory 209, thereby performing overall monitoring of the terminal device. Processor 210 may include one or more processing units; optionally, the processor 210 may integrate an application processor and a modem processor, wherein the application processor mainly handles operating systems, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 210.
Terminal device 200 may also include a power source 211 (e.g., a battery) for providing power to various components, and optionally, power source 211 may be logically connected to processor 210 via a power management system, so as to implement functions of managing charging, discharging, and power consumption via the power management system.
In addition, the terminal device 200 includes some functional modules that are not shown, and are not described in detail here.
Optionally, an embodiment of the present invention further provides a terminal device, which includes the processor 210 shown in fig. 14, the memory 209, and a computer program stored in the memory 209 and capable of running on the processor 210, where the computer program is executed by the processor 210 to implement the processes of the foregoing method embodiment, and can achieve the same technical effect, and details are not described here to avoid repetition.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the processes of the method embodiments, and can achieve the same technical effects, and in order to avoid repetition, the details are not repeated here. Examples of the computer-readable storage medium include a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, and an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method described in the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (14)

1. A content output method applied to a terminal device including a first screen and a second screen, the method comprising:
receiving a first input of a user, wherein the first input is a selection input of a first image displayed on the first screen;
outputting target content through the second screen in response to the first input;
the first image is a picture, and the target content is a target video clip comprising the first image; or the first image is a frame of image in a first video, the target content is the first image, and the first video is a video played through the first screen;
the target content is a target video clip; after the outputting of the target content through the second screen, the method further includes:
receiving a third input of the user;
extracting, in response to the third input, at least two second images from the target video clip, and displaying the at least two second images on a target screen, wherein the at least two second images include the first image, and the target screen is the first screen or the second screen.
2. The method of claim 1, wherein the target content is a target video clip; before outputting the target content through the second screen, the method further includes:
synthesizing the first image and a target object into the target video clip, wherein the target object is an image selected before the first input is received or a generated video clip;
the outputting of the target content through the second screen includes:
and playing the target video clip through the second screen according to the set first playing speed.
3. The method of claim 1, wherein the target content is a target video clip; before outputting the target content through the second screen, the method further includes:
synthesizing the first image and a target object into the target video clip, wherein the target object is an image selected before the first input is received or a generated video clip;
receiving a second input of the user;
the outputting of the target content through the second screen includes:
in response to the second input, updating a first video frame image currently displayed on the second screen to a second video frame image, wherein the first video frame image and the second video frame image are two adjacent frame images in the target video segment.
4. The method of claim 1, further comprising:
in response to the third input, displaying at least one third image on the target screen, each third image having a similarity greater than or equal to a first numerical value to an image of the at least two second images.
5. The method of claim 1, wherein the target content is a target video clip; the outputting of the target content through the second screen includes:
outputting a target object through a first region of the second screen and outputting the target video clip through a second region of the second screen;
wherein the target object is an image or a generated video clip selected before the first input is received; and the second area is a partial area of the first area, or the second area is an area on the second screen other than the first area.
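Claim 5's layout constraint, that the second area is either contained in the first area or disjoint from it on the second screen, can be expressed with simple rectangle geometry. This sketch and all its names are illustrative only:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    x: int
    y: int
    w: int
    h: int

    def contains(self, o):
        """True if rectangle o lies entirely inside self."""
        return (self.x <= o.x and self.y <= o.y and
                o.x + o.w <= self.x + self.w and
                o.y + o.h <= self.y + self.h)

    def overlaps(self, o):
        """True if the two rectangles share any area."""
        return (self.x < o.x + o.w and o.x < self.x + self.w and
                self.y < o.y + o.h and o.y < self.y + self.h)

def valid_layout(first_area, second_area):
    """Claim 5: the second area is a partial area of the first area,
    or an area outside the first area."""
    return first_area.contains(second_area) or not first_area.overlaps(second_area)

first = Rect(0, 0, 540, 960)    # e.g. left half of the second screen
inset = Rect(20, 20, 200, 150)  # picture-in-picture inside the first area
right = Rect(540, 0, 540, 960)  # disjoint area beside the first area
```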
6. The method of claim 1, wherein the target content is the first image; after receiving the first input of the user, the method further comprises:
deleting the first image from the first video in response to the first input.
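The deletion step of claim 6 is a simple frame removal; a minimal sketch with the first video again modeled as a list of frames (the index-based interface is an assumption):

```python
def delete_frame(video, index):
    """Return the video with the selected frame (the 'first image')
    removed; the original list is left unchanged."""
    if not 0 <= index < len(video):
        raise IndexError("no such frame")
    return video[:index] + video[index + 1:]

video = ["f0", "f1", "f2", "f3"]
trimmed = delete_frame(video, 1)
```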
7. A terminal device, comprising a first screen and a second screen, wherein the terminal device further comprises a receiving module and an output module;
the receiving module is configured to receive a first input of a user, wherein the first input is a selection input of a first image displayed on the first screen;
the output module is configured to output target content through the second screen in response to the first input received by the receiving module;
the first image is a picture, and the target content is a target video clip comprising the first image; or the first image is a frame of image in a first video, the target content is the first image, and the first video is a video played through the first screen;
the target content is a target video clip;
the receiving module is further configured to receive a third input of the user after the target content is output through the second screen by the output module;
the output module is further configured to extract at least two second images from the target video clip in response to the third input received by the receiving module, and display the at least two second images on a target screen, wherein the at least two second images include the first image, and the target screen is the first screen or the second screen.
8. The terminal device of claim 7, wherein the target content is a target video clip; the terminal device further comprises a synthesizing module;
the synthesizing module is configured to synthesize the first image and a target object into the target video clip before the target content is output through the second screen by the output module, wherein the target object is an image or a generated video clip selected before the first input is received;
the output module is specifically configured to play the target video clip synthesized by the synthesizing module through the second screen at a set first playback speed.
9. The terminal device of claim 7, wherein the target content is a target video clip; the terminal device further comprises a synthesizing module;
the synthesizing module is configured to synthesize the first image and a target object into the target video clip before the target content is output through the second screen by the output module, wherein the target object is an image or a generated video clip selected before the first input is received;
the receiving module is further configured to receive a second input of the user;
the output module is specifically configured to update, in response to the second input received by the receiving module, a first video frame image currently displayed on the second screen to a second video frame image, wherein the first video frame image and the second video frame image are two adjacent frame images in the target video clip.
10. The terminal device of claim 7,
the output module is further configured to display, in response to the third input received by the receiving module, at least one third image on the target screen, wherein each third image has a similarity greater than or equal to a first numerical value to one of the at least two second images.
11. The terminal device of claim 7, wherein the target content is a target video clip;
the output module is specifically configured to output a target object through a first area of the second screen, and output the target video clip through a second area of the second screen;
wherein the target object is an image or a generated video clip selected before the first input is received; and the second area is a partial area of the first area, or the second area is an area on the second screen other than the first area.
12. The terminal device of claim 7, wherein the target content is the first image; the terminal device further comprises a processing module;
the processing module is configured to delete the first image from the first video in response to the first input received by the receiving module.
13. A terminal device, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the content output method according to any one of claims 1 to 6.
14. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the content output method according to any one of claims 1 to 6.
CN201910143224.0A 2019-02-26 2019-02-26 Content output method and terminal equipment Active CN110022445B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910143224.0A CN110022445B (en) 2019-02-26 2019-02-26 Content output method and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910143224.0A CN110022445B (en) 2019-02-26 2019-02-26 Content output method and terminal equipment

Publications (2)

Publication Number Publication Date
CN110022445A CN110022445A (en) 2019-07-16
CN110022445B true CN110022445B (en) 2022-01-28

Family

ID=67189118

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910143224.0A Active CN110022445B (en) 2019-02-26 2019-02-26 Content output method and terminal equipment

Country Status (1)

Country Link
CN (1) CN110022445B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110865758B * 2019-10-29 2022-02-22 Vivo Mobile Communication Co., Ltd. Display method and electronic device
CN113849142B * 2021-09-26 2024-05-28 Shenzhen Huole Technology Development Co., Ltd. Image display method, device, electronic device and computer-readable storage medium
CN114125137A * 2021-11-08 2022-03-01 Vivo Mobile Communication Co., Ltd. Video display method and device, electronic device and readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108280136A * 2017-12-27 2018-07-13 Nubia Technology Co., Ltd. Multimedia object preview method, device and computer-readable storage medium
CN108881742A * 2018-06-28 2018-11-23 Vivo Mobile Communication Co., Ltd. Video generation method and terminal device
CN108920239A * 2018-06-29 2018-11-30 Vivo Mobile Communication Co., Ltd. Long screenshot method and mobile terminal
CN109005314A * 2018-08-27 2018-12-14 Vivo Mobile Communication Co., Ltd. Image processing method and terminal
CN109102555A * 2018-06-29 2018-12-28 Vivo Mobile Communication Co., Ltd. Image editing method and terminal

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103024415B * 2012-12-07 2015-11-18 Shenzhen TCL New Technology Co., Ltd. Information source identification method and device based on single-screen dual display
KR102390809B1 * 2015-08-12 2022-04-26 Samsung Electronics Co., Ltd. Method for providing image, electronic apparatus and storage medium
CN106933525B * 2017-03-09 2019-09-20 Hisense Mobile Communications Technology Co., Ltd. Method and device for displaying an image
CN108093128A * 2017-11-30 2018-05-29 Nubia Technology Co., Ltd. Display method based on a dual-screen mobile terminal, mobile terminal and storage medium
CN109085968B * 2018-06-27 2021-02-12 Vivo Mobile Communication Co., Ltd. Screen capture method and terminal device
CN109089146A * 2018-08-30 2018-12-25 Vivo Mobile Communication Co., Ltd. Method and terminal device for controlling video playback
CN109917995B * 2019-01-25 2021-01-08 Vivo Mobile Communication Co., Ltd. Object processing method and terminal device

Also Published As

Publication number Publication date
CN110022445A (en) 2019-07-16

Similar Documents

Publication Publication Date Title
CN110891144B (en) Image display method and electronic equipment
CN109862267B (en) Shooting method and terminal equipment
CN108495029B (en) Photographing method and mobile terminal
CN109525874B (en) Screen capturing method and terminal equipment
CN110096326B (en) Screen capturing method, terminal equipment and computer readable storage medium
CN110062105B (en) Interface display method and terminal equipment
CN109240577B (en) Screen capturing method and terminal
CN109922265B (en) Video shooting method and terminal equipment
CN109032486B (en) Display control method and terminal equipment
CN111050070B (en) Video shooting method and device, electronic equipment and medium
CN110099296B (en) Information display method and terminal equipment
CN111010523B (en) Video recording method and electronic equipment
CN108124059B (en) Recording method and mobile terminal
CN110865745A (en) Screen capturing method and terminal equipment
CN110868633A (en) Video processing method and electronic equipment
CN109407948B (en) Interface display method and mobile terminal
CN109828731B (en) Searching method and terminal equipment
CN108616772B (en) Bullet screen display method, terminal and server
CN109618218B (en) Video processing method and mobile terminal
CN110022445B (en) Content output method and terminal equipment
CN110769174B (en) Video viewing method and electronic equipment
CN109246474B (en) Video file editing method and mobile terminal
CN108804628B (en) Picture display method and terminal
CN108174109B (en) Photographing method and mobile terminal
CN111083374B (en) Filter adding method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant