CN114827737A - Image generation method and device and electronic equipment

Image generation method and device and electronic equipment

Info

Publication number
CN114827737A
Authority
CN
China
Prior art keywords
image
input
window
target
displaying
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210443670.5A
Other languages
Chinese (zh)
Inventor
王山
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202210443670.5A
Publication of CN114827737A
Pending legal-status Critical Current

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 - End-user applications
    • H04N 21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N 21/47205 - ... for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 - ... based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 - ... for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845 - ... for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431 - Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/4312 - ... involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses an image generation method and apparatus and an electronic device, and belongs to the technical field of image generation. The scheme includes: receiving a first input while a playing interface of a target video is displayed; in response to the first input, displaying a first window and a second window, where the first window includes a first image whose image content includes at least part of the content of a first playing interface in the target video, and the second window includes a second image whose image content includes at least part of the content of a second playing interface in the target video; and generating a target image based on the first image and the second image.

Description

Image generation method and device and electronic equipment
Technical Field
The application belongs to the technical field of image generation, and particularly relates to an image generation method and device and electronic equipment.
Background
When a user watches a video on an electronic device and finds a picture of interest in the video, the picture can be saved by taking a screenshot.
In the related art, the user can trigger the electronic device, through a screenshot operation, to capture the picture displayed on the current interface and store the screenshot in the gallery; the user can then edit the obtained screenshot on the electronic device, for example by cropping part of the picture or by splicing a plurality of screenshots together.
However, the above process is cumbersome.
Disclosure of Invention
The embodiments of the present application aim to provide an image generation method, an image generation apparatus, and an electronic device, which can solve the problem that user operations are relatively complicated when screenshots are captured and spliced into an image.
In a first aspect, an embodiment of the present application provides an image generation method, where the method includes: receiving a first input under the condition of displaying a playing interface of a target video; in response to the first input, displaying a first window and a second window, wherein the first window comprises a first image, the image content of the first image comprises at least part of the content of a first playing interface in the target video, and the second window comprises a second image, the image content of the second image comprises at least part of the content of a second playing interface in the target video; generating a target image based on the first image and the second image.
In a second aspect, an embodiment of the present application provides an image generating apparatus, including: the device comprises a receiving module, a display module and a processing module; the receiving module is used for receiving a first input under the condition of displaying a playing interface of the target video; the display module is used for responding to the first input and displaying a first window and a second window, wherein the first window comprises a first image, the image content of the first image comprises at least part of content of a first playing interface in the target video, the second window comprises a second image, and the image content of the second image comprises at least part of content of a second playing interface in the target video; the processing module is used for generating a target image based on the first image and the second image.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor and a memory, where the memory stores a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product, stored on a storage medium, for execution by at least one processor to implement the method according to the first aspect.
In the embodiment of the application, a first input can be received under the condition that a playing interface of a target video is displayed; in response to the first input, a first window and a second window are displayed, wherein the first window comprises a first image, the image content of the first image comprises at least part of the content of a first playing interface in the target video, and the second window comprises a second image, the image content of the second image comprises at least part of the content of a second playing interface in the target video; and a target image is generated based on the first image and the second image. According to this scheme, the first image and the second image can be determined through the first input, and the target image is generated based on them. Because the image content of the first image includes at least part of the content of the first playing interface in the target video and the image content of the second image includes at least part of the content of the second playing interface in the target video, the user can capture at least part of the content of playing interfaces in the target video through the first input and generate the target image from the captured content. In this way, secondary editing of the screenshots by the user can be avoided, and user operation is simplified.
Drawings
Fig. 1 is a schematic flowchart of an image generation method provided in an embodiment of the present application;
Fig. 2 is a first schematic interface diagram of an image generation method provided in an embodiment of the present application;
Fig. 3 is a second schematic interface diagram of an image generation method provided in an embodiment of the present application;
Fig. 4 is a third schematic interface diagram of an image generation method provided in an embodiment of the present application;
Fig. 5 is a fourth schematic interface diagram of an image generation method provided in an embodiment of the present application;
Fig. 6 is a fifth schematic interface diagram of an image generation method provided in an embodiment of the present application;
Fig. 7 is a sixth schematic interface diagram of an image generation method provided in an embodiment of the present application;
Fig. 8 is a schematic structural diagram of an image generating apparatus provided in an embodiment of the present application;
Fig. 9 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
Fig. 10 is a hardware schematic diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar objects and are not necessarily used to describe a particular sequential or chronological order. It should be understood that terms used in this way are interchangeable under appropriate circumstances, so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein. Objects distinguished by "first", "second", and the like are generally of one type, and the number of such objects is not limited; for example, the first object may be one or more than one. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects before and after it.
The image generation method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings by specific embodiments and application scenarios thereof.
In the image generation method provided in the embodiments of the present application, the execution subject may be an electronic device, or a functional module or functional entity in the electronic device capable of implementing the image generation method. The electronic device mentioned in the embodiments of the present application includes, but is not limited to, a mobile phone, a tablet computer, a camera, a wearable device, and the like. The image generation method provided in the embodiments of the present application is described below by taking the electronic device as an example of the execution subject.
As shown in fig. 1, an embodiment of the present application provides an image generation method, which may include steps 101 to 103:
Step 101, the electronic device receives a first input under the condition that a playing interface of the target video is displayed.
When the electronic device displays the playing interface of the target video, if the user wants to trigger the electronic device to capture content in the playing interface, the user can perform a first input on the electronic device, and accordingly, the electronic device can receive the first input of the user. The first input may be used to trigger the electronic device to determine a screenshot area from the playing interface of the target video and capture a screenshot of that area.
Optionally, before the electronic device receives the first input, the user may perform a second input on the electronic device; accordingly, the electronic device may receive the second input and, in response to the second input, display the playing interface of the target video in a screenshot mode.
That is, the user may perform the first input on the electronic device when the play interface of the target video is in the screenshot mode.
Optionally, the second input may be a touch input, a voice input, a gesture input, or the like. For example, the touch input may be a click input or a long-press input by the user on a first control displayed by the electronic device, where the first control may be used to trigger the electronic device to place the currently displayed interface in the screenshot mode.
Illustratively, as shown in fig. 2, the playing interface 21 of the target video may include a first control 22, and when the user wants to perform a screenshot operation on the playing interface 21, the user may perform a click input on the first control 22, and the electronic device may control the playing interface 21 to be in the screenshot mode in response to the click input.
Based on this scheme, the playing interface of the target video can be placed in the screenshot mode in response to the second input of the user. Thus, when the user wants to perform a screenshot operation on the playing interface of the target video, the user can trigger the electronic device to switch the display mode of the interface through the second input, which provides a basis for the electronic device to perform the screenshot operation on the playing interface of the target video.
It should be noted that the screenshot mode refers to a mode in which the user can determine a screenshot area from the playing interface through the first input. For example, when the playing interface of the target video is in the screenshot mode, the user can perform the first input on the playing interface of the target video; alternatively, when the playing interface of the target video is in the screenshot mode, the playing interface may include a screenshot area selection box, and the user may perform the first input on the screenshot area selection box.
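For illustration only, the following minimal Kotlin sketch (not part of the original disclosure) shows one way an Android-based electronic device might toggle the playing interface into the screenshot mode when the first control is clicked; the class name, the view names, and the choice of the Android View API are assumptions.

```kotlin
import android.view.View

// Illustrative sketch only: toggling screenshot mode with the first control.
// The view names are assumptions, not the disclosed implementation.
class ScreenshotModeController(
    private val firstControl: View,   // e.g. control 22 in Fig. 2
    private val selectionBox: View    // screenshot area selection box
) {
    var inScreenshotMode = false
        private set

    init {
        firstControl.setOnClickListener {
            inScreenshotMode = !inScreenshotMode
            // The selection box is only shown while the play interface is in screenshot mode.
            selectionBox.visibility = if (inScreenshotMode) View.VISIBLE else View.GONE
        }
    }
}
```

In such a sketch, the first input would then only be handled while inScreenshotMode is true.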
Step 102, the electronic device displays a first window and a second window in response to the first input.
The first window comprises a first image, the image content of the first image comprises at least part of content of a first playing interface in the target video, the second window comprises a second image, and the image content of the second image comprises at least part of content of a second playing interface in the target video.
Optionally, the first playing interface and the second playing interface may be the same interface in the target video, or may be different interfaces in the target video. Under the condition that the first playing interface and the second playing interface are the same interface, the first input can comprise a first sub-input and a second sub-input; in a case where the first play interface and the second play interface are different interfaces, the first input may include a third sub input, a fourth sub input, and a fifth sub input.
Optionally, in a case where the first playing interface and the second playing interface are the same interface, the electronic device may display the first window in response to a first sub-input of the user to a first display area in the first playing interface, and display the second window in response to a second sub-input of the user to a second display area in the first playing interface, where the area range of the first display area is at least partially different from that of the second display area.
Optionally, the first sub-input may be a touch input that a user defines a first display area in the playing interface of the target video, and the second sub-input may be a touch input that a user defines a second display area in the playing interface of the target video; alternatively, the first sub-input may be input of the first display area determined by the user through editing input of a screenshot area selection box displayed in the play interface of the target video, and the second sub-input may be input of the second display area determined by the user through editing input of the screenshot area selection box.
Optionally, the screenshot area selection box in the embodiments of the present application may be a rectangle, a circle, an ellipse, a rhombus, or the like, which may be determined according to actual use requirements and is not limited in the embodiments of the present application.
Based on the scheme, the first input can comprise a first sub-input and a second sub-input, so that the electronic equipment can determine different screenshot areas of the same playing interface through the first sub-input and the second sub-input, and therefore screenshot operation of different areas of the same playing interface is completed.
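As a hedged illustration of the same-interface case, the sketch below crops the first display area and the second display area from one snapshot of the shared playing interface; the use of TextureView and the function name are assumptions rather than the disclosed implementation.

```kotlin
import android.graphics.Bitmap
import android.graphics.Rect
import android.view.TextureView

// Illustrative sketch only: both window images come from a single snapshot of the
// same play interface. TextureView and the function name are assumptions.
fun captureTwoRegions(playerView: TextureView, firstArea: Rect, secondArea: Rect): Pair<Bitmap, Bitmap>? {
    val frame = playerView.getBitmap() ?: return null  // snapshot of the currently displayed play interface
    val firstImage = Bitmap.createBitmap(
        frame, firstArea.left, firstArea.top, firstArea.width(), firstArea.height()
    )
    val secondImage = Bitmap.createBitmap(
        frame, secondArea.left, secondArea.top, secondArea.width(), secondArea.height()
    )
    return firstImage to secondImage
}
```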
Optionally, in a case where the first playing interface and the second playing interface are different interfaces, the electronic device may display the first window in response to a third sub-input of the user to the first playing interface; in response to a fourth sub-input of the user, stop displaying the first playing interface and display the second playing interface; and, in response to a fifth sub-input of the user to the second playing interface, display the second window.
Optionally, the third sub-input may be a touch input that the user defines the first display area in the playing interface of the target video, and the fifth sub-input may be a touch input that the user defines the second display area in the playing interface of the target video; alternatively, the third sub-input may be input of the first display area determined by the user through editing input of a screenshot area selection box displayed in the play interface of the target video, and the fifth sub-input may be input of the second display area determined by the user through editing input of the screenshot area selection box.
Optionally, the fourth sub-input is an input for triggering the electronic device to switch the play interface, for example, the fourth sub-input may be a touch input of the user to a second control, where the touch input may be a click input, a long-press input, a drag input, or the like, and the second control is used for controlling the switching display of the interface.
It should be noted that, in the case that one play interface is in the screenshot mode, the play interface may include the second control.
Illustratively, as shown in fig. 3, the electronic device may display a first playing interface 31 in the screenshot mode. If the user wants to perform a screenshot operation on another playing interface in the target video, the user may perform a drag input on a second control 32 on the playback progress bar; as shown in fig. 4, after the user drags the playback progress of the target video from "10:05" to "20:10", the electronic device may display a second playing interface 33 in the screenshot mode.
Based on the scheme, the first input can comprise a fourth sub-input, so that the user can trigger the electronic equipment to switch and display the playing interface in the screenshot mode through the fourth sub-input, and on one hand, the screenshot selection range of the user can be expanded; on the other hand, the electronic equipment can realize screenshot operation on different interfaces.
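For the different-interface case, a minimal sketch of fetching the frame shown after the progress bar is dragged is given below; the helper name and the use of Android's MediaMetadataRetriever are assumptions.

```kotlin
import android.graphics.Bitmap
import android.media.MediaMetadataRetriever

// Illustrative sketch only: after the fourth sub-input drags the progress bar
// (e.g. from 10:05 to 20:10), the frame of the newly displayed play interface
// can be fetched like this. The helper name is an assumption.
fun frameAt(videoPath: String, positionMs: Long): Bitmap? {
    val retriever = MediaMetadataRetriever()
    return try {
        retriever.setDataSource(videoPath)
        // OPTION_CLOSEST asks for the frame nearest to the requested time, not just a key frame.
        retriever.getFrameAtTime(positionMs * 1000, MediaMetadataRetriever.OPTION_CLOSEST)
    } finally {
        retriever.release()
    }
}
```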
Optionally, in addition to determining different screenshot areas through different sub-inputs as described above, the electronic device may also perform an associated screenshot based on a target object determined by the user from the target video. Specifically, the electronic device may determine a target object in response to the first input, and display the first window and the second window based on the target object, where the display contents of the first window and the second window both include the target object.
Optionally, after the electronic device determines the target object, the electronic device may display first prompt information for prompting the user whether to perform the associated screenshot according to the target object.
Illustratively, as shown in fig. 5, a user may select a target object 52 from the third playing interface 51 through a first input, after the electronic device determines the target object 52 in response to the first input, the electronic device may display a first prompt box 53 on the third playing interface 51 in an overlapping manner, where the first prompt box 53 includes first prompt information, a determination control and a cancel control, the first prompt information is used for prompting the user whether to perform an associated screenshot according to the target object, and if the user clicks on the determination control, the electronic device may determine all image areas including the target object 52 from a target video and perform screenshot processing on images in all the image areas; if the user clicks on the cancel control, the electronic device may only perform screenshot on the selected target object 52.
Optionally, the target object may include at least one of: characters, words, scenery, animals, etc.
Based on the scheme, the first image and the second image can be determined according to the target object, so that on one hand, the mode of acquiring the screenshot image by the electronic equipment can be enriched, and on the other hand, the electronic equipment can be triggered to acquire a plurality of screenshots only by one-time operation of a user, so that the user operation can be simplified.
Optionally, before displaying the first window and the second window, the electronic device may receive a third input while a number input box is displayed, and determine the number of display windows in response to the third input, where the display windows include the first window and the second window.
Optionally, the electronic device may display the number input box in a case where the target object is determined and the associated screenshot according to the target object is determined.
Illustratively, as shown in fig. 6, taking the number of display windows as 2 as an example, after the electronic device determines the target object 52 from the third playing interface 51, if the user performs a click input on the determination control in fig. 5, the electronic device may display a number input box 61, and in the case of displaying the number input box 61, the user may input 2 in the number input box 61 and perform a click input on the determination control in the number input box 61, and then the electronic device may determine the first image and the second image from the target video according to a preset rule based on the target object.
Optionally, the preset rule may be to determine an image including the target object from the target video according to the interface playing order of the target video.
Based on this scheme, the user can trigger the electronic device to determine the number of display windows through the third input. On one hand, screenshot images meeting the user's requirement can be obtained; on the other hand, the situation in which too many images containing the target object slow down the electronic device can be avoided.
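The associated-screenshot behaviour described above, bounded by the window count entered in the number input box, could look roughly like the following sketch; the containsTarget detector, the sampling step, and the function names are assumptions and not part of the original disclosure.

```kotlin
import android.graphics.Bitmap
import android.media.MediaMetadataRetriever

// Illustrative sketch only: sample frames of the target video and keep those that
// contain the user-selected target object, up to the requested window count.
// "containsTarget" stands in for whatever object matcher the device actually uses.
fun framesWithTarget(
    videoPath: String,
    containsTarget: (Bitmap) -> Boolean,  // hypothetical detector for the selected target object
    windowCount: Int,                     // value from the number input box, e.g. 2
    stepMs: Long = 1_000                  // assumed sampling interval
): List<Bitmap> {
    val retriever = MediaMetadataRetriever()
    val hits = mutableListOf<Bitmap>()
    try {
        retriever.setDataSource(videoPath)
        val durationMs = retriever
            .extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION)
            ?.toLongOrNull() ?: return hits
        var t = 0L
        // Frames are scanned in playback order, matching the preset rule described above.
        while (t <= durationMs && hits.size < windowCount) {
            retriever.getFrameAtTime(t * 1000, MediaMetadataRetriever.OPTION_CLOSEST)
                ?.takeIf(containsTarget)
                ?.let(hits::add)
            t += stepMs
        }
    } finally {
        retriever.release()
    }
    return hits
}
```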
Step 103, the electronic device generates a target image based on the first image and the second image.
Optionally, the electronic device may stitch the first image and the second image to obtain the target image.
Based on the scheme, the first image and the second image can be spliced to obtain the target image, so that one image comprises a plurality of screenshots of the target video, and the content richness of the target image is improved.
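A minimal sketch of the splicing step is shown below, assuming a simple vertical layout; the layout choice and the function name are assumptions.

```kotlin
import android.graphics.Bitmap
import android.graphics.Canvas

// Illustrative sketch only: splice the first image and the second image into a
// single target image by drawing them one below the other.
fun stitchVertically(firstImage: Bitmap, secondImage: Bitmap): Bitmap {
    val width = maxOf(firstImage.width, secondImage.width)
    val target = Bitmap.createBitmap(width, firstImage.height + secondImage.height, Bitmap.Config.ARGB_8888)
    val canvas = Canvas(target)
    canvas.drawBitmap(firstImage, 0f, 0f, null)                           // first window content on top
    canvas.drawBitmap(secondImage, 0f, firstImage.height.toFloat(), null) // second window content below
    return target
}
```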
Optionally, before generating the target image, the electronic device may further receive a fourth input by the user to a target window, the target window including at least one of: the first window, the second window; in response to the fourth input, updating window information for the target window, the window information including at least one of: position information, size information, display angle information.
Exemplarily, as shown in fig. 7, in a case where the first window 71 and the second window 72 are displayed, the user may trigger the electronic device to perform a rotation process on the first window 71 and perform an enlargement process on the second window 72 through a fourth input. Then, the electronic device performs stitching processing on the updated and displayed image in the first window 71 and the image in the second window 72 to obtain a target image 73.
Based on the scheme, the window information of the first window and the second window can be preprocessed, and then the processed image in the first window and the processed image in the second window are spliced, so that the diversity of image display modes in the target image can be improved.
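The window-information update (for example the rotation and enlargement shown in fig. 7) could be applied to a window image before splicing roughly as follows; the parameters and function name are assumptions, and position changes would simply alter where the result is drawn during stitching.

```kotlin
import android.graphics.Bitmap
import android.graphics.Matrix

// Illustrative sketch only: apply the user's window edits (rotation, scaling)
// to a window image before it is spliced into the target image.
fun applyWindowEdits(source: Bitmap, rotationDegrees: Float, scale: Float): Bitmap {
    val matrix = Matrix().apply {
        postScale(scale, scale)
        postRotate(rotationDegrees)
    }
    return Bitmap.createBitmap(source, 0, 0, source.width, source.height, matrix, true)
}
```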
In the embodiment of the application, the first image and the second image can be determined through the first input, and the target image is generated based on the first image and the second image. Because the image content of the first image includes at least part of the content of the first playing interface in the target video and the image content of the second image includes at least part of the content of the second playing interface in the target video, the user can capture at least part of the content of playing interfaces in the target video through the first input and generate the target image from the captured content. In this way, secondary editing of the screenshots by the user can be avoided, and user operation is simplified.
According to the image generation method provided by the embodiment of the application, the execution subject can be an image generation device. The image generation device provided by the embodiment of the present application will be described with an example in which an image generation device executes an image generation method.
As shown in fig. 8, an embodiment of the present application further provides an image generating apparatus 800, including: a receiving module 801, a display module 802 and a processing module 803; the receiving module 801 is configured to receive a first input under the condition that a playing interface of a target video is displayed; the display module 802 is configured to display, in response to the first input, a first window and a second window, where the first window includes a first image, image content of the first image includes at least part of content of a first playing interface in the target video, and the second window includes a second image, and image content of the second image includes at least part of content of a second playing interface in the target video; the processing module 803 is configured to generate a target image based on the first image and the second image.
Optionally, the receiving module 801 is further configured to receive a second input; the display module 802 is configured to display a play interface of the target video in the screenshot mode in response to the second input.
Optionally, in a case that the first playing interface and the second playing interface are the same interface, the first input includes a first sub-input and a second sub-input; the display module 802 is specifically configured to display the first window in response to the first sub-input to the first display area in the first play interface; responding to a second sub-input of a second display area in the first playing interface, and displaying the second window; wherein the area range of the first display area is at least partially different from the area range of the second display area.
Optionally, in a case that the first playing interface and the second playing interface are different interfaces, the first input includes a third sub input, a fourth sub input, and a fifth sub input; the display module 802 is specifically configured to respond to a third sub-input to the first play interface, and display the first window; responding to the fourth sub-input, canceling the display of the first playing interface, and displaying the second playing interface; displaying the second window in response to the fifth sub-input to the second playback interface.
Optionally, the processing module 803 is further configured to determine, in response to the first input, a target object; the display module 802 is specifically configured to display the first window and the second window based on the target object; wherein the display contents of the first window and the second window include the target object.
Optionally, the receiving module 801 is further configured to receive a third input in a case that the number input box is displayed; the processing module 803 is further configured to determine, in response to the third input, the number of display windows; wherein the display window includes the first window and the second window.
Optionally, the receiving module 801 is further configured to receive a fourth input of the target window by the user, where the target window includes at least one of: the first window, the second window; the display module 802 is further configured to update window information of the target window in response to the fourth input, where the window information includes at least one of: position information, size information, display angle information.
Optionally, the processing module 803 is specifically configured to splice the first image and the second image to obtain the target image.
In the embodiment of the application, the first image and the second image can be determined through the first input, and the target image is generated based on the first image and the second image. Because the image content of the first image includes at least part of the content of the first playing interface in the target video and the image content of the second image includes at least part of the content of the second playing interface in the target video, the user can capture at least part of the content of playing interfaces in the target video through the first input and generate the target image from the captured content. In this way, secondary editing of the screenshots by the user can be avoided, and user operation is simplified.
The image generation apparatus in the embodiment of the present application may be an electronic device, or may be a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. The electronic Device may be, for example, a Mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic Device, a Mobile Internet Device (MID), an Augmented Reality (AR)/Virtual Reality (VR) Device, a robot, a wearable Device, an ultra-Mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and may also be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine, a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The image generation apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system (Android), an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiments of the present application.
The image generation device provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to 7, implement the same technical effect, and is not described herein again to avoid repetition.
Optionally, as shown in fig. 9, an electronic device 900 is further provided in this embodiment of the present application, and includes a processor 901 and a memory 902, where the memory 902 stores a program or an instruction that can be executed on the processor 901, and when the program or the instruction is executed by the processor 901, the steps of the embodiment of the image generation method are implemented, and the same technical effects can be achieved, and are not described again to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 10 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, and a processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may further include a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 1010 through a power management system, so as to manage charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 10 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or arrange components differently, which is not described in detail here.
The user input unit 1007 is configured to receive a first input when a play interface of the target video is displayed; a display unit 1006, configured to display, in response to the first input, a first window and a second window, where the first window includes a first image, an image content of the first image includes at least part of a content of a first playing interface in the target video, and the second window includes a second image, and an image content of the second image includes at least part of a content of a second playing interface in the target video; a processor 1010 configured to generate a target image based on the first image and the second image.
In the embodiment of the application, the first image and the second image can be determined through the first input, and the target image is generated based on the first image and the second image. Because the image content of the first image includes at least part of the content of the first playing interface in the target video and the image content of the second image includes at least part of the content of the second playing interface in the target video, the user can capture at least part of the content of playing interfaces in the target video through the first input and generate the target image from the captured content. In this way, secondary editing of the screenshots by the user can be avoided, and user operation is simplified.
Optionally, the user input unit 1007 is further configured to receive a second input; a display unit 1006, configured to display a play interface of the target video in the screenshot mode in response to the second input.
In the embodiment of the application, the playing interface of the target video can be placed in the screenshot mode in response to the second input of the user, so that when the user wants to perform a screenshot operation on the playing interface of the target video, the user can trigger the electronic device to switch the display mode of the interface through the second input, which provides a basis for the electronic device to perform the screenshot operation on the playing interface of the target video.
Optionally, in a case that the first playing interface and the second playing interface are the same interface, the first input includes a first sub-input and a second sub-input; a display unit 1006, specifically configured to display the first window in response to the first sub-input to a first display area in the first play interface; responding to a second sub-input of a second display area in the first playing interface, and displaying the second window; wherein the area range of the first display area is at least partially different from the area range of the second display area.
In the embodiment of the application, the first input may include a first sub-input and a second sub-input, so that the electronic device may determine different screenshot areas of the same play interface through the first sub-input and the second sub-input, thereby completing screenshot operations on different areas of the same play interface.
Optionally, in a case that the first playing interface and the second playing interface are different interfaces, the first input includes a third sub input, a fourth sub input, and a fifth sub input; a display unit 1006, specifically configured to display the first window in response to a third sub-input to the first play interface; responding to the fourth sub-input, canceling the display of the first playing interface, and displaying the second playing interface; displaying the second window in response to the fifth sub-input to the second playback interface.
In the embodiment of the application, the first input may include a fourth sub-input, so that the user may trigger the electronic device to switch and display the play interface in the screenshot mode through the fourth sub-input, and on one hand, the screenshot selection range of the user may be expanded; on the other hand, the electronic equipment can realize screenshot operation on different interfaces.
Optionally, a processor 1010, further configured to determine a target object in response to the first input; a display unit 1006, specifically configured to display the first window and the second window based on the target object; wherein the display contents of the first window and the second window include the target object.
In the embodiment of the application, the first image and the second image can be determined according to the target object, so that on one hand, the mode of acquiring the screenshot image by the electronic equipment can be enriched, and on the other hand, the electronic equipment can be triggered to acquire a plurality of screenshots only by one-time operation of a user, so that the user operation can be simplified.
Optionally, the user input unit 1007 is further configured to receive a third input in a case where the number input box is displayed; a processor 1010 further configured to determine a number of display windows in response to the third input; wherein the display window includes the first window and the second window.
In the embodiment of the application, the user can trigger the electronic device to determine the number of display windows through the third input. On one hand, screenshot images meeting the user's requirement can be obtained; on the other hand, the situation in which too many images containing the target object slow down the electronic device can be avoided.
Optionally, the user input unit 1007 is further configured to receive a fourth input from the user to the target window, where the target window includes at least one of the following: the first window, the second window; a display unit 1006, further configured to update window information of the target window in response to the fourth input, where the window information includes at least one of: position information, size information, display angle information.
In the embodiment of the application, the window information of the first window and the second window can be preprocessed, and then the processed image in the first window and the processed image in the second window are spliced, so that the diversity of image display modes in the target image can be improved.
Optionally, the processor 1010 is specifically configured to splice the first image and the second image to obtain the target image.
In the embodiment of the application, the first image and the second image can be spliced to obtain the target image, so that one image can comprise a plurality of screenshots of the target video, and the content richness of the target image is improved.
It should be understood that in the embodiment of the present application, the input Unit 1004 may include a Graphics Processing Unit (GPU) 10041 and a microphone 10042, and the Graphics Processing Unit 10041 processes image data of still pictures or videos obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1007 includes at least one of a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen. The touch panel 10071 may include two parts, a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
The memory 1009 may be used to store software programs as well as various data. The memory 1009 may mainly include a first storage area storing programs or instructions and a second storage area storing data, where the first storage area may store an operating system and the application programs or instructions required for at least one function (such as a sound playing function, an image playing function, and the like). Further, the memory 1009 may include volatile memory or non-volatile memory, or both. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), a Static RAM (SRAM), a Dynamic RAM (DRAM), a Synchronous DRAM (SDRAM), a Double Data Rate SDRAM (DDR SDRAM), an Enhanced SDRAM (ESDRAM), a Synchronous Link DRAM (SLDRAM), or a Direct Rambus RAM (DRRAM). The memory 1009 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 1010 may include one or more processing units; optionally, the processor 1010 integrates an application processor, which primarily handles operations involving the operating system, user interface, and applications, and a modem processor, which primarily handles wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into processor 1010.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image generation method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a computer read only memory ROM, a random access memory RAM, a magnetic or optical disk, and the like.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the embodiment of the image generation method, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as a system-on-chip, or a system-on-chip.
Embodiments of the present application provide a computer program product, where the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the processes of the foregoing embodiments of the image generation method, and achieve the same technical effects, and in order to avoid repetition, details are not repeated here.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprises a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatuses of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. An image generation method, comprising:
receiving a first input under the condition of displaying a playing interface of a target video;
in response to the first input, displaying a first window and a second window, wherein the first window comprises a first image, the image content of the first image comprises at least part of the content of a first playing interface in the target video, and the second window comprises a second image, and the image content of the second image comprises at least part of the content of a second playing interface in the target video;
generating a target image based on the first image and the second image.
2. The image generation method of claim 1, wherein prior to receiving the first input, the method further comprises:
receiving a second input;
and responding to the second input, and displaying a playing interface of the target video in a screenshot mode.
3. The image generation method according to claim 1, wherein in a case where the first playback interface and the second playback interface are the same interface, the first input includes a first sub input and a second sub input;
the displaying the first window and the second window in response to the first input specifically includes:
displaying the first window in response to the first sub-input to a first display area in the first play interface;
responding to a second sub-input of a second display area in the first playing interface, and displaying the second window;
wherein the area range of the first display area is at least partially different from the area range of the second display area.
4. The image generation method according to claim 1, wherein in a case where the first playback interface and the second playback interface are different interfaces, the first input includes a third sub input, a fourth sub input, and a fifth sub input;
the displaying the first window and the second window in response to the first input specifically includes:
responding to a third sub-input of the first playing interface, and displaying the first window; responding to the fourth sub-input, canceling the display of the first playing interface, and displaying the second playing interface;
displaying the second window in response to the fifth sub-input to the second playback interface.
5. The image generation method according to claim 1, wherein the displaying a first window and a second window in response to the first input specifically comprises:
determining a target object in response to the first input;
displaying the first window and the second window based on the target object;
wherein the display contents of the first window and the second window include the target object.
6. The image generation method according to claim 5,
before the displaying the first window and the second window, the method further comprises: receiving a third input in a case where the number input box is displayed;
determining a number of display windows in response to the third input;
wherein the display window includes the first window and the second window.
7. The image generation method according to any one of claims 1 to 6, wherein before generating the target image based on the first image and the second image, the method further includes:
receiving a fourth input by the user to a target window, the target window comprising at least one of: the first window, the second window;
in response to the fourth input, updating window information of the target window, the window information including at least one of: position information, size information, display angle information.
8. The image generation method according to any one of claims 1 to 6, wherein generating the target image based on the first image and the second image specifically includes:
and splicing the first image and the second image to obtain the target image.
9. An image generation apparatus, comprising: the device comprises a receiving module, a display module and a processing module;
the receiving module is used for receiving a first input under the condition of displaying a playing interface of the target video;
the display module is used for responding to the first input and displaying a first window and a second window, wherein the first window comprises a first image, the image content of the first image comprises at least part of content of a first playing interface in the target video, the second window comprises a second image, and the image content of the second image comprises at least part of content of a second playing interface in the target video;
the processing module is used for generating a target image based on the first image and the second image.
10. An electronic device comprising a processor and a memory, the memory storing a program or instructions executable on the processor, the program or instructions when executed by the processor implementing the image generation method of any of claims 1-8.
CN202210443670.5A 2022-04-25 2022-04-25 Image generation method and device and electronic equipment Pending CN114827737A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210443670.5A CN114827737A (en) 2022-04-25 2022-04-25 Image generation method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210443670.5A CN114827737A (en) 2022-04-25 2022-04-25 Image generation method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN114827737A true CN114827737A (en) 2022-07-29

Family

ID=82507747

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210443670.5A Pending CN114827737A (en) 2022-04-25 2022-04-25 Image generation method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN114827737A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107682650A (en) * 2017-09-30 2018-02-09 咪咕动漫有限公司 A kind of image processing method and device and storage medium
US20210006867A1 (en) * 2018-08-17 2021-01-07 Tencent Technology (Shenzhen) Company Limited Picture generation method and apparatus, device, and storage medium
CN112929745A (en) * 2018-12-18 2021-06-08 腾讯科技(深圳)有限公司 Video data processing method, device, computer readable storage medium and equipment
CN111143013A (en) * 2019-12-30 2020-05-12 维沃移动通信有限公司 Screenshot method and electronic equipment
CN111638849A (en) * 2020-05-29 2020-09-08 维沃移动通信有限公司 Screenshot method and device and electronic equipment
CN112181250A (en) * 2020-09-28 2021-01-05 四川封面传媒有限责任公司 Mobile terminal webpage screenshot method, device, equipment and storage medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116095234A (en) * 2023-01-31 2023-05-09 维沃移动通信有限公司 Image generation method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination