CN114531547A - Image processing method and device and electronic equipment - Google Patents

Image processing method and device and electronic equipment Download PDF

Info

Publication number
CN114531547A
CN114531547A (application number CN202210170891.XA)
Authority
CN
China
Prior art keywords
image
images
target
moving object
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210170891.XA
Other languages
Chinese (zh)
Inventor
李仕康
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202210170891.XA
Publication of CN114531547A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/62: Control of parameters via user interfaces
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M1/00: Substation equipment, e.g. for use by subscribers
    • H04M1/72: Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724: User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403: User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243: User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72439: User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95: Computational photography systems, e.g. light-field imaging systems
    • H04N23/951: Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses an image processing method and device and electronic equipment, belonging to the field of image processing. The method comprises the following steps: shooting to obtain a first image and M second images, where each of the M second images includes a target moving object; displaying N third images on the first image according to the motion trajectory of the target moving object, where the third images are images corresponding to the area where the target moving object is located in the second images; and generating a target image based on the first image and the N third images; where M and N are both positive integers greater than or equal to 2, and M is greater than or equal to N.

Description

Image processing method and device and electronic equipment
Technical Field
The application belongs to the field of image processing, and particularly relates to an image processing method and device and electronic equipment.
Background
At present, image objects are mostly superimposed with post-production software (such as Photoshop); the basic principle is to superimpose layers using masks or cutouts. Some image-superposition software superimposes a plurality of whole images using a difference blending mode, which can achieve a smear (ghosting) effect on the target.
However, post-production methods require cumbersome post-processing, including but not limited to masking, matting and selection, which can be difficult and burdensome for people without a post-production background. Moreover, superimposing a plurality of whole images introduces noise from changes outside the target, reducing the quality of the target image.
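For contrast, here is a minimal sketch of the whole-image superposition described above, using a per-pixel lighten (max) rule as an assumed stand-in for the difference mode (the application does not specify the exact blend). Because entire frames are blended, every pixel that changes between frames ends up in the result, whether it belongs to the target or not, which is precisely the noise problem noted here.

```python
import numpy as np

def blend_whole_frames(frames):
    """Prior-art style: blend entire frames, not just the target region.
    frames: list of HxWx3 uint8 arrays of identical shape."""
    out = frames[0].copy()
    for f in frames[1:]:
        out = np.maximum(out, f)  # any changed pixel leaks in, target or not
    return out
```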
Disclosure of Invention
The embodiment of the application aims to provide an image processing method, an image processing device and electronic equipment, which can solve the problems of complex operation and poor image quality in image post-processing in the related art.
In a first aspect, an embodiment of the present application provides an image processing method, including:
shooting to obtain a first image and M second images, wherein each of the M second images includes a target moving object;
displaying N third images on the first image according to the motion trajectory of the target moving object, wherein the third images are images corresponding to the area where the target moving object is located in the second image;
generating a target image based on the first image and the N third images;
wherein M and N are both positive integers greater than or equal to 2, and M is greater than or equal to N.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
a shooting module, configured to obtain a first image and M second images by shooting, wherein each of the M second images includes a target moving object;
a display module, configured to display, on the first image, N third images according to the motion trajectory of the target moving object, wherein the third images are images corresponding to the area where the target moving object is located in the second image;
a generating module, configured to generate a target image based on the first image and the N third images;
wherein M and N are both positive integers greater than or equal to 2, and M is greater than or equal to N.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor and a memory, where the memory stores a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product, stored on a storage medium, for execution by at least one processor to implement the method according to the first aspect.
In the embodiment of the application, a first image and M second images are obtained through shooting, where each of the M second images includes a target moving object; N third images are displayed on the first image according to the motion trajectory of the target moving object, where the third images correspond to the area where the target moving object is located in the second images; and a target image is generated based on the first image and the N third images. In this process, only the N third images related to the area where the target moving object is located are displayed on the first image, and the target image is generated from the first image and these N third images, so noise caused by changes of other, non-target objects is not introduced and the quality of the target image is improved; moreover, the whole process requires no cumbersome post-production, making the operation simpler and easier.
Drawings
Fig. 1 is a flowchart of an image processing method provided in an embodiment of the present application;
Fig. 2 is a schematic diagram of a shooting-mode selection page provided in an embodiment of the present application;
Fig. 3 is a schematic diagram of the display of a third image provided in an embodiment of the present application;
Fig. 4 is a first schematic diagram of editing a third image provided in an embodiment of the present application;
Fig. 5 is a second schematic diagram of editing a third image provided in an embodiment of the present application;
Fig. 6 is a third schematic diagram of editing a third image provided in an embodiment of the present application;
Fig. 7 is a fourth schematic diagram of editing a third image provided in an embodiment of the present application;
Fig. 8 is a fifth schematic diagram of editing a third image provided in an embodiment of the present application;
Fig. 9 is a sixth schematic diagram of editing a third image provided in an embodiment of the present application;
Fig. 10 is a seventh schematic diagram of editing a third image provided in an embodiment of the present application;
Fig. 11 is an eighth schematic diagram of editing a third image provided in an embodiment of the present application;
Fig. 12 is a ninth schematic diagram of editing a third image provided in an embodiment of the present application;
Fig. 13 is a schematic structural diagram of an image processing apparatus provided in an embodiment of the present application;
Fig. 14 is a block diagram of an electronic device provided in an embodiment of the present application;
Fig. 15 is a block diagram of another electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application. It is obvious that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present application.
The terms "first", "second" and the like in the description and claims of the present application are used to distinguish between similar elements, not necessarily to describe a particular sequence or chronological order. It should be appreciated that the data so used may be interchanged under appropriate circumstances, so that the embodiments of the application may be practiced in sequences other than those illustrated or described herein; objects distinguished by "first", "second" and the like are generally of one class, and the number of such objects is not limited, e.g., a first object may be one or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates that the related objects before and after it are in an "or" relationship.
The image processing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
As shown in fig. 1, an embodiment of the present application provides an image processing method, including:
step 101, a first image and M second images are obtained through shooting, wherein the M second images all comprise a target moving object.
Specifically, the image processing method of the embodiment of the present application is applied to an electronic device. The electronic device has a plurality of shooting modes, and one shooting mode is selected from among them. If the selected shooting mode is the target shooting mode, shooting may be performed in the target shooting mode. As shown in fig. 2, the shooting operation may be a click input, a double-click input, a long-press input, a slide input, or the like on the shooting control 23 of the electronic device, and is not specifically limited here.
As shown in fig. 2, a static shot is taken through the shooting operation to obtain the first image, and the target moving object 24 is tracked and dynamically shot to obtain the M second images, each of which includes the target moving object.
Step 102: displaying N third images on the first image according to the motion trajectory of the target moving object, wherein the third images are images corresponding to the area where the target moving object is located in the second images; wherein M and N are both positive integers greater than or equal to 2, and M is greater than or equal to N.
Specifically, the M second images are consecutive images obtained by tracking and shooting the target moving object, and each second image contains a third image corresponding to the area where the target moving object is located; that is, the M third images can show the motion trajectory of the target moving object. N of these M third images may be displayed on the first image, which is equivalent to superimposing the N third images on the first image, with the first image and the N third images in different layers. In this way, the N third images displayed on the first image present the motion trajectory of the target moving object.
When N is smaller than M, the N third images are a subset of the M third images; when N is equal to M, the N third images are exactly the M third images.
Step 103: generating a target image based on the first image and the N third images.
specifically, the first image and the N third images displayed on the first image are subjected to image processing, thereby obtaining a target image with a dynamic effect.
In the above embodiment of the present application, a first image and M second images are obtained by shooting, where each of the M second images includes a target moving object; N third images are displayed on the first image according to the motion trajectory of the target moving object, where the third images correspond to the area where the target moving object is located in the second images; and a target image is generated based on the first image and the N third images. In this process, only the N third images related to the area where the target moving object is located are displayed on the first image, so noise caused by changes of other, non-target objects is not introduced and the quality of the target image is improved; in addition, the whole process requires no cumbersome post-production, making the operation simpler and easier.
As an optional embodiment, the step 101 obtains the first image and M second images by shooting, and specifically includes:
controlling a main camera module to shoot to obtain a first image; and
controlling the periscopic camera to shoot the target moving object to obtain the M second images.
Specifically, the electronic device includes a main camera module and a periscopic camera. After the camera function interface is entered, the main camera module and the periscopic camera are opened simultaneously, and the shooting mode of the electronic device is the target shooting mode; that is, the target shooting mode requires both the main camera module and the periscopic camera to be in working state.
When the electronic device shoots the target moving object in the target shooting mode, the main camera module is controlled to take a static shot of the scene to obtain a still image, namely the first image; furthermore, as the target moving object moves, the periscopic camera is controlled to rotate and track-shoot the target moving object, thereby obtaining the M second images.
The periscopic camera is mounted on a wide-angle holder (gimbal), which enables a wide-angle anti-shake effect; with this characteristic, the periscopic camera can track and shoot multiple second images of better image quality, especially of a target moving object at a far focal position.
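For illustration only, a rough sketch of this capture flow follows; the camera objects and their methods (capture, rotate_to_track) are hypothetical stand-ins, since real camera APIs are device-specific and are not specified by the application.

```python
import threading

def capture_in_target_mode(main_cam, periscope_cam, m):
    """Still-shoot on the main module while the periscopic camera
    tracks the moving object and burst-shoots M frames."""
    second_images = []

    def track_and_burst():
        for _ in range(m):
            periscope_cam.rotate_to_track()   # hypothetical gimbal call
            second_images.append(periscope_cam.capture())

    t = threading.Thread(target=track_and_burst)
    t.start()
    first_image = main_cam.capture()          # static shot of the scene
    t.join()
    return first_image, second_images
```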
As an optional embodiment, before the step 101 obtains the first image and M second images, the method further includes:
receiving a first input of a user to a first moving object in a shooting preview interface;
in response to the first input, determining the first moving object as a target moving object.
Specifically, as shown in fig. 2, before the target moving object is shot, a shooting mode is selected; the manual shooting control 22 may be selected in the shooting preview interface, where the manual shooting control 22 corresponds to the manual shooting mode. The first moving object is determined as the target moving object through a first input of the user on the first moving object in the shooting preview interface; that is, the user determines the target moving object by manual selection.
After the shooting mode is selected, the shooting control 23 is clicked to start shooting; static shooting and dynamic tracking continuous shooting are performed during the shooting process, and the shooting control 23 is clicked again to stop shooting when shooting is finished.
It should be noted that the first moving object may be one object or a plurality of objects. For example, as shown in fig. 3, a bird flies and catches food in the air; the manually locked object may be the bird, that is, the first moving object is the bird. The manually locked objects may also be the bird and the flying insect it preys on, in which case the first moving objects are the bird and the insect.
The first input may be a click input, a double-click input, a long-press input, a slide input, or the like on the first moving object in the shooting preview interface, which is not limited in this embodiment.
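One plausible way (an assumption) to resolve the first input is to hit-test the tap coordinate against the bounding boxes of detected objects in the preview frame; the detection step itself is taken as given.

```python
def select_target_by_tap(tap_x, tap_y, detections):
    """detections: list of dicts like {'id': 0, 'box': (x, y, w, h)}.
    Returns the object whose box contains the tap, or None."""
    for obj in detections:
        x, y, w, h = obj['box']
        if x <= tap_x <= x + w and y <= tap_y <= y + h:
            return obj
    return None
```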
As an optional embodiment, before the step 101 obtains the first image and M second images, the method further includes:
and under the condition that a second moving object in the shooting preview interface meets a preset condition, determining the second moving object as a target moving object.
Specifically, as shown in fig. 2, before the target moving object is shot, a shooting mode is selected; the automatic shooting control 21 may be selected in the shooting preview interface, where the automatic shooting control 21 corresponds to the automatic shooting mode. If a second moving object in the shooting preview interface meets the preset condition, the second moving object is automatically determined as the target moving object.
After the shooting mode is selected, the shooting control 23 is clicked to start shooting; static shooting and dynamic tracking continuous shooting are performed during the shooting process, and the shooting control 23 is clicked again to stop shooting when shooting is finished.
It should be noted that the second moving object may be one object or a plurality of objects. For example, as shown in fig. 3, a bird flies and catches food in the air; the automatically locked object may be the bird, that is, the second moving object is the bird. The automatically locked objects may also be the bird and the flying insect it preys on, in which case the second moving objects are the bird and the insect.
The preset condition may be that the moving speed of the second moving object is greater than a preset speed, or that an AI algorithm detects that the second moving object is moving. The preset speed is a speed threshold for judging whether the second moving object meets the preset condition: if the preset speed is exceeded, the second moving object is judged to meet the preset condition and can be determined as the target moving object; otherwise, it cannot.
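The speed branch of the preset condition might be checked as sketched below (the units and the centroid-based measurement are assumptions): estimate the object's speed from bounding-box centroids in consecutive preview frames and compare it with the preset speed; the AI-detection branch is not shown.

```python
def meets_preset_condition(prev_box, curr_box, dt, preset_speed):
    """Boxes are (x, y, w, h); dt is the frame interval in seconds;
    speed is measured here in pixels per second."""
    px, py = prev_box[0] + prev_box[2] / 2, prev_box[1] + prev_box[3] / 2
    cx, cy = curr_box[0] + curr_box[2] / 2, curr_box[1] + curr_box[3] / 2
    speed = ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5 / dt
    return speed > preset_speed
```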
As an optional embodiment, before the step 102 displays N third images on the first image according to the motion trajectory of the target moving object, the method further includes:
receiving a second input of the user to a display quantity control, wherein the display quantity control is used for adjusting the display quantity of the third image;
in response to the second input, determining a display number of a third image.
Specifically, before the N third images are displayed on the first image according to the motion trajectory of the target moving object, a second input of the user on the display quantity control, which is used to adjust the display quantity of the third images, is received; in response to the second input, the display quantity of third images displayed on the first image is determined, so that the number of third images displayed on the first image can be adjusted.
The second input may be a click input, a double-click input, a long-press input, a slide input, or the like on the display quantity control, which is not limited here.
For example, as shown in fig. 3, when the circular slider 32 of the display quantity control 31 is at the top of the control, the display quantity of third images on the first image is 7; if the circular slider 32 is slid from the top of the display quantity control 31 to the bottom, as shown in fig. 12, the display quantity of third images on the first image becomes 3.
Further, the display frequency of the third images on the first image is also adjusted through the display quantity control; that is, the control adjusts not only the display quantity of third images on the first image but also the separation distance between two adjacent third images on it.
The program background superimposes the third images onto the first image taken by the main camera, and the superimposition frequency of the third images can be set at this time. At 60 fps, the periscopic camera captures the target moving object every 80 ms, so about 12 (or some other number of) afterimages are obtained per second; the number of afterimages actually displayed can then be controlled through the display quantity control to achieve the desired frequency.
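One way to honor the display quantity control (an assumed selection rule; the application does not fix one) is to pick N evenly spaced crops out of the M captured ones, which simultaneously sets the separation between adjacent afterimages along the trajectory.

```python
def sample_third_images(third_images, n):
    """Pick n evenly spaced items from the captured crops (n >= 2 here,
    matching the method's constraint on N)."""
    m = len(third_images)
    if n >= m:
        return list(third_images)
    step = (m - 1) / (n - 1)  # evenly spaced indices from 0 to m-1
    return [third_images[round(i * step)] for i in range(n)]
```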
As an optional embodiment, after the step 102 displays N third images on the first image according to the motion trajectory of the target moving object, the method further includes:
receiving a third input of a user to a first target image in the N third images;
canceling the display of the first target image in response to the third input.
Specifically, after the N third images are displayed on the first image according to the motion trajectory of the target moving object, a third input of the user on a first target image among the N third images is received; in response to the third input, the display of the first target image is canceled, thereby adjusting the number of third images displayed on the first image.
The third input may be a click input, a double-click input, a long-press input, a slide input, or the like on the first target image among the N third images, which is not specifically limited here.
For example, as shown in fig. 4, the fourth touch point 41 may be slid in the arrow direction to select the first target image; when the fourth touch point 41 slides to the position shown in fig. 5, the images that the fourth touch point 41 crosses (i.e., selects) are the first target image, which may be deleted or hidden, retaining only the other images among the N third images that were not selected by the fourth touch point 41.
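A sketch of this gesture (assumed geometry): sample points along the swipe, and treat every overlay whose box contains a sampled point as crossed, i.e. as a first target image to be removed from display.

```python
def remove_crossed_overlays(overlays, swipe_path):
    """overlays: list of dicts {'crop': ..., 'box': (x, y, w, h)};
    swipe_path: list of (x, y) points sampled along the gesture."""
    def crossed(box):
        x, y, w, h = box
        return any(x <= px <= x + w and y <= py <= y + h
                   for px, py in swipe_path)
    return [ov for ov in overlays if not crossed(ov['box'])]
```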
As an optional embodiment, before the step 103 generates the target image based on the first image and N third images, the method further includes:
receiving a fourth input of the user to the text editing area of the first image;
in response to the fourth input, acquiring target text information.
Specifically, before the target image is generated based on the first image and the N third images, a fourth input of the user on the text editing area of the first image is received; in response to the fourth input, the target text information of the text editing area is acquired. The target text information may relate to the whole first image or to one of the third images, which is not specifically limited here.
The fourth input may be an operation such as entering or filling text into the text editing area of the first image; the specific manner is not limited here.
For example, as shown in fig. 6, seven third images are displayed on the first image, and the third image in the upper left corner is selected as the third image 61 to be edited. A fourth input of the user filling target text information into the text editing area 62 below the third image 61 is received; in response to the fourth input, the target text information of the text editing area is acquired, which improves the user experience.
The text editing area may be set at different positions as needed: at the center of the first image, or above or below one of the third images, which is not specifically limited here.
As an optional embodiment, before the step 103 generates the target image based on the first image and N third images, the method further includes:
receiving a fifth input of a user to a second target image in the N third images;
updating image parameters of the second target image in response to the fifth input;
wherein the image parameters comprise at least one of: saturation, transparency, size information, display position information, display angle information.
Specifically, before the target image is generated based on the first image and the N third images, a fifth input of the user on a second target image among the N third images is received; in response to the fifth input, the saturation, transparency, size information, display position information, display angle information and the like of the second target image may be updated.
The fifth input may be a click input, a double-click input, a long-press input, a slide input, or the like on the second target image among the N third images, which is not specifically limited here.
As shown in fig. 7, after the second target image is determined to be the third image in the upper left corner, its size information may be modified through the first touch point 71 and the second touch point 72. If the first touch point 71 is slid in the arrow direction toward the second touch point 72 and the second touch point 72 is slid in the arrow direction toward the first touch point 71, the second target image is reduced; sliding them apart enlarges it. Thus, after the size is modified, the third image in the upper left corner is correspondingly enlarged or reduced. The way of modifying the size information of the second target image is not limited to the above; it is only an example.
As shown in fig. 8, after the second target image is determined to be the third image in the upper left corner, its display angle information may be modified through the third touch point 81: the display angle (i.e., orientation) of the second target image is rotated by pressing the third touch point 81 and sliding it in the arrow direction. The way of rotating the second target image is not limited to the above; it is only an example.
As shown in fig. 9, after the second target image is determined to be the third image in the upper left corner, the pull-down menu 91 may be slid downward to display the saturation control 92 and the transparency control 93; the saturation of the second target image is modified by sliding the circular slider on the saturation control 92 left or right, and similarly its transparency is modified by sliding the circular slider on the transparency control 93 left or right.
Moreover, after the second target image is determined to be the third image in the upper left corner, its display position information may be modified: the specific display position of the second target image on the first image can be changed by dragging or sliding it.
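A sketch of applying these image parameters, assuming OpenCV as the image library (an assumption; the application names no library): the selected crop is scaled and rotated, then alpha-blended over the first image at its box position, covering the size, angle and transparency updates (saturation would be adjusted similarly in HSV space, not shown). The sketch assumes the transformed crop stays inside the frame.

```python
import cv2

def update_and_blend(base, crop, box, scale=1.0, angle=0.0, alpha=1.0):
    """base: first image (modified in place); box: (x, y, w, h) placement."""
    x, y = box[0], box[1]
    h, w = crop.shape[:2]
    crop = cv2.resize(crop, (max(1, int(w * scale)), max(1, int(h * scale))))
    h, w = crop.shape[:2]
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    crop = cv2.warpAffine(crop, rot, (w, h))  # rotate about the crop center
    roi = base[y:y + h, x:x + w]
    base[y:y + h, x:x + w] = cv2.addWeighted(crop, alpha, roi, 1 - alpha, 0)
    return base
```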
As an optional embodiment, before the step 103 generates the target image based on the first image and N third images, the method further includes:
receiving a sixth input of the user to an effect editing control under the condition that at least one third target image in the N third images is selected;
in response to the sixth input, updating a display effect of the at least one third target image.
Specifically, before the target image is generated based on the first image and the N third images, if at least one third target image among the N third images is selected, a plurality of effect editing controls for the third target image may be displayed. After they are displayed, a sixth input of the user on an effect editing control is received; in response to the sixth input, one of the effect editing controls is selected as the target effect editing control, and the corresponding target effect editing is performed on the third target image through it, thereby updating the display effect of the selected third target image.
For example, as shown in fig. 10, if the sixth input is a swipe-up input on the pull-up menu in the lower left corner, a plurality of effect editing controls for the third target image are displayed in response to it, that is, a special-effects list 94 is displayed, including a motion-blur effect editing control, a flame effect editing control and a halo effect editing control. If the motion-blur effect editing control is selected as the target effect editing control, motion-blur target effect editing is performed on the third target image, and its display effect is updated to a motion-blurred effect.
The sixth input may be a click input, a double-click input, a long-press input, a slide input, or the like on one of the effect editing controls, which is not specifically limited here.
It should be noted that effect editing controls may be installed in the form of plug-ins to increase the expandability of the function, for example a freeze-frame special effect and the like.
Further, after one effect editing control is selected from the plurality of effect editing controls as the target effect editing control, the target effect editing interface corresponding to it is entered, and target effect editing is performed on the third target image in this interface: an effect-forming track of the third target image is determined on the target effect editing interface, and target effect editing is performed on the third target image according to the effect-forming track, so that the display effect of the third target image is updated to, for example, a motion-blur effect that follows the effect-forming track.
For example, if the motion-blur effect editing control is selected as the target effect editing control, the motion-blur target effect editing interface is entered; as shown in fig. 11, sliding the fifth touch point 95 in the arrow direction on this interface produces a sliding track, and the whole sliding track is the effect-forming track. Motion-blur effect editing is then performed on the third target image according to the effect-forming track; the blur direction and blur degree can be controlled, creating a sense of speed.
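The motion-blur effect along the stroke could be realized as below (a hedged sketch, again assuming OpenCV): the overall direction of the effect-forming track defines a linear motion kernel, and a longer kernel gives a stronger sense of speed.

```python
import numpy as np
import cv2

def motion_blur_along_track(image, track, ksize=15):
    """track: list of (x, y) points from the stroke; ksize: blur strength."""
    (x0, y0), (x1, y1) = track[0], track[-1]    # overall stroke direction
    angle = np.arctan2(y1 - y0, x1 - x0)
    kernel = np.zeros((ksize, ksize), dtype=np.float32)
    c = ksize // 2
    for i in range(ksize):  # rasterize a line through the kernel center
        px = int(round(c + (i - c) * np.cos(angle)))
        py = int(round(c + (i - c) * np.sin(angle)))
        if 0 <= px < ksize and 0 <= py < ksize:
            kernel[py, px] = 1.0
    kernel /= kernel.sum()
    return cv2.filter2D(image, -1, kernel)
```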
In summary, in the embodiment of the present application, when the shooting mode is one in which the main camera module takes a static shot and the periscopic camera performs dynamic tracking shooting of the target moving object, the target moving object is shot to obtain a first image and M second images that include it; N third images related to the area where the target moving object is located are displayed on the first image according to its motion trajectory, and the target image is generated based on the first image and the N third images. Noise caused by changes of other, non-target objects is therefore not introduced, and the dynamic tracking shooting of the periscopic camera improves the quality of the target image. Moreover, the display quantity of third images on the first image can be adjusted through the display quantity control, and effect editing and image-parameter editing of the third images can be performed, so a target image with a dynamic effect is obtained without cumbersome post-production; the operation is simpler and the image processing time is saved.
For the image processing method provided by the embodiment of the application, the execution body may be an image processing apparatus. In the embodiment of the present application, an image processing apparatus executing the image processing method is taken as an example to describe the image processing apparatus provided herein.
As shown in fig. 13, an embodiment of the present application further provides an image processing apparatus including:
a shooting module 1301, configured to obtain a first image and M second images by shooting, where each of the M second images includes a target moving object;
a display module 1302, configured to display, on the first image, N third images according to a motion trajectory of a target moving object, where the third images are images corresponding to an area where the target moving object is located in the second image;
a generating module 1303, configured to generate a target image based on the first image and the N third images;
wherein M and N are both positive integers greater than or equal to 2, and M is greater than or equal to N.
Optionally, the shooting module 1301 is specifically configured to:
controlling a main camera module to shoot to obtain a first image; and
controlling the periscopic camera to shoot the target moving object to obtain the M second images.
Optionally, the apparatus further comprises:
a first receiving module, configured to receive a first input of a user on a first moving object in a shooting preview interface;
a first determining module, configured to determine the first moving object as the target moving object in response to the first input.
Optionally, the apparatus further comprises:
a second determining module, configured to determine a second moving object in the shooting preview interface as the target moving object when the second moving object meets a preset condition.
Optionally, the apparatus further comprises:
a second receiving module, configured to receive a second input of the user on a display quantity control, where the display quantity control is used to adjust the display quantity of the third images;
a third determining module, configured to determine the display quantity of third images in response to the second input.
Optionally, the apparatus further comprises:
a third receiving module, configured to receive a third input of the user on a first target image among the N third images;
a display canceling module, configured to cancel the display of the first target image in response to the third input.
Optionally, the apparatus further comprises:
a fourth receiving module, configured to receive a fourth input of the user on the text editing area of the first image;
an acquiring module, configured to acquire target text information in response to the fourth input.
Optionally, the apparatus further comprises:
a fifth receiving module, configured to receive a fifth input by a user to a second target image in the N third images;
a first updating module, configured to update image parameters of the second target image in response to the fifth input;
wherein the image parameters comprise at least one of: saturation, transparency, size information, display position information, display angle information.
Optionally, the apparatus further comprises:
a sixth receiving module, configured to receive a sixth input of the user on an effect editing control when at least one third target image among the N third images is selected;
a second updating module, configured to update the display effect of the at least one third target image in response to the sixth input.
In summary, in the embodiment of the present application, when the shooting mode is one in which the main camera module takes a static shot and the periscopic camera performs dynamic tracking shooting of the target moving object, the target moving object is shot to obtain a first image and M second images that include it; N third images related to the area where the target moving object is located are displayed on the first image according to its motion trajectory, and the target image is generated based on the first image and the N third images. Noise caused by changes of other, non-target objects is therefore not introduced, and the dynamic tracking shooting of the periscopic camera improves the quality of the target image. Moreover, the display quantity of third images on the first image can be adjusted through the display quantity control, and effect editing and image-parameter editing of the third images can be performed, so a target image with a dynamic effect is obtained without cumbersome post-production; the operation is simpler and the image processing time is saved.
The image processing apparatus in the embodiment of the present application may be an electronic device, or a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal or a device other than a terminal. For example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a mobile internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a personal digital assistant (PDA), and may also be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like; the embodiment of the present application is not specifically limited in this respect.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiment of the present application.
The image processing apparatus provided in the embodiment of the present application can implement each process implemented by the method embodiments in fig. 1 to 12, and is not described herein again to avoid repetition.
Optionally, as shown in fig. 14, an electronic device 1400 is further provided in an embodiment of the present application, and includes a processor 1401 and a memory 1402, where the memory 1402 stores a program or an instruction that can be executed on the processor 1401, and when the program or the instruction is executed by the processor 1401, the steps of the embodiment of the image processing method are implemented, and the same technical effect can be achieved, and are not described herein again to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 15 is a schematic hardware structure diagram of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, and a processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may further comprise a power source (e.g., a battery) for supplying power to the various components; the power source may be logically connected to the processor 1010 through a power management system, so that charging, discharging, and power-consumption management are implemented through the power management system. The structure shown in fig. 15 does not constitute a limitation on the electronic device, which may include more or fewer components than shown, combine some components, or arrange components differently; details are not repeated here.
The processor 1010 is configured to capture a first image and M second images, where each of the M second images includes a target moving object;
a display unit 1006, configured to display, on the first image, N third images according to a motion trajectory of a target moving object, where the third images are images corresponding to an area where the target moving object is located in the second image;
the processor 1010, further configured to generate a target image based on the first image and the N third images;
wherein M and N are both positive integers greater than or equal to 2, and M is greater than or equal to N.
Optionally, when the processor 1010 obtains the first image and the M second images by shooting, the processor is specifically configured to:
controlling a main camera module to shoot to obtain a first image; and
controlling the periscopic camera to shoot the target moving object to obtain the M second images.
Optionally, before the processor 1010 obtains the first image and the M second images by shooting, the input unit 1004 is configured to receive a first input of a user to a first moving object in a shooting preview interface;
the processor 1010 is further configured to determine the first moving object as a target moving object in response to the first input.
Optionally, before the capturing the first image and the M second images, the processor 1010 is further configured to:
and under the condition that a second moving object in the shooting preview interface meets a preset condition, determining the second moving object as a target moving object.
Optionally, before the display unit 1006 displays N third images on the first image according to the motion trajectory of the target moving object, the input unit 1004 is further configured to receive a second input of a display quantity control from a user, where the display quantity control is used to adjust the display number of the third images;
the processor 1010 is further configured to determine a number of displays of a third image in response to the second input.
Optionally, after the display unit 1006 displays N third images on the first image according to the motion trajectory of the target moving object, the input unit 1004 is further configured to receive a third input of the first target image in the N third images from the user;
the processor 1010 is further configured to cancel displaying the first target image in response to the third input.
Optionally, before the processor 1010 generates a target image based on the first image and N third images, the input unit 1004 is further configured to receive a fourth input of a user to a text editing area of the first image;
the processor 1010 is further configured to obtain target text information in response to the fourth input.
Optionally, before the processor 1010 generates the target image based on the first image and N third images, the input unit 1004 is further configured to receive a fifth input of a second target image of the N third images from the user;
the processor 1010, further configured to update image parameters of the second target image in response to the fifth input;
wherein the image parameters comprise at least one of: saturation, transparency, size information, display position information, display angle information.
Optionally, before the processor 1010 generates the target image based on the first image and the N third images, the input unit 1004 is further configured to receive a sixth input of the user on an effect editing control when at least one third target image among the N third images is selected;
the processor 1010 is further configured to update a display effect of the at least one third target image in response to the sixth input.
In summary, in the embodiment of the present application, when the shooting mode is one in which the main camera module takes a static shot and the periscopic camera performs dynamic tracking shooting of the target moving object, the target moving object is shot to obtain a first image and M second images that include it; N third images related to the area where the target moving object is located are displayed on the first image according to its motion trajectory, and the target image is generated based on the first image and the N third images. Noise caused by changes of other, non-target objects is therefore not introduced, and the dynamic tracking shooting of the periscopic camera improves the quality of the target image. Moreover, the display quantity of third images on the first image can be adjusted through the display quantity control, and effect editing and image-parameter editing of the third images can be performed, so a target image with a dynamic effect is obtained without cumbersome post-production; the operation is simpler and the image processing time is saved.
It should be understood that in the embodiment of the present application, the input Unit 1004 may include a Graphics Processing Unit (GPU) 10041 and a microphone 10042, and the Graphics Processing Unit 10041 processes image data of still pictures or videos obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1007 includes at least one of a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen. The touch panel 10071 may include two parts, a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
The memory 1009 may be used to store software programs as well as various data. The memory 1009 may mainly include a first storage area storing programs or instructions and a second storage area storing data, where the first storage area may store an operating system, and application programs or instructions (such as a sound playing function, an image playing function, and the like) required for at least one function. Further, the memory 1009 may include volatile memory or non-volatile memory, or both. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synchlink DRAM (SLDRAM), or a direct rambus RAM (DRRAM). The memory 1009 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 1010 may include one or more processing units; optionally, the processor 1010 integrates an application processor, which primarily handles operations involving the operating system, user interface, and applications, and a modem processor, which primarily handles wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into processor 1010.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a computer read only memory ROM, a random access memory RAM, a magnetic or optical disk, and the like.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the above-mentioned embodiment of the image processing method, and can achieve the same technical effect, and is not described here again to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
Embodiments of the present application provide a computer program product, where the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the processes of the foregoing embodiments of the image processing method, and achieve the same technical effects, and in order to avoid repetition, details are not repeated here.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but possibly also other elements not expressly listed or inherent to it. Without further limitation, an element preceded by "comprises a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; it may include performing the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved, e.g., the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general hardware platform, or by hardware, though in many cases the former is the better implementation. Based on this understanding, the technical solutions of the present application may be embodied in the form of a computer software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and including instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the methods of the embodiments of the present application.
While the embodiments of the present application have been described above with reference to the accompanying drawings, the application is not limited to these precise embodiments, which are illustrative rather than restrictive; various changes and modifications may be made by those skilled in the art without departing from the scope of the appended claims.

Claims (11)

1. An image processing method, comprising:
shooting to obtain a first image and M second images, wherein each of the M second images includes a target moving object;
displaying N third images on the first image according to the motion trajectory of the target moving object, wherein the third images are images corresponding to the area where the target moving object is located in the second image;
generating a target image based on the first image and the N third images;
wherein M and N are both positive integers greater than or equal to 2, and M is greater than or equal to N.
2. The method of claim 1, wherein said capturing a first image and M second images comprises:
controlling a main camera module to shoot to obtain a first image; and
controlling the periscopic camera to shoot the target moving object to obtain the M second images.
3. The method of claim 1, wherein before the capturing the first image and the M second images, the method further comprises:
receiving a first input of a user to a first moving object in a shooting preview interface;
in response to the first input, determining the first moving object as a target moving object.
4. The method of claim 1, wherein before the capturing the first image and the M second images, the method further comprises:
and under the condition that a second moving object in the shooting preview interface meets a preset condition, determining the second moving object as a target moving object.
5. The method according to claim 1, wherein before displaying N third images on the first image according to the motion trajectory of the target moving object, the method further comprises:
receiving a second input of the user to a display quantity control, wherein the display quantity control is used for adjusting the display quantity of the third image;
in response to the second input, determining a display number of a third image.
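Once the user has set the display quantity N, the device must choose N of the M captured frames. Evenly spaced sampling along the capture order is one natural choice; it is an assumption here, not the claimed selection rule.

```python
def pick_display_frames(second_images, n):
    """Choose N of the M captured frames, evenly spaced along the trail."""
    m = len(second_images)
    n = max(2, min(n, m))  # claim 1 requires 2 <= N <= M
    indices = [round(i * (m - 1) / (n - 1)) for i in range(n)]
    return [second_images[i] for i in indices]
```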
6. The method according to claim 1, wherein after the N third images are displayed on the first image according to the motion trajectory of the target moving object, the method further comprises:
receiving a third input of a user on a first target image among the N third images;
in response to the third input, canceling display of the first target image.
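Canceling one overlay can be as simple as dropping that entry and redrawing the composite; the parallel image/position lists below are an assumed representation.

```python
def cancel_third_image(third_images, positions, index):
    """Remove one overlay and return the remaining images and positions,
    so the composite can be redrawn without it."""
    keep = [i for i in range(len(third_images)) if i != index]
    return ([third_images[i] for i in keep],
            [positions[i] for i in keep])
```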
7. The method of claim 1, wherein before the generating of the target image based on the first image and the N third images, the method further comprises:
receiving a fourth input of a user on a text editing area of the first image;
in response to the fourth input, acquiring target text information.
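The acquired target text would later be drawn onto the composite; a sketch using OpenCV's text renderer follows, with arbitrary illustrative font settings.

```python
import cv2

def add_target_text(image, text, org=(50, 50)):
    """Draw the user's text onto the image; font, scale, and color
    here are illustrative defaults, not values from the disclosure."""
    out = image.copy()
    cv2.putText(out, text, org, cv2.FONT_HERSHEY_SIMPLEX,
                1.0, (255, 255, 255), 2, cv2.LINE_AA)
    return out
```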
8. The method according to any one of claims 1 to 7, wherein before the generating of the target image based on the first image and the N third images, the method further comprises:
receiving a fifth input of a user on a second target image among the N third images;
in response to the fifth input, updating image parameters of the second target image;
wherein the image parameters comprise at least one of the following: saturation, transparency, size information, display position information, and display angle information.
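A sketch of applying three of the listed parameters (size, display angle, and transparency) to one third image with OpenCV; saturation is omitted for brevity, and the function shape is an assumption.

```python
import cv2

def update_image_params(crop, scale=1.0, angle=0.0, alpha=1.0):
    """Resize and rotate one third image; the transparency value is
    returned so it can be applied when blending onto the base image."""
    h, w = crop.shape[:2]
    if scale != 1.0:
        crop = cv2.resize(crop, (int(w * scale), int(h * scale)))
        h, w = crop.shape[:2]
    if angle:
        rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        crop = cv2.warpAffine(crop, rot, (w, h))
    return crop, alpha
```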
9. The method according to any one of claims 1 to 7, wherein before the generating of the target image based on the first image and the N third images, the method further comprises:
in a case that at least one third target image among the N third images is selected, receiving a sixth input of a user on an effect editing control;
in response to the sixth input, updating a display effect of the at least one third target image.
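The patent does not enumerate the display effects; the sketch below applies two illustrative ones (grayscale and blur) to the selected images, and the effect names are assumptions.

```python
import cv2

def apply_display_effect(images, effect="grayscale"):
    """Apply one illustrative display effect to the selected images."""
    out = []
    for img in images:
        if effect == "grayscale":
            g = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            out.append(cv2.cvtColor(g, cv2.COLOR_GRAY2BGR))
        elif effect == "blur":
            out.append(cv2.GaussianBlur(img, (9, 9), 0))
        else:
            out.append(img)
    return out
```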
10. An image processing apparatus, comprising:
a shooting module, configured to capture a first image and M second images, wherein each of the M second images comprises a target moving object;
a display module, configured to display N third images on the first image according to a motion trajectory of the target moving object, wherein each third image is the image corresponding to the region where the target moving object is located in one of the second images;
a generating module, configured to generate a target image based on the first image and the N third images;
wherein M and N are integers greater than or equal to 2, and M is greater than or equal to N.
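Purely as a reading aid, the apparatus of claim 10 maps onto the sketches above roughly as follows; this skeleton is an assumption, not the disclosed device.

```python
class ImageProcessingApparatus:
    """Hypothetical skeleton mirroring the three modules of claim 10;
    the helper functions are the sketches shown under earlier claims."""

    def shoot(self, main_camera, periscope_camera, m):
        # Shooting module (claim 2's dual-camera variant).
        return capture_images(main_camera, periscope_camera, m)

    def display(self, first_image, third_images, positions):
        # Display module: overlay the N third images along the trajectory.
        return composite_trail(first_image, third_images, positions)

    def generate(self, composited_image):
        # Generating module; encoding and saving are device-specific.
        return composited_image
```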
11. An electronic device, comprising a processor and a memory, the memory storing a program or instructions executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the image processing method according to any one of claims 1 to 9.
CN202210170891.XA 2022-02-23 2022-02-23 Image processing method and device and electronic equipment Pending CN114531547A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210170891.XA CN114531547A (en) 2022-02-23 2022-02-23 Image processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210170891.XA CN114531547A (en) 2022-02-23 2022-02-23 Image processing method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN114531547A true CN114531547A (en) 2022-05-24

Family

ID=81625048

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210170891.XA Pending CN114531547A (en) 2022-02-23 2022-02-23 Image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN114531547A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105827946A (en) * 2015-11-26 2016-08-03 维沃移动通信有限公司 Panoramic image generating method, panoramic image playing method and mobile terminal
CN107077720A (en) * 2016-12-27 2017-08-18 深圳市大疆创新科技有限公司 Method, device and the equipment of image procossing
CN109194865A (en) * 2018-08-06 2019-01-11 光锐恒宇(北京)科技有限公司 Image generating method, device, intelligent terminal and computer readable storage medium
CN112399077A (en) * 2020-10-30 2021-02-23 维沃移动通信有限公司 Shooting method and device and electronic equipment
CN112492209A (en) * 2020-11-30 2021-03-12 维沃移动通信有限公司 Shooting method, shooting device and electronic equipment
CN112672056A (en) * 2020-12-25 2021-04-16 维沃移动通信有限公司 Image processing method and device

Similar Documents

Publication Publication Date Title
CN112637507B (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN111669507A (en) Photographing method and device and electronic equipment
CN112738402A (en) Shooting method, shooting device, electronic equipment and medium
CN113794829B (en) Shooting method and device and electronic equipment
CN113905175A (en) Video generation method and device, electronic equipment and readable storage medium
CN112738403A (en) Photographing method, photographing apparatus, electronic device, and medium
CN112492215A (en) Shooting control method and device and electronic equipment
CN114430460A (en) Shooting method and device and electronic equipment
CN112784081A (en) Image display method and device and electronic equipment
CN114500852B (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN114025237B (en) Video generation method and device and electronic equipment
CN114531547A (en) Image processing method and device and electronic equipment
CN112714256B (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN114125297B (en) Video shooting method, device, electronic equipment and storage medium
CN114650370A (en) Image shooting method and device, electronic equipment and readable storage medium
CN114285922A (en) Screenshot method, screenshot device, electronic equipment and media
CN114500844A (en) Shooting method and device and electronic equipment
CN114245017A (en) Shooting method and device and electronic equipment
CN112954197A (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN112492205A (en) Image preview method and device and electronic equipment
CN114173178B (en) Video playing method, video playing device, electronic equipment and readable storage medium
CN112672059B (en) Shooting method and shooting device
CN114143455B (en) Shooting method and device and electronic equipment
CN115334242B (en) Video recording method, device, electronic equipment and medium
CN115134536B (en) Shooting method and device thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination