CN112532808A - Image processing method and device and electronic equipment - Google Patents


Info

Publication number
CN112532808A
Authority
CN
China
Prior art keywords
image
target
background image
zoom
processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011335262.5A
Other languages
Chinese (zh)
Inventor
孙楠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202011335262.5A priority Critical patent/CN112532808A/en
Publication of CN112532808A publication Critical patent/CN112532808A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/2224 Studio circuitry; Studio devices; Studio equipment related to virtual studio applications
    • H04N5/2226 Determination of depth image, e.g. for foreground/background separation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses an image processing method, an image processing device and electronic equipment, and belongs to the technical field of communication. The image processing method comprises the following steps: acquiring a target initial image; carrying out image segmentation on the target initial image to obtain a main body image and a background image; zooming the background image according to zooming parameters input by a user to obtain multiple frames of processed background images; fusing each frame of the processed background image with the main body image to obtain multiple frames of target images; and obtaining a target video according to the multiple frames of target images. In this way, the shooting effect of zooming the lens while the camera moves toward or away from the photographed subject can be achieved without moving the electronic equipment, which is convenient for the user to operate.

Description

Image processing method and device and electronic equipment
Technical Field
The application belongs to the technical field of communication, and particularly relates to an image processing method and device and electronic equipment.
Background
The Hitchcock zoom (dolly zoom) technique is used in film and television works: the camera is pushed along a track while the lens zooms, so that the distance between the photographed subject and the background appears to change, creating a visual effect of distorted picture and space and drawing the audience into the protagonist's psychological state. The principle of the Hitchcock zoom technique is to change the focal length during video shooting: on the premise that the proportion of the subject in each frame of the video remains unchanged, the lens switches between the telephoto range and the wide-angle range, that is, the lens zooms while the camera moves toward or away from the photographed subject.
In the process of implementing the present application, the inventor found that in the prior art, when a mobile terminal is used for shooting with the Hitchcock zoom technique, the user needs to shoot while walking, but an ordinary user has difficulty controlling the moving speed and the zoom range and cannot achieve the expected shooting effect.
Disclosure of Invention
The embodiment of the application aims to provide an image processing method, which can solve the problems that, when a mobile terminal is used for Hitchcock zoom shooting, the moving speed and the zoom range are difficult to control and the shooting effect is poor.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides an image processing method, including:
acquiring a target initial image;
carrying out image segmentation on the target initial image to obtain a main image and a background image;
zooming the background image according to zooming parameters input by a user to obtain a background image after multi-frame processing;
fusing each frame of the processed background image with the main body image to obtain a plurality of frames of target images;
and obtaining a target video according to the multi-frame target image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
the acquisition module is used for acquiring a target initial image;
the image segmentation module is used for carrying out image segmentation on the target initial image to obtain a main image and a background image;
the zooming module is used for zooming the background image according to zooming parameters input by a user to obtain a background image after multi-frame processing;
the fusion module is used for fusing each frame of the processed background image with the main body image to obtain a plurality of frames of target images;
and the processing module is used for obtaining a target video according to the multi-frame target image.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiment of the application, a target initial image is obtained; image segmentation is carried out on the target initial image to obtain a main body image and a background image; the background image is zoomed according to the zooming parameters input by the user to obtain multiple frames of processed background images; each frame of the processed background image is fused with the main body image to obtain multiple frames of target images; and a target video is obtained according to the multiple frames of target images. In this way, the shooting effect of zooming the lens while the camera moves toward or away from the photographed subject can be achieved without moving the electronic equipment, which is convenient for the user to operate.
Drawings
FIG. 1 is a schematic flow chart diagram of an image processing method according to an embodiment of the present application;
FIG. 2 is a block diagram of an image processing apparatus according to an embodiment of the present application;
fig. 3 is a block schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms first, second and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application are capable of operation in sequences other than those illustrated or described herein. In addition, "and/or" in the specification and claims means at least one of the connected objects, and the character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
The following describes in detail the scheme of image processing provided by the embodiments of the present application with reference to the accompanying drawings.
As shown in fig. 1, an image processing method provided in an embodiment of the present application may be applied to an electronic device, where the electronic device includes a camera application. The method comprises the following steps:
step 101, acquiring a target initial image.
In this embodiment, before step 101, the user starts the camera application of the electronic device; at this time, a third input from the user is received, and in response to the third input, the Hitchcock zoom shooting mode is entered. Upon entering the Hitchcock zoom shooting mode, the electronic device acquires the target initial image in response to an input from the user.
The target initial image may be an image captured by a camera of the electronic device, or may be an image stored in the electronic device in advance. The camera of the electronic device may include two cameras.
In some embodiments of the present application, the step of obtaining an initial image of the target may comprise:
step 201, receiving a fourth input of the user.
Illustratively, before step 201, in a case where the Hitchcock zoom shooting mode is entered, the interactive interface of the camera application presents a "camera" interactive control, and the preview finder window is displayed in response to the user's touch operation on the "camera" control. The fourth input may be, for example, a user operation on the preview finder window, such as a click operation or a slide operation.
Step 202, in response to a fourth input, determining a preview image.
In the embodiment of the present application, taking an electronic device with two cameras as an example, the two cameras sample the image in the preview finder window in response to the fourth input, thereby obtaining a preview image. Sampling and preview image acquisition are relatively mature technologies, which are not limited in the embodiment of the present application.
Step 203, in the case of displaying the preview image, receiving a first operation performed by the user on the preview image.
And step 204, responding to the first operation, determining a target initial image, wherein the target initial image is the same as the preview image.
In this embodiment, the depth of field information of the background image may be calculated according to the preview image, and in combination with the subsequent steps, the zoom parameter may be recommended to the user based on the depth of field information, so that the background image is processed according to the zoom parameter to obtain the target video. Therefore, the target initial image and the preview image should be the same.
The electronic device has a zoom range, which is a numerical range of zoom magnifications. The larger the zoom magnification, the farther a scene can be shot, but the less content is captured; the smaller the zoom magnification, the more content is captured. The target initial image is an image captured at a first zoom magnification, which refers to a low zoom magnification; for example, if the zoom range of the electronic device is 1x to 4x, the target initial image is captured at a zoom magnification of 1x. At this time, the target initial image includes more background content, which facilitates super-resolution processing of the image.
In still other embodiments of the present application, the step of obtaining an initial image of the target may include:
and 301, receiving a fifth input of the user.
Illustratively, before step 301, in a case where the Hitchcock zoom shooting mode is entered, the interactive interface of the camera application presents an "album" interactive control, and images stored in advance in the electronic device are displayed in response to the user's touch operation on the "album" control. The fifth input may be, for example, the user's selection of a pre-stored image. If a fifth input is detected, the user may be considered to have selected the target initial image, and the electronic device then performs the following step 102.
Step 302, selecting a target initial image from the image set in response to a fifth input.
And 102, carrying out image segmentation on the target initial image to obtain a main image and a background image.
The target initial image includes a subject object and a background object. And carrying out main body separation on the target initial image to obtain a main body image and a background image. The main body image comprises a main body object, and the background image comprises a background object.
The embodiment of the application is used to achieve the shooting effect of the Hitchcock zoom. The principle of the Hitchcock zoom technique is to change the focal length during video shooting. It can be understood that, on the premise that the proportion of the subject object in each frame of the target video remains unchanged, the lens switches between the telephoto range and the wide-angle range, that is, the lens zooms while it moves toward or away from the photographed subject. Based on this, the target initial image is segmented into a main body image and a background image, so that the two can be processed differently; that is, the zoom processing is applied to the background image, thereby realizing the Hitchcock zoom shooting method.
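As an illustration only, and not the specific segmentation algorithm prescribed by the application, the subject/background separation described above could be sketched with a binary subject mask, for example one produced by OpenCV's GrabCut; the function names, the bounding rectangle, and the inpainting of the subject region are all assumptions of this sketch.

```python
import cv2
import numpy as np

def split_subject_background(initial_image, subject_rect):
    """Hypothetical sketch: separate a main body (subject) image and a background
    image from the target initial image using GrabCut as one possible segmenter."""
    mask = np.zeros(initial_image.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    # subject_rect = (x, y, w, h) roughly bounding the photographed subject
    cv2.grabCut(initial_image, mask, subject_rect, bgd_model, fgd_model,
                5, cv2.GC_INIT_WITH_RECT)
    subject_mask = np.where(
        (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)
    subject_image = initial_image * subject_mask[:, :, None]
    # fill the hole left by the subject so the background image is complete
    background_image = cv2.inpaint(initial_image, subject_mask * 255, 3,
                                   cv2.INPAINT_TELEA)
    return subject_image, subject_mask, background_image
```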
And 103, zooming the background image according to the zooming parameters input by the user to obtain a background image after multi-frame processing.
Zooming the background image according to the zooming parameters input by the user means: determining the frame number of background images according to the frame number of target images set by the user; selecting background images of the corresponding frame number, each frame of background image being the same; and setting a zooming parameter for each frame of background image and zooming each frame of background image in turn to obtain the multi-frame processed background image. In the embodiment of the application, different zooming parameters are used to zoom the background image to obtain the multi-frame processed background image, so that the shooting effect of zooming the lens while the camera moves toward or away from the photographed subject can be achieved from the multi-frame processed background image, which solves the problems that, when a mobile terminal is used for Hitchcock zoom shooting, the moving speed and the zoom range are difficult to control and the shooting effect is poor.
Before describing step 103, a description will be given of a process of determining the target frame number.
The target frame number is determined according to the target frame rate and the target duration. In some embodiments of the present application, the image processing method may further include:
step 401, receiving a first input of a user.
Step 402, in response to a first input, determining a target duration.
Step 403, determining a target frame number according to the target duration and a preset target frame rate, wherein the target frame number is the frame number of the target image.
The target frame rate refers to a play frame rate of the target video. The target frame rate affects the image quality of the target video, and the higher the target frame rate is, the better the image quality of the target video is, and the lower the target frame rate is, the worse the image quality of the target video is. For example, the target frame rate is generally 30 frames/second (high definition image quality) or 24 frames/second (normal image quality). The target frame rate may be set in advance or may be input by the user.
The target duration refers to the playing duration of the target video, and for example, the target duration may include 2 seconds, 5 seconds, and 10 seconds.
For example, in a case where the Hitchcock zoom shooting mode is entered, the user may be presented with options for the target frame rate and the target duration, so that the user can select the target frame rate and the target duration according to actual needs.
The target frame number is the product of the target frame rate and the target duration. For example, if the target frame rate selected by the user is 30 frames/second, the target duration is 2 seconds, and the target frame number is 60 frames. For example, if the target frame rate selected by the user is 24 frames/second, the target duration is 5 seconds, and the target frame number is 120 frames.
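As a minimal sketch of the arithmetic just described (the function and variable names are assumptions), the target frame number is simply the product of the target frame rate and the target duration:

```python
def target_frame_count(target_frame_rate: int, target_duration_s: float) -> int:
    """Target frame number = play frame rate x play duration of the target video."""
    return int(round(target_frame_rate * target_duration_s))

# e.g. 30 frames/second for 2 seconds -> 60 frames;
#      24 frames/second for 5 seconds -> 120 frames
```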
The zoom parameter may be, for example, a zoom step, which may be determined according to depth information of the background image or may be input by the user.
The zoom step determined according to the depth information of the background image and the zoom step determined according to the user's input will be described below, respectively.
In some embodiments of the present application, the background image comprises a plurality of image regions, and the zoom parameter comprises a first zoom step corresponding to an image region. A step of acquiring a first zoom step, comprising:
step 501, determining the target depth information of any image area of the background image.
Different image areas of the background image have different depths of field, and the target depth of field information of the image area is depth of field information corresponding to the image area in the background image.
In a more specific example, determining the target depth information for any image region of the background image comprises:
step 601, in the case of displaying the preview image, determining depth information of an image area of a background portion in the preview image.
In this embodiment, two cameras of the electronic device may be started, the preview image is captured to obtain a main shot image and a sub shot image, and the depth of field information of the image area of the background portion in the preview image is obtained according to the main shot image and the sub shot image.
Step 602, taking the depth information of the image area of the background portion in the preview image as the target depth information of the image area of the background image.
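One way to obtain depth information from a main shot image and a sub shot image is stereo disparity. The sketch below uses OpenCV's semi-global block matching and is only an assumed illustration, not the specific method required by the application; the matcher parameters, the rectification assumption, and the focal length/baseline values are all placeholders.

```python
import cv2
import numpy as np

def estimate_background_depth(main_gray, sub_gray, focal_px, baseline_m):
    """Hypothetical sketch: disparity from a rectified grayscale stereo pair,
    then depth = focal_length * baseline / disparity for each pixel."""
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
    disparity = matcher.compute(main_gray, sub_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan            # mark invalid matches
    depth_m = focal_px * baseline_m / disparity   # larger disparity -> closer scene
    return depth_m
```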
And 502, determining a target zoom interval corresponding to the image area according to the mapping relation between the depth of field information and the zoom interval and the target depth of field information.
The zoom interval refers to a numerical range of zoom magnifications. The larger the zoom magnification, the farther a scene can be shot, but the less content is captured; the smaller the zoom magnification, the more content is captured. The target zoom interval corresponding to an image area is determined from the target depth-of-field information of that image area according to the mapping relationship between depth-of-field information and zoom intervals.
And step 503, determining a first zoom step corresponding to the image area according to the target zoom interval and the target frame number.
The first zoom step is the ratio of the width of the target zoom interval to the target frame number. For example, if an image area in the background image corresponds to a target zoom interval of 1x to 4x and the target frame number is 60 frames, the first zoom step is (4 - 1) / 60 = 0.05x per frame. Here, the target frame number may be set manually by the user, or the electronic device may have a corresponding default setting, which is not specifically limited in the embodiment of the application.
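A short sketch of this division (names are assumptions), matching the worked example above:

```python
def first_zoom_step(zoom_low: float, zoom_high: float, target_frames: int) -> float:
    """First zoom step = (upper zoom magnification - lower zoom magnification) / frame count."""
    return (zoom_high - zoom_low) / target_frames

# e.g. a 1x-4x target zoom interval and 60 target frames -> (4 - 1) / 60 = 0.05x per frame
print(first_zoom_step(1.0, 4.0, 60))  # 0.05
```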
In still other embodiments of the present application, the zoom parameter includes a second zoom step size, and the step of obtaining the second zoom step size may include:
step 701, receiving a sixth input of a user;
step 702, in response to a sixth input, a second zoom step is obtained.
In some embodiments of the present application, for a case where the first zoom step is determined according to the depth information of the background image, the background image may be subjected to zoom processing according to the first zoom step. In a specific implementation, the step of performing zoom processing on the background image according to the zoom parameter to obtain a processed background image may further include: and zooming each image area of the background image according to the first zooming step length corresponding to each image area to obtain a processed background image.
In some more specific examples of the present application, the step of performing zoom processing on each image area of the background image according to the first zoom step corresponding to each image area, respectively, to obtain a processed background image may further include the following steps.
Step 801, determining a first processing parameter corresponding to each image area according to a first zoom step corresponding to each image area, where the first processing parameter includes a first magnification and a first blur degree.
Taking lens Zoom in as an example, a process of determining a first processing parameter corresponding to each image area according to a first Zoom step corresponding to each image area will be described.
As the lens is zoomed in, the focal length becomes larger, the angle of view becomes smaller, the image content in the same frame decreases, and the image size becomes larger. Based on this, for i frames of background images arranged in sequence, each frame of background image comprises j image areas. According to the arrangement sequence of the i-frame background images, the first magnification corresponding to the j-th image area of the background image is sequentially increased, and the first blurring degree corresponding to the j-th image area of the background image is also sequentially increased.
For example, if the first zoom step is 0.05 times per frame, the first magnification ratio corresponding to the jth image area of the 1 st frame background image is 0.05, the first magnification ratio corresponding to the jth image area of the 2 nd frame background image is 0.10, the first magnification ratio corresponding to the jth image area of the 3 rd frame background image is 0.15, and so on, the first magnification ratio corresponding to the jth image area of each frame background image can be obtained.
For example, if the blur degree increases in steps of 1 per frame, the first blur degree corresponding to the jth image area of the 1st frame background image is 1, that of the 2nd frame background image is 2, that of the 3rd frame background image is 3, and so on, so that the first blur degree corresponding to the jth image area of each frame of background image can be obtained.
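A minimal sketch of the per-frame schedule in the two examples above, assuming (as the examples do) that the magnification and the blur degree of a given image area both grow linearly with the frame index; the function name and the blur step of 1 per frame are taken as assumptions for illustration.

```python
def first_processing_parameters(zoom_step: float, blur_step: float, frame_count: int):
    """For frame k (1-based), the examples use magnification k * zoom_step and
    blur degree k * blur_step for a given image area of the background image."""
    return [(k * zoom_step, k * blur_step) for k in range(1, frame_count + 1)]

# first_processing_parameters(0.05, 1, 3) -> [(0.05, 1), (0.10, 2), (0.15, 3)]
```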
Step 802, performing super-resolution processing on each image area according to the first magnification corresponding to each image area, and performing blur processing on each image area according to the first blur degree corresponding to each image area to obtain a processed background image.
The super-resolution processing is carried out on the background image, so that the resolution of each image area in the background image can be improved. Illustratively, each image area is processed by an upsampling operation according to a first magnification corresponding to the image area. Specifically, according to the first magnification, adaptive up-sampling operation is performed on a corresponding image area in the background image, so as to obtain an image of which the size is enlarged by the first magnification compared with the size of the background image.
It should be noted that an upsampling (up-sampling) operation is used to enlarge the resolution of the image. In this step, interpolation methods such as nearest-neighbor interpolation, bilinear interpolation, mean interpolation and median interpolation can be used to enlarge the size of the background image, that is, a suitable interpolation algorithm is used to insert new elements between pixels on the basis of the background image.
The blurring process may be a gaussian blurring process, for example.
The processing order of the super-resolution processing and the blur processing is not limited, and may be set as needed.
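As an illustrative sketch of the zoom processing of a single image area (or of the whole background image): upsampling by interpolation to raise the resolution, followed by Gaussian blur. The choice of bilinear interpolation and the way the blur degree is mapped to a Gaussian sigma are assumptions of this sketch, not requirements of the application.

```python
import cv2

def zoom_region(region, magnification: float, blur_degree: float):
    """Hypothetical sketch: enlarge an image region by `magnification` using
    bilinear interpolation (one of the interpolation methods mentioned above),
    then apply a Gaussian blur whose sigma grows with `blur_degree`."""
    h, w = region.shape[:2]
    enlarged = cv2.resize(region, (int(w * magnification), int(h * magnification)),
                          interpolation=cv2.INTER_LINEAR)
    if blur_degree > 0:
        # ksize=(0, 0) lets OpenCV derive the kernel size from the sigma
        enlarged = cv2.GaussianBlur(enlarged, (0, 0), sigmaX=blur_degree)
    return enlarged
```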
In the embodiment of the application, the first zoom step is determined according to the depth-of-field information of the background image, the first processing parameter corresponding to each frame of background image is determined according to the first zoom step, and then each frame of background image is zoomed in turn according to its first processing parameter. In this way, different depth-of-field effects can be simulated while the Hitchcock zoom effect is achieved, so the shooting effect is better.
In still other embodiments of the present application, for the case of a second zoom step according to a user input, the background image may be subjected to zoom processing according to the second zoom step. In a specific implementation, the step of performing zoom processing on the background image according to the zoom parameter to obtain a processed background image may further include the following steps.
And step 901, determining a second processing parameter corresponding to the background image according to the second zoom step, wherein the second processing parameter comprises a second magnification and a second blur degree.
A procedure of determining the second processing parameter corresponding to the background image according to the second Zoom step will be described taking the lens Zoom in as an example.
As the lens is zoomed in, the focal length becomes larger, the angle of view becomes smaller, the image content in the same frame decreases, and the image size becomes larger. Based on this, for i-frame background images arranged in sequence, according to the arrangement order of the i-frame background images, the second magnification corresponding to the background images is increased in sequence, and the second blurring degree corresponding to the background images is also increased in sequence.
For example, if the second zoom step is 0.05 times per frame, the second magnification of the 1 st frame background image is 0.05, the second magnification of the 2 nd frame background image is 0.10, the second magnification of the 3 rd frame background image is 0.15, and so on, the corresponding second magnification of each frame background image can be obtained.
For example, if the blur degree increases in steps of 1 per frame, the second blur degree of the 1st frame background image is 1, that of the 2nd frame background image is 2, that of the 3rd frame background image is 3, and so on, so that the second blur degree corresponding to each frame of background image can be obtained.
And 902, performing super-resolution processing on the background image according to the second magnification, and performing blurring processing on the background image according to the second blurring degree to obtain a processed background image.
The super-resolution processing is carried out on the background image, so that the resolution of the background image can be improved. Illustratively, the background image is processed by an upsampling operation according to the second magnification corresponding to each image area. Specifically, according to the second magnification, the adaptive up-sampling operation is performed on the corresponding background image, and an image of which the size is enlarged by the second magnification compared with that of the background image is obtained.
It should be noted that an upsampling (up-sampling) operation is used to enlarge the resolution of the image. In this step, interpolation methods such as nearest-neighbor interpolation, bilinear interpolation, mean interpolation and median interpolation can be used to enlarge the size of the background image, that is, a suitable interpolation algorithm is used to insert new elements between pixels on the basis of the background image.
The blurring process may be a gaussian blurring process, for example.
The processing order of the super-resolution processing and the blur processing is not limited, and may be set as needed.
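Tying the above together for the user-input case, the following sketch produces the multi-frame processed background image from a single background image and a second zoom step. A base magnification of 1x is assumed here so that each frame is at least as large as the original, and the blur step is again an assumed linear schedule; neither choice is mandated by the application.

```python
import cv2

def process_background_frames(background, second_zoom_step, blur_step, frame_count):
    """Hypothetical sketch: apply an increasing magnification and blur to copies
    of the same background image to obtain the multi-frame processed background."""
    h, w = background.shape[:2]
    frames = []
    for k in range(1, frame_count + 1):
        scale = 1.0 + k * second_zoom_step      # assumed 1x baseline, frames only grow
        enlarged = cv2.resize(background, (int(w * scale), int(h * scale)),
                              interpolation=cv2.INTER_LINEAR)
        sigma = k * blur_step
        frames.append(cv2.GaussianBlur(enlarged, (0, 0), sigma) if sigma > 0 else enlarged)
    return frames
```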
In the embodiment of the application, the second processing parameter corresponding to each frame of background image is determined according to the second zoom step determined by the user's input, and then each frame of background image is zoomed in turn according to its second processing parameter. In this way, the Hitchcock zoom effect can be achieved while meeting the user's personalized and intelligent requirements.
And step 104, fusing the processed background image and the main body image of each frame respectively to obtain a plurality of frames of target images.
In this embodiment, fusing each frame of the processed background image with the main body image means pasting the main body image onto each frame of the processed background image at the position of the subject object obtained by the image segmentation.
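A hedged sketch of this fusion step: each enlarged background frame is center-cropped back to the original frame size (so the background appears zoomed) and the unchanged subject is pasted over it, keeping the subject's proportion constant. The helper names, the center-crop choice, and the use of the subject mask from the earlier segmentation sketch are all assumptions; the sketch also assumes each processed background frame is at least as large as the original frame.

```python
import numpy as np

def center_crop(img, size):
    """Crop the central region of `img` to (width, height) = size."""
    w, h = size
    y0 = (img.shape[0] - h) // 2
    x0 = (img.shape[1] - w) // 2
    return img[y0:y0 + h, x0:x0 + w]

def fuse_frames(processed_backgrounds, subject_image, subject_mask, original_size):
    """Hypothetical sketch: composite the unchanged subject over every processed
    background frame to obtain the multi-frame target images."""
    w, h = original_size
    target_frames = []
    for bg in processed_backgrounds:
        frame = center_crop(bg, (w, h)).copy()
        frame[subject_mask == 1] = subject_image[subject_mask == 1]
        target_frames.append(frame)
    return target_frames
```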
And 105, obtaining a target video according to the multi-frame target image.
In the embodiment of the application, when the background image is zoomed according to the zoom parameter input by the user, the target video is a video with no change of the picture main body and a constantly changing picture background. Further, the target video may include a dynamic image.
After obtaining the target video according to the multiple frames of target images, the image processing method further includes:
and step 1001, receiving a second input of the user.
Step 1002, determining a display effect in response to a second input.
The display effect includes a first display effect, which may be a Zoom-in (Zoom in) effect, and a second display effect, which may be a Zoom-out (Zoom out) effect.
And 1003, when the display effect is the first display effect, playing the target video according to the first sequence.
And 1004, when the display effect is the second display effect, playing the target video according to a second sequence.
For the zoom-in (Zoom in) effect, the target video is played in the first order, i.e., the target video is played in the forward order. For the zoom-out (Zoom out) effect, the target video is played in the second order, i.e., the target video is played in reverse order.
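As a sketch (the codec choice, frame rate default, and function name are assumptions), the multi-frame target images can be encoded as the target video, and the zoom-out effect is obtained simply by reversing the frame order before writing:

```python
import cv2

def write_target_video(target_frames, path, frame_rate=30, zoom_out=False):
    """Hypothetical sketch: encode the target frames as a video file;
    reverse the frame order for the Zoom out display effect."""
    frames = list(reversed(target_frames)) if zoom_out else target_frames
    h, w = frames[0].shape[:2]
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"),
                             frame_rate, (w, h))
    for frame in frames:
        writer.write(frame)
    writer.release()
```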
According to the embodiment of the application, a target initial image is obtained; image segmentation is carried out on the target initial image to obtain a main body image and a background image; the background image is zoomed according to the zooming parameters input by the user to obtain multiple frames of processed background images; each frame of the processed background image is fused with the main body image to obtain multiple frames of target images; and a target video is obtained according to the multiple frames of target images. In this way, the shooting effect of zooming the lens while the camera moves toward or away from the photographed subject can be achieved without moving the electronic equipment, which is convenient for the user to operate.
It should be noted that, in the image processing method provided in the embodiment of the present application, the execution subject may be an image processing apparatus, or a control module in the image processing apparatus for executing the image processing method. In the embodiment of the present application, the image processing method provided by the embodiment of the present application is described by taking the image processing apparatus executing the image processing method as an example.
As shown in fig. 2, an embodiment of the present invention provides an image processing apparatus 20 including: an acquisition module 21, an image segmentation module 22, a zoom module 23, a fusion module 24 and a processing module 25.
And an obtaining module 21, configured to obtain an initial target image.
And the image segmentation module 22 is configured to perform image segmentation on the target initial image to obtain a main body image and a background image.
And the zooming module 23 is configured to perform zooming processing on the background image according to a zooming parameter input by a user, so as to obtain a background image after multi-frame processing.
And the fusion module 24 is configured to fuse the processed background image and the main body image of each frame, respectively, to obtain multiple frames of target images.
And the processing module 25 is configured to obtain a target video according to the multiple frames of target images.
In some embodiments of the present application, the background image comprises a plurality of image areas, and the zoom parameter comprises a first zoom step corresponding to the image areas.
The zooming module 23 is specifically configured to perform zooming processing on each image area of the background image according to a first zooming step corresponding to each image area, respectively, so as to obtain a processed background image.
In a more specific example, the zoom module includes:
and the first determining unit is used for determining a first processing parameter corresponding to each image area according to a first zoom step corresponding to each image area, wherein the first processing parameter comprises a first magnification and a first blurring degree.
The first processing unit is configured to perform super-resolution processing on each image area according to the first magnification corresponding to each image area, and perform blur processing on each image area according to the first blur degree corresponding to each image area to obtain the processed background image.
In some embodiments of the present application, the image processing apparatus further comprises:
and the second determination module is used for determining the target depth information of any image area of the background image.
And the third determining module is used for determining a target zoom interval corresponding to the image area according to the mapping relation between the depth information and the zoom interval and the target depth information.
And the fourth determining module is used for determining a first zooming step length corresponding to the image area according to the target zooming interval and the target frame number.
In a more specific example, the fourth determining module is specifically configured to determine depth information of an image area of a background portion in a preview image in a case that the preview image is displayed.
The fourth determining module is specifically further configured to use depth information of an image area of a background portion in the preview image as target depth information of the image area of the background image.
In still other embodiments of the present application, the zoom parameter comprises a second zoom step size, and the zoom module comprises:
and the second determining unit is used for determining a second processing parameter corresponding to the background image according to the second zoom step, wherein the second processing parameter comprises a second magnification and a second blurring degree.
And the second processing unit is used for performing super-resolution processing on the background image according to the second magnification and performing blurring processing on the background image according to the second blurring degree to obtain the processed background image.
In some embodiments of the present application, the image processing apparatus further comprises:
the first receiving module is used for receiving a first input of a user.
A first response module to determine a target duration in response to the first input.
And the first determining module is used for determining a target frame number according to the target duration and a preset target frame rate, wherein the target frame number is the frame number of the target image.
In some embodiments of the present application, the image processing apparatus further comprises:
and the second receiving module is used for receiving a second input of the user.
A second response module to determine a display effect in response to the second input.
And the display module is used for playing the target video according to a first sequence when the display effect is a first display effect.
And the display module is further used for playing the target video according to a second sequence when the display effect is a second display effect.
The image processing apparatus in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The apparatus may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, which is not specifically limited in the embodiments of the present application.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present application are not limited specifically.
The image processing apparatus provided in the embodiment of the present application can implement each process implemented by the image processing apparatus in the method embodiment of fig. 1, and is not described herein again to avoid repetition.
According to the embodiment of the application, a user can realize the Hitchcock zoom shooting method through the image processing apparatus, and can achieve the shooting effect of zooming the lens while the camera moves toward or away from the photographed subject without moving the electronic equipment, which is convenient for the user to operate.
Optionally, an electronic device is further provided in this embodiment of the present application, and includes a processor 110, a memory 109, and a program or an instruction stored in the memory 109 and executable on the processor 110, where the program or the instruction is executed by the processor 110 to implement each process of the above-mentioned embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 3 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 100 includes, but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, and a processor 110.
Those skilled in the art will appreciate that the electronic device 100 may further comprise a power source (e.g., a battery) for supplying power to various components, and the power source may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 3 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than those shown, combine some components, or arrange the components differently, which will not be described in detail here.
The processor 110 is configured to obtain an initial target image;
the processor 110 is further configured to perform image segmentation on the target initial image to obtain a main body image and a background image;
the processor 110 is further configured to perform zoom processing on the background image according to a zoom parameter input by a user, so as to obtain a background image after multi-frame processing;
the processor 110 is further configured to fuse each frame of the processed background image with the main body image to obtain multiple frames of target images;
the processor 110 is further configured to obtain a target video according to the multiple frames of target images.
The electronic equipment provided by the embodiment of the application can realize the Hitchcock zoom shooting method, and can achieve the shooting effect of zooming the lens while the camera moves toward or away from the photographed subject without moving the electronic equipment, which is convenient for the user to operate.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the embodiment of the image processing method, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (15)

1. An image processing method, characterized in that the method comprises:
acquiring a target initial image;
carrying out image segmentation on the target initial image to obtain a main image and a background image;
zooming the background image according to zooming parameters input by a user to obtain a background image after multi-frame processing;
fusing each frame of the processed background image with the main body image to obtain a plurality of frames of target images;
and obtaining a target video according to the multi-frame target image.
2. The method of claim 1, wherein the background image comprises a plurality of image regions, wherein the zoom parameter comprises a first zoom step corresponding to the image regions,
the zooming the background image according to the zooming parameter to obtain a processed background image, including:
and zooming each image area of the background image according to the first zooming step length corresponding to each image area to obtain a processed background image.
3. The method according to claim 2, wherein the performing zoom processing on each image area of the background image according to the first zoom step corresponding to each image area to obtain a processed background image comprises:
determining a first processing parameter corresponding to each image area according to a first zoom step corresponding to each image area, wherein the first processing parameter comprises a first magnification and a first blurring degree;
performing super-resolution processing on each image area according to the first magnification ratio corresponding to each image area, and performing blurring processing on each image area according to the first blurring degree corresponding to each image area to obtain the processed background image.
4. The method of claim 2, further comprising:
determining target depth information of any one image area of the background image;
determining a target zoom interval corresponding to the image area according to the mapping relation between the depth of field information and the zoom interval and the target depth of field information;
and determining a first zooming step length corresponding to the image area according to the target zooming interval and the target frame number.
5. The method of claim 4, wherein the determining the target depth information for any of the image regions of the background image comprises:
determining depth information of an image area of a background portion in a preview image in a case where the preview image is displayed;
and taking the depth information of the image area of the background part in the preview image as the target depth information of the image area of the background image.
6. The method of claim 1, wherein the zoom parameter comprises a second zoom step size,
the zooming the background image according to the zooming parameter to obtain a processed background image, including:
determining a second processing parameter corresponding to the background image according to the second zoom step length, wherein the second processing parameter comprises a second magnification and a second blurring degree;
and performing super-resolution processing on the background image according to the second magnification, and performing blurring processing on the background image according to the second blurring degree to obtain the processed background image.
7. The method of claim 1, further comprising:
receiving a first input of a user;
determining a target duration in response to the first input;
and determining a target frame number according to the target duration and a preset target frame rate, wherein the target frame number is the frame number of the target image.
8. The method according to claim 1, wherein after obtaining the target video according to the plurality of frames of target images, the method further comprises:
receiving a second input of the user;
determining a display effect in response to the second input;
when the display effect is a first display effect, the target video is played according to a first sequence;
and when the display effect is a second display effect, playing the target video according to a second sequence.
9. An image processing apparatus characterized by comprising:
the acquisition module is used for acquiring a target initial image;
the image segmentation module is used for carrying out image segmentation on the target initial image to obtain a main image and a background image;
the zooming module is used for zooming the background image according to zooming parameters input by a user to obtain a background image after multi-frame processing;
the fusion module is used for fusing each frame of the processed background image with the main body image to obtain a plurality of frames of target images;
and the processing module is used for obtaining a target video according to the multi-frame target image.
10. The apparatus of claim 9, wherein the background image comprises a plurality of image regions, and wherein the zoom parameter comprises a first zoom step corresponding to the image regions;
the zoom module is specifically configured to perform zoom processing on each image area of the background image according to a first zoom step corresponding to each image area, so as to obtain a processed background image.
11. The apparatus of claim 10, wherein the zoom module comprises:
a first determining unit, configured to determine, according to a first zoom step corresponding to each of the image regions, a first processing parameter corresponding to each of the image regions, where the first processing parameter includes a first magnification and a first blur degree;
the first processing unit is configured to perform super-resolution processing on each image area according to the first magnification corresponding to each image area, and perform blur processing on each image area according to the first blur degree corresponding to each image area to obtain the processed background image.
12. The apparatus of claim 9, wherein the zoom parameter comprises a second zoom step size, and wherein the zoom module further comprises:
a second determining unit, configured to determine a second processing parameter corresponding to the background image according to the second zoom step, where the second processing parameter includes a second magnification and a second blur degree;
and the second processing unit is used for performing super-resolution processing on the background image according to the second magnification and performing blurring processing on the background image according to the second blurring degree to obtain the processed background image.
13. The apparatus of claim 9, further comprising:
the first receiving module is used for receiving a first input of a user;
a first response module for determining a target duration in response to the first input;
and the first determining module is used for determining a target frame number according to the target duration and a preset target frame rate, wherein the target frame number is the frame number of the target image.
14. An electronic device comprising a processor, a memory and a program or instructions stored on the memory and executable on the processor, the program or instructions, when executed by the processor, implementing the steps of the image processing method according to claims 1-8.
15. A readable storage medium, characterized in that it stores thereon a program or instructions which, when executed by a processor, implement the steps of the image processing method according to claims 1-8.
CN202011335262.5A 2020-11-24 2020-11-24 Image processing method and device and electronic equipment Pending CN112532808A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011335262.5A CN112532808A (en) 2020-11-24 2020-11-24 Image processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011335262.5A CN112532808A (en) 2020-11-24 2020-11-24 Image processing method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN112532808A 2021-03-19

Family

ID=74993237

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011335262.5A Pending CN112532808A (en) 2020-11-24 2020-11-24 Image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112532808A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018093002A1 (en) * 2016-11-18 2018-05-24 엘지전자 주식회사 Mobile terminal and method for controlling same
US10757319B1 (en) * 2017-06-15 2020-08-25 Snap Inc. Scaled perspective zoom on resource constrained devices
CN110262737A (en) * 2019-06-25 2019-09-20 维沃移动通信有限公司 A kind of processing method and terminal of video data
CN111083380A (en) * 2019-12-31 2020-04-28 维沃移动通信有限公司 Video processing method, electronic equipment and storage medium

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022012231A1 (en) * 2020-07-17 2022-01-20 北京字节跳动网络技术有限公司 Video generation method and apparatus, and readable medium and electronic device
US11836887B2 (en) 2020-07-17 2023-12-05 Beijing Bytedance Network Technology Co., Ltd. Video generation method and apparatus, and readable medium and electronic device
CN113542625A (en) * 2021-05-28 2021-10-22 北京迈格威科技有限公司 Image processing method, device, equipment and storage medium
CN113438401A (en) * 2021-06-30 2021-09-24 展讯通信(上海)有限公司 Digital zooming method, system, storage medium and terminal
CN113438401B (en) * 2021-06-30 2022-08-05 展讯通信(上海)有限公司 Digital zooming method, system, storage medium and terminal
WO2023165390A1 (en) * 2022-03-03 2023-09-07 北京字跳网络技术有限公司 Zoom special effect generating method and apparatus, device, and storage medium
CN114710619A (en) * 2022-03-24 2022-07-05 维沃移动通信有限公司 Photographing method, photographing apparatus, electronic device, and readable storage medium
CN117615173A (en) * 2023-09-28 2024-02-27 书行科技(北京)有限公司 Video effect processing method, device, computer equipment and storage medium
CN118012319A (en) * 2024-04-08 2024-05-10 荣耀终端有限公司 Image processing method, electronic equipment and computer readable storage medium

Similar Documents

Publication Publication Date Title
CN112532808A (en) Image processing method and device and electronic equipment
TWI539226B (en) Object-tracing image processing method and system thereof
CN112887609B (en) Shooting method and device, electronic equipment and storage medium
WO2022161260A1 (en) Focusing method and apparatus, electronic device, and medium
CN112714253B (en) Video recording method and device, electronic equipment and readable storage medium
CN112954193B (en) Shooting method, shooting device, electronic equipment and medium
CN113840070B (en) Shooting method, shooting device, electronic equipment and medium
CN112887617B (en) Shooting method and device and electronic equipment
CN113794829B (en) Shooting method and device and electronic equipment
CN112738397A (en) Shooting method, shooting device, electronic equipment and readable storage medium
US20230362477A1 (en) Photographing method and apparatus, electronic device and readable storage medium
CN112087579B (en) Video shooting method and device and electronic equipment
CN113014798A (en) Image display method and device and electronic equipment
CN112637500A (en) Image processing method and device
CN113727001B (en) Shooting method and device and electronic equipment
CN114449174A (en) Shooting method and device and electronic equipment
CN112367465B (en) Image output method and device and electronic equipment
WO2023125669A1 (en) Image processing circuit, image processing method, electronic device, and readable storage medium
CN112887624B (en) Shooting method and device and electronic equipment
CN112653841B (en) Shooting method and device and electronic equipment
CN111654620B (en) Shooting method and device
CN114785969A (en) Shooting method and device
CN112367464A (en) Image output method and device and electronic equipment
CN114500852B (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN113873160B (en) Image processing method, device, electronic equipment and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210319)