CN112532881B - Image processing method and device and electronic equipment


Info

Publication number
CN112532881B
Authority: CN (China)
Prior art keywords: depth, target, image, field, field range
Legal status: Active
Application number: CN202011355536.7A
Other languages: Chinese (zh)
Other versions: CN112532881A (en)
Inventor: 孟伟 (Meng Wei)
Current Assignee: Vivo Mobile Communication Co Ltd
Original Assignee: Vivo Mobile Communication Co Ltd
Priority date: 2020-11-26
Filing date: 2020-11-26
Publication dates: 2021-03-19 (CN112532881A), 2022-07-05 (CN112532881B)
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202011355536.7A
Publication of CN112532881A
Application granted
Publication of CN112532881B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/95: Computational photography systems, e.g. light-field imaging systems
    • H04N 23/958: Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging
    • H04N 23/959: Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging by adjusting depth of field during image capture, e.g. maximising or setting range based on scene characteristics
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/70: Circuitry for compensating brightness variation in the scene
    • H04N 23/73: Circuitry for compensating brightness variation in the scene by influencing the exposure time

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses an image processing method, an image processing apparatus, and an electronic device, and belongs to the technical field of communication. The method comprises the following steps: selecting at least one target depth of field range from a plurality of depth of field ranges; focusing on and shooting the object corresponding to each target depth of field range with the same exposure parameters and camera pose, to obtain an initial image corresponding to each target depth of field range; taking the initial image corresponding to the target depth of field range as the target image when one target depth of field range is selected; and fusing the initial images corresponding to all the target depth of field ranges into the target image when a plurality of target depth of field ranges are selected. Based on the user's depth-of-field requirements, the picture elements corresponding to every target depth of field range selected by the user are displayed sharply in the final image, satisfying the user's demand for high imaging quality across a greater number of picture elements in the captured image.

Description

Image processing method and device and electronic equipment
Technical Field
The application belongs to the technical field of communication, and particularly relates to an image processing method and device and electronic equipment.
Background
Today, with photography in everyday consumer use, users place high demands on the imaging quality of the picture elements in a captured image.
At present, because the aperture of an electronic device's camera is limited, the camera's depth of field is shallow. When a portrait is shot, the person in the foreground can be captured sharply while the distant background is blurred, yielding a portrait with a clear subject and a blurred background; when a landscape is shot, the distant scenery can be captured sharply while nearby objects are blurred, yielding a landscape image with clear distant scenery and blurred near objects.
However, because users demand high imaging quality for a greater number of picture elements in a captured image, the current scheme always leaves some picture elements unclear and therefore cannot meet this demand.
Disclosure of Invention
The embodiments of the present application aim to provide an image processing method, an image processing apparatus, and an electronic device that can satisfy users' demand for high imaging quality across a greater number of picture elements in a captured image.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides an image processing method, including:
selecting at least one target depth of field range from a plurality of depth of field ranges;
respectively focusing and shooting an object corresponding to each target depth of field range according to the same exposure parameters and camera postures to obtain an initial image corresponding to each target depth of field range;
under the condition that one target depth of field range is selected, taking an initial image corresponding to the target depth of field range as a target image;
and under the condition that a plurality of target depth of field ranges are selected, carrying out image fusion on the initial images corresponding to all the target depth of field ranges to obtain a target image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
the selection module is used for selecting at least one target depth of field range from the plurality of depth of field ranges;
the first shooting module is used for focusing and shooting an object corresponding to each target depth of field range according to the same exposure parameters and camera postures to obtain an initial image corresponding to each target depth of field range;
the first determining module is used for taking an initial image corresponding to the target depth of field range as a target image under the condition that one target depth of field range is selected;
and the first fusion module is used for carrying out image fusion on the initial images corresponding to all the target depth of field ranges under the condition that a plurality of target depth of field ranges are selected to obtain target images.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored in the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiments of the application, at least one target depth of field range is selected from a plurality of depth of field ranges; the object corresponding to each target depth of field range is focused on and shot with the same exposure parameters and camera pose, to obtain an initial image corresponding to each target depth of field range; when one target depth of field range is selected, the initial image corresponding to that range is taken as the target image; and when a plurality of target depth of field ranges are selected, the initial images corresponding to all the target depth of field ranges are fused to obtain the target image. Based on the user's depth-of-field requirements, the picture elements corresponding to every selected target depth of field range are displayed sharply in the final image. This solves the problem in the prior art that only the picture elements of a single depth of field range can be displayed sharply, and satisfies the user's demand for high imaging quality across a greater number of picture elements in the captured image.
Drawings
Fig. 1 is a schematic diagram illustrating the steps of an image processing method according to an embodiment of the present application;
Fig. 2 is a schematic diagram of a captured image according to an embodiment of the present application;
Fig. 3 is a schematic diagram of an initial image provided by an embodiment of the present application;
Fig. 4 is a schematic diagram of another initial image provided by an embodiment of the present application;
Fig. 5 is a schematic diagram of another initial image provided by an embodiment of the present application;
Fig. 6 is a schematic diagram illustrating the specific steps of an image processing method according to an embodiment of the present application;
Fig. 7 is a schematic diagram of a preview image provided in an embodiment of the present application;
Fig. 8 is a schematic diagram of another preview image provided by an embodiment of the present application;
Fig. 9 is a block diagram of an image processing apparatus according to an embodiment of the present application;
Fig. 10 is a block diagram of an electronic device provided in an embodiment of the present application;
Fig. 11 is a schematic diagram of the hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application are used to distinguish between similar objects and do not necessarily describe a particular sequence or chronological order. It should be understood that data so termed may be interchanged where appropriate, so that the embodiments of the application can be practiced in orders other than those illustrated or described herein. Objects distinguished by "first," "second," and the like are generally of one type, and their number is not limited; for example, the first object may be one or more than one. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the objects before and after it.
The following describes in detail an image processing method, an image processing apparatus, and an electronic device provided in the embodiments of the present application with reference to the accompanying drawings.
Referring to fig. 1, a schematic diagram illustrating steps of an image processing method provided in an embodiment of the present application includes:
step 101, selecting at least one target depth of field range from a plurality of depth of field ranges.
The Depth of Field (DOF) is the range of distances, in front of and behind the focused subject, within which a camera lens or other imaging device produces an acceptably sharp image. In other words, after the camera finishes focusing, objects within a certain range before and after the focal plane are imaged sharply, and this range is the depth of field; the aperture, focal length, and focus distance are the main factors that determine it.
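For reference, the classical thin-lens approximation (standard optics, not part of the patent's disclosure) makes the dependence on the aperture explicit. With focal length f, f-number N, acceptable circle of confusion c, and focus distance s, the hyperfocal distance H and the near and far limits of acceptable sharpness are:

```latex
% Thin-lens depth-of-field limits (standard optics, stated here for reference):
H = \frac{f^2}{N c} + f, \qquad
D_{\mathrm{near}} = \frac{s\,(H - f)}{H + s - 2f}, \qquad
D_{\mathrm{far}} = \frac{s\,(H - f)}{H - s} \quad (s < H)
% Depth of field = D_far - D_near; for s >= H, D_far extends to infinity.
```

A wide aperture (small N) increases H, which pushes both limits toward the focus distance s and shrinks the depth of field, which is exactly the shallow-depth-of-field behavior described here.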
At present, limited by hardware such as the aperture and the lens, the camera of an electronic device cannot image the elements of every depth range sharply in a single shot; for example, when a sharp close-range portrait is taken, the background is blurred.
In the embodiments of the present application, different depth of field ranges correspond to different near and distant objects in the shooting scene. For example, referring to fig. 2, which shows a schematic diagram of a captured image provided in an embodiment of the present application, the depth of field may be divided into a far range, a middle range, and a near range: the far range corresponds to distant objects, the middle range to objects at a moderate distance, and the near range to nearby objects. When shooting, users usually have different depth-of-field requirements in order to achieve rich image effects. For example, referring to fig. 2, assume the object in the near range is a person and the user wants a portrait that highlights the near person 12. The user can focus and shoot within the near range to obtain a portrait in which the near person 12 is sharp while the distant objects 10 and moderate-distance objects 11 serve as the background, so that the person stands out in the picture. In addition, driven by higher imaging requirements, the user may also require that objects in several of the far, middle, and near ranges are all imaged sharply in the final captured image.
Therefore, based on the multiple depth of field ranges that the camera of the electronic device can provide, at least one target depth of field range can be selected according to the user's imaging requirements, so that the objects corresponding to each target depth of field range are imaged sharply in the final image. The electronic device may be a mobile phone, a camera, a watch, an aerial camera, or the like.
Step 102, focusing on and shooting the object corresponding to each target depth of field range with the same exposure parameters and camera pose, to obtain an initial image corresponding to each target depth of field range.
In the embodiments of the application, to ensure that the objects corresponding to the target depth of field ranges are imaged sharply in the final image, an initial image shot by the camera for each target depth of field range needs to be acquired; in the initial image corresponding to a given target depth of field range, the objects within that range are imaged sharply.
For example, when the user requires that the objects in both the near range and the far range are imaged sharply, then after the user selects the far range and the near range as target depth of field ranges, the camera focuses on and shoots the far range to obtain one initial image and focuses on and shoots the near range to obtain another, using the same exposure parameters and camera pose.
Specifically, when shooting the initial images, the camera must focus on and shoot the object corresponding to each target depth of field range with the same exposure parameters and camera pose. Identical exposure parameters and camera pose keep the edge features of the initial images consistent and avoid defects such as seams during subsequent synthesis.
In the embodiments of the present application, the same camera pose may mean that, when the camera focuses on and shoots the object corresponding to each target depth of field range, the three-dimensional attitude Euler angles of the camera relative to the object (the pitch, yaw, and roll angles) are the same.
Therefore, the camera needs a fast response and a short interval between shots of the different initial images, to reduce the influence of camera pose changes.
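Whether two shots really share "the same camera pose" can be checked from the device's motion-sensor readings. A minimal sketch, with illustrative names and an assumed tolerance (the patent specifies neither):

```python
import numpy as np

def same_pose(euler_a, euler_b, tol_deg=0.5):
    """euler_a/euler_b: (pitch, yaw, roll) in degrees at the two capture times."""
    # Wrap angular differences into [-180, 180) before comparing.
    diff = (np.asarray(euler_a) - np.asarray(euler_b) + 180.0) % 360.0 - 180.0
    return bool(np.all(np.abs(diff) <= tol_deg))

# Example: poses reported for two consecutive focus-bracketed shots.
print(same_pose((1.2, 87.6, -0.3), (1.3, 87.5, -0.3)))  # True -> safe to fuse
```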
Step 103, taking the initial image corresponding to the target depth of field range as the target image when one target depth of field range is selected.
In the embodiments of the present application, if the user selects only one depth of field range as the target depth of field range, the initial image corresponding to that range may be output directly as the target image. For example, if the user wants a portrait that highlights a near person, the user may take the near range as the target depth of field range and focus and shoot within it, obtaining a portrait in which the person is sharp and the background objects are blurred, so that the person stands out in the image.
Step 104, fusing the initial images corresponding to all the target depth of field ranges to obtain the target image when a plurality of target depth of field ranges are selected.
In the embodiments of the application, when a plurality of target depth of field ranges are selected, the camera shoots a corresponding initial image for each target depth of field range, and in each initial image the picture elements corresponding to its own target depth of field range have the highest definition. A higher fusion weight can therefore be assigned to the sharpest picture elements of each initial image, so that after the initial images corresponding to all the target depth of field ranges are fused, the sharpest picture elements of each initial image are retained in the final target image. In this way, all picture elements corresponding to the selected target depth of field ranges are displayed sharply in the final target image.
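As a concrete illustration of this weighting idea (not the patent's specified algorithm; the sharpness metric and the smoothing are assumptions), a minimal Python/OpenCV sketch, assuming the initial images are already aligned and the same size:

```python
import cv2
import numpy as np

def fuse_by_sharpness(images, blur_ksize=(31, 31)):
    """Fuse aligned images; the locally sharpest image dominates each pixel."""
    weights = []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        # Absolute Laplacian response as a per-pixel sharpness measure.
        sharp = np.abs(cv2.Laplacian(gray.astype(np.float32), cv2.CV_32F))
        # Smooth the response so weights vary gradually and seams are avoided.
        weights.append(cv2.GaussianBlur(sharp, blur_ksize, 0) + 1e-6)
    w = np.stack(weights)                       # (N, H, W)
    w /= w.sum(axis=0, keepdims=True)           # normalize across the N images
    stack = np.stack([i.astype(np.float32) for i in images])  # (N, H, W, 3)
    fused = (stack * w[..., None]).sum(axis=0)
    return np.clip(fused, 0, 255).astype(np.uint8)

# initial_near/mid/far: the focus-bracketed shots from step 102.
# target = fuse_by_sharpness([initial_near, initial_mid, initial_far])
```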
For example, referring to fig. 2, the depth of field is divided into a far range, a middle range, and a near range, and assume the user selects all three as target depth of field ranges. The camera can shoot the near, middle, and far ranges respectively with the same exposure parameters and camera pose, obtaining the three initial images shown in figs. 3, 4, and 5: the near-range object 12 is displayed sharply in the initial image of fig. 3, the middle-range object 11 in fig. 4, and the far-range object 10 in fig. 5. The regions displaying sharp objects in the three initial images are extracted and fused into the target image, so that the far, middle, and near objects are all displayed sharply. Note that after a higher fusion weight is assigned to the region of each initial image that displays a sharp object, a fusion operation based on these weights can be performed on the three initial images to obtain the final target image.
In practical applications, the user selects at least one target depth of field range through a selection operation. After the selection, the camera automatically shoots the corresponding initial images for the target depth of field ranges, automatically fuses all the initial images into the target image, and presents the target image to the user; throughout the process, the user does not need to perform any complicated operations.
In summary, the embodiments of the present application provide an image processing method that selects at least one target depth of field range from a plurality of depth of field ranges; focuses on and shoots the object corresponding to each target depth of field range with the same exposure parameters and camera pose, to obtain an initial image for each range; takes the initial image corresponding to the target depth of field range as the target image when one range is selected; and fuses the initial images corresponding to all the target depth of field ranges into the target image when several ranges are selected. Based on the user's depth-of-field requirements, the picture elements corresponding to every selected target depth of field range are displayed sharply in the final image. This solves the prior-art problem that only the picture elements of a single depth of field range can be displayed sharply, and satisfies the user's demand for high imaging quality across a greater number of picture elements in the captured image.
Referring to fig. 6, which shows a schematic diagram of specific steps of an image processing method provided in an embodiment of the present application, including:
Step 201, focusing on and shooting the object corresponding to each depth of field range with the same exposure parameters and camera pose, to obtain a depth of field image corresponding to each depth of field range.
When the user uses the shooting function, the user needs a preview of the current shooting scene; that is, before the user presses the shutter button, the electronic device displays a preview of the scene captured by the camera for the user's reference. In the embodiments of the application, after the user starts the shooting function, the camera can focus on and shoot the object corresponding to each depth of field range with the same exposure parameters and camera pose, obtaining a depth of field image for each range in which the picture elements of that range are imaged sharply. After all the depth of field images are fused, a preview image in which the picture elements of every depth of field range the camera provides are displayed sharply is obtained for the user's reference.
Step 202, determining a target area in each depth image, wherein the target area is an area with definition greater than or equal to a preset threshold value.
In the embodiments of the application, the target area may be the region of the depth of field image whose picture elements have the maximum definition; alternatively, it may be the region whose definition is greater than or equal to a preset threshold, where the threshold can be set according to actual requirements. Note that the target area may be implemented as a mask region.
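A minimal sketch of extracting such a target area, assuming "definition" is measured by the locally averaged Laplacian response and the preset threshold is tuned empirically (both assumptions; the patent fixes neither the metric nor the value):

```python
import cv2
import numpy as np

def target_area_mask(depth_image, threshold=12.0, ksize=(15, 15)):
    """Binary mask of the region whose local sharpness exceeds the threshold."""
    gray = cv2.cvtColor(depth_image, cv2.COLOR_BGR2GRAY)
    sharpness = np.abs(cv2.Laplacian(gray.astype(np.float32), cv2.CV_32F))
    sharpness = cv2.GaussianBlur(sharpness, ksize, 0)   # local average response
    mask = (sharpness >= threshold).astype(np.uint8) * 255
    # Close small holes so the mask covers the in-focus region contiguously.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
```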
Step 203, performing image fusion on the depth of field images with their target areas to obtain a preview image.
Optionally, the display colors of the regions corresponding to the different depth-of-field range labels in the preview image are different.
In the embodiments of the application, the picture elements corresponding to every depth of field range the camera can provide are displayed sharply in the preview image, so the user can plan the subsequent shooting strategy according to the sharpest display each depth of field range can offer.
Further, referring to fig. 7, which shows a schematic diagram of a preview image according to an embodiment of the present application, to visibly distinguish the regions of picture elements corresponding to different depth of field ranges in the preview, those regions may be displayed in different colors; for example, the region of the far-range object 10 is displayed with a red hue, the region of the middle-range object 11 with a blue hue, and the region of the near-range object 12 with a green hue.
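A minimal sketch of this tinting, assuming each depth range already has a binary target-area mask (for example, from the target_area_mask sketch above); the colors mirror the example in the text, and the blending strength is an assumption:

```python
import numpy as np

TINTS = {"far": (0, 0, 255), "middle": (255, 0, 0), "near": (0, 255, 0)}  # BGR

def tint_preview(preview, masks, alpha=0.35):
    """Blend each range's tint over its masked region of the fused preview."""
    out = preview.astype(np.float32)
    for name, mask in masks.items():
        color = np.array(TINTS[name], dtype=np.float32)
        m = (mask > 0)[..., None]
        out = np.where(m, (1 - alpha) * out + alpha * color, out)
    return out.astype(np.uint8)
```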
Step 204, displaying the preview image.
After the preview image is displayed, the user can plan the subsequent shooting strategy according to the sharpest display each depth of field range offers in the preview image.
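Putting steps 201 to 204 together, a minimal orchestration sketch, reusing the illustrative helpers above (fuse_by_sharpness, target_area_mask, tint_preview); capture_focused is a hypothetical stand-in for the device's capture call, not a real API:

```python
# Hypothetical pipeline: one focus-bracketed capture per depth range, a target
# area per capture, a fused preview, and a color-tinted display image.
def build_preview(camera, ranges=("near", "middle", "far")):
    depth_images = {r: capture_focused(camera, r) for r in ranges}          # step 201
    masks = {r: target_area_mask(img) for r, img in depth_images.items()}   # step 202
    preview = fuse_by_sharpness(list(depth_images.values()))                # step 203
    return tint_preview(preview, masks), masks                              # step 204
```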
Step 205, at least one target depth of field range is selected from the plurality of depth of field ranges.
For this step, reference may be made to step 101, which is not described herein again.
Optionally, in an implementation manner, referring to fig. 7, a depth of field range label is added to the region corresponding to each target area in the preview image; for example, the region of the far-range object 10 carries a far-range label 20, the region of the middle-range object 11 a middle-range label 21, and the region of the near-range object 12 a near-range label 22. The depth of field range label reflects the depth of field range of the depth of field image to which the target area belongs. Step 205 may include:
Substep 2051: according to a selection operation on at least one depth of field range label in the preview image, taking the depth of field range corresponding to the selected label as the target depth of field range.
Optionally, in an implementation, the depth of view range label is located in an area corresponding to the depth of view range label.
In the embodiments of the present application, referring to fig. 7, to further distinguish the regions of picture elements corresponding to different depth of field ranges in the preview, a corresponding depth of field range label may be added at the position of each target area in the preview image, so that the user can identify the regions of different depth of field ranges by their prominent labels.
The user can select the depth of field range label, and the depth of field range corresponding to the depth of field range label selected by the user is used as the target depth of field range.
Optionally, in another implementation, the depth of field range label is located in a preset area in the preview image.
Referring to fig. 8, which shows a schematic diagram of another preview image provided in an embodiment of the present application: to keep the preview clean and concise, the depth of field range labels may instead be placed in a fixed preset area, preventing them from blocking important image content; for example, as shown in fig. 8, the three labels may be placed in the bottom area of the image.
Optionally, in another implementation, step 205 may include:
Substep 2052: according to a selection operation on at least one area in the preview image, taking the depth of field range corresponding to the selected area as the target depth of field range.
In the embodiments of the application, to keep the preview image simple, the depth of field range corresponding to a selected area may be used directly as the target depth of field range, according to the user's selection operation on at least one area of the preview image.
For example, for an image including a portrait, a railing, a building, a mountain, and other elements, a user may sequentially click on areas of the portrait, the railing, the building, and the mountain, and use depth ranges corresponding to the areas as target depth ranges, so that the areas have a clear display effect in a final target image.
Substep 2053, adding a selected reminder label to the selected area.
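A minimal sketch of substeps 2052 and 2053, assuming the per-range target-area masks computed during preview are still available (an assumption; the patent does not specify how regions are hit-tested). The tap point is mapped to the depth range whose mask contains it:

```python
# Illustrative only: masks maps range names ("near"/"middle"/"far") to the
# binary target-area masks from step 202; (x, y) is the tapped pixel.
def range_at_tap(masks, x, y):
    for name, mask in masks.items():
        if mask[y, x] > 0:        # tap falls inside this range's sharp region
            return name
    return None                   # tap outside every target area

# selected = range_at_tap(masks, tap_x, tap_y)
# A "selected" reminder label (substep 2053) can then be drawn over that area.
```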
Step 206, focusing on and shooting the object corresponding to each target depth of field range with the same exposure parameters and camera pose, to obtain an initial image corresponding to each target depth of field range.
This step may specifically refer to step 102, which is not described herein again.
And step 207, taking the initial image corresponding to the target depth of field range as a target image under the condition that one target depth of field range is selected.
This step may specifically refer to step 103, which is not described herein again.
Step 208, when a plurality of target depth of field ranges are selected, aligning the initial images corresponding to all the target depth of field ranges so that their edges coincide.
Step 209, performing image fusion on the aligned initial images to obtain the target image.
In the embodiments of the application, the plurality of initial images corresponding to the plurality of target depth of field ranges may be fused after alignment; since the initial images have the same size, their outer contours can be aligned to avoid defects such as seams during the subsequent fusion.
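The patent does not specify an alignment algorithm; assuming only small residual motion between shots, an ECC-estimated affine warp is one plausible choice. A minimal sketch:

```python
import cv2
import numpy as np

def align_to_reference(reference, image, iterations=200, eps=1e-6):
    """Warp `image` onto `reference` so their contents (and edges) coincide."""
    ref_gray = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)
    img_gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, iterations, eps)
    _, warp = cv2.findTransformECC(ref_gray, img_gray, warp,
                                   cv2.MOTION_AFFINE, criteria)
    h, w = reference.shape[:2]
    return cv2.warpAffine(image, warp, (w, h),
                          flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)

# aligned = [initial_imgs[0]] + [align_to_reference(initial_imgs[0], im)
#                                for im in initial_imgs[1:]]
```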
In summary, the embodiments of the present application provide an image processing method that selects at least one target depth of field range from a plurality of depth of field ranges; focuses on and shoots the object corresponding to each target depth of field range with the same exposure parameters and camera pose, to obtain an initial image for each range; takes the initial image corresponding to the target depth of field range as the target image when one range is selected; and aligns and fuses the initial images corresponding to all the target depth of field ranges into the target image when several ranges are selected. Based on the user's depth-of-field requirements, the picture elements corresponding to every selected target depth of field range are displayed sharply in the final image. This solves the prior-art problem that only the picture elements of a single depth of field range can be displayed sharply, and satisfies the user's demand for high imaging quality across a greater number of picture elements in the captured image.
Referring to fig. 9, a block diagram of an image processing apparatus provided in an embodiment of the present application is shown, including:
a selecting module 301, configured to select at least one target depth of field range from a plurality of depth of field ranges;
optionally, a depth of field range label is added to an area corresponding to each target area in the preview image, where the depth of field range label is used to reflect a depth of field range of the depth of field image corresponding to the target area;
the selecting module 301 includes:
and the first selection submodule is used for taking the depth of field range corresponding to the selected depth of field range label as the target depth of field range according to the selection operation of at least one depth of field range label in the preview image.
Optionally, the display colors of the regions corresponding to the different depth of field range labels in the preview image are different.
Optionally, the selecting module 301 includes:
the second selection submodule is used for taking the depth of field range corresponding to the selected area as the target depth of field range according to the selection operation of at least one area in the preview image;
and the adding submodule is used for adding the selected reminding label to the selected area.
Optionally, the depth of field range tag is located in a preset area in the preview image, or the depth of field range tag is located in an area corresponding to the depth of field range tag.
The first shooting module 302 is configured to focus and shoot an object corresponding to each target depth of field range according to the same exposure parameter and camera pose, so as to obtain an initial image corresponding to each target depth of field range;
a first determining module 303, configured to take an initial image corresponding to one target depth of field range as a target image when the target depth of field range is selected;
the first fusion module 304 is configured to perform image fusion on the initial images corresponding to all the target depth of field ranges to obtain a target image under the condition that a plurality of target depth of field ranges are selected.
Optionally, the first fusion module 304 includes:
the alignment module is used for aligning the initial images corresponding to all the target depth of field ranges under the condition that the target depth of field ranges are selected, so that the edges of the initial images are overlapped;
and the fusion submodule is used for carrying out image fusion on the aligned plurality of initial images to obtain the target image.
Optionally, the apparatus further comprises:
the second shooting module is used for focusing and shooting the object corresponding to each depth of field range according to the same exposure parameters and camera postures to obtain a depth of field image corresponding to each depth of field range;
the second determining module is used for determining a target area in each depth image, wherein the target area is an area with definition greater than or equal to a preset threshold value;
the second fusion module is used for carrying out image fusion on the depth-of-field image with the target area to obtain a preview image;
and the preview module is used for displaying the preview image.
In summary, the embodiments of the present application provide an image processing apparatus that selects at least one target depth of field range from a plurality of depth of field ranges; focuses on and shoots the object corresponding to each target depth of field range with the same exposure parameters and camera pose, to obtain an initial image for each range; takes the initial image corresponding to the target depth of field range as the target image when one range is selected; and fuses the initial images corresponding to all the target depth of field ranges into the target image when several ranges are selected. Based on the user's depth-of-field requirements, the picture elements corresponding to every selected target depth of field range are displayed sharply in the final image. This solves the prior-art problem that only the picture elements of a single depth of field range can be displayed sharply, and satisfies the user's demand for high imaging quality across a greater number of picture elements in the captured image.
The image processing apparatus in the embodiments of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device can be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and the non-mobile electronic device may be a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like; the embodiments of the present application are not specifically limited in this respect.
The image processing apparatus in the embodiments of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The image processing apparatus provided in the embodiment of the present application can implement each process implemented by the foregoing method embodiment, and is not described here again to avoid repetition.
Optionally, as shown in fig. 10, an electronic device 700 is further provided in this embodiment of the present application, and includes a processor 701, a memory 702, and a program or an instruction stored in the memory 702 and executable on the processor 701, where the program or the instruction is executed by the processor 701 to implement each process of the above-mentioned embodiment of the image processing method, and can achieve the same technical effect, and no further description is provided here to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 11 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 800 includes, but is not limited to: a radio frequency unit 801, a network module 802, an audio output unit 803, an input unit 804, a sensor 805, a display unit 806, a user input unit 807, an interface unit 808, a memory 809, and a processor 810.
Those skilled in the art will appreciate that the electronic device 800 may further comprise a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 810 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system. The electronic device structure shown in fig. 11 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is not repeated here.
In the embodiment of the application, at least one target depth of field range is selected from a plurality of depth of field ranges; focusing and shooting an object corresponding to each target depth of field range respectively according to the same exposure parameters and camera postures to obtain an initial image corresponding to each target depth of field range; under the condition that a target depth of field range is selected, taking an initial image corresponding to the target depth of field range as a target image; under the condition that a plurality of target depth of field ranges are selected, image fusion is carried out on initial images corresponding to all the target depth of field ranges to obtain target images.
It should be understood that in the embodiment of the present application, the input Unit 804 may include a Graphics Processing Unit (GPU) 8041 and a microphone 8042, and the Graphics Processing Unit 8041 processes image data of a still picture or a video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 806 may include a display panel 8061, and the display panel 8061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 807 includes a touch panel 8071 and other input devices 8072. A touch panel 8071, also referred to as a touch screen. The touch panel 8071 may include two portions of a touch detection device and a touch controller. Other input devices 8072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 809 may be used to store software programs as well as various data including, but not limited to, application programs and operating systems. The processor 810 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 810.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the embodiment of the image processing method, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the present embodiments are not limited to those precise embodiments, which are intended to be illustrative rather than restrictive, and that various changes and modifications may be effected therein by one skilled in the art without departing from the scope of the appended claims.

Claims (8)

1. An image processing method, characterized in that the method comprises:
selecting at least one target depth of field range from a plurality of depth of field ranges;
respectively focusing and shooting an object corresponding to each target depth of field range according to the same exposure parameters and camera postures to obtain an initial image corresponding to each target depth of field range;
under the condition that one target depth of field range is selected, taking an initial image corresponding to the target depth of field range as a target image;
under the condition that a plurality of target depth of field ranges are selected, image fusion is carried out on initial images corresponding to all the target depth of field ranges to obtain target images;
before the selecting at least one target depth of field range among the plurality of depth of field ranges, the method further comprises:
fusing the depth images in different depth ranges to obtain a preview image; displaying the preview image; adding a depth of field range label to an area corresponding to each target area in the preview image, wherein the depth of field range label is used for reflecting the depth of field range of the depth of field image corresponding to the target area;
the selecting at least one target depth of field range from the plurality of depth of field ranges comprises:
and according to the selection operation of at least one depth of field range label in the preview image, taking the depth of field range corresponding to the selected depth of field range label as the target depth of field range.
2. The method according to claim 1, wherein the fusing the depth images of different depth ranges to obtain the preview image comprises:
respectively focusing and shooting an object corresponding to each depth of field range according to the same exposure parameters and camera postures to obtain a depth of field image corresponding to each depth of field range;
determining a target area in the depth image, wherein the target area is an area with definition greater than or equal to a preset threshold value;
and carrying out image fusion on the depth-of-field image with the target area to obtain a preview image.
3. The method of claim 2, wherein the display color of the regions of the preview image corresponding to different depth range labels is different.
4. The method of claim 1, wherein the depth range label is located in a preset area in the preview image or the depth range label is located in an area corresponding to the depth range label.
5. The method according to claim 1, wherein the image fusion of the initial images corresponding to all the target depth ranges to obtain the target image under the condition that the plurality of target depth ranges are selected comprises:
under the condition that a plurality of target depth of field ranges are selected, aligning all initial images corresponding to the target depth of field ranges to enable the edges of the initial images to be overlapped;
and carrying out image fusion on the plurality of aligned initial images to obtain the target image.
6. An image processing apparatus, characterized in that the apparatus comprises:
the selection module is used for selecting at least one target depth of field range from the plurality of depth of field ranges;
the first shooting module is used for focusing and shooting an object corresponding to each target depth of field range according to the same exposure parameters and camera postures to obtain an initial image corresponding to each target depth of field range;
the first determining module is used for taking an initial image corresponding to the target depth of field range as a target image under the condition that one target depth of field range is selected;
the first fusion module is used for carrying out image fusion on the initial images corresponding to all the target depth of field ranges under the condition that a plurality of target depth of field ranges are selected to obtain target images;
the apparatus is further configured to:
fusing the depth images in different depth ranges to obtain a preview image; displaying the preview image;
adding a depth-of-field range label to an area corresponding to each target area in the preview image, wherein the depth-of-field range label is used for reflecting the depth-of-field range of the depth-of-field image corresponding to the target area;
the selecting at least one target depth of field range from the plurality of depth of field ranges comprises:
and according to the selection operation of at least one depth of field range label in the preview image, taking the depth of field range corresponding to the selected depth of field range label as the target depth of field range.
7. The apparatus according to claim 6, wherein the fusion of the depth images of different depth ranges to obtain the preview image is implemented by:
the second shooting module is used for focusing and shooting the object corresponding to each depth of field range according to the same exposure parameters and camera postures to obtain a depth of field image corresponding to each depth of field range;
the second determining module is used for determining a target area in each depth image, wherein the target area is an area with definition greater than or equal to a preset threshold value;
and the second fusion module is used for carrying out image fusion on the depth-of-field image with the target area to obtain a preview image.
8. An electronic device, comprising a processor, a memory and a program or instructions stored on the memory and executable on the processor, which program or instructions, when executed by the processor, implement the steps of the image processing method according to any one of claims 1 to 5.
CN202011355536.7A 2020-11-26 2020-11-26 Image processing method and device and electronic equipment Active CN112532881B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011355536.7A CN112532881B (en) 2020-11-26 2020-11-26 Image processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011355536.7A CN112532881B (en) 2020-11-26 2020-11-26 Image processing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN112532881A CN112532881A (en) 2021-03-19
CN112532881B (en) 2022-07-05

Family

Family ID: 74994589

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011355536.7A Active CN112532881B (en) 2020-11-26 2020-11-26 Image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112532881B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113572961A (en) * 2021-07-23 2021-10-29 维沃移动通信(杭州)有限公司 Shooting processing method and electronic equipment
CN113824877B (en) * 2021-08-19 2023-04-28 惠州Tcl云创科技有限公司 Panoramic deep image synthesis method, storage medium and smart phone
CN115567783B (en) * 2022-08-29 2023-10-24 荣耀终端有限公司 Image processing method
CN116132791A (en) * 2023-03-10 2023-05-16 创视微电子(成都)有限公司 Method and device for acquiring multi-field-depth clear images of multiple moving objects

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103152521A (en) * 2013-01-30 2013-06-12 广东欧珀移动通信有限公司 Effect of depth of field achieving method for mobile terminal and mobile terminal
CN103903234A (en) * 2014-03-12 2014-07-02 南京第五十五所技术开发有限公司 Real-time image defogging method based on image field depth
CN106791119A (en) * 2016-12-27 2017-05-31 努比亚技术有限公司 A kind of photo processing method, device and terminal
FR3050597A1 (en) * 2016-04-26 2017-10-27 Stereolabs METHOD FOR ADJUSTING A STEREOSCOPIC VIEWING APPARATUS
CN107613199A (en) * 2016-06-02 2018-01-19 广东欧珀移动通信有限公司 Blur photograph generation method, device and mobile terminal
CN108234978A (en) * 2017-12-12 2018-06-29 维沃移动通信有限公司 A kind of image processing method and mobile terminal
CN111885307A (en) * 2020-07-30 2020-11-03 努比亚技术有限公司 Depth-of-field shooting method and device and computer readable storage medium

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100227679B1 (en) * 1994-06-15 1999-11-01 유무성 Apparatus and method for continuous photographic of object depth step in a camera
CN1469187A (en) * 2002-07-15 2004-01-21 英保达股份有限公司 Image synthesis and synthesizer for digital camera
CN103561205B (en) * 2013-11-15 2015-07-08 努比亚技术有限公司 Shooting method and shooting device
CN103973978B (en) * 2014-04-17 2018-06-26 华为技术有限公司 It is a kind of to realize the method focused again and electronic equipment
CN104243828B (en) * 2014-09-24 2019-01-11 宇龙计算机通信科技(深圳)有限公司 A kind of method, apparatus and terminal shooting photo
CN105187722B (en) * 2015-09-15 2018-12-21 努比亚技术有限公司 Depth of field adjusting method, device and terminal
CN106550184B (en) * 2015-09-18 2020-04-03 中兴通讯股份有限公司 Photo processing method and device
CN105933532A (en) * 2016-06-06 2016-09-07 广东欧珀移动通信有限公司 Image processing method and device, and mobile terminal
CN106973227A (en) * 2017-03-31 2017-07-21 努比亚技术有限公司 Intelligent photographing method and device based on dual camera
CN107483821B (en) * 2017-08-25 2020-08-14 维沃移动通信有限公司 Image processing method and mobile terminal
CN107493432B (en) * 2017-08-31 2020-01-10 Oppo广东移动通信有限公司 Image processing method, image processing device, mobile terminal and computer readable storage medium
CN107888833A (en) * 2017-11-28 2018-04-06 维沃移动通信有限公司 A kind of image capturing method and mobile terminal
CN108259770B (en) * 2018-03-30 2020-06-02 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN111866370A (en) * 2020-05-28 2020-10-30 北京迈格威科技有限公司 Method, device, equipment, medium, camera array and assembly for synthesizing panoramic deep image

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103152521A (en) * 2013-01-30 2013-06-12 广东欧珀移动通信有限公司 Effect of depth of field achieving method for mobile terminal and mobile terminal
CN103903234A (en) * 2014-03-12 2014-07-02 南京第五十五所技术开发有限公司 Real-time image defogging method based on image field depth
FR3050597A1 (en) * 2016-04-26 2017-10-27 Stereolabs METHOD FOR ADJUSTING A STEREOSCOPIC VIEWING APPARATUS
CN107613199A (en) * 2016-06-02 2018-01-19 广东欧珀移动通信有限公司 Blur photograph generation method, device and mobile terminal
CN106791119A (en) * 2016-12-27 2017-05-31 努比亚技术有限公司 A kind of photo processing method, device and terminal
CN108234978A (en) * 2017-12-12 2018-06-29 维沃移动通信有限公司 A kind of image processing method and mobile terminal
CN111885307A (en) * 2020-07-30 2020-11-03 努比亚技术有限公司 Depth-of-field shooting method and device and computer readable storage medium

Also Published As

Publication number Publication date
CN112532881A (en) 2021-03-19

Similar Documents

Publication Publication Date Title
CN112532881B (en) Image processing method and device and electronic equipment
CN112135046B (en) Video shooting method, video shooting device and electronic equipment
CN112492212B (en) Photographing method and device, electronic equipment and storage medium
WO2022161260A1 (en) Focusing method and apparatus, electronic device, and medium
CN112637500B (en) Image processing method and device
CN112291473B (en) Focusing method and device and electronic equipment
CN113794829B (en) Shooting method and device and electronic equipment
CN112637515B (en) Shooting method and device and electronic equipment
CN112333386A (en) Shooting method and device and electronic equipment
CN111787230A (en) Image display method and device and electronic equipment
CN112839166A (en) Shooting method and device and electronic equipment
CN111050081A (en) Shooting method and electronic equipment
CN113794831B (en) Video shooting method, device, electronic equipment and medium
CN112653841B (en) Shooting method and device and electronic equipment
CN111654623B (en) Photographing method and device and electronic equipment
CN112672057B (en) Shooting method and device
CN112672058B (en) Shooting method and device
CN114025100A (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN112383708B (en) Shooting method and device, electronic equipment and readable storage medium
CN114125226A (en) Image shooting method and device, electronic equipment and readable storage medium
CN113286085A (en) Display control method and device and electronic equipment
CN112788239A (en) Shooting method and device and electronic equipment
CN112738399A (en) Image processing method and device and electronic equipment
CN113489901B (en) Shooting method and device thereof
CN114827471A (en) Shooting method, display method, shooting device and display device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant