CN107592466B - Photographing method and mobile terminal - Google Patents

Publication number
CN107592466B
Authority
CN
China
Prior art keywords
area
focusing
image
areas
mobile terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710952964.XA
Other languages
Chinese (zh)
Other versions
CN107592466A (en)
Inventor
金鑫
付琳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201710952964.XA priority Critical patent/CN107592466B/en
Publication of CN107592466A publication Critical patent/CN107592466A/en
Application granted granted Critical
Publication of CN107592466B publication Critical patent/CN107592466B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention provides a photographing method and a mobile terminal. The photographing method comprises the following steps: acquiring a preview image captured by a camera of the mobile terminal; determining at least two focusing areas in the preview image according to a selection operation of a user; performing image sharpening on the at least two focusing areas, and blurring the areas other than the at least two focusing areas in the preview image, to generate a target image; and outputting a photo according to the target image. The scheme of the invention thus solves the problem that photographing methods in the prior art can focus on only one point.

Description

Photographing method and mobile terminal
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a photographing method and a mobile terminal.
Background
At present, more and more mobile terminals carry dual cameras. The advantage of a dual-camera system is that the distance between each object and the cameras can be calculated using the similar-triangle principle, and a depth of field information map corresponding to the picture to be shot can be computed, so that the large-aperture bokeh effect obtained when a single-lens reflex camera is fitted with a large-aperture lens can be simulated.
However, the existing dual-camera focusing method has the disadvantage that focusing can be performed only on a single point: essentially only one place in the picture is guaranteed to be sharp, and other places are blurred.
Disclosure of Invention
The embodiments of the invention provide a photographing method and a mobile terminal, aiming to solve the problem that, in photographing methods in the prior art, focusing can be performed only on the basis of one point.
In a first aspect, an embodiment of the present invention provides a photographing method, including:
acquiring a preview image captured by a camera of the mobile terminal;
determining at least two focusing areas in the preview image according to selection operation of a user;
performing image sharpening on the at least two focusing areas, and blurring the area except the at least two focusing areas in the preview image to generate a target image;
and outputting a photo according to the target image.
In a second aspect, an embodiment of the present invention further provides a mobile terminal, including:
the preview image acquisition module is used for acquiring a preview image captured by a camera of the mobile terminal;
the focusing area determining module is used for determining at least two focusing areas in the preview image according to the selection operation of a user;
the image processing module is used for carrying out image sharpening on the at least two focusing areas and carrying out blurring processing on areas except the at least two focusing areas in the preview image to generate a target image;
and the photo output module is used for outputting a photo according to the target image.
In a third aspect, an embodiment of the present invention further provides a mobile terminal, including a processor, a memory, and a computer program stored on the memory and operable on the processor, where the computer program, when executed by the processor, implements the steps of the photographing method described above.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the photographing method described above.
In the embodiment of the invention, at least two focusing areas are determined, according to a selection operation of the user, in the preview image captured by the camera of the mobile terminal, so that a target image is obtained in which the focusing areas are sharp and the areas outside them are blurred, and a photo is output according to the target image. The embodiment of the invention can therefore focus on at least two focusing areas and obtain photos in which at least two areas are sharp, thereby realizing focused shooting of multiple areas.
Drawings
FIG. 1 is a flow chart of a photographing method according to an embodiment of the present invention;
FIG. 2 is a flow chart of another photographing method according to an embodiment of the present invention;
FIG. 3 is a flow chart of another photographing method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating the principle of calculating depth of field information in an embodiment of the present invention;
FIG. 5 is a display diagram illustrating an image preview area of a mobile terminal divided into a plurality of sub-areas according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating a mapping relationship between pixel coordinates of a preview image and a generated photograph in an embodiment of the present invention;
fig. 7 is a block diagram of a mobile terminal according to an embodiment of the present invention;
fig. 8 is a second schematic block diagram of a mobile terminal according to an embodiment of the present invention;
fig. 9 is a schematic diagram illustrating a hardware structure of a mobile terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
An embodiment of the present invention provides a photographing method, as shown in fig. 1, the method including:
step 101: the method includes acquiring a preview image captured by a camera of the mobile terminal.
The mobile terminal applied in the embodiment of the invention can be a mobile phone, a tablet computer and the like provided with a camera. In addition, the preview image is an image in a shooting scene captured by a camera of the mobile terminal.
Step 102: and determining at least two focusing areas in the preview image according to the selection operation of the user.
The selection operation is a selection operation performed by a user in an image preview area of the mobile terminal, for example, a click operation.
In addition, according to the embodiment of the invention, photographed objects on a plurality of focal planes can be focused separately according to the selection operation of the user, ensuring that the photographed objects on those focal planes are in a sharp state while objects at other positions are in a blurred state, so that an image meeting the user's shooting intention is obtained.
Therefore, the embodiment of the present invention is not limited to focusing on a single point: it can focus on at least two points and thereby obtain at least two focusing areas. Compared with the prior art, the photographing method provided by the embodiment of the invention can thus better meet the user's photographing requirements.
Step 103: and performing image sharpening on the at least two focusing areas, and blurring the area except the at least two focusing areas in the preview image to generate a target image.
Since the focusing areas are areas that need to be displayed sharply, after the focusing areas in the preview image are determined in step 102, the images within them need to be sharpened. The area outside the focusing areas in the preview image belongs to content the user is not concerned with, so it can be blurred, yielding an image with a more distinct display hierarchy that meets the user's requirements.
Step 104: and outputting a photo according to the target image.
After the target image, in which the focusing areas are displayed sharply and the areas outside them are blurred, is obtained through step 103, a photo can be output according to the target image, so as to obtain the photo the user desires.
In addition, preferably, before step 104, the method further includes: displaying the target image in an image preview area of the mobile terminal. That is, the target image acquired in step 103 may also be displayed on the photographing preview interface of the mobile terminal, so that the photographing effect is shown to the user in preview form and the user can further judge whether it meets his or her requirements.
As can be seen from the above, according to the embodiment of the present invention, at least two focusing areas can be determined in the preview image captured by the camera of the mobile terminal according to the selection operation of the user, so as to obtain a target image in which the focusing areas are sharp and the areas outside them are blurred, and further output a photo according to the target image. The embodiment of the invention can therefore focus on at least two focusing areas and obtain photos in which at least two areas are sharp, thereby realizing focused shooting of multiple areas.
An embodiment of the present invention further provides a photographing method, as shown in fig. 2, the method including:
step 201: the method includes acquiring a preview image captured by a camera of the mobile terminal.
The mobile terminal applied in the embodiment of the invention is provided with at least two cameras, and may be a mobile phone, a tablet computer, or the like. In addition, the preview image is an image within the overlapping field of view of the at least two cameras on the mobile terminal.
Step 202: determining depth information for each pixel in the preview image.
Step 203: and determining a depth information map according to the depth information of each pixel.
Because the mobile terminal applied by the embodiment of the invention is provided with at least two cameras, the distance between each object in the preview image and the camera can be calculated by utilizing the similar triangle principle, and further the depth of field information map of the preview image can be obtained.
Specifically, as shown in fig. 4, L and R respectively represent the center positions of the two cameras on the mobile terminal, LA and LB are the left and right boundaries of the field of view of camera L, RC and RD are the left and right boundaries of the field of view of camera R, b represents the distance between the center positions of the two cameras, f represents the focal length of the two cameras, P represents a point on the photographed object, p represents the imaging position of P on the focal plane of camera L, p' represents the imaging position of P on the focal plane of camera R, and Z represents the depth of field of P. According to the similar-triangle principle, the following can be derived:

pp' / b = (Z − f) / Z

As can be seen from fig. 4, pp' = b − XL − XR, so it can be obtained that:

(b − XL − XR) / b = (Z − f) / Z

Further derivation then gives:

Z = (b × f) / (XL + XR)

where XL and XR are both known quantities that can be measured, and b and f are both camera parameters.
As a result, the depth of field information of each pixel in the preview image can be obtained according to the principle described above.
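The derivation above can be sketched in code. This is a minimal illustration, assuming the per-pixel offsets XL and XR have already been measured by matching the two camera images; the function name and all numeric values are hypothetical:

```python
import numpy as np

def depth_from_offsets(x_l, x_r, b, f):
    """Compute depth Z = (b * f) / (XL + XR), which follows from the
    similar-triangle relation pp'/b = (Z - f)/Z with pp' = b - XL - XR."""
    disparity = np.asarray(x_l, dtype=float) + np.asarray(x_r, dtype=float)
    return b * f / disparity

# Illustrative values: baseline b and focal length f in millimetres,
# measured offsets x_l and x_r in millimetres.
z = depth_from_offsets(x_l=0.02, x_r=0.03, b=10.0, f=4.0)
```

Applied element-wise to the matched pixel offsets of the two images, the same formula yields the per-pixel depth used to build the depth of field information map in step 203.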
Step 204: and acquiring at least two click positions of the click operation.
The click operation is the click operation of the user in the image preview area of the mobile terminal. Namely, the user selects the target object needing to be clearly displayed according to the image in the photographing view range displayed in the image preview area of the mobile terminal, and clicks the corresponding position in the image preview area.
In addition, in the embodiment of the present invention, focusing may be performed with respect to at least two points, and thus the user may click at least two positions in the image preview area of the mobile terminal, so that the target object at the clicked position may be clearly displayed.
Step 205: and determining a focusing area in the preview image according to the depth information map and the at least two click positions.
Wherein each click position corresponds to a focusing area. Since the number of the click positions is at least two, at least two in-focus areas can be determined in the preview image in step 205.
Preferably, step 205 comprises: acquiring target depth of field information of the pixels at the at least two click positions according to the depth of field information map; determining a fluctuation range of depth of field information according to the target depth of field information; acquiring, according to the depth of field information map, a target area formed by the pixels in the preview image whose depth of field information is within the fluctuation range; and determining the target area as the focusing area. The lower limit value of the fluctuation range is the difference between the target depth of field information and a preset value, and the upper limit value is the sum of the target depth of field information and the preset value.
Since the depth of field information map of the preview image is already acquired in step 203, the target pixel value of the pixel at each click position can be determined according to the depth of field information map acquired in step 203, so that the fluctuation range of the depth of field information corresponding to each click position is obtained, and the focusing area corresponding to each click position is obtained.
For example, if the depth of field corresponding to the first click position (x1, y1) in the preview image is z1, the depth of field information of the plane where the target object at that click position is located fluctuates within the range [z1 − Δz, z1 + Δz], and the area formed by all pixels in the preview image whose depth of field information is within [z1 − Δz, z1 + Δz] is the focusing area corresponding to the first click position.
Similarly, if the depth of field corresponding to the second click position (x2, y2) in the preview image is z2, the depth of field information of the plane where the target object at that click position is located fluctuates within the range [z2 − Δz, z2 + Δz], and the area formed by all pixels whose depth of field information is within [z2 − Δz, z2 + Δz] is the focusing area corresponding to the second click position.
Therefore, the regions composed of pixels whose depth of field information falls in the ranges [z1 − Δz, z1 + Δz] and [z2 − Δz, z2 + Δz] in the preview image need to be processed to be sharp (i.e., not blurred).
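The per-click focusing-area rule above can be sketched as a boolean mask over the depth of field map from step 203; function and variable names, and the sample values, are illustrative rather than taken from the patent:

```python
import numpy as np

def focusing_mask(depth_map, click_xy, delta_z):
    """Return a boolean mask of the pixels whose depth of field lies in
    [z_click - delta_z, z_click + delta_z], i.e. the focusing area
    corresponding to one click position."""
    x, y = click_xy
    z_click = depth_map[y, x]
    return np.abs(depth_map - z_click) <= delta_z

# Toy 3x3 depth map; the click at (0, 0) has depth 1.0, so all pixels
# within +/- 0.25 of that depth join its focusing area.
depth = np.array([[1.00, 1.10, 3.00],
                  [1.05, 2.90, 3.10],
                  [4.00, 4.10, 1.02]])
mask = focusing_mask(depth, click_xy=(0, 0), delta_z=0.25)
```

Running this once per click position and OR-ing the masks gives the set of pixels that must stay sharp.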
According to the embodiment of the invention, the area needing focusing is selected by implementing the clicking operation in the image preview area of the mobile terminal by the user, and the focusing area corresponding to each clicking position is determined according to the depth of field information of the preview image. The method for determining the focusing area can flexibly determine the corresponding focusing area according to the actual click position of the user, thereby further meeting the photographing requirements of different users.
Step 206: and performing image sharpening on the at least two focusing areas, and blurring the area except the at least two focusing areas in the preview image to generate a target image.
Since the focusing areas are areas that need to be displayed sharply, after the focusing areas in the preview image are determined through steps 202 to 205, the images within them need to be sharpened. The area outside the focusing areas in the preview image belongs to content the user is not concerned with, so it can be blurred, yielding an image with a more distinct display hierarchy that meets the user's requirements.
In addition, preferably, the blurring processing of the region other than the in-focus region in the preview image includes: and performing different levels of blurring processing on the areas except the at least two focusing areas in the preview image according to the depth of field information map and preset blurring levels corresponding to different depth of field ranges.
For example, suppose the depth of field range [z3, z4] is preset to correspond to a first blurring level, [z5, z6] to a second blurring level, and [z7, z8] to a third blurring level, where the degree of blurring increases from the first level to the third. Then the region of the preview image composed of pixels with depth of field within [z3, z4] is blurred at the first level, the region with depth of field within [z5, z6] is blurred at the second level, and the region with depth of field within [z7, z8] is blurred at the third level.
Therefore, the blurring processing of different levels is performed according to the depth of field information, and the target images with different blurring degrees can be obtained, so that the target images have more layering, and the photographing experience of the user is further improved.
In addition, for the specific method of blurring, algorithms such as gaussian blur may be used.
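The tiered blurring described above might be sketched as follows. A naive box blur stands in for the Gaussian blur the text mentions, and the tier boundaries, radii, and sample arrays are illustrative assumptions:

```python
import numpy as np

def box_blur(img, radius):
    """Naive box blur (a stand-in for Gaussian blur) over a 2D array."""
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            out[y, x] = img[y0:y1, x0:x1].mean()
    return out

def tiered_blur(image, depth_map, focus_mask, tiers):
    """Blur out-of-focus pixels with a radius chosen by depth range.
    `tiers` is a list of ((z_min, z_max), radius) pairs: the preset
    blurring levels, with a larger radius meaning heavier blur."""
    out = image.astype(float).copy()
    for (z_min, z_max), radius in tiers:
        blurred = box_blur(image.astype(float), radius)
        sel = (~focus_mask) & (depth_map >= z_min) & (depth_map < z_max)
        out[sel] = blurred[sel]  # only out-of-focus pixels in this tier
    return out

# Toy example: one tier covering depth [4, 6); only the centre pixel
# is in focus, so every other pixel gets the tier's blur.
img = np.arange(9, dtype=float).reshape(3, 3)
depth = np.full((3, 3), 5.0)
focus = np.zeros((3, 3), dtype=bool)
focus[1, 1] = True
result = tiered_blur(img, depth, focus, [((4.0, 6.0), 1)])
```

In a real pipeline the box blur would be replaced by a proper Gaussian filter, but the tier-selection logic is the same.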
Step 207: and outputting a photo according to the target image.
After the target image, in which the focusing areas are displayed sharply and the areas outside them are blurred, is obtained through step 206, a photo can be output according to the target image, so as to obtain the photo the user desires.
In addition, preferably, before step 207, the method further includes: displaying the target image in an image preview area of the mobile terminal. That is, the target image acquired in step 206 may also be displayed on the photographing preview interface of the mobile terminal, so that the photographing effect is shown to the user in preview form and the user can further judge whether it meets his or her requirements.
In addition, in the embodiment of the invention, the size of the preview image of the mobile terminal can be set to be smaller than that of the target image obtained by photographing, and a mapping relationship between the coordinates of the preview image and the coordinates of the target image is established. For example, for a 12-megapixel camera, the preview size may be 1280 × 720 (about 0.92 megapixels) while the actually captured size is 4032 × 3024 (about 12 megapixels), a difference of more than ten times. The main reason for this is preview performance: the preview is refreshed on the screen every frame, and the less data is refreshed, the higher the efficiency.
For example, as shown in fig. 6, the solid-line box represents the size of the preview image and the dashed-line box represents the size of the target image obtained by photographing. Assuming that the preview size is 1280 × 720 (about 0.92 megapixels), that the size of the actually captured target image is 4032 × 3024 (about 12 megapixels), and that point A on the preview image with coordinates (x, y) is mapped to point A' on the captured target image, the coordinates (x', y') of point A' can be obtained by the following mapping relationship:

x' = x × (4032 / 1280),  y' = y × (3024 / 720)
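A sketch of this mapping, using the example sizes from the text (the function name and default arguments are assumptions for illustration):

```python
def map_preview_to_photo(x, y, preview=(1280, 720), photo=(4032, 3024)):
    """Scale a preview-image coordinate (x, y) to the full-size photo:
    x' = x * (photo_width / preview_width),
    y' = y * (photo_height / preview_height)."""
    return x * photo[0] / preview[0], y * photo[1] / preview[1]

# The preview centre maps to the photo centre.
xp, yp = map_preview_to_photo(640, 360)
```

The same scaling is applied to click positions and focusing-area boundaries so that regions chosen on the preview line up with the full-resolution capture.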
as can be seen from the above, according to the embodiment of the present invention, a focusing area corresponding to each click position in a preview image captured by a camera of a mobile terminal can be determined according to a click operation of a user in an image preview area of the mobile terminal, so as to obtain a target image in which the focusing area is clear and an area other than the focusing area is blurred, and further output a photo according to the target image. Therefore, the embodiment of the invention can focus based on at least two focusing areas to obtain the clear photos of at least two areas, thereby realizing the focusing shooting of a plurality of areas.
An embodiment of the present invention further provides a photographing method, as shown in fig. 3, the method including:
step 301: the method includes acquiring a preview image captured by a camera of the mobile terminal.
The mobile terminal applied in the embodiment of the invention is provided with at least two cameras, and may be a mobile phone, a tablet computer, or the like. In addition, the preview image is an image within the overlapping field of view of the at least two cameras on the mobile terminal.
Step 302: at least two selected regions are determined according to the user-selected sub-regions.
In the embodiment of the present invention, the image preview area of the mobile terminal is divided into a plurality of sub-areas. In addition, adjacent sub-areas in the sub-area selected by the user constitute a selected area.
Specifically, as shown in fig. 5, the dashed squares represent the plurality of sub-areas into which the image preview area is divided. For example, if the user selects the first sub-area 501, the second sub-area 502, and the third sub-area 503 in fig. 5, then, since the first sub-area 501 is adjacent to the second sub-area 502, the first sub-area 501 and the second sub-area 502 together form one selected area, while the third sub-area 503, being adjacent to neither the first sub-area 501 nor the second sub-area 502, forms a selected area on its own.
In addition, when selecting sub-areas in the image preview area of the mobile terminal, the user may click any sub-area individually, in which case the clicked sub-area is the selected sub-area; or the user may slide within the image preview area, in which case the sub-areas the sliding track passes through are the selected sub-areas. The specific operation by which the user selects sub-areas in the image preview area is not limited to these.
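One plausible way to group adjacent selected sub-areas into selected areas is a flood fill over the grid of cells. This is an illustrative sketch, not the patent's implementation; the grid coordinates are hypothetical:

```python
def group_selected(subregions):
    """Group 4-adjacent selected grid cells into selected areas.
    `subregions` is a list of (row, col) cells the user selected."""
    remaining = set(subregions)
    areas = []
    while remaining:
        stack = [remaining.pop()]   # seed a new selected area
        area = set()
        while stack:
            r, c = stack.pop()
            area.add((r, c))
            for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if nb in remaining:  # absorb adjacent selected cells
                    remaining.remove(nb)
                    stack.append(nb)
        areas.append(area)
    return areas

# Mirrors the example in the text: two adjacent cells (like sub-areas
# 501 and 502) merge into one selected area; an isolated cell (like
# sub-area 503) forms its own.
areas = group_selected([(0, 0), (0, 1), (3, 3)])
```

Whether diagonal cells count as adjacent is a design choice the text does not specify; this sketch uses 4-adjacency.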
Step 303: determining depth information for each pixel in the selected region.
Because the mobile terminal applied by the embodiment of the invention is provided with at least two cameras, the depth of field information of each pixel in the selected area can be calculated by utilizing the similar triangle principle.
Specifically, as shown in fig. 4, L and R respectively represent the center positions of the two cameras on the mobile terminal, LA and LB are the left and right boundaries of the field of view of camera L, RC and RD are the left and right boundaries of the field of view of camera R, b represents the distance between the center positions of the two cameras, f represents the focal length of the two cameras, P is a point on the photographed object, p represents the imaging position of P on the focal plane of camera L, p' represents the imaging position of P on the focal plane of camera R, and Z represents the depth of field of P. According to the similar-triangle principle, the following can be derived:

pp' / b = (Z − f) / Z

As can be seen from fig. 4, pp' = b − XL − XR, so it can be obtained that:

(b − XL − XR) / b = (Z − f) / Z

Further derivation then gives:

Z = (b × f) / (XL + XR)

where XL and XR are both known quantities that can be measured, and b and f are both camera parameters.
It can be seen that the depth of field information of each pixel in the selected area can be obtained according to the above principle.
According to the embodiment of the invention, only the depth of field information of the selected area needs to be calculated, and the depth of field information of all areas in the preview image does not need to be calculated, so that the calculation area of the depth of field information image is greatly reduced, the calculation amount is reduced, and the processing speed is accelerated.
Step 304: and selecting a focusing area from the selected area according to the depth of field information of each pixel in the selected area.
Wherein each selected area corresponds to a focusing area. Since there are at least two selected regions, at least two in-focus regions can be determined in the preview image in step 304.
Preferably, step 304 includes: dividing the selected area into a main area and a background area according to the depth of field information of each pixel in the selected area; when the area ratio of the main body area to the selected area is larger than a preset threshold value, determining the main body area as the focusing area; and when the area ratio of the background area to the selected area is larger than a preset threshold value, determining the background area as the focusing area.
In one preview image, the depth of field ranges of the main body area and the background area are different, so the main body area and the background area can be divided according to the depth of field of each pixel in the selected area. In addition, the selected area is determined by the user's selection operation, and that operation is intended to select the focusing area that needs to be displayed sharply; therefore, whichever of the main body area and the background area occupies most of the selected area is the focusing area that needs to be displayed sharply.
Therefore, according to the embodiment of the invention, the required focusing area can be determined only by calculating the depth of field information of the selected area in the preview image, and the depth of field information of the whole preview image is not required to be calculated, so that the calculation area of the depth of field information image is greatly reduced, the calculation amount is reduced, and the processing speed is accelerated.
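This preferred step could be sketched as follows, using the median depth of the selected area as a simple subject/background split; the split rule, the 0.5 threshold, and all names are assumptions for illustration:

```python
import numpy as np

def pick_focus_region(depth_patch, threshold_ratio=0.5, split=None):
    """Split a selected area into a main body part (nearer pixels) and a
    background part by depth, and keep whichever part covers more than
    `threshold_ratio` of the area as the focusing area."""
    if split is None:
        split = float(np.median(depth_patch))  # simple depth split point
    main_body = depth_patch <= split
    if main_body.mean() > threshold_ratio:
        return main_body     # main body area is the focusing area
    return ~main_body        # otherwise the background area is

# Toy selected area: six near pixels (the subject) and three far ones,
# so the main body part wins and becomes the focusing area.
depth_patch = np.array([[1.0, 1.1, 1.2],
                        [1.0, 1.1, 9.0],
                        [1.2, 9.1, 9.2]])
focus_region = pick_focus_region(depth_patch)
```

A production implementation would likely cluster depths rather than split at the median, but the area-ratio decision is the part the text actually specifies.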
Step 305: and performing image sharpening on the at least two focusing areas, and blurring the area except the at least two focusing areas in the preview image to generate a target image.
Since the focusing areas are areas that need to be displayed sharply, after the focusing areas in the preview image are determined through steps 302 to 304, the images within them need to be sharpened. The area outside the focusing areas in the preview image belongs to content the user is not concerned with, so it can be blurred, yielding an image with a more distinct display hierarchy that meets the user's requirements.
Preferably, the blurring processing of the area other than the at least two focusing areas in the preview image includes: performing different levels of blurring processing on the areas outside the focusing area in the selected area according to the depth of field information of each pixel in the selected area and preset blurring levels corresponding to different depth of field ranges; determining a target virtualization level corresponding to the maximum depth of field information of the selected area according to preset virtualization levels corresponding to different depth of field ranges; and blurring the area except the selected area in the preview image according to the target blurring level.
In the embodiment of the invention, the area outside the focusing areas in the preview image comprises two parts: the first partial area is the area outside the focusing areas within the selected areas, and the second partial area is the non-selected area (i.e., the area formed by the sub-areas not selected by the user).
In the embodiment of the present invention, the depth information of the first partial region is already calculated, so that different levels of blurring processing can be performed on the first partial region according to preset blurring levels corresponding to different depth ranges, so that the blurring degree of the image of the partial region is more hierarchical.
In addition, in the embodiment of the present invention, the depth of field information of the second partial area is not calculated. Since the second partial area was not selected by the user, its image content is not of concern to the user, so the second partial area can be blurred according to the blurring level corresponding to the maximum depth of field information in the first partial area. A target image with a distinct blurring hierarchy is thus obtained without calculating the depth of field information of the second partial area.
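Choosing the blurring level for the non-selected region might look like this sketch, where the tier boundaries and levels are illustrative assumptions:

```python
def target_blur_level(selected_depths, tiers):
    """Pick the blur level for the non-selected region: the preset level
    whose depth range contains the maximum depth of field found in the
    selected areas. `tiers` is a list of ((z_min, z_max), level)."""
    z_max = max(selected_depths)
    for (z_lo, z_hi), level in tiers:
        if z_lo <= z_max < z_hi:
            return level
    return tiers[-1][1]  # fall back to the strongest preset level

# The deepest selected pixel (7.8) falls in the [6, 10) tier, so the
# whole non-selected region is blurred at level 3.
level = target_blur_level([1.2, 3.4, 7.8],
                          [((0, 3), 1), ((3, 6), 2), ((6, 10), 3)])
```

This keeps the non-selected region at least as blurred as the blurriest computed part of the selected areas, which matches the intent described in the text.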
In addition, for the specific method of blurring, algorithms such as gaussian blur may be used.
Step 306: and outputting a photo according to the target image.
After the target image, in which the focusing areas are displayed sharply and the areas outside them are blurred, is acquired in step 305, a photo can be output according to the target image, so as to obtain the photo the user desires.
In addition, preferably, before step 306, the method further includes: displaying the target image in an image preview area of the mobile terminal. That is, the target image acquired in step 305 may also be displayed on the photographing preview interface of the mobile terminal, so that the photographing effect is shown to the user in preview form and the user can further judge whether it meets his or her requirements.
In addition, in the embodiment of the invention, the size of the preview image of the mobile terminal can be set smaller than that of the target image obtained by photographing, with a mapping relationship established between the coordinates of the preview image and the coordinates of the photographed target image. For example, on a 12-megapixel camera, the preview size may be 1280 × 720 (about 0.92 megapixels) while the actual shot size is 4032 × 3024 (about 12 megapixels), a difference of more than a factor of 10. This is done mainly for preview performance: the preview is refreshed on the screen every frame, and the less data refreshed, the higher the efficiency.
For example, as shown in fig. 6, the solid-line box represents the size of the preview image and the dashed-line box represents the size of the target image obtained by photographing. Assuming the preview size is 1280 × 720 (about 0.92 megapixels) and the size of the target image obtained by actual photographing is 4032 × 3024 (about 12 megapixels), and that point A lies on the preview image at coordinates (x, y) and maps to point A' on the photographed target image, the coordinates (x', y') of point A' can be obtained through the following mapping relationship:
x' = x × (W'/W), y' = y × (H'/H), where (W, H) = (1280, 720) is the preview size and (W', H') = (4032, 3024) is the size of the photographed target image.
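A minimal sketch of this coordinate mapping, assuming simple per-axis linear scaling between the stated preview size (1280 × 720) and capture size (4032 × 3024); the function name and default arguments are illustrative, not from the patent:

```python
def map_preview_to_capture(x, y, preview=(1280, 720), capture=(4032, 3024)):
    # Scale each axis independently by the ratio of capture size to
    # preview size, mapping point A = (x, y) to A' = (x', y').
    sx = capture[0] / preview[0]   # 4032 / 1280 = 3.15
    sy = capture[1] / preview[1]   # 3024 / 720  = 4.2
    return x * sx, y * sy
```

With such a mapping, a focusing area selected on the low-resolution preview can be transferred onto the full-resolution capture before sharpening and blurring are applied.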
As can be seen from the above, according to the sub-areas selected by the user in the image preview area of the mobile terminal, the embodiment of the present invention can determine at least two focusing areas in the preview image captured by the camera of the mobile terminal, so as to obtain a target image in which the focusing areas are clear and the areas outside them are blurred, and further generate a photo according to the target image. Therefore, the embodiment of the invention can focus based on at least two focusing areas to obtain a photo in which at least two areas are clear, thereby realizing focused shooting of a plurality of areas.
An embodiment of the present invention provides a mobile terminal, as shown in fig. 7, the mobile terminal 700 includes:
a preview image obtaining module 701, configured to obtain a preview image captured by a camera of the mobile terminal;
a focusing area determining module 702, configured to determine at least two focusing areas in the preview image according to a selection operation of a user;
the image processing module 703 is configured to perform image sharpening on the at least two focusing areas, and perform blurring on an area in the preview image other than the at least two focusing areas to generate a target image;
a photo output module 705, configured to output a photo according to the target image.
Preferably, the selection operation is a click operation of a user in an image preview area of the mobile terminal; as shown in fig. 8, the focusing area determination module 702 includes:
a first depth information determining unit 7021 configured to determine depth information of each pixel in the preview image;
a depth information map determining unit 7022, configured to determine a depth information map according to the depth information of each pixel;
a click position obtaining unit 7023 configured to obtain at least two click positions of the click operation;
a first focus area determining unit 7024 is configured to determine a focus area in the preview image according to the depth information map and the at least two click positions.
Preferably, as shown in fig. 8, the first focus area determination unit 7024 includes:
a target depth of field acquiring subunit 70241, configured to acquire target depth of field information of the pixels at the at least two click positions according to the depth of field information map;
a fluctuation range determining subunit 70242, configured to determine a fluctuation range of the depth information according to the target depth information;
an area acquiring subunit 70243, configured to acquire, according to the depth information map, a target area formed by pixels in the preview image, where the depth information is within the fluctuation range of the depth information;
a first focus area determination subunit 70244 configured to determine the target area as the focus area;
the lower limit value of the fluctuation range of the depth information is the difference between the target depth information and a preset value, and the upper limit value is the sum of the target depth information and the preset value.
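The fluctuation-range selection performed by units 70241-70244 can be sketched as follows; the depth-map layout, the click-coordinate convention, and the preset value `delta` are assumptions made for the example:

```python
import numpy as np

def focus_mask_from_click(depth_map, click_xy, delta=0.5):
    # delta plays the role of the "preset value"; 0.5 is an assumed example.
    x, y = click_xy
    d = depth_map[y, x]                    # target depth at the clicked pixel
    lower, upper = d - delta, d + delta    # fluctuation range of the depth info
    # The focusing area is the set of pixels whose depth lies in the range.
    return (depth_map >= lower) & (depth_map <= upper)
```

Repeating this for each of the at least two click positions yields one focusing mask per click, and their union marks everything to be kept sharp.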
Preferably, as shown in fig. 8, the image processing module 703 includes:
a first blurring unit 7031, configured to perform different levels of blurring processing on the areas other than the at least two focusing areas in the preview image, based on the depth of field information map and the preset blurring levels corresponding to different depth of field ranges.
Preferably, the image preview area of the mobile terminal is divided into a plurality of sub-areas, and the selection operation is a selection operation of a user on the sub-areas; as shown in fig. 8, the focusing area determination module 702 includes:
a selected region determining unit 7025, configured to determine at least two selected regions according to the sub-region selected by the user;
a second depth information determining unit 7026, configured to determine depth information of each pixel in the selected region;
a second focusing area determining unit 7027, configured to select a focusing area from the selected area according to depth information of each pixel in the selected area;
wherein, adjacent sub-areas in the sub-areas selected by the user form a selected area.
Preferably, as shown in fig. 8, the second focus area determination unit 7027 includes:
a region dividing subunit 70271, configured to divide the selected region into a main region and a background region according to depth information of each pixel in the selected region;
a second focusing region determining subunit 70272, configured to determine the main region as the focusing region when the ratio of the area of the main region to that of the selected region is greater than a preset threshold;
a third focusing area determining subunit 70273, configured to determine the background area as the focusing area when the area ratio of the background area to the selected area is greater than a preset threshold.
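The main/background decision made by subunits 70271-70273 can be sketched as follows; the depth threshold used to split the region and the area-ratio threshold are assumed example parameters, not values from the patent:

```python
def choose_focus_region(depths, split_depth=2.0, ratio_threshold=0.5):
    """depths: depth values of the pixels in one selected region.
    Returns 'main', 'background', or None if neither part dominates."""
    n = len(depths)
    # Nearer pixels (depth below the split) are taken as the main body.
    main = sum(1 for d in depths if d < split_depth)
    if main / n > ratio_threshold:
        return "main"
    if (n - main) / n > ratio_threshold:
        return "background"
    return None
```

Whichever part occupies more than the preset fraction of the selected region becomes the focusing area; everything else in the region is later blurred by depth level.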
Preferably, as shown in fig. 8, the image processing module 703 includes:
a second blurring unit 7032, configured to perform blurring processing of different levels on an area outside the focusing area in the selected area according to the depth of field information of each pixel in the selected area and according to preset blurring levels corresponding to different depth of field ranges;
a blurring level determining unit 7033, configured to determine, according to preset blurring levels corresponding to different depth of field ranges, a target blurring level corresponding to the maximum depth of field information of the selected area;
a third blurring unit 7034, configured to perform blurring processing on an area other than the selected area in the preview image according to the target blurring level.
Preferably, as shown in fig. 8, the mobile terminal 700 further includes:
an image preview module 704, configured to display the target image in an image preview area of the mobile terminal.
The mobile terminal provided in the embodiment of the present invention can implement each process implemented by the mobile terminal in the method embodiments of fig. 1 to fig. 3, and is not described herein again to avoid repetition.
In addition, according to the embodiment of the invention, at least two focusing areas are determined in the preview image captured by the camera of the mobile terminal according to the selection operation of the user, so as to obtain a target image in which the focusing areas are clear and the areas outside them are blurred, and a photo is output according to the target image. Therefore, the embodiment of the invention can focus based on at least two focusing areas to obtain a photo in which at least two areas are clear, thereby realizing focused shooting of a plurality of areas.
An embodiment of the present invention further provides a mobile terminal, as shown in fig. 9, where the mobile terminal 900 includes, but is not limited to: a radio frequency unit 901, a network module 902, an audio output unit 903, an input unit 904, a sensor 905, a display unit 906, a user input unit 907, an interface unit 908, a memory 909, a processor 910, and a power supply 911. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 9 is not intended to be limiting of mobile terminals, and that a mobile terminal may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 910 is configured to obtain a preview image captured by a camera of the mobile terminal, determine at least two focusing areas in the preview image according to a selection operation of a user, perform image sharpening on the at least two focusing areas and blurring processing on the areas other than the at least two focusing areas in the preview image to obtain a target image, and output a photo according to the target image.
Therefore, according to the mobile terminal 900 of the embodiment of the present invention, at least two focusing areas are determined in the preview image captured by the camera of the mobile terminal according to the selection operation of the user, so as to obtain a target image in which the focusing areas are clear and the areas outside them are blurred, and a photo is then output according to the target image. Thus, the embodiment of the invention can focus based on at least two focusing areas to obtain a photo in which at least two areas are clear, thereby realizing focused shooting of a plurality of areas.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 901 may be used for receiving and sending signals during message transmission and reception or during a call. Specifically, downlink data received from a base station is passed to the processor 910 for processing, and uplink data is transmitted to the base station. Generally, the radio frequency unit 901 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 901 can also communicate with a network and other devices through a wireless communication system.
The mobile terminal provides the user with wireless broadband internet access via the network module 902, such as helping the user send and receive e-mails, browse web pages, and access streaming media.
The audio output unit 903 may convert audio data received by the radio frequency unit 901 or the network module 902 or stored in the memory 909 into an audio signal and output as sound. Also, the audio output unit 903 may also provide audio output related to a specific function performed by the mobile terminal 900 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 903 includes a speaker, a buzzer, a receiver, and the like.
The input unit 904 is used to receive audio or video signals. The input unit 904 may include a Graphics Processing Unit (GPU) 9041 and a microphone 9042; the graphics processor 9041 processes image data of still pictures or video obtained by an image capturing device (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 906. The image frames processed by the graphics processor 9041 may be stored in the memory 909 (or other storage medium) or transmitted via the radio frequency unit 901 or the network module 902. The microphone 9042 can receive sounds and process them into audio data. In the phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 901.
The mobile terminal 900 also includes at least one sensor 905, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 9061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 9061 and/or backlight when the mobile terminal 900 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the mobile terminal (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 905 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not described in detail herein.
The display unit 906 is used to display information input by the user or information provided to the user. The Display unit 906 may include a Display panel 9061, and the Display panel 9061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 907 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 907 includes a touch panel 9071 and other input devices 9072. The touch panel 9071, also referred to as a touch screen, can collect touch operations by a user on or near it (e.g., operations performed on or near the touch panel 9071 using a finger, a stylus, or any other suitable object or accessory). The touch panel 9071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user and the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 910, and receives and executes commands from the processor 910. In addition, the touch panel 9071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 9071, the user input unit 907 may include other input devices 9072. Specifically, the other input devices 9072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and a switch key), a trackball, a mouse, and a joystick, which are not described herein again.
Further, the touch panel 9071 may be overlaid on the display panel 9061. When the touch panel 9071 detects a touch operation on or near it, the touch operation is transmitted to the processor 910 to determine the type of the touch event, and the processor 910 then provides a corresponding visual output on the display panel 9061 according to the type of the touch event. Although in fig. 9 the touch panel 9071 and the display panel 9061 are two independent components implementing the input and output functions of the mobile terminal, in some embodiments the touch panel 9071 and the display panel 9061 may be integrated to implement the input and output functions of the mobile terminal, which is not limited herein.
The interface unit 908 is an interface through which an external device is connected to the mobile terminal 900. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 908 may be used to receive input from external devices (e.g., data information, power, etc.) and transmit the received input to one or more elements within the mobile terminal 900 or may be used to transmit data between the mobile terminal 900 and external devices.
The memory 909 may be used to store software programs as well as various data. The memory 909 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the mobile phone, and the like. Further, the memory 909 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The processor 910 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by running or executing software programs and/or modules stored in the memory 909 and calling data stored in the memory 909, thereby performing overall monitoring of the mobile terminal. Processor 910 may include one or more processing units; preferably, the processor 910 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It is to be appreciated that the modem processor described above may not be integrated into processor 910.
The mobile terminal 900 may also include a power supply 911 (e.g., a battery) for powering the various components, and preferably, the power supply 911 is logically connected to the processor 910 through a power management system that provides power management functions to manage charging, discharging, and power consumption.
In addition, the mobile terminal 900 includes some functional modules that are not shown, and thus will not be described in detail herein.
Preferably, an embodiment of the present invention further provides a mobile terminal, including a processor 910, a memory 909, and a computer program stored in the memory 909 and capable of running on the processor 910, where the computer program is executed by the processor 910 to implement each process of the above-mentioned photographing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the processes of the above-mentioned photographing method embodiment, and can achieve the same technical effects, and in order to avoid repetition, the descriptions thereof are omitted here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (8)

1. A method of taking a picture, comprising:
acquiring a preview image captured by a camera of the mobile terminal;
determining at least two focusing areas in the preview image according to selection operation of a user;
performing image sharpening on the at least two focusing areas, and blurring the area except the at least two focusing areas in the preview image to generate a target image;
outputting a photo according to the target image;
the image preview area of the mobile terminal is divided into a plurality of sub-areas, the selection operation is the selection operation of a user on the sub-areas, and the step of determining at least two focusing areas in the preview image according to the selection operation of the user comprises the following steps:
determining at least two selected areas according to the sub-areas selected by the user;
determining depth information of each pixel in the selected area;
selecting a focusing area from the selected area according to the depth of field information of each pixel in the selected area;
wherein, adjacent subregions in the subregions selected by the user form a selected region;
the step of selecting a focusing area from the selected area according to the depth information of each pixel in the selected area comprises:
dividing the selected area into a main area and a background area according to the depth of field information of each pixel in the selected area;
when the area ratio of the main body area to the selected area is larger than a preset threshold value, determining the main body area as the focusing area;
and when the area ratio of the background area to the selected area is larger than a preset threshold value, determining the background area as the focusing area.
2. The method of claim 1, wherein blurring the area of the preview image other than the at least two in-focus areas comprises:
performing different levels of blurring processing on the areas outside the focusing area in the selected area according to the depth of field information of each pixel in the selected area and preset blurring levels corresponding to different depth of field ranges;
determining a target virtualization level corresponding to the maximum depth of field information of the selected area according to preset virtualization levels corresponding to different depth of field ranges;
and blurring the area except the selected area in the preview image according to the target blurring level.
3. The method of claim 1, wherein prior to the step of outputting a photograph in accordance with the target image, the method further comprises:
and displaying the target image in an image preview area of the mobile terminal.
4. A mobile terminal, comprising:
the preview image acquisition module is used for acquiring a preview image captured by a camera of the mobile terminal;
the focusing area determining module is used for determining at least two focusing areas in the preview image according to the selection operation of a user;
the image processing module is used for carrying out image sharpening on the at least two focusing areas and carrying out blurring processing on areas except the at least two focusing areas in the preview image to generate a target image;
the photo output module is used for outputting a photo according to the target image;
the image preview area of the mobile terminal is divided into a plurality of sub-areas, the selection operation is the selection operation of the sub-areas by a user, and the focusing area determination module comprises:
the selected area determining unit is used for determining at least two selected areas according to the sub-areas selected by the user;
a second depth information determination unit configured to determine depth information of each pixel in the selected region;
the second focusing area determining unit is used for selecting a focusing area from the selected area according to the depth information of each pixel in the selected area;
wherein, adjacent subregions in the subregions selected by the user form a selected region;
the second focus area determination unit includes:
the area dividing subunit is used for dividing the selected area into a main area and a background area according to the depth of field information of each pixel in the selected area;
a second focusing region determining subunit configured to determine the main region as the focusing region when a ratio of areas of the main region and the selected region is greater than a preset threshold;
and the third focusing area determining subunit is configured to determine the background area as the focusing area when the area ratio of the background area to the selected area is greater than a preset threshold.
5. The mobile terminal of claim 4, wherein the image processing module comprises:
the second blurring processing unit is used for blurring the area outside the focusing area in the selected area in different levels according to the preset blurring levels corresponding to different depth-of-field ranges according to the depth-of-field information of each pixel in the selected area;
the blurring level determining unit is used for determining a target blurring level corresponding to the maximum depth of field information of the selected area according to preset blurring levels corresponding to different depth of field ranges;
and the third blurring processing unit is used for blurring the area except the selected area in the preview image according to the target blurring level.
6. The mobile terminal of claim 4, wherein the mobile terminal further comprises:
and the image preview module is used for displaying the target image in an image preview area of the mobile terminal.
7. A mobile terminal, characterized in that it comprises a processor, a memory and a computer program stored on said memory and executable on said processor, said computer program, when executed by said processor, implementing the steps of the photographing method according to any one of claims 1 to 3.
8. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the photographing method according to any one of claims 1 to 3.
CN201710952964.XA 2017-10-13 2017-10-13 Photographing method and mobile terminal Active CN107592466B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710952964.XA CN107592466B (en) 2017-10-13 2017-10-13 Photographing method and mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710952964.XA CN107592466B (en) 2017-10-13 2017-10-13 Photographing method and mobile terminal

Publications (2)

Publication Number Publication Date
CN107592466A CN107592466A (en) 2018-01-16
CN107592466B true CN107592466B (en) 2020-04-24

Family

ID=61053356

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710952964.XA Active CN107592466B (en) 2017-10-13 2017-10-13 Photographing method and mobile terminal

Country Status (1)

Country Link
CN (1) CN107592466B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108243489B (en) * 2018-01-17 2020-09-11 维沃移动通信有限公司 Photographing control method and mobile terminal
CN108184070B (en) * 2018-03-23 2020-09-08 维沃移动通信有限公司 Shooting method and terminal
CN108989796A (en) 2018-08-09 2018-12-11 浙江大华技术股份有限公司 A kind of image capture device selection method and device
CN109068063B (en) * 2018-09-20 2021-01-15 维沃移动通信有限公司 Three-dimensional image data processing and displaying method and device and mobile terminal
CN112889265B (en) * 2018-11-02 2022-12-09 Oppo广东移动通信有限公司 Depth image processing method, depth image processing device and electronic device
CN109151329A (en) * 2018-11-22 2019-01-04 Oppo广东移动通信有限公司 Photographic method, device, terminal and computer readable storage medium
CN109859265B (en) * 2018-12-28 2024-04-19 维沃移动通信有限公司 Measurement method and mobile terminal
TWI693576B (en) * 2019-02-26 2020-05-11 緯創資通股份有限公司 Method and system for image blurring processing
CN110062157B (en) * 2019-04-04 2021-09-17 北京字节跳动网络技术有限公司 Method and device for rendering image, electronic equipment and computer readable storage medium
CN110958387B (en) * 2019-11-19 2021-06-29 维沃移动通信有限公司 Content updating method and electronic equipment
CN112184610B (en) * 2020-10-13 2023-11-28 深圳市锐尔觅移动通信有限公司 Image processing method and device, storage medium and electronic equipment
CN112351204A (en) * 2020-10-27 2021-02-09 歌尔智能科技有限公司 Photographing method, photographing device, mobile terminal and computer readable storage medium
CN115442527A (en) * 2022-08-31 2022-12-06 维沃移动通信有限公司 Shooting method, device and equipment

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9204034B2 (en) * 2012-12-27 2015-12-01 Canon Kabushiki Kaisha Image processing apparatus and image processing method

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6271990B2 (en) * 2013-01-31 2018-01-31 キヤノン株式会社 Image processing apparatus and image processing method
CN104363378B (en) * 2014-11-28 2018-01-16 广东欧珀移动通信有限公司 camera focusing method, device and terminal
US10284835B2 (en) * 2015-09-04 2019-05-07 Apple Inc. Photo-realistic shallow depth-of-field rendering from focal stacks
CN105227838B (en) * 2015-09-28 2018-07-06 广东欧珀移动通信有限公司 A kind of image processing method and mobile terminal
CN105933589B (en) * 2016-06-28 2019-05-28 Oppo广东移动通信有限公司 A kind of image processing method and terminal
CN106534619A (en) * 2016-11-29 2017-03-22 努比亚技术有限公司 Method and apparatus for adjusting focusing area, and terminal
CN107172346B (en) * 2017-04-28 2020-02-07 维沃移动通信有限公司 Virtualization method and mobile terminal
CN107038681B (en) * 2017-05-31 2020-01-10 Oppo广东移动通信有限公司 Image blurring method and device, computer readable storage medium and computer device

Also Published As

Publication number Publication date
CN107592466A (en) 2018-01-16

Similar Documents

Publication Publication Date Title
CN107592466B (en) Photographing method and mobile terminal
CN108513070B (en) Image processing method, mobile terminal and computer readable storage medium
CN111541845B (en) Image processing method and device and electronic equipment
CN109688322B (en) Method and device for generating high dynamic range image and mobile terminal
CN110784651B (en) Anti-shake method and electronic equipment
CN108989678B (en) Image processing method and mobile terminal
CN107948505B (en) Panoramic shooting method and mobile terminal
CN110913131B (en) Moon shooting method and electronic equipment
CN109905603B (en) Shooting processing method and mobile terminal
CN111246106B (en) Image processing method, electronic device, and computer-readable storage medium
CN110266957B (en) Image shooting method and mobile terminal
CN107730460B (en) Image processing method and mobile terminal
CN111064895B (en) Virtual shooting method and electronic equipment
CN111601032A (en) Shooting method and device and electronic equipment
CN110798621A (en) Image processing method and electronic equipment
CN111246102A (en) Shooting method, shooting device, electronic equipment and storage medium
CN108924422B (en) Panoramic photographing method and mobile terminal
CN109246351B (en) Composition method and terminal equipment
CN108174110B (en) Photographing method and flexible screen terminal
CN111182211B (en) Shooting method, image processing method and electronic equipment
CN110769156A (en) Picture display method and electronic equipment
CN110555815B (en) Image processing method and electronic equipment
CN109104573B (en) Method for determining focusing point and terminal equipment
CN111432122B (en) Image processing method and electronic equipment
CN110913133B (en) Shooting method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant