CN112866545B - Focusing control method and device, electronic equipment and computer readable storage medium


Info

Publication number
CN112866545B
CN112866545B (application number CN201911101407.2A)
Authority
CN
China
Prior art keywords
phase difference
sub
pixel point
target
pixel
Prior art date
Legal status
Active
Application number
CN201911101407.2A
Other languages
Chinese (zh)
Other versions
CN112866545A
Inventor
贾玉虎
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201911101407.2A
Publication of CN112866545A
Application granted
Publication of CN112866545B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/67 Focus control based on electronic image sensor signals
    • H04N23/675 Focus control based on electronic image sensor signals comprising setting of focusing regions

Abstract

The application relates to a focus control method and apparatus, an electronic device and a computer-readable storage medium. Subject detection is performed on an original image to obtain a target subject region, and a phase difference in a first direction and a phase difference in a second direction are calculated according to the original image of the target subject region, where a preset included angle is formed between the first direction and the second direction. A target phase difference is determined from the phase difference in the first direction and the phase difference in the second direction, and the target subject region is focused according to the target phase difference. Compared with the conventional approach, in which only the phase difference in one direction can be calculated, the phase differences in two directions clearly carry more phase information, and the target phase difference determined from them is more accurate; focusing the target subject region according to the target phase difference therefore greatly improves focusing accuracy.

Description

Focusing control method and device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a focus control method and apparatus, an electronic device, and a computer-readable storage medium.
Background
With the development of electronic device technology, more and more users shoot images through electronic devices. In order to ensure that a shot image is clear, a camera module of the electronic device generally needs to be focused, that is, a distance between a lens and an image sensor is adjusted so that a shot object is on a focal plane. The conventional focusing method includes Phase Detection Auto Focus (PDAF).
In conventional phase detection autofocus, phase detection pixel points are arranged in pairs among the pixel points of the image sensor. In each pair, one phase detection pixel point is shielded on its left side and the other on its right side, so that the imaging light beam directed at each pair is split into a left part and a right part. By comparing the images formed by the left and right parts of the imaging light beam, a phase difference can be obtained, where the phase difference refers to the difference in the imaging positions of imaging light incident from different directions; focusing can then be performed according to this phase difference.
However, focusing with phase detection pixel points arranged in the image sensor in this way is not accurate.
Disclosure of Invention
The embodiment of the application provides a focusing control method and device, electronic equipment and a computer readable storage medium, which can improve the focusing accuracy in the photographing process.
A focus control method, applied to an electronic device, comprises the following steps:
performing subject detection on an original image to obtain a target subject region;
calculating a phase difference in a first direction and a phase difference in a second direction according to the original image of the target subject region, wherein a preset included angle is formed between the first direction and the second direction;
determining a target phase difference from the phase difference in the first direction and the phase difference in the second direction, and focusing the target subject region according to the target phase difference.
A focus control apparatus comprises:
a subject detection module, configured to perform subject detection on an original image to obtain a target subject region;
a phase difference calculation module, configured to calculate a phase difference in a first direction and a phase difference in a second direction according to the original image of the target subject region, the first direction and the second direction forming a preset included angle;
and a phase difference focusing module, configured to determine a target phase difference from the phase difference in the first direction and the phase difference in the second direction, and to focus the target subject region according to the target phase difference.
An electronic device comprising a memory and a processor, the memory having stored therein a computer program which, when executed by the processor, causes the processor to carry out the steps of the above method.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method as above.
According to the focus control method and apparatus, the electronic device and the computer-readable storage medium, subject detection is performed on the original image to obtain a target subject region, and the phase difference in the first direction and the phase difference in the second direction are calculated according to the original image of the target subject region, the first direction and the second direction forming a preset included angle. A target phase difference is determined from the phase difference in the first direction and the phase difference in the second direction, and the target subject region is focused according to the target phase difference. First, the target subject region is extracted from the original image; second, the phase differences in two directions, the first direction and the second direction, are calculated specifically for the original image of the target subject region. Compared with the conventional approach, in which only the phase difference in one direction can be calculated, the phase differences in two directions clearly carry more phase information. Finally, the target phase difference is determined from the phase difference in the first direction and the phase difference in the second direction; because the target phase difference is more accurate, focusing the target subject region according to it greatly improves the accuracy of the focusing process.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the embodiments or the prior art descriptions will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a schematic diagram of phase detection autofocus;
fig. 2 is a schematic diagram of arranging phase detection pixels in pairs among pixels included in an image sensor;
FIG. 3 is a schematic diagram showing a partial structure of an image sensor according to an embodiment;
FIG. 4 is a schematic diagram of a pixel point in one embodiment;
FIG. 5 is a diagram showing an internal structure of an image sensor according to an embodiment;
FIG. 6 is a diagram illustrating an embodiment of a filter disposed on a pixel group;
FIG. 7 is a flow chart of a focus control method in one embodiment;
FIG. 8 is a flowchart of the method of calculating the phase difference in the first direction and the phase difference in the second direction in FIG. 7;
FIG. 9 is a flowchart illustrating the method for calculating the phase difference in the first direction and the phase difference in the second direction from the target luminance map in FIG. 8;
FIG. 10 is a diagram illustrating a group of pixels in one embodiment;
FIG. 11 is a diagram illustrating sub-luminance graphs in one embodiment;
FIG. 12 is a flowchart illustrating the method for focusing the target subject region according to the target phase difference in FIG. 7;
FIG. 13 is a diagram illustrating an image processing effect according to an embodiment;
FIG. 14 is a schematic structural diagram of a focus control apparatus according to an embodiment;
fig. 15 is a schematic internal structure diagram of an electronic device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clearly understood, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first camera may be referred to as a second camera, and similarly, a second camera may be referred to as a first camera, without departing from the scope of the present application. The first camera and the second camera are both cameras, but they are not the same camera.
Fig. 1 is a schematic diagram of the Phase Detection Auto Focus (PDAF) principle. As shown in fig. 1, M1 is the position of the image sensor when the imaging device is in the in-focus state, where the in-focus state refers to the state of successful focusing. When the image sensor is located at position M1, the imaging light rays g reflected by the object W toward the Lens in different directions converge on the image sensor; that is, the imaging light rays g reflected by the object W toward the Lens in different directions are imaged at the same position on the image sensor, and the image formed on the image sensor is clear.
M2 and M3 are positions where the image sensor may be located when the imaging device is not in the in-focus state. As shown in fig. 1, when the image sensor is located at the M2 position or the M3 position, the imaging light rays g reflected by the object W toward the Lens in different directions are imaged at different positions. Referring to fig. 1, when the image sensor is located at position M2, the imaging light rays g reflected by the object W toward the Lens in different directions are imaged at position A and position B respectively; when the image sensor is located at position M3, they are imaged at position C and position D respectively, and the image formed on the image sensor is not clear.
In the PDAF technology, differences in positions of images formed by imaging light rays entering the lens from different directions in the image sensor can be obtained, for example, as shown in fig. 1, a difference between a position a and a position B, or a difference between a position C and a position D can be obtained; after acquiring the difference of the positions of images formed by imaging light rays entering the lens from different directions in the image sensor, obtaining the out-of-focus distance according to the difference and the geometric relationship between the lens and the image sensor in the camera, wherein the out-of-focus distance refers to the distance between the current position of the image sensor and the position where the image sensor is supposed to be in the in-focus state; the imaging device can focus according to the obtained defocus distance.
From this it can be seen that the calculated PD value is 0 when the image is in focus; the larger the calculated value, the farther the lens is from the in-focus position, and the smaller the value, the closer it is. When PDAF is used, the PD value is calculated, the correspondence between the PD value and the defocus distance is obtained by calibration, the defocus distance is obtained from that correspondence, and the lens is then controlled to move to the focus point according to the defocus distance, thereby achieving focusing.
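As a rough illustration of the relationship just described, the following Python sketch (not taken from the patent; the linear calibration gain and the function names are illustrative assumptions) converts a calculated PD value into a defocus distance and a lens move:

```python
# Minimal sketch: map a phase difference (PD) value to a defocus distance and
# a lens move. A linear PD-to-defocus relation is assumed here; in practice the
# correspondence comes from per-module calibration, as described above.

def defocus_from_pd(pd_value, defocus_per_pd=12.5):
    """PD value 0 means in focus; a larger |PD| means farther from focus.
    defocus_per_pd is a hypothetical calibration gain (e.g. driver steps per PD unit)."""
    return pd_value * defocus_per_pd

def focus_once(current_lens_pos, pd_value):
    # Move the lens by the estimated defocus distance to reach the focus point.
    return current_lens_pos + defocus_from_pd(pd_value)
```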
In the related art, some phase detection pixel points may be provided in pairs among the pixel points included in the image sensor. As shown in fig. 2, a phase detection pixel point pair (hereinafter referred to as a pixel point pair) A, a pixel point pair B, and a pixel point pair C may be provided in the image sensor. In each pixel point pair, one phase detection pixel point is shielded on its left side (Left Shield) and the other phase detection pixel point is shielded on its right side (Right Shield).
For a phase detection pixel point shielded on the left side, only the right-hand part of the imaging light beam directed at it can form an image on its photosensitive part (the part that is not shielded); for a phase detection pixel point shielded on the right side, only the left-hand part of the imaging light beam directed at it can form an image on its photosensitive part. In this way the imaging light beam is divided into a left part and a right part, and the phase difference can be obtained by comparing the images formed by the two parts.
However, since the phase detection pixel points arranged in the image sensor are generally sparse, only the horizontal phase difference can be obtained from them. For a scene containing only horizontal texture, this phase difference cannot be calculated, and the computed PD value gives an incorrect result; for example, when the photographed scene is a horizontal line, two left and right images are obtained according to the PD characteristics, but a PD value cannot be calculated from them.
In order to solve the problem that the phase detection autofocus cannot calculate a PD value for some horizontal texture scenes to achieve focusing, an embodiment of the present application provides an imaging component, which may be configured to detect and output a phase difference value in a first direction and a phase difference value in a second direction, and may implement focusing by using the phase difference value in the second direction for horizontal texture scenes.
In one embodiment, the present application provides an imaging assembly. The imaging assembly includes an image sensor. The image sensor may be a Complementary Metal-Oxide-Semiconductor (CMOS) image sensor, a Charge-Coupled Device (CCD), a quantum thin film sensor, an organic sensor, or the like.
Fig. 3 is a schematic structural diagram of a part of an image sensor in one embodiment. The image sensor 300 includes a plurality of pixel point groups Z arranged in an array, each pixel point group Z includes a plurality of pixel points D arranged in an array, and each pixel point D corresponds to one photosensitive unit. Each pixel point D comprises a plurality of sub-pixel points d arranged in an array; that is, each photosensitive unit may be composed of a plurality of photosensitive elements arranged in an array. The photosensitive element is an element capable of converting an optical signal into an electrical signal. In one embodiment, the photosensitive element may be a photodiode. In the embodiment of the present application, each pixel point group Z includes 4 pixel points D arranged in a 2 × 2 array, and each pixel point may include 4 sub-pixel points d arranged in a 2 × 2 array. Each pixel point group thus forms a 2 × 2 PD arrangement that can directly receive optical signals, perform photoelectric conversion, and simultaneously output left-right and up-down signals. Each color channel may consist of 4 sub-pixel points.
As shown in fig. 4, taking a pixel point that includes sub-pixel point 1, sub-pixel point 2, sub-pixel point 3 and sub-pixel point 4 as an example, sub-pixel point 1 and sub-pixel point 2 may be combined, and sub-pixel point 3 and sub-pixel point 4 combined, to form a PD pixel pair in the up-down direction; this pair can detect horizontal edges and yields the phase difference value in the second direction, that is, the PD value in the vertical direction. Similarly, sub-pixel point 1 and sub-pixel point 3 may be combined, and sub-pixel point 2 and sub-pixel point 4 combined, to form a PD pixel pair in the left-right direction; this pair can detect vertical edges and yields the phase difference value in the first direction, that is, the PD value in the horizontal direction.
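The pairing described above can be illustrated with a short sketch; the array layout (sub-pixel points 1–4 stored row-major in a 2 × 2 block) and the function name are assumptions for illustration only:

```python
import numpy as np

def pd_pairs(sub):
    """sub: 2x2 array holding the four sub-pixel values [[1, 2], [3, 4]]."""
    top    = sub[0, 0] + sub[0, 1]   # sub-pixel point 1 + sub-pixel point 2
    bottom = sub[1, 0] + sub[1, 1]   # sub-pixel point 3 + sub-pixel point 4
    left   = sub[0, 0] + sub[1, 0]   # sub-pixel point 1 + sub-pixel point 3
    right  = sub[0, 1] + sub[1, 1]   # sub-pixel point 2 + sub-pixel point 4
    # (top, bottom): up-down PD pair  -> detects horizontal edges, vertical PD (second direction)
    # (left, right): left-right PD pair -> detects vertical edges, horizontal PD (first direction)
    return (top, bottom), (left, right)

print(pd_pairs(np.array([[10, 12], [11, 13]])))
```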
Fig. 5 is a schematic internal configuration diagram of an image forming apparatus in one embodiment. As shown in fig. 5, the imaging device includes a lens 50, a filter 52, and an image sensor 54. The lens 50, the filter 52 and the image sensor 54 are sequentially located on the incident light path, i.e., the lens 50 is disposed on the filter 52 and the filter 52 is disposed on the image sensor 54.
The filter 52 may include three types of red, green and blue, which only transmit the light with the wavelengths corresponding to the red, green and blue colors, respectively. A filter 52 is disposed on one pixel.
Image sensor 54 may be the image sensor of fig. 3.
The lens 50 is used to receive incident light and transmit it to the filter 52. The filter 52 filters the incident light, and the filtered light is then incident, on a per-pixel basis, on the photosensitive unit of the image sensor 54.
The light-sensing unit in the image sensor 54 converts light incident from the optical filter 52 into a charge signal by the photoelectric effect, and generates a pixel signal in accordance with the charge signal. The charge signal corresponds to the received light intensity.
Fig. 6 is a schematic diagram illustrating an embodiment of disposing an optical filter on a pixel group. The pixel point group Z comprises 4 pixel points D arranged in an array arrangement manner of two rows and two columns, wherein a color channel of the pixel point in the first row and the first column is green, that is, the optical filter arranged on the pixel point in the first row and the first column is a green optical filter; the color channel of the pixel points in the first row and the second column is red, that is, the optical filter arranged on the pixel points in the first row and the second column is a red optical filter; the color channel of the pixel points in the second row and the first column is blue, that is, the optical filter arranged on the pixel points in the second row and the first column is a blue optical filter; the color channel of the pixel points in the second row and the second column is green, that is, the optical filter arranged on the pixel points in the second row and the second column is a green optical filter.
FIG. 7 is a flowchart of a focusing method in one embodiment. The focus control method in this embodiment is described by taking the image sensor in fig. 5 as an example. As shown in fig. 7, the focus control method includes steps 720 to 760.
Step 720, performing subject detection on the original image to obtain a target subject region.
The original image refers to an RGB image obtained by shooting a shooting scene by a camera module of the electronic device, and the display range of the original image is consistent with the range of image information that can be captured by the camera module. The electronic device performs subject detection on the original image, and the trained deep learning neural network model can be used for performing subject detection on the original image. Of course, in this embodiment, other methods may also be used to perform subject detection to obtain the target subject region. This is not limited in this application.
The target subject region is a region including a target subject in the original image. The target subject region may be an irregular region obtained along the edge of the target subject or a rectangular region including the target subject. This is not a limitation of the present application.
Step 740, calculating a phase difference in a first direction and a phase difference in a second direction according to the original image of the target subject region, wherein a preset included angle is formed between the first direction and the second direction.
When subject detection on the original image yields a target subject region, the target subject region is extracted from the original image. Subject segmentation of the original image produces a target subject region that contains the target subject and a background region. For a target subject with a relatively regular outline, the background region contained in the segmented target subject region is relatively small; for example, for target subjects such as a person, a cat or a dog, the segmented region is mostly a rectangular region or a region cut along the edge of the target subject. For an irregular target subject, because its edge is difficult to follow, the segmented target subject region usually contains a background region; for example, for an irregular target subject such as a spider, the segmented region is often rectangular, and the background region contained in that rectangle is often large. When the target subject in the target subject region is focused, a large background region can affect the focusing accuracy to a certain extent. Image preprocessing may therefore be performed on the background region in the subject region to reduce its interference with the target subject, for example by erasing the background region or reducing its sharpness, so that the target subject in the target subject region is more prominent and clearer, and subsequent focusing on the target subject region can lock onto the target subject more accurately.
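A minimal sketch of this preprocessing step is given below, assuming a hypothetical binary `subject_mask` for the target subject within the region; blending background pixels toward their mean colour is just one crude way to "erase the background and reduce its sharpness":

```python
import numpy as np

def suppress_background(region_rgb, subject_mask, keep=0.2):
    """Flatten background pixels toward their mean colour, keep subject pixels unchanged.

    region_rgb:   HxWx3 array, the original-image crop of the target subject region
    subject_mask: HxW array, 1 for subject pixels and 0 for background pixels (assumed given)
    """
    region = region_rgb.astype(np.float32)
    mean_bg = region[subject_mask == 0].mean(axis=0)          # average background colour
    flattened = keep * region + (1.0 - keep) * mean_bg        # de-emphasised background
    out = np.where(subject_mask[..., None] == 1, region, flattened)
    return out.astype(region_rgb.dtype)
```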
The phase difference in the first direction and the phase difference in the second direction are calculated for the original image of the target subject region after image preprocessing, the first direction and the second direction forming a preset included angle. For example, the first direction and the second direction may be perpendicular to each other. In other embodiments, only two different directions are required, and they need not be perpendicular. Specifically, the first direction and the second direction may be determined according to texture features in the original image. When the first direction is the horizontal direction of the original image, the second direction may be the vertical direction of the original image. The phase difference in the horizontal direction can reflect the horizontal texture features in the target subject region, and the phase difference in the vertical direction can reflect the vertical texture features, so that both subjects with horizontal texture and subjects with vertical texture in the target subject region can be handled.
Specifically, a luminance map of the target subject region is generated from the original image (RGB map) of the target subject region. Then the phase difference in the first direction and the phase difference in the second direction, which are perpendicular to each other, are calculated from the luminance map of the target subject region. Generally, a frequency-domain algorithm or a spatial-domain algorithm can be used to calculate the phase difference value, though other methods may also be used. The frequency-domain algorithm exploits the Fourier shift property: the acquired target luminance map is transformed from the spatial domain to the frequency domain by a Fourier transform, phase compensation is then computed, and when the compensation reaches its maximum value (peak) there is a maximum displacement; an inverse Fourier transform then reveals how large that displacement is in the spatial domain. The spatial-domain algorithm finds feature points, such as edge features, DoG (Difference of Gaussians) features or Harris corners, and then uses these feature points to calculate the displacement.
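The frequency-domain approach mentioned above is essentially classic phase correlation; the sketch below is a generic implementation (not taken from the patent) that returns the integer displacement at which the phase-compensation peak occurs:

```python
import numpy as np

def phase_correlation_shift(img_a, img_b, eps=1e-9):
    """Estimate the shift (dy, dx) between two equally sized luminance maps."""
    A = np.fft.fft2(img_a)
    B = np.fft.fft2(img_b)
    cross_power = A * np.conj(B)
    cross_power /= (np.abs(cross_power) + eps)     # keep only the phase term
    corr = np.fft.ifft2(cross_power).real          # back to the spatial domain
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Convert wrapped peak coordinates to signed displacements.
    dy = peak[0] if peak[0] <= img_a.shape[0] // 2 else peak[0] - img_a.shape[0]
    dx = peak[1] if peak[1] <= img_a.shape[1] // 2 else peak[1] - img_a.shape[1]
    return dy, dx
```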
Step 760, determining a target phase difference from the phase difference in the first direction and the phase difference in the second direction, and focusing the target subject region according to the target phase difference.
After the phase difference in the first direction and the phase difference in the second direction are calculated, the more accurate of the two is taken as the target phase difference. The defocus distance is then calculated according to the target phase difference, and the lens is controlled to move according to the defocus distance so as to focus on the target subject region.
In the process of controlling the lens to focus on the target subject region, other autofocus modes can be combined with phase difference focusing (PDAF). For example, hybrid focusing may use one or more of Time-of-Flight Auto Focus (TOF AF), Contrast Auto Focus (CAF), laser focusing and the like together with phase focusing. Specifically, TOF can be used for coarse focusing followed by PDAF for fine focusing, or PDAF can be used for coarse focusing followed by CAF for fine focusing, and so on. The combination of PDAF for coarse focusing and CAF for fine focusing combines the speed of phase detection autofocus with the precision of contrast autofocus: the phase autofocus first adjusts the lens position quickly, at which point the photographed object is already nearly clear; contrast focusing then performs fine adjustment, and because the focus position has been pre-adjusted, the maximum contrast can be found in less time, achieving more accurate focusing.
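The coarse-then-fine strategy (for instance, PDAF for coarse focusing followed by CAF for fine focusing) can be sketched as follows; all driver-facing calls (`read_pd`, `move_lens`, `contrast_at`) and the gain value are placeholders, not an actual camera API:

```python
def hybrid_focus(read_pd, move_lens, contrast_at, lens_pos,
                 pd_gain=12.5, fine_range=5, fine_step=2):
    # Coarse stage: a single phase-detection move gets close to the focus point quickly.
    lens_pos = move_lens(lens_pos + read_pd() * pd_gain)

    # Fine stage: small contrast-AF search around the coarse position; because the
    # position was pre-adjusted, the contrast peak is found in few iterations.
    best_pos, best_contrast = lens_pos, contrast_at(lens_pos)
    for k in range(-fine_range, fine_range + 1):
        pos = lens_pos + k * fine_step
        move_lens(pos)
        c = contrast_at(pos)
        if c > best_contrast:
            best_pos, best_contrast = pos, c
    return move_lens(best_pos)
```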
In the embodiment of the application, a target subject region is first extracted from the original image, and then the phase differences in two directions, the first direction and the second direction, are calculated specifically for the original image of the target subject region. Compared with the conventional approach, in which only the phase difference in one direction can be calculated, the phase differences in two directions clearly carry more phase information. Finally, the target phase difference is determined from the phase difference in the first direction and the phase difference in the second direction; because the target phase difference is more accurate, focusing the target subject region according to it greatly improves the accuracy of the focusing process.
In one embodiment, an electronic device includes an image sensor including a plurality of pixel groups arranged in an array, each pixel group including M × N pixels arranged in an array; each pixel point corresponds to one photosensitive unit; each pixel point comprises a plurality of sub pixel points arranged in an array, wherein M and N are both natural numbers which are more than or equal to 2;
as shown in fig. 8, step 740, calculating a phase difference in a first direction and a phase difference in a second direction from the original image of the target subject region, includes:
step 742, for each pixel point group in the original image of the target subject area, obtaining a sub-luminance map corresponding to the pixel point group according to the luminance value of the sub-pixel point at the same position of each pixel point in the pixel point group.
In general, the luminance value of a pixel of an image sensor may be represented by the luminance value of a sub-pixel included in the pixel. The imaging device can obtain the sub-brightness map corresponding to the pixel point group according to the brightness value of the sub-pixel point at the same position of each pixel point in the pixel point group. The brightness value of the sub-pixel point refers to the brightness value of the optical signal received by the photosensitive element corresponding to the sub-pixel point.
As described above, the sub-pixel included in the image sensor is a photosensitive element capable of converting an optical signal into an electrical signal, so that the intensity of the optical signal received by the sub-pixel can be obtained according to the electrical signal output by the sub-pixel, and the luminance value of the sub-pixel can be obtained according to the intensity of the optical signal received by the sub-pixel.
And 744, generating a target brightness map according to the sub-brightness maps corresponding to the pixel point groups.
The imaging device can splice the sub-luminance graphs corresponding to the pixel groups according to the array arrangement mode of the pixel groups in the image sensor to obtain a target luminance graph.
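A compact sketch of this stitching step, assuming the per-group sub-luminance maps are already stored in an array indexed by each group's row and column on the sensor (shapes and names are illustrative):

```python
import numpy as np

def stitch_target_luminance(sub_maps):
    """sub_maps: (rows, cols, 2, 2) array; one 2x2 sub-luminance map per pixel
    point group, placed at the group's array position on the sensor."""
    rows, cols, h, w = sub_maps.shape
    # Interleave the group grid with each group's own 2x2 map to form one map.
    return sub_maps.transpose(0, 2, 1, 3).reshape(rows * h, cols * w)

target = stitch_target_luminance(np.random.rand(6, 8, 2, 2))
print(target.shape)   # (12, 16)
```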
Step 746, calculating the phase difference in the first direction and the phase difference in the second direction according to the target luminance map.
The target luminance map is sliced to obtain a first sliced luminance map and a second sliced luminance map, and the phase difference values of mutually matched pixels are determined according to the position difference of the matched pixels in the first and second sliced luminance maps. The phase difference value in the first direction and the phase difference value in the second direction are then determined from the phase difference values of the matched pixels.
In this embodiment of the application, the image sensor in the electronic device includes a plurality of pixel point groups arranged in an array, and each pixel point group includes a plurality of pixel points arranged in an array. Each pixel point corresponds to one photosensitive unit and includes a plurality of sub-pixel points arranged in an array, each sub-pixel point corresponding to one photodiode. A luminance value, namely the luminance value of the sub-pixel point, can therefore be obtained from each photodiode. The sub-luminance map corresponding to a pixel point group is obtained from the luminance values of its sub-pixel points, and the sub-luminance maps corresponding to the pixel point groups are stitched together according to the array arrangement of the pixel point groups in the image sensor to obtain the target luminance map. Finally, the phase difference in the first direction and the phase difference in the second direction can be calculated from the target luminance map.
And then, the target phase difference is determined from the phase difference in the first direction and the phase difference in the second direction, so that the accuracy of the target phase difference is higher, and the accuracy in the focusing process can be greatly improved by focusing the target main body area according to the target phase difference.
In one embodiment, as shown in fig. 9, step 746, calculating the phase difference in the first direction and the phase difference in the second direction according to the target luminance map includes: step 746a and step 746b.
Step 746a, slicing the target luminance map to obtain a first sliced luminance map and a second sliced luminance map, and determining the phase difference values of mutually matched pixels according to the position difference of the matched pixels in the first sliced luminance map and the second sliced luminance map.
In a manner of performing the segmentation processing on the target luminance map, the imaging device may perform the segmentation processing on the target luminance map in a column direction (y-axis direction in an image coordinate system), and each segmentation line of the segmentation processing is perpendicular to the column direction in the process of performing the segmentation processing on the target luminance map in the column direction.
In another way of performing the segmentation processing on the target luminance graph, the imaging device may perform the segmentation processing on the target luminance graph along the row direction (the x-axis direction in the image coordinate system), and in the process of performing the segmentation processing on the target luminance graph along the row direction, each segmentation line of the segmentation processing is perpendicular to the row direction.
The first and second sliced luminance maps obtained by slicing the target luminance map in the column direction may be referred to as an upper map and a lower map, respectively. The first and second sliced luminance maps obtained by slicing the target luminance map in the row direction may be referred to as left and right maps, respectively.
Here, "pixels matched with each other" means that pixel matrices composed of the pixels themselves and their surrounding pixels are similar to each other. For example, pixel a and its surrounding pixels in the first tangential luminance map form a pixel matrix with 3 rows and 3 columns, and the pixel values of the pixel matrix are:
2 15 70
1 35 60
0 100 1
the pixel b and its surrounding pixels in the second sliced luminance graph also form a pixel matrix with 3 rows and 3 columns, and the pixel values of the pixel matrix are:
1 15 70
1 36 60
0 100 2
as can be seen from the above, the two matrices are similar, and pixel a and pixel b can be considered to match each other. The pixel matrixes are judged to be similar in many ways, usually, the pixel values of each corresponding pixel in two pixel matrixes are subtracted, the absolute values of the obtained difference values are added, and the result of the addition is used for judging whether the pixel matrixes are similar, that is, if the result of the addition is smaller than a preset threshold, the pixel matrixes are considered to be similar, otherwise, the pixel matrixes are considered to be dissimilar.
For example, for the two pixel matrices of 3 rows and 3 columns above, 1 and 2, 15 and 15, 70 and 70, and so on, are subtracted respectively, and the absolute values of the differences are added to obtain a sum of 3; if this sum of 3 is less than the preset threshold, the two 3 × 3 pixel matrices are considered similar.
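The similarity test just described, applied to the two example 3 × 3 neighbourhoods, can be written as a short sketch (the threshold of 10 is an illustrative choice):

```python
import numpy as np

def matrices_match(block_a, block_b, threshold=10):
    # Sum of absolute differences of corresponding pixel values.
    sad = np.abs(block_a.astype(np.int32) - block_b.astype(np.int32)).sum()
    return sad < threshold

a = np.array([[2, 15, 70], [1, 35, 60], [0, 100, 1]])
b = np.array([[1, 15, 70], [1, 36, 60], [0, 100, 2]])
print(matrices_match(a, b))   # True: the sum of absolute differences is 3
```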
Another way to judge whether pixel matrices are similar is to extract edge features of the matrices, for example with a Sobel convolution kernel or a Laplacian-of-Gaussian operator, and judge similarity from the edge features.
Here, the "positional difference of mutually matched pixels" refers to a difference in the position of a pixel located in the first sliced luminance map and the position of a pixel located in the second sliced luminance map among mutually matched pixels. As exemplified above, the positional difference of the pixel a and the pixel b that match each other refers to the difference in the position of the pixel a in the first sliced luminance graph and the position of the pixel b in the second sliced luminance graph.
The pixels matched with each other respectively correspond to different images formed in the image sensor by imaging light rays entering the lens from different directions. For example, a pixel a in the first sliced luminance graph and a pixel B in the second sliced luminance graph match each other, where the pixel a may correspond to the image formed at the a position in fig. 1 and the pixel B may correspond to the image formed at the B position in fig. 1.
Since the matched pixels respectively correspond to different images formed by imaging light rays entering the lens from different directions in the image sensor, the phase difference of the matched pixels can be determined according to the position difference of the matched pixels.
Step 746b, determining a phase difference value in the first direction or a phase difference value in the second direction according to the phase difference values of the pixels matched with each other.
When the first sliced luminance graph includes pixels in even-numbered rows and the second sliced luminance graph includes pixels in odd-numbered rows, and the pixel a in the first sliced luminance graph and the pixel b in the second sliced luminance graph are matched with each other, the phase difference value in the first direction can be determined according to the phase difference between the pixel a and the pixel b which are matched with each other.
When the first split luminance map includes pixels in even columns and the second split luminance map includes pixels in odd columns, and the pixel a in the first split luminance map and the pixel b in the second split luminance map are matched with each other, the phase difference value in the second direction can be determined according to the phase difference between the pixel a and the pixel b which are matched with each other.
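Steps 746a and 746b can be sketched as a small block-matching search: the target luminance map is split into two sliced maps, and for a pixel in the first sliced map the best-matching neighbourhood is searched for in the second sliced map; the positional difference of the match is the phase difference value for that pixel. The window size, search range and the column-wise split used here are illustrative choices, not fixed by the patent:

```python
import numpy as np

def split_columns(lum):
    # Two sliced luminance maps, e.g. even columns and odd columns.
    return lum[:, 0::2], lum[:, 1::2]

def phase_difference_at(map_a, map_b, y, x, half=1, search=4):
    """SAD-based search along the row for the pixel in map_b matching (y, x) in map_a.
    Assumes (y, x) lies away from the map border."""
    ref = map_a[y - half:y + half + 1, x - half:x + half + 1].astype(np.int32)
    best_dx, best_sad = 0, None
    for dx in range(-search, search + 1):
        cand = map_b[y - half:y + half + 1,
                     x + dx - half:x + dx + half + 1].astype(np.int32)
        if cand.shape != ref.shape:          # skip candidates falling off the map
            continue
        sad = np.abs(ref - cand).sum()
        if best_sad is None or sad < best_sad:
            best_dx, best_sad = dx, sad
    return best_dx   # positional difference of the matched pixels = phase difference value
```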
In the embodiment of the application, the sub-luminance maps corresponding to the pixel point groups are stitched into the target luminance map, and the target luminance map is divided into two sliced luminance maps. The phase difference values of mutually matched pixels can be determined rapidly through pixel matching, and because the sliced maps contain rich phase information, the accuracy of the phase difference values, and hence the accuracy and stability of focusing, are improved.
Fig. 10 is a schematic diagram of a pixel group in an embodiment, as shown in fig. 10, the pixel group includes 4 pixels arranged in an array arrangement manner of two rows and two columns, where the 4 pixels are D1 pixels, D2 pixels, D3 pixels and D4 pixels, each pixel includes 4 sub-pixels arranged in an array arrangement manner of two rows and two columns, and the sub-pixels are D11, D12, D13, D14, D21, D22, D23, D24, D31, D32, D33, D34, D41, D42, D43 and D44.
As shown in fig. 10, the arrangement positions of the sub-pixels d11, d21, d31, and d41 in each pixel are the same and are all the first row and the first column, the arrangement positions of the sub-pixels d12, d22, d32, and d42 in each pixel are the same and are all the first row and the second column, the arrangement positions of the sub-pixels d13, d23, d33, and d43 in each pixel are the same and are all the second row and the first column, and the arrangement positions of the sub-pixels d14, d24, d34, and d44 in each pixel are the same and are all the second row and the second column.
In an embodiment, step 742 obtains a sub-luminance map corresponding to the pixel group according to the luminance value of the sub-pixel at the same position of each pixel in the pixel group, which may include steps A1 to A3.
Step A1, the imaging device determines sub-pixel points at the same position from each pixel point to obtain a plurality of sub-pixel point sets. And the positions of the sub-pixel points included in each sub-pixel point set in the pixel points are the same.
The imaging device determines sub-pixel points at the same position from the D1 pixel point, the D2 pixel point, the D3 pixel point and the D4 pixel point respectively to obtain 4 sub-pixel point sets J1, J2, J3 and J4. The sub-pixel point set J1 includes sub-pixel points d11, d21, d31 and d41, which occupy the same position (first row, first column) in their respective pixel points; the set J2 includes sub-pixel points d12, d22, d32 and d42, which occupy the first row, second column; the set J3 includes sub-pixel points d13, d23, d33 and d43, which occupy the second row, first column; and the set J4 includes sub-pixel points d14, d24, d34 and d44, which occupy the second row, second column.
And step A2, for each sub-pixel point set, the imaging equipment acquires the brightness value corresponding to the sub-pixel point set according to the brightness value of each sub-pixel point in the sub-pixel point set.
Optionally, in step A2, the imaging device may determine a color coefficient corresponding to each sub-pixel point in the sub-pixel point set, where the color coefficient is determined according to a color channel corresponding to the sub-pixel point.
For example, the sub-pixel D11 belongs to the D1 pixel, the optical filter included in the D1 pixel may be a green optical filter, that is, the color channel of the D1 pixel is green, the color channel of the sub-pixel D11 included therein is also green, and the imaging device may determine the color coefficient corresponding to the sub-pixel D11 according to the color channel (green) of the sub-pixel D11.
After the color coefficient corresponding to each sub-pixel point in the sub-pixel point set is determined, the imaging device may multiply the color coefficient corresponding to each sub-pixel point in the sub-pixel point set with the brightness value to obtain the weighted brightness value of each sub-pixel point in the sub-pixel point set.
For example, the imaging device may multiply the luminance value of the subpixel point d11 by a color coefficient corresponding to the subpixel point d11 to obtain a weighted luminance value of the subpixel point d 11.
After the weighted brightness value of each sub-pixel in the sub-pixel set is obtained, the imaging device may add the weighted brightness values of each sub-pixel in the sub-pixel set to obtain a brightness value corresponding to the sub-pixel set.
For example, for the sub-pixel point set J1, the brightness value corresponding to the sub-pixel point set J1 may be calculated based on the following first formula.
Y_TL = Y_21*C_R + (Y_11 + Y_41)*C_G/2 + Y_31*C_B.
Here Y_TL is the luminance value corresponding to the sub-pixel point set J1; Y_21, Y_11, Y_41 and Y_31 are the luminance values of the sub-pixel points d21, d11, d41 and d31; C_R is the color coefficient corresponding to d21, C_G/2 is the color coefficient corresponding to d11 and d41, and C_B is the color coefficient corresponding to d31. Y_21*C_R, Y_11*C_G/2, Y_41*C_G/2 and Y_31*C_B are the weighted luminance values of d21, d11, d41 and d31 respectively.
For the sub-pixel point set J2, the luminance value corresponding to J2 may be calculated based on the following second formula.
Y_TR = Y_22*C_R + (Y_12 + Y_42)*C_G/2 + Y_32*C_B.
Here Y_TR is the luminance value corresponding to the sub-pixel point set J2; Y_22, Y_12, Y_42 and Y_32 are the luminance values of the sub-pixel points d22, d12, d42 and d32; C_R is the color coefficient corresponding to d22, C_G/2 is the color coefficient corresponding to d12 and d42, and C_B is the color coefficient corresponding to d32. Y_22*C_R, Y_12*C_G/2, Y_42*C_G/2 and Y_32*C_B are the weighted luminance values of d22, d12, d42 and d32 respectively.
For the sub-pixel point set J3, the luminance value corresponding to J3 may be calculated based on the following third formula.
Y_BL = Y_23*C_R + (Y_13 + Y_43)*C_G/2 + Y_33*C_B.
Here Y_BL is the luminance value corresponding to the sub-pixel point set J3; Y_23, Y_13, Y_43 and Y_33 are the luminance values of the sub-pixel points d23, d13, d43 and d33; C_R is the color coefficient corresponding to d23, C_G/2 is the color coefficient corresponding to d13 and d43, and C_B is the color coefficient corresponding to d33. Y_23*C_R, Y_13*C_G/2, Y_43*C_G/2 and Y_33*C_B are the weighted luminance values of d23, d13, d43 and d33 respectively.
For the sub-pixel point set J4, the luminance value corresponding to J4 may be calculated based on the following fourth formula.
Y_BR = Y_24*C_R + (Y_14 + Y_44)*C_G/2 + Y_34*C_B.
Here Y_BR is the luminance value corresponding to the sub-pixel point set J4; Y_24, Y_14, Y_44 and Y_34 are the luminance values of the sub-pixel points d24, d14, d44 and d34; C_R is the color coefficient corresponding to d24, C_G/2 is the color coefficient corresponding to d14 and d44, and C_B is the color coefficient corresponding to d34. Y_24*C_R, Y_14*C_G/2, Y_44*C_G/2 and Y_34*C_B are the weighted luminance values of d24, d14, d44 and d34 respectively.
And step A3, the imaging device generates a sub-brightness map according to the brightness value corresponding to each sub-pixel set.
The sub-luminance map comprises a plurality of pixels, each pixel in the sub-luminance map corresponds to one sub-pixel set, and the pixel value of each pixel is equal to the luminance value corresponding to the corresponding sub-pixel set.
FIG. 11 is a diagram of a sub-luminance graph in one embodiment. As shown in fig. 11, the sub-luminance map includes 4 pixels, wherein the pixel of the first row and the first column corresponds to the sub-pixel set J1 and has a pixel value of Y _ TL, the pixel of the first row and the second column corresponds to the sub-pixel set J2 and has a pixel value of Y _ TR, the pixel of the second row and the first column corresponds to the sub-pixel set J3 and has a pixel value of Y _ BL, and the pixel of the second row and the second column corresponds to the sub-pixel set J4 and has a pixel value of Y _ BR.
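The four formulas above can be collected into one sketch that produces the 2 × 2 sub-luminance map of FIG. 11 from a pixel point group laid out as in FIG. 6 (green and red in the first pixel row; blue and green in the second). The colour coefficients used below are common luma weights chosen only for illustration; the patent does not fix their values:

```python
import numpy as np

def sub_luminance_map(group, c_r=0.299, c_g=0.587, c_b=0.114):
    """group: 4x4 array of sub-pixel luminance values for one pixel point group,
    with pixel D1 (green) top-left, D2 (red) top-right, D3 (blue) bottom-left
    and D4 (green) bottom-right, each occupying a 2x2 quadrant."""
    g1 = group[0:2, 0:2]   # D1, green
    r  = group[0:2, 2:4]   # D2, red
    b  = group[2:4, 0:2]   # D3, blue
    g2 = group[2:4, 2:4]   # D4, green

    sub = np.zeros((2, 2), dtype=np.float32)
    for i in range(2):
        for j in range(2):
            # For (i, j) == (0, 0) this reproduces the first formula:
            # Y_TL = Y_21*C_R + (Y_11 + Y_41)*C_G/2 + Y_31*C_B
            sub[i, j] = r[i, j] * c_r + (g1[i, j] + g2[i, j]) * c_g / 2 + b[i, j] * c_b
    return sub   # [[Y_TL, Y_TR], [Y_BL, Y_BR]]
```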
In one embodiment, as shown in fig. 12, step 760, determining a target phase difference from the phase difference in the first direction and the phase difference in the second direction, and focusing the target subject area according to the target phase difference includes:
step 762, obtaining a first confidence coefficient of the phase difference in the first direction and a second confidence coefficient of the phase difference in the second direction;
step 764, selecting, as the target phase difference, the phase difference whose confidence coefficient is the larger of the first confidence coefficient and the second confidence coefficient, and determining the corresponding defocus distance, according to the target phase difference, from the correspondence between phase difference and defocus distance;
step 766, controlling the lens to move according to the defocusing distance so as to focus.
Specifically, when the confidence of the phase difference value in the first direction is greater than the confidence of the phase difference value in the second direction, the phase difference value in the first direction is selected, a corresponding defocus distance value is obtained according to the phase difference value in the first direction, and the moving direction is determined to be the horizontal direction.
And when the confidence coefficient of the phase difference value in the first direction is smaller than that of the phase difference value in the second direction, selecting the phase difference value in the second direction, obtaining a corresponding defocus distance value according to the phase difference value in the second direction, and determining the moving direction as the vertical direction.
When the confidence of the phase difference value in the first direction is equal to the confidence of the phase difference value in the second direction, the defocus distance value in the horizontal direction can be determined according to the phase difference value in the first direction, and the defocus distance value in the vertical direction can be determined according to the phase difference value in the second direction, and the defocus distance value in the horizontal direction is moved first and then the defocus distance value in the vertical direction is moved, or the defocus distance value in the vertical direction is moved first and then the defocus distance value in the horizontal direction is moved.
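A compact sketch of steps 762 to 766 follows; `pd_to_defocus` stands for the calibrated phase-difference-to-defocus lookup and `move_lens` for the lens driver call, both placeholder assumptions. For simplicity the sketch picks the horizontal value when the two confidences are equal, whereas the text above allows moving in both directions in that case:

```python
def focus_with_two_pd(pd_h, conf_h, pd_v, conf_v, pd_to_defocus, move_lens):
    # Step 764: keep the phase difference whose confidence is larger.
    if conf_h >= conf_v:
        target_pd, direction = pd_h, "horizontal"
    else:
        target_pd, direction = pd_v, "vertical"
    # Steps 764/766: look up the defocus distance and move the lens to focus.
    defocus = pd_to_defocus(target_pd)
    move_lens(defocus, direction)
    return target_pd, defocus
```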
For a scene with horizontal texture, the PD pixel pairs in the left-right direction cannot yield the phase difference value in the first direction, but the PD pixel pairs in the up-down direction can yield the phase difference value in the second direction; the defocus distance value is calculated according to the phase difference value in the second direction, and the lens is then controlled to move according to that defocus distance value to achieve focusing.
For a scene with vertical texture, the PD pixel pairs in the up-down direction cannot yield the phase difference value in the second direction, but the PD pixel pairs in the left-right direction can yield the phase difference value in the first direction; the defocus distance value is calculated according to the phase difference value in the first direction, and the lens is then controlled to move according to that defocus distance value to achieve focusing.
According to the focusing control method, the phase difference value in the first direction and the phase difference value in the second direction are obtained, the defocusing distance value and the moving direction are determined according to the phase difference value in the first direction and the phase difference value in the second direction, and the lens is controlled to move according to the defocusing distance value and the moving direction, so that automatic focusing of phase detection is realized.
In one embodiment, subject detection is performed on the original image to obtain a target subject region, and a trained deep learning neural network model can be used to perform the subject detection. The process of subject detection is as follows:
first, a visible light map is acquired.
Subject detection means automatically processing the region of interest while selectively ignoring the regions of no interest when facing a scene; the region of interest is referred to as the subject region. The visible light map refers to an RGB (Red, Green, Blue) image. A color camera can be used to shoot any scene to obtain a color image, that is, an RGB image. The visible light map may be stored locally by the electronic device, stored by other devices, obtained from a network, or captured in real time by the electronic device, without limitation. Specifically, an ISP processor or a central processor of the electronic device may obtain the visible light map from local storage, another device or a network, or obtain it by shooting a scene with a camera.
Second, a central weight map corresponding to the visible light map is generated, where the weight values represented by the central weight map gradually decrease from the center to the edge.
The central weight map is a map used to record the weight value of each pixel point in the visible light map. The weight values recorded in the central weight map gradually decrease from the center to the four edges, i.e., the central weight is the largest and the weights decrease toward the edges; the central weight map thus characterizes weight values that gradually decrease from the center pixel of the visible light image to its edge pixels.
The ISP processor or central processor may generate a corresponding central weight map according to the size of the visible light map. The weight value represented by the central weight map gradually decreases from the center to the four sides. The central weight map may be generated using a Gaussian function, a first-order equation or a second-order equation; the Gaussian function may be a two-dimensional Gaussian function.
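For the two-dimensional Gaussian option, a central weight map could be generated as in the following sketch (the sigma scaling is an illustrative choice, not specified by the patent):

```python
import numpy as np

def central_weight_map(height, width, sigma_scale=0.5):
    ys = np.arange(height) - (height - 1) / 2.0
    xs = np.arange(width) - (width - 1) / 2.0
    yy, xx = np.meshgrid(ys, xs, indexing="ij")
    sigma_y, sigma_x = height * sigma_scale, width * sigma_scale
    w = np.exp(-(yy ** 2) / (2 * sigma_y ** 2) - (xx ** 2) / (2 * sigma_x ** 2))
    return w / w.max()   # largest weight at the centre, decreasing toward the edges
```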
Third, the visible light map and the central weight map are input into a subject detection model to obtain a subject region confidence map, where the subject detection model is obtained by training in advance on the visible light map, depth map, central weight map and corresponding labeled subject mask map of the same scene.
The subject detection model is obtained by acquiring a large amount of training data in advance and inputting the training data into a subject detection model containing initial network weights for training. Each set of training data comprises a visible light map, a center weight map and a labeled subject mask map corresponding to the same scene. The visible light map and the central weight map are used as the input of the subject detection model being trained, and the labeled subject mask (mask) map is used as the expected output ground truth of the subject detection model being trained. The subject mask map is an image filter template used to identify the subject in an image; it can shield other parts of the image and screen out the subject in the image. The subject detection model may be trained to recognize and detect various subjects, such as people, flowers, cats, dogs, backgrounds, etc.
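For illustration, the following is a highly simplified training-step sketch in Python (PyTorch); the network structure `SubjectNet`, the loss function, and the four-channel input layout are assumptions for demonstration only, the depth map mentioned above is omitted, and none of these choices are specified by the embodiment.

```python
import torch
import torch.nn as nn

# Hypothetical segmentation network; the embodiment does not specify its structure.
class SubjectNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )

    def forward(self, x):
        return self.body(x)      # logits of the subject region confidence map

model = SubjectNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.BCEWithLogitsLoss()

def train_step(rgb, center_weight, mask_gt):
    # rgb: (B, 3, H, W); center_weight: (B, 1, H, W); mask_gt: (B, 1, H, W) float in {0, 1}
    x = torch.cat([rgb, center_weight], dim=1)   # visible light map + central weight map as input
    logits = model(x)
    loss = criterion(logits, mask_gt)            # labeled subject mask map as ground truth
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```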
Specifically, the ISP processor or central processor may input the visible light map and the central weight map into the subject detection model and perform detection to obtain a subject region confidence map. The subject region confidence map records, for each pixel, the probability that it belongs to each recognizable subject; for example, the probability that a certain pixel point belongs to a person is 0.8, the probability that it belongs to a flower is 0.1, and the probability that it belongs to the background is 0.1.
And fourthly, determining a target subject in the visible light image according to the subject region confidence map.
The subject refers to various objects, such as a person, flower, cat, dog, cow, blue sky, white cloud, background, etc. The target subject refers to the desired subject, and can be selected as needed. Specifically, the ISP processor or the central processing unit may select, according to the subject region confidence map, the subject with the highest confidence as the subject in the visible light image; if there is one subject, that subject is used as the target subject; if multiple subjects exist, one or more of them can be selected as target subjects as desired.
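A minimal sketch of this selection step, assuming the subject region confidence map is an (H, W, C) array of per-pixel probabilities and that the class names are known; both assumptions are for illustration only.

```python
import numpy as np

def pick_target_subject(confidence_map, class_names):
    """confidence_map: (H, W, C) per-pixel probabilities for each recognizable subject."""
    # Average confidence of each candidate subject over the whole image.
    mean_conf = confidence_map.reshape(-1, confidence_map.shape[-1]).mean(axis=0)
    best = int(np.argmax(mean_conf))          # subject with the highest confidence
    return class_names[best], float(mean_conf[best])

# Dummy data: at a given pixel, "person" may have probability 0.8, "flower" 0.1, "background" 0.1.
subject, conf = pick_target_subject(np.random.rand(480, 640, 3),
                                    ["person", "flower", "background"])
```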
FIG. 13 is a diagram illustrating an image processing effect according to an embodiment. As shown in fig. 13, a butterfly exists in the RGB map 1302, the RGB map is input to a subject detection model to obtain a subject region confidence map 1304, then the subject region confidence map 1304 is filtered and binarized to obtain a binarized mask map 1306, and then the binarized mask map 1306 is subjected to morphological processing and guided filtering to realize edge enhancement, so as to obtain a subject mask map 1308.
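A sketch of such a post-processing chain using OpenCV is shown below; the filter, threshold, and kernel sizes are assumed values, and the guided-filtering step is only indicated in a comment because it depends on the opencv-contrib `ximgproc` module.

```python
import cv2
import numpy as np

def confidence_to_mask(conf_map):
    """conf_map: float32 subject region confidence map with values in [0, 1]."""
    # Filter the confidence map to suppress isolated noisy responses.
    filtered = cv2.GaussianBlur(conf_map, (5, 5), 0)

    # Binarize: pixels above the threshold are treated as subject.
    _, binary = cv2.threshold((filtered * 255).astype(np.uint8), 127, 255, cv2.THRESH_BINARY)

    # Morphological opening/closing to clean the mask and fill small holes.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    # Edge enhancement by guided filtering would follow here
    # (e.g. cv2.ximgproc.guidedFilter when opencv-contrib is available).
    return mask
```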
In the embodiment of the application, a visible light map is obtained, and a central weight map corresponding to the visible light map is generated, wherein a weight value represented by the central weight map is gradually reduced from the center to the edge. And inputting the visible light image and the central weight image into a main body detection model to obtain a main body region confidence image, wherein the main body detection model is obtained by training in advance according to the visible light image, the depth image, the central weight image and the corresponding marked main body mask image of the same scene. And determining the target subject in the visible light map according to the subject region confidence map. Therefore, the main body detection is carried out on the original image, and the target main body area is accurately obtained.
In one embodiment, the subject detection on the original image to obtain the target subject region includes:
and carrying out main body detection on the original image to obtain at least two main body areas.
After an original image shot by the electronic device is obtained, subject detection is performed on the original image by a subject detection network model, and at least two subject detection results are obtained. If the original image contains several subjects, several subject detection results will be generated correspondingly after subject detection. A subject detection result may be a detection frame that includes the whole region of a subject in the original image, for example, a rectangular detection frame including the whole body of a dog; the detection frame may also be another planar figure such as a circle, an ellipse, or a trapezoid. Alternatively, the subject detection result may be the subject region itself rather than a detection frame, for example, the region occupied by the whole body of the dog in the original image.
Of course, if another dog exists in the original image, the subject detection result corresponding to the other dog may be a rectangular detection frame including the whole body of the other dog, or a detection frame of another planar figure such as a circle, an ellipse, or a trapezoid. The subject detection result may also be the region occupied by the whole body of the other dog in the original image rather than a detection frame. That is, individual subject detection is performed on each subject in the original image to obtain a corresponding subject detection result, so that the target subject can then be screened out by applying a uniform size range.
Acquiring shooting data of a shot original image and size information of an image display interface of the electronic equipment, and determining the size range of a target area in the original image according to the shooting data of the original image and the size information of the image display interface of the electronic equipment.
Since both the shooting data of the original image and the size information of the image display interface of the electronic device affect the size range of the target subject in the original image, the size range of the target subject in the original image is determined according to these two items of parameter information. The shooting data of the original image may include parameters such as the focal length and the aperture size used when the original image is shot. The size information of the image display interface of the electronic device includes the size of the display screen of the electronic device, the aspect-ratio information of the display screen, and the like.
For example, when the lens used for shooting the original image is a telephoto lens, the display scale of the original image is small, and the size range of the target subject in the original image is correspondingly small. When the lens used for shooting the original image is a short-focus lens, the display scale of the original image is larger, and the size range of the target subject in the original image is correspondingly larger. The original image formed on the CCD by the same camera is the same regardless of the zoom factor.
When the lens used to shoot the original image is a short-focus lens and zooming is switched from 1x to 2x, the range of the preview image cropped from the original image becomes smaller, and the preview image is enlarged to fit the size of the image display interface of the electronic device for display. Therefore, the size range of the subject in the original image at 2x zoom becomes correspondingly smaller.
When the size information of the display screen of the electronic device is smaller, the size range of the target subject in the original image is correspondingly smaller, and the size range of the target subject in the original image is also related to the scale information of the display screen of the electronic device.
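The embodiment does not give a concrete formula for this size range; the sketch below only illustrates one plausible way the allowed subject area could shrink with the zoom factor, with all constants assumed.

```python
def subject_size_range(zoom_factor, sensor_size, base_ratio=(0.1, 0.9)):
    """Rough size range (min/max subject area, in pixels of the original image).

    zoom_factor: digital zoom (1.0, 2.0, ...). At 2x zoom only a quarter of the
    original image area is shown in the preview, so a subject that fills the
    preview occupies a correspondingly smaller region of the original image.
    sensor_size: (width, height) of the original image in pixels.
    base_ratio: assumed fraction of the preview crop a valid subject may occupy.
    """
    crop_area_frac = 1.0 / (zoom_factor ** 2)    # area of the preview crop within the original
    total = sensor_size[0] * sensor_size[1]
    return (base_ratio[0] * crop_area_frac * total,
            base_ratio[1] * crop_area_frac * total)

min_area, max_area = subject_size_range(2.0, (4000, 3000))
```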
A subject region corresponding to the size range is selected from the plurality of subject regions, and the subject region corresponding to the size range is set as a target subject region.
After an original image obtained by shooting of the electronic equipment is obtained, main body detection is carried out on the original image by adopting a main body detection network model, and at least two main body detection results are obtained. And determining the size range of the target subject in the original image according to the shooting data of the original image and the size information of the image display interface of the electronic equipment. The subject detection result conforming to the size range can be screened out from the plurality of subject detection results, and the subject detection result conforming to the size range is taken as the target subject.
In the image processing method of the embodiment of the present application, the display area of an original image shot by the electronic device is generally larger than the display area of the preview image shown on the image display interface. Therefore, in order to accurately display a subject of the original image in the preview image, subject detection is first performed on the original image; second, the size range of the target subject in the original image is determined; and finally, a subject detection result whose size conforms to the size range of the target subject is screened out from the at least two subject detection results of the original image, and this subject detection result is displayed as the target subject of the preview image. According to the size range of the target subject in the original image, the target subject can be accurately screened out from the subject detection results for display in the preview image. Since both the shooting data of the original image and the size information of the image display interface of the electronic device affect the size range of the target subject in the original image, the size range is determined according to these two items of parameter information.
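A minimal sketch of the screening step, assuming the subject detection results are rectangular detection frames given as (x, y, w, h) tuples and that a size range has already been determined (for example with the sketch above); the helper name and the sample numbers are hypothetical.

```python
def select_target_subject(regions, min_area, max_area):
    """regions: list of (x, y, w, h) detection frames from subject detection."""
    candidates = [r for r in regions if min_area <= r[2] * r[3] <= max_area]
    if not candidates:
        return None
    # If several regions fit the size range, keep the largest one as the target subject.
    return max(candidates, key=lambda r: r[2] * r[3])

target = select_target_subject([(100, 80, 500, 400), (10, 10, 40, 30)],
                               min_area=50_000, max_area=2_000_000)
```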
In one embodiment, the target subject region includes a face region.
Specifically, when a person is photographed, the target subject region is in most cases a face region; that is, the face region needs to be focused accurately so that the definition of the face region is high. The face region occupies a smaller area than the whole picture, so relatively less phase difference information can be acquired from it. Therefore, when focusing is performed with the electronic device of the above embodiment, the phase difference in the first direction and the phase difference in the second direction, which are perpendicular to each other, can be calculated from the original image of the face region. The target phase difference is then determined from the phase difference in the first direction and the phase difference in the second direction, and the face region is focused accurately according to the target phase difference.
The electronic equipment comprises an image sensor, wherein the image sensor comprises a plurality of pixel point groups which are arranged in an array, and each pixel point group comprises a plurality of pixel points which are arranged in an array; each pixel point corresponds to one photosensitive unit; each pixel point comprises a plurality of sub pixel points arranged in an array; each sub-pixel point corresponds to a photodiode. The shape of the photodiode may be a circle, a square, or a sector, which is not limited in this application.
In the embodiment of the present application, a conventional method for focusing on a face region can only acquire the phase difference in one direction, so the phase difference information that can be acquired from an already small face region is obviously more limited. In contrast, the phase differences in two directions can be calculated from the luminance map collected by the image sensor in the foregoing embodiment, so that the amount of phase difference information is doubled and more detailed phase difference information can be acquired. Furthermore, the target phase difference is determined from the phase difference in the first direction and the phase difference in the second direction, and the target subject region is focused according to the target phase difference, so that the face region can be focused more accurately.
It should be understood that, although the steps in the above-described flowcharts are shown in an order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least a portion of the steps in the above-described flowcharts may include multiple sub-steps or multiple stages, which are not necessarily performed at the same moment but may be performed at different moments, and the order of performing these sub-steps or stages is not necessarily sequential; they may be performed in turns or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 14, there is provided a focus control apparatus 800 including: a subject detection module 1420, a phase difference calculation module 1440 and a phase difference focusing module 1460. Wherein:
a subject detection module 1420, configured to perform subject detection on the original image to obtain a target subject region;
the phase difference calculating module 1440 is configured to calculate a phase difference in a first direction and a phase difference in a second direction according to the original image of the target subject area, where a preset included angle is formed between the first direction and the second direction;
the phase difference focusing module 1460 is configured to determine a target phase difference from the phase difference in the first direction and the phase difference in the second direction, and focus the target main body region according to the target phase difference.
In one embodiment, the phase difference calculating module 1440 further comprises:
the sub-brightness map acquisition unit is used for acquiring a sub-brightness map corresponding to each pixel group according to the brightness value of the sub-pixel at the same position of each pixel in the pixel group for each pixel group in the original image of the target main body region;
the target brightness graph acquisition unit is used for generating a target brightness graph according to the sub-brightness graph corresponding to each pixel point group;
and the first direction and second direction phase difference unit is used for calculating the phase difference of the first direction and the phase difference of the second direction according to the target brightness map.
In one embodiment, the phase difference unit in the first direction and the second direction is further configured to segment the target luminance map to obtain a first segmented luminance map and a second segmented luminance map, and determine a phase difference value of pixels matched with each other according to a position difference of the pixels matched with each other in the first segmented luminance map and the second segmented luminance map; and determining the phase difference value of the first direction and the phase difference value of the second direction according to the phase difference values of the matched pixels.
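How the target luminance map is segmented into the first and second segmented luminance maps is described elsewhere in the specification; the sketch below simply assumes the two segmented maps are already available and estimates the shift between matched pixels row by row with a crude correlation search. The vertical (second-direction) phase difference is obtained the same way along columns. The search window and the use of a circular shift are simplifications.

```python
import numpy as np

def phase_shift_1d(ref, tgt, max_shift=8):
    """Estimate the integer shift (in pixels) between two 1-D luminance profiles."""
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(tgt, s)            # circular shift: a simplification
        score = np.dot(ref, shifted)         # simple correlation score
        if score > best_score:
            best_score, best_shift = score, s
    return best_shift

def phase_difference(first_map, second_map, axis=1):
    """Average phase difference between the two segmented luminance maps.

    axis=1: shift measured along rows (first direction, horizontal);
    axis=0: shift measured along columns (second direction, vertical).
    """
    if axis == 0:
        first_map, second_map = first_map.T, second_map.T
    shifts = [phase_shift_1d(r, t) for r, t in zip(first_map, second_map)]
    return float(np.mean(shifts))
```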
In an embodiment, the sub-luminance map obtaining unit is further configured to determine sub-pixel points at the same position from each pixel point to obtain a plurality of sub-pixel point sets, where positions of the sub-pixel points included in each sub-pixel point set in the pixel points are the same; for each sub-pixel point set, acquiring a brightness value corresponding to the sub-pixel point set according to the brightness value of each sub-pixel point in the sub-pixel point set; and generating a sub-brightness map according to the brightness value corresponding to each sub-pixel set.
In one embodiment, the sub-luminance map obtaining unit is further configured to determine a color coefficient corresponding to each sub-pixel point in the sub-pixel point set, where the color coefficient is determined according to a color channel corresponding to the sub-pixel point; multiplying a color coefficient corresponding to each sub-pixel point in the sub-pixel point set by the brightness value to obtain the weighted brightness of each sub-pixel point in the sub-pixel point set; and adding the weighted brightness of each sub-pixel point in the sub-pixel point set to obtain the brightness value corresponding to the sub-pixel point set.
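A sketch of this weighted-luminance computation for one pixel point group; the sub-pixel layout and the color coefficients (here borrowed from the usual luma weights) are assumptions, since the embodiment only states that the coefficient depends on the color channel of the pixel.

```python
import numpy as np

# Assumed color coefficients; the actual values are design-dependent.
COLOR_COEFF = {"R": 0.299, "G": 0.587, "B": 0.114}

def sub_luminance_map(group, colors):
    """group: (M, N, m, n) luminance of each sub-pixel of each pixel in one pixel point group.
    colors: (M, N) nested list with the color channel ("R", "G" or "B") of each pixel.
    Returns an (m, n) sub-luminance map: one value per sub-pixel position."""
    M, N, m, n = group.shape
    sub_map = np.zeros((m, n), dtype=np.float64)
    for i in range(m):              # sub-pixel row position inside a pixel
        for j in range(n):          # sub-pixel column position inside a pixel
            total = 0.0
            for y in range(M):
                for x in range(N):
                    coeff = COLOR_COEFF[colors[y][x]]
                    total += coeff * group[y, x, i, j]   # weighted luminance of one sub-pixel
            sub_map[i, j] = total                        # sum over the sub-pixel point set
    return sub_map
```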
In one embodiment, the phase difference focusing module 1460 includes: a confidence acquisition unit, a defocus distance determining unit and a focusing unit. Wherein:
a confidence degree acquisition unit configured to acquire a first confidence degree of the phase difference in the first direction and a second confidence degree of the phase difference in the second direction;
a defocus distance determining unit, configured to select, from the phase difference in the first direction and the phase difference in the second direction, the phase difference with the larger confidence as the target phase difference, and to determine the corresponding defocus distance from the correspondence between phase difference and defocus distance according to the target phase difference;
and the focusing unit is used for controlling the lens to move according to the defocusing distance so as to focus.
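Putting the three units together, a minimal sketch is given below; the linear phase-difference-to-defocus conversion factor and the `move_lens` driver callback are hypothetical stand-ins for the calibrated correspondence and the actual motor control.

```python
def focus_with_phase_difference(pd_first, conf_first, pd_second, conf_second,
                                pd_to_defocus_slope=12.5, move_lens=print):
    """Pick the phase difference with the larger confidence and drive the lens.

    pd_to_defocus_slope: assumed calibration factor converting a phase
    difference value into a defocus distance (its sign gives the direction).
    move_lens: hypothetical motor-driver callback.
    """
    target_pd = pd_first if conf_first >= conf_second else pd_second
    defocus = pd_to_defocus_slope * target_pd     # correspondence between PD and defocus distance
    move_lens(defocus)                            # move the lens by the defocus distance to focus
    return defocus

focus_with_phase_difference(pd_first=0.8, conf_first=0.9, pd_second=0.3, conf_second=0.4)
```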
In one embodiment, the subject detection module 1420 is further configured to obtain a visible light map of the original image; generating a central weight map corresponding to a visible light map of the original image; inputting the visible light image and the central weight image into a main body detection model to obtain a main body region confidence map; and determining a target subject region in the original image according to the subject region confidence map.
In one embodiment, subject detection module 1420 includes:
the device comprises a main body detection unit, a main body detection unit and a main body detection unit, wherein the main body detection unit is used for performing main body detection on an original image obtained by shooting electronic equipment to obtain at least two main body areas;
the size range determining unit of the target area is used for acquiring shooting data of a shot original image and size information of an image display interface of the electronic equipment, and determining the size range of the target area in the original image according to the shooting data of the original image and the size information of the image display interface of the electronic equipment;
and a target subject region selection unit configured to select a subject region corresponding to the size range from the plurality of subject regions, and to set the subject region corresponding to the size range as a target subject region.
In one embodiment, the target subject region includes a face region.
The division of the modules in the focusing control apparatus is only used for illustration, and in other embodiments, the focusing control apparatus may be divided into different modules as needed to complete all or part of the functions of the focusing control apparatus.
Fig. 15 is a schematic diagram of the internal structure of an electronic device in one embodiment. As shown in fig. 15, the electronic device includes a processor and a memory connected by a system bus. The processor is used to provide computing and control capabilities and to support the operation of the whole electronic device. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program can be executed by the processor to implement the focus control method provided in the following embodiments. The internal memory provides a cached running environment for the operating system and the computer program in the non-volatile storage medium. The electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or the like.
The implementation of each module in the focus control apparatus provided in the embodiments of the present application may be in the form of a computer program. The computer program may be run on a terminal or a server. The program modules constituted by the computer program may be stored on the memory of the terminal or the server. Which when executed by a processor, performs the steps of the method described in the embodiments of the present application.
The process of the electronic device implementing the focus control method is as described in the above embodiments, and is not described herein again.
The embodiment of the application also provides a computer readable storage medium. One or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of the focus control method.
A computer program product comprising instructions which, when run on a computer, cause the computer to perform a focus control method.
Any reference to memory, storage, databases, or other media used by the embodiments of the application may include non-volatile and/or volatile memory. Suitable non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above examples express only several embodiments of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the application. It should be noted that, for a person skilled in the art, several variations and improvements can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent application shall be subject to the appended claims.

Claims (10)

1. A focusing control method is applied to electronic equipment and is characterized in that the electronic equipment comprises an image sensor, the image sensor comprises a plurality of pixel point groups which are arranged in an array, and each pixel point group comprises M x N pixel points which are arranged in an array; each pixel point corresponds to one photosensitive unit; each pixel point comprises a plurality of sub pixel points arranged in an array, and each sub pixel point corresponds to one photodiode; wherein M and N are both natural numbers greater than or equal to 2; the method comprises the following steps:
performing main body detection on the original image to obtain a target main body area;
for each pixel point group in the original image of the target main body region, acquiring a sub-brightness map corresponding to the pixel point group according to the brightness value of a sub-pixel point at the same position of each pixel point in the pixel point group;
generating a target brightness map according to the sub-brightness map corresponding to each pixel point group;
performing segmentation processing on the target brightness image to obtain a first segmentation brightness image and a second segmentation brightness image, and determining the phase difference value of the mutually matched pixels according to the position difference of the mutually matched pixels in the first segmentation brightness image and the second segmentation brightness image;
determining a phase difference value in a first direction and a phase difference value in a second direction according to the phase difference values of the mutually matched pixels, wherein a preset included angle is formed between the first direction and the second direction;
and determining a target phase difference from the phase difference in the first direction and the phase difference in the second direction, and focusing the target main body area according to the target phase difference.
2. The method according to claim 1, wherein the obtaining the sub-luminance graph corresponding to the pixel point group according to the luminance value of the sub-pixel point at the same position of each pixel point in the pixel point group comprises:
determining sub-pixel points at the same position from each pixel point to obtain a plurality of sub-pixel point sets, wherein the positions of the sub-pixel points in the pixel points included in each sub-pixel point set are the same;
for each sub-pixel point set, acquiring a brightness value corresponding to the sub-pixel point set according to the brightness value of each sub-pixel point in the sub-pixel point set;
and generating the sub-brightness map according to the brightness value corresponding to each sub-pixel set.
3. The method according to claim 2, wherein the obtaining the brightness value corresponding to the set of sub-pixels according to the brightness value of each sub-pixel in the set of sub-pixels comprises:
determining a color coefficient corresponding to each sub-pixel point in the sub-pixel point set, wherein the color coefficient is determined according to a color channel corresponding to the sub-pixel point;
multiplying a color coefficient corresponding to each sub-pixel point in the sub-pixel point set by a brightness value to obtain the weighted brightness of each sub-pixel point in the sub-pixel point set;
and adding the weighted brightness of each sub-pixel point in the sub-pixel point set to obtain a brightness value corresponding to the sub-pixel point set.
4. The method of claim 1, wherein determining a target phase difference from the phase difference in the first direction and the phase difference in the second direction, and focusing the target subject area according to the target phase difference comprises:
acquiring a first confidence coefficient of the phase difference in the first direction and a second confidence coefficient of the phase difference in the second direction;
selecting, from the phase difference in the first direction and the phase difference in the second direction, the phase difference with the larger confidence as the target phase difference, and determining a corresponding defocusing distance from the corresponding relation between the phase difference and the defocusing distance according to the target phase difference;
and controlling the lens to move according to the defocusing distance so as to focus.
5. The method according to claim 1, wherein the performing subject detection on the original image to obtain a target subject region comprises:
acquiring a visible light image of an original image;
generating a central weight map corresponding to a visible light map of the original image;
inputting the visible light image and the central weight image into a main body detection model to obtain a main body region confidence map;
and determining a target subject region in the original image according to the subject region confidence map.
6. The method according to claim 1, wherein the performing subject detection on the original image to obtain a target subject region comprises:
performing main body detection on an original image to obtain at least two main body areas;
acquiring shooting data for shooting the original image and size information of an image display interface of the electronic equipment, and determining the size range of a target area in the original image according to the shooting data of the original image and the size information of the image display interface of the electronic equipment;
and screening out the main body regions conforming to the size range from the plurality of main body regions, and taking the main body regions conforming to the size range as target main body regions.
7. The method of claim 1, wherein the target subject region comprises a face region.
8. A focusing control device is characterized in that an electronic device comprises an image sensor, wherein the image sensor comprises a plurality of pixel groups arranged in an array, and each pixel group comprises M x N pixels arranged in an array; each pixel point corresponds to one photosensitive unit; each pixel point comprises a plurality of sub-pixel points which are arranged in an array, and each sub-pixel point corresponds to a photodiode; wherein M and N are both natural numbers greater than or equal to 2; the device comprises:
the main body detection module is used for carrying out main body detection on the original image to obtain a target main body area;
the phase difference calculation module is used for acquiring a sub-brightness map corresponding to each pixel point group according to the brightness value of the sub-pixel point at the same position of each pixel point in the pixel point group for each pixel point group in the original image of the target main body area; generating a target brightness map according to the sub-brightness map corresponding to each pixel point group; performing segmentation processing on the target brightness image to obtain a first segmentation brightness image and a second segmentation brightness image, and determining the phase difference value of the mutually matched pixels according to the position difference of the mutually matched pixels in the first segmentation brightness image and the second segmentation brightness image; determining a phase difference value in a first direction and a phase difference value in a second direction according to the phase difference values of the mutually matched pixels, wherein a preset included angle is formed between the first direction and the second direction;
and the phase difference focusing module is used for determining a target phase difference from the phase difference in the first direction and the phase difference in the second direction and focusing the target main body area according to the target phase difference.
9. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program, wherein the computer program, when executed by the processor, causes the processor to perform the steps of the focus control method as claimed in any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN201911101407.2A 2019-11-12 2019-11-12 Focusing control method and device, electronic equipment and computer readable storage medium Active CN112866545B (en)



