WO2018082604A1 - Parallax and distance parameter calculation method, dual camera module, and electronic device - Google Patents

Parallax and distance parameter calculation method, dual camera module, and electronic device

Info

Publication number
WO2018082604A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
value
camera module
disparity
dual camera
Prior art date
Application number
PCT/CN2017/109086
Other languages
English (en)
French (fr)
Inventor
陈玮逸夫
蔡赞赞
史慧波
Original Assignee
宁波舜宇光电信息有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201610971031.0A external-priority patent/CN108024051B/zh
Priority claimed from CN201610971337.6A external-priority patent/CN108377376B/zh
Application filed by 宁波舜宇光电信息有限公司
Publication of WO2018082604A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof

Definitions

  • the invention relates to the field of image processing, in particular to a parallax and distance parameter calculation method of a dual camera module, and a dual camera module and an electronic device applying the parallax and distance parameter calculation method.
  • Dual cameras offer more possibilities for shooting without increasing the thickness of the module.
  • two parallel cameras are not exactly the same, generally one is a wide-angle lens and the other is an optical zoom lens.
  • digital zoom, which is generally used otherwise, enlarges the middle part cropped from the original image, so the picture quality is obviously degraded; optical zoom can keep the picture sharp while zooming in, which is lossless zoom.
  • the dual camera settings can better meet the user's shooting needs to switch between lenses of different focal lengths, achieving lossless zoom for the best picture quality.
  • the dual camera can also effectively improve the shooting effect under low light.
  • the two camera images with different aperture parameters are compared and adjusted to the value closest to the real scene, effectively suppressing noise.
  • due to the limitation of the thickness of a mobile phone, it is impossible to accommodate a high-end lens, but two small cameras can come close to the effect of one large camera.
  • the dual camera can balance the contradiction between the effect and the thickness of the module.
  • a more popular feature of the dual camera is 3D shooting: the two sets of images can be combined to obtain a better depth-of-field effect and to capture fast-moving objects.
  • an object of the present invention is to provide, in view of the above-mentioned defects and deficiencies in the prior art, a parallax calculation method that can work on misaligned images, and a dual camera module and an electronic device applying the parallax calculation method.
  • Another object of the present invention is to provide a distance parameter calculation method capable of realizing fast ranging or fast focusing, and a dual camera module and an electronic device applying the distance parameter calculation method, in view of the above-mentioned drawbacks and deficiencies in the prior art.
  • a parallax calculation method for calculating a disparity value between pixels of a first image and a second image includes: a) selecting a first region in the first image and establishing a first grayscale histogram of the first region, the first region being centered on a first pixel; b) adding a reference disparity value to the coordinate value of the first region in the first direction to obtain the coordinates of a second region in the second image, and establishing a second grayscale histogram of the second region; c) calculating a first mean squared difference of the differences between each row or column of the first grayscale histogram and the second grayscale histogram; d) increasing the reference disparity value by a predetermined step size and repeating steps b and c until the current first mean squared difference is greater than the previous first mean squared difference, and determining the previous first mean squared difference as the first minimum mean squared difference; e) determining a first disparity value corresponding to the first minimum mean squared difference; and f) using the first disparity value as the disparity value of the first pixel.
  • the parallax calculation method further comprises: repeating steps a, b, c, d, e, and f for each pixel in the first image to obtain the disparity value of each pixel in the first image; and obtaining a disparity table between the first image and the second image based on the disparity value of each pixel in the first image.
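The steps above can be sketched in Python for a single pixel. This is an illustrative sketch only, not the patented implementation: the window size, histogram bin count, scan step, and the row-wise histogram comparison are assumptions made for the example.

```python
import numpy as np

def gray_histogram(img, cx, cy, half, bins=16):
    """Per-row grayscale histograms of the square region centered at (cx, cy),
    so that row-wise differences can be compared (steps a and b)."""
    region = img[cy - half:cy + half + 1, cx - half:cx + half + 1]
    return np.stack([np.histogram(row, bins=bins, range=(0, 256))[0]
                     for row in region])

def pixel_disparity(left, right, cx, cy, half=7, start=0, step=1):
    """Steps a-f: scan reference disparities along the row direction until the
    mean squared difference of the two histograms starts to rise; the previous
    disparity is taken as the disparity value of pixel (cx, cy)."""
    h_left = gray_histogram(left, cx, cy, half)
    prev_mse, prev_d = None, None
    d = start
    while cx + d + half < right.shape[1]:
        h_right = gray_histogram(right, cx + d, cy, half)   # step b
        mse = np.mean((h_left - h_right) ** 2)              # step c
        if prev_mse is not None and mse > prev_mse:         # step d: past the minimum
            return prev_d                                   # steps e and f
        prev_mse, prev_d = mse, d
        d += step
    return prev_d
```

Repeating `pixel_disparity` over every pixel of the first image yields the per-pixel disparity values from which the disparity table is built.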
  • after step d and before step e, the method further includes: scaling the first region to a predetermined size as a third region; repeating steps a, b, c, and d based on the third region to obtain a second minimum mean squared difference; comparing the first minimum mean squared difference with the second minimum mean squared difference; and, in the case where the second minimum mean squared difference is less than the first minimum mean squared difference, determining the second minimum mean squared difference as the first minimum mean squared difference.
  • after step d and before step e, the method further includes: scaling the first region to a predetermined size as a fourth region, wherein the size of the fourth region is larger than the size of the first region and the size of the third region is smaller than the size of the first region; repeating steps a, b, c, and d based on the fourth region to obtain a third minimum mean squared difference; comparing the first minimum mean squared difference, the second minimum mean squared difference, and the third minimum mean squared difference; and determining the smallest of the first, second, and third minimum mean squared differences as the first minimum mean squared difference.
  • the first direction is a row direction or a column direction of an image.
  • before step a, the method further comprises: scaling the first image and the second image to the same size.
  • before step a, the method further includes converting the first image and the second image into images of the same color format.
  • before step a, the method further comprises: acquiring original image data information of each camera from the dual camera; and converting the acquired original images, using an interpolation algorithm, into the first image and the second image suitable for display and processing.
  • before step a, the method further comprises: converting the first image and the second image into a first grayscale image and a second grayscale image; and, according to the required disparity map size, scaling the first grayscale image and the second grayscale image to the disparity map size, respectively.
  • the method further includes synthesizing the first image and the second image into a three-dimensional image based on the parallax table.
  • a dual camera module including: a first camera for acquiring a first image; a second camera for acquiring a second image; and a processing unit for calculating the disparity value between pixels of the first image and the second image, the processing unit being specifically configured to: a) select a first region in the first image and establish a first grayscale histogram of the first region, wherein the first region is centered on a first pixel; b) add a reference disparity value to the coordinate value of the first region in the first direction to obtain the coordinates of a second region in the second image, and establish a second grayscale histogram of the second region; c) calculate a first mean squared difference of the differences between each row or column of the first grayscale histogram and the second grayscale histogram; d) increase the reference disparity value by a predetermined step size and repeat steps b and c until the current first mean squared difference is greater than the previous first mean squared difference, and determine the previous first mean squared difference as the first minimum mean squared difference; e) determine a first disparity value corresponding to the first minimum mean squared difference; and f) use the first disparity value as the disparity value of the first pixel.
  • the processing unit is further configured to: repeat steps a, b, c, d, e, and f for each pixel in the first image to obtain the disparity value of each pixel in the first image; and obtain a disparity table between the first image and the second image based on the disparity values of the pixels in the first image.
  • after step d and before step e, the processing unit is further configured to: scale the first area to a predetermined size as a third area; repeat steps a, b, c, and d based on the third area to obtain a second minimum mean squared difference; compare the first minimum mean squared difference with the second minimum mean squared difference; and, in the case where the second minimum mean squared difference is less than the first minimum mean squared difference, determine the second minimum mean squared difference as the first minimum mean squared difference.
  • after step d and before step e, the processing unit is further configured to: scale the first area to a predetermined size as a fourth area, wherein the size of the fourth area is larger than the size of the first area and the size of the third area is smaller than the size of the first area; repeat steps a, b, c, and d based on the fourth area to obtain a third minimum mean squared difference; compare the first minimum mean squared difference, the second minimum mean squared difference, and the third minimum mean squared difference; and determine the smallest of the first, second, and third minimum mean squared differences as the first minimum mean squared difference.
  • the first direction is a row direction or a column direction of an image.
  • before step a, the processing unit is further configured to scale the first image and the second image to the same size.
  • the processing unit is further configured to convert the first image and the second image into images of the same color format before step a.
  • before step a, the processing unit is further configured to: acquire original image data information of each camera from the dual camera; and convert the acquired original images, using an interpolation algorithm, into the first image and the second image suitable for display and processing.
  • before step a, the processing unit is further configured to convert the first image and the second image into a first grayscale image and a second grayscale image, and to scale the first grayscale image and the second grayscale image to the required disparity map size, respectively.
  • the processing unit is further configured to synthesize the first image and the second image into a three-dimensional image based on the parallax table.
  • an electronic device comprising a dual camera module as described above.
  • the parallax calculation method can be performed quickly, without first rectifying the images.
  • the parallax calculation method according to the present invention, and the dual camera module and the electronic device applying the parallax calculation method, can perform the calculation even in a state where the luminance difference between the two images is large, the colors are inconsistent, and the pictures of the two images are not relatively flat, and can obtain relatively stable results.
  • the parallax calculation method according to the present invention, and the dual camera module and the electronic device applying the parallax calculation method, have strong compatibility and good test results, and can save the correction time of one of the cameras of the dual camera module, which is convenient for users.
  • a distance parameter calculation method for calculating a disparity value between a first image and a second image captured by a dual camera module, so as to calculate a distance parameter related to the dual camera module.
  • a distance parameter calculation method comprising: establishing a relationship between the distance parameter and the disparity value, the relationship being a sum of products of at least two disparity terms and at least two corresponding coefficients, where each disparity term is a power of the disparity value; capturing a subject with the dual camera module at at least two predetermined distances, and calculating at least two disparity values of the subject between the first image and the second image; and calculating the at least two corresponding coefficients based on the at least two predetermined distances and the at least two disparity values, to determine the relationship.
  • the method further includes: capturing a subject with the dual camera module at a first distance, and calculating a first disparity value of the subject between the first image and the second image; and substituting the first disparity value into the relationship to determine the value of the first distance.
  • the at least two predetermined distances are 15 cm and 35 cm, respectively.
  • the at least two predetermined distances are n+1 distances, and the n+1 distances range from 7 cm to 200 cm.
  • an interval between two adjacent distances of the n+1 distances is 10 cm.
  • the step of determining the relationship specifically includes: using a quadratic fitting method to fit a quadratic curve for the sum of products of the at least two disparity terms and the at least two corresponding coefficients, so as to determine the relationship.
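As a sketch of the fitting step, a relationship of the form distance = c2·d² + c1·d + c0 (a sum of products of powers of the disparity value and corresponding coefficients) can be fitted by least squares. The calibration numbers below are invented for illustration; in practice they would come from shooting a subject at the predetermined distances.

```python
import numpy as np

# Hypothetical calibration data: predetermined distances (cm) and the
# disparity values (pixels) measured at those distances.
distances = np.array([15.0, 35.0, 55.0])
disparities = np.array([40.0, 17.0, 11.0])

# Fit distance = c2*d**2 + c1*d + c0 by least squares.
coeffs = np.polyfit(disparities, distances, deg=2)

def distance_from_disparity(d):
    """Substitute a measured disparity value into the fitted relationship."""
    return float(np.polyval(coeffs, d))
```

With three calibration points, the quadratic passes through each of them, so a measured disparity of 17 pixels maps back to roughly 35 cm.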
  • the at least two predetermined distances are 15 cm and 35 cm, respectively.
  • a dual camera module including: a first camera for acquiring a first image; a second camera for acquiring a second image; and a processing unit for calculating a disparity value between the first image and the second image so as to calculate a distance parameter related to the dual camera module, the processing unit being specifically configured to: establish a relationship between the distance parameter and the disparity value, wherein the relationship is a sum of products of at least two disparity terms and at least two corresponding coefficients, and each disparity term is a power of the disparity value; capture a subject with the dual camera module at at least two predetermined distances, and calculate at least two disparity values of the subject between the first image and the second image; and calculate the at least two corresponding coefficients based on the at least two predetermined distances and the at least two disparity values, to determine the relationship.
  • the first camera and the second camera capture a subject at a first distance; and the processing unit is further configured to: calculate a first disparity value of the subject between the first image and the second image; and substitute the first disparity value into the relationship to determine the value of the first distance.
  • the at least two predetermined distances are 15 cm and 35 cm, respectively.
  • the at least two predetermined distances are n+1 distances, and the n+1 distances range from 7 cm to 200 cm.
  • the interval between two adjacent distances of the n+1 distances is 10 cm.
  • the determining of the relationship by the processing unit includes: using a quadratic fitting method to fit a quadratic curve for the sum of products of the at least two disparity terms and the at least two corresponding coefficients, so as to determine the relationship.
  • the at least two predetermined distances are 15 cm and 35 cm, respectively.
  • the dual camera module further includes a control unit configured to drive the motor of the dual camera module based on a motor code value to move the first camera and the second camera.
  • the dual camera module further includes a storage unit configured to store the at least two corresponding coefficients.
  • an electronic device including the above dual camera module is provided.
  • the distance parameter calculation method according to the present invention, and the dual camera module and the electronic device applying the distance parameter calculation method, can calculate the distance parameter based on the disparity value; the process is simple, saves time, and has relatively good focus stability in dark conditions.
  • FIG. 1 is a schematic flow chart of a parallax calculation method according to a first preferred embodiment of the present invention
  • FIG. 2 is a schematic flow chart of another example of a parallax calculation method according to a first preferred embodiment of the present invention
  • FIG. 3 is a schematic diagram of a parallax table according to a first preferred embodiment and a second preferred embodiment of the present invention
  • FIG. 4 is a schematic flow chart of still another example of a parallax calculation method according to a first preferred embodiment of the present invention.
  • Figure 5 is a schematic block diagram of a dual camera module in accordance with a first preferred embodiment of the present invention.
  • FIG. 6 is a schematic flowchart of a working process of a dual camera module according to a first preferred embodiment of the present invention
  • Figure 7 is a schematic block diagram of an electronic device in accordance with a first preferred embodiment of the present invention.
  • FIG. 8 is a schematic flowchart of a distance parameter calculation method according to a second preferred embodiment of the present invention.
  • FIG. 9 is a schematic flowchart of an example of a method of calculating a disparity value according to a second preferred embodiment of the present invention.
  • FIG. 10 is a schematic flowchart of another example of a method of calculating a disparity value according to a second preferred embodiment of the present invention.
  • FIG. 11 is a schematic block diagram of a dual camera module in accordance with a second preferred embodiment of the present invention.
  • Figure 13 is a schematic block diagram of an electronic device in accordance with a second preferred embodiment of the present invention.
  • the term “a” is understood to mean “at least one” or “one or more”; that is, in one embodiment the number of an element may be one, while in other embodiments the number of that element may be multiple, and the term “a” cannot be construed as limiting the quantity.
  • although ordinal numbers such as “first”, “second”, etc. will be used to describe various components, those components are not limited by these terms. These terms are only used to distinguish one component from another. For example, a first component could be termed a second component, and likewise a second component could be termed a first component, without departing from the teachings of the inventive concept.
  • the term "and/or" used herein includes any and all combinations of one or more of the associated listed items.
  • the currently popular disparity value algorithm is the Sum of Absolute Differences (SAD) algorithm, which calculates the difference for a single pixel point in a region of interest (ROI) in an image.
  • SAD: Sum of Absolute Differences
  • ROI: region of interest
  • a parallax calculation method for calculating a disparity value between pixels of a first image and a second image comprises: a) selecting a first region in the first image and establishing a first grayscale histogram of the first region, the first region being centered on a first pixel; b) adding a reference disparity value to the coordinate value of the first region in the first direction to obtain the coordinates of a second region in the second image, and establishing a second grayscale histogram of the second region; c) calculating a first mean squared difference of the differences between each row or column of the first grayscale histogram and the second grayscale histogram; d) increasing the reference disparity value by a predetermined step size and repeating steps b and c until the current first mean squared difference is greater than the previous first mean squared difference, and determining the previous first mean squared difference as the first minimum mean squared difference; e) determining a first disparity value corresponding to the first minimum mean squared difference; and f) using the first disparity value as the disparity value of the first pixel.
  • a disparity calculation method includes: S101, selecting a first region in a first image and establishing a first grayscale histogram of the first region, wherein the first region is centered on a first pixel; S102, adding a reference disparity value to the coordinate value of the first region in the first direction to obtain the coordinates of a second region in the second image, and establishing a second grayscale histogram of the second region; S103, calculating a first mean squared difference of each row or column of the first grayscale histogram and the second grayscale histogram; S104, increasing the reference disparity value by a predetermined step size and repeating steps S102 and S103 until the current first mean squared difference is greater than the previous first mean squared difference, and determining the previous first mean squared difference as the first minimum mean squared difference; S105, determining a first disparity value corresponding to the first minimum mean squared difference; and S106, using the first disparity value as the disparity value of the first pixel.
  • the second image has a parallax in the horizontal direction with respect to the first image.
  • for the first pixel in the first image, assuming its coordinate is (x, y), the coordinate of the same pixel in the second image is horizontally translated by a distance Δx, i.e. it is (x + Δx, y). The distance Δx is the parallax of the first pixel between the first image and the second image, also referred to as the disparity value of the first pixel.
  • to determine the disparity value of a pixel, a gray histogram of a specific region centered on the pixel is established in both the first image and the second image, and the two gray histograms are compared to measure the difference between the corresponding regions and thereby determine the disparity value of the pixel.
  • at the position of the pixel in the second image, the difference between the two is smallest. That is, scanning from the point with the same coordinate in the second image, the closer the scanned area is to the position of the pixel in the second image, the smaller the difference between the corresponding areas in the two images, with the minimum appearing at the location of the pixel in the second image; as the scanned area passes that position, the difference between the corresponding areas will continue to increase.
  • the lowest point of the curve represents the disparity value of the pixel between the first image and the second image.
  • in this embodiment, the gray histograms of the corresponding regions in the first image and the second image are established and compared to determine the difference between the corresponding regions in the two images, but other methods can also be used to compare the differences, as long as the resulting difference-versus-distance curve first decreases and then increases as the distance increases, so that the disparity value of the pixel can be determined from the lowest point of the curve.
  • in this embodiment, the mean squared difference between each row or column of the first gray histogram and the second gray histogram is calculated; other measures of the difference can also be employed in the case where the computing power is sufficiently strong.
  • the disparity calculation method according to the first preferred embodiment of the present invention includes: S201, selecting a first region in the first image and establishing a first grayscale histogram of the first region, wherein the first region is centered on the first pixel; S202, adding a reference disparity value x_i to the coordinate value of the first region in the first direction to obtain the coordinates of the second region in the second image, and establishing a second grayscale histogram of the second region; S203, calculating a mean squared difference Δx_i of the differences between each row or column of the first grayscale histogram and the second grayscale histogram; S204, increasing the reference disparity value by a predetermined step size to obtain x_{i+1} and repeating steps S202 and S203 to obtain Δx_{i+1}; S205, if Δx_{i+1} > Δx_i, the inflection point of the mean square error has been passed, and Δx_i is the first minimum mean squared difference; S206, determining the first disparity value x_i corresponding to the first minimum mean squared difference; and S207, using the first disparity value x_i as the disparity value of the first pixel.
  • the parallax calculation method shown in FIG. 2 is basically the same as the parallax calculation method of FIG. 1, except that step S104 in FIG. 1 is implemented specifically as steps S204 and S205.
  • with the parallax calculation method, it is possible to perform the calculation even when the luminance difference between the two images is large, the colors are inconsistent, and the pictures of the two images are not relatively flat, and to obtain relatively stable results.
  • the method further includes repeating steps a, b, c, d, e, and f for each pixel in the first image to obtain the disparity value of each pixel in the first image; and a disparity table between the first image and the second image is obtained based on the disparity values of the pixels in the first image.
  • in the disparity calculation method according to the first preferred embodiment of the present invention, after the disparity value of one pixel is obtained, all the pixels in the entire image are processed by the same method, thereby obtaining the disparity value of each pixel in the first image.
  • specifically, for the parallax calculation method shown in FIG. 1, steps S101 to S106 may be repeated for each pixel in a row-by-row scanning manner, and likewise steps S201 to S207 for the parallax calculation method shown in FIG. 2.
  • a disparity table between the first image and the second image can be established.
  • FIG. 3 is a schematic diagram of a parallax table in accordance with a first preferred embodiment of the present invention.
  • the gradation of the pixel in the disparity table is used to represent the disparity of the pixel, and the larger the gradation value, the higher the disparity of the pixel.
  • the largest gray value in FIG. 3 may indicate that the parallax of the pixel is infinity, and the smallest gray value may indicate that the parallax of the pixel is zero.
  • the disparity table shown in FIG. 3 is a schematic diagram for visually expressing the parallax and is not in itself sufficiently accurate; the disparity table established according to the first preferred embodiment of the present invention should take the form of a table of the specific first disparity values corresponding to each pixel, so as to represent the disparity value of each pixel accurately.
  • both the initial disparity value x_i and the step size d for increasing the disparity value can be selected by the user. The initial disparity value x_i can be set to a certain ratio of the spacing between the two cameras, such as 50%, 60%, 80%, and so on. The step size d is usually set to one pixel.
  • a method of coarse scanning first and then fine scanning may also be adopted. The step size d can first be set to a larger value, for example 10 pixels, and the scan performed at that step size to find the inflection point of the calculated mean square error. Because the scan is coarse, the true minimum of the mean square error may appear to the left or to the right of the minimum obtained in this way. Therefore, a fine scan can then be performed, in steps of one pixel, over the 20-pixel interval surrounding the disparity value corresponding to the minimum mean square error obtained by the coarse scan, thereby determining the exact position at which the minimum mean square error occurs.
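The coarse-then-fine scan can be sketched generically. Here `cost` stands in for the mean square error computed at a candidate disparity, and the step sizes follow the example in the text (a coarse step of 10 pixels, then a fine step of one pixel over the surrounding interval); the function itself is an illustrative assumption, not taken from the patent.

```python
def coarse_to_fine_min(cost, lo, hi, coarse_step=10):
    """Find the disparity minimizing cost(d): scan coarsely first, then
    rescan with a step of one pixel in the interval around the coarse minimum."""
    d0 = min(range(lo, hi + 1, coarse_step), key=cost)   # coarse scan
    fine_lo = max(lo, d0 - coarse_step)                  # fine-scan interval
    fine_hi = min(hi, d0 + coarse_step)
    return min(range(fine_lo, fine_hi + 1), key=cost)    # fine scan, step 1
```

This trades a small risk of missing a narrow minimum for roughly an order of magnitude fewer cost evaluations.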
  • the method further includes: scaling the first region to a predetermined size as a third region; repeating steps a, b, c, and d based on the third region to obtain a second minimum mean squared difference; comparing the first minimum mean squared difference with the second minimum mean squared difference; and, if the second minimum mean squared difference is less than the first minimum mean squared difference, determining the second minimum mean squared difference as the first minimum mean squared difference.
  • the method further includes: scaling the first area to a fourth area, wherein the size of the fourth area is larger than the size of the first area and the size of the third area is smaller than the size of the first area; repeating steps a, b, c, and d based on the fourth area to obtain a third minimum mean squared difference; comparing the first minimum mean squared difference, the second minimum mean squared difference, and the third minimum mean squared difference; and determining the smallest of the first, second, and third minimum mean squared differences as the first minimum mean squared difference.
  • the disparity calculation method according to the first preferred embodiment of the present invention includes: S301, selecting a first region in the first image and establishing a first grayscale histogram of the first region, wherein the first region is centered on the first pixel; S302, adding a reference disparity value to the coordinate value of the first region in the first direction to obtain the coordinates of the second region in the second image, and establishing a second grayscale histogram of the second region; S303, calculating a first mean squared difference of the difference between each row or column of the first grayscale histogram and the second grayscale histogram; S304, increasing the disparity value by a predetermined step size and repeating steps S302 and S303 until the obtained mean squared difference increases, to obtain a first minimum mean squared difference D1; S305, reducing the first area by a predetermined size to a third area and enlarging it by a predetermined size to a fourth area. In other words, the window size of the region is scaled and the calculation repeated, and the smallest mean squared difference is selected as the final result for determining the disparity value.
  • the accuracy of the disparity value calculation in the parallax calculation method according to the first preferred embodiment of the present invention is improved.
  • the step of scaling the window size may be omitted, thereby realizing fast calculation of the disparity value.
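The window-scaling refinement can likewise be sketched abstractly. `min_mse_for_window` is a hypothetical callback standing in for running steps a through d at a given window size and returning the minimum mean squared difference it finds; the halving/doubling of the window is an assumed choice of the "predetermined size".

```python
def best_over_windows(min_mse_for_window, base_size):
    """Evaluate the minimum mean squared difference for a smaller (third-region),
    original (first-region), and larger (fourth-region) matching window,
    and keep the window whose minimum is smallest."""
    sizes = [base_size // 2, base_size, base_size * 2]  # assumed scale factors
    results = {size: min_mse_for_window(size) for size in sizes}
    best_size = min(results, key=results.get)
    return best_size, results[best_size]
```

Skipping this step, as the text notes, trades some accuracy for a roughly threefold reduction in computation.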
  • the first direction is the row direction or the column direction of the image.
  • the two cameras are usually arranged horizontally, and thus the parallax between the two images is usually in the horizontal direction.
  • the first preferred embodiment of the present invention is not limited thereto.
  • if the two cameras are arranged vertically, the parallax between the two images will be in the vertical direction, and the scanning direction should accordingly be the column direction of the image. Except for the different scanning direction, the specific calculation process is the same for the row direction as for the column direction, and is therefore not described again.
  • before step a, the method further comprises: scaling the first image and the second image to the same size.
  • the first image and the second image are preferably scaled to the same size before the specific calculation process. For example, if the size of the first image is larger than the second image, the second image may be enlarged to the size of the first image and then calculated.
  • since the parallax calculation method according to the first preferred embodiment of the present invention is not affected by the specific size of the image, in the case where the sizes of the first image and the second image are already the same it is not necessary to scale the images, thereby speeding up the processing rate.
  • if the sizes of the first image and the second image differ, the coordinates need to be converted. Assuming the first image has a width W_1 and a height H_1, and the second image has a width W_2 and a height H_2, then for coordinates (x_1, y_1) in the first image, the coordinates (x_2, y_2) of the corresponding pixel in the second image should satisfy: x_2 = x_1 × W_2 / W_1 and y_2 = y_1 × H_2 / H_1.
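A minimal helper for this proportional coordinate conversion, assuming corresponding pixels scale linearly with image width and height (the function name and signature are illustrative):

```python
def convert_coords(x1, y1, w1, h1, w2, h2):
    """Map pixel (x1, y1) in a w1 x h1 image to the corresponding
    position in a w2 x h2 image of the same content by proportional scaling."""
    return x1 * w2 / w1, y1 * h2 / h1
```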
• Before step a, the method further includes converting the first image and the second image into images of the same color format.
  • the first image is a color image and the second image is a black and white image, and so on.
• In this case, the first image and the second image are preferably converted into images of the same color format, such as images in the RGB color format, before the specific calculation process.
• In the case where the first image and the second image are themselves images of the same color format, such as images in the RGB color format, it is not necessary to convert them again, and the subsequent calculation process can be performed directly, thereby speeding up the processing rate.
• Before step a, the method further comprises: acquiring original image data information from each camera of the dual camera; and converting the acquired original images into the first image and the second image suitable for display processing using an interpolation algorithm.
• Specifically, the original image data information can be acquired by image processing software; it is transmitted from the bottom layer of the image sensor of each camera of the dual camera and is in units of frames.
  • the original image data information can accurately reproduce the image information acquired by the image sensor of the camera, but may not be suitable for image processing.
• Therefore, the acquired original images are converted, using an interpolation algorithm, into images suitable for display processing, for example, 32-bit BMP images suitable for display processing by a computer.
• Before step a, the method further comprises: converting the first image and the second image into a first grayscale image and a second grayscale image; and, according to the required disparity map size, scaling the first grayscale image and the second grayscale image respectively to the disparity map size.
• That is, in the disparity calculation method, it is necessary to first scale the first grayscale image and the second grayscale image to the required disparity map size, and then calculate the disparity value.
• For example, the first grayscale image L1 and the second grayscale image R1 are first reduced to a smaller first grayscale image L2 and second grayscale image R2, and the calculation of the disparity value is performed on the first grayscale image L2 and the second grayscale image R2. This is because scaling an image affects the disparity value, so the disparity value of a scaled image cannot be applied to the original-size image.
  • the method further includes synthesizing the first image and the second image into a three-dimensional image based on the parallax table.
  • the first image, the second image, and the disparity table as single channel data may be integrated into a three-channel image data output for further processing by the processor.
• For example, image synthesis can be performed on this basis to synthesize a three-dimensional image or the like.
  • the further processing is not limited to performing image synthesis into a three-dimensional image, but also performing other image processing based on the first image, the second image, and the parallax table.
• The first preferred embodiment of the present invention is not intended to impose any restriction on this.
  • the parallax can be quickly calculated without correcting the image.
• Moreover, the parallax calculation method according to the first preferred embodiment of the present invention can perform the calculation even in a state where the luminance difference between the two images is large, the colors are inconsistent, and the pictures of the two images are not relatively flat, and obtains relatively stable results.
  • the parallax calculation method according to the first preferred embodiment of the present invention has strong compatibility, and the test result is good, and the correction time of one of the cameras of the dual camera module can be saved, which is convenient for the user to use.
• According to another aspect of the present invention, a dual camera module is provided, including: a first camera for acquiring a first image; a second camera for acquiring a second image; and a processing unit configured to calculate a disparity value between pixels of the first image and the second image, specifically including: a) selecting a first region in the first image, and establishing a first grayscale histogram of the first region, where the first region is centered on a first pixel; b) adding a disparity value to the coordinate value of the first region in a first direction to obtain the coordinates of a second region in the second image, and establishing a second grayscale histogram of the second region; c) calculating a first mean squared difference of each row or column of the first grayscale histogram and the second grayscale histogram; d) increasing the disparity value by a predetermined step and repeating steps b and c until the obtained mean squared difference increases, to obtain a first minimum mean squared difference; e) determining a first disparity value corresponding to the first minimum mean squared difference as the disparity value between the pixels of the first image and the second image.
  • FIG. 5 is a schematic block diagram of a dual camera module in accordance with a first preferred embodiment of the present invention.
• As shown in FIG. 5, the dual camera module 100 includes: a first camera 110 for acquiring a first image; a second camera 120 for acquiring a second image; and a processing unit 130 configured to calculate a disparity value between pixels of the first image acquired by the first camera 110 and the second image acquired by the second camera 120, specifically including: a) selecting a first region in the first image, and establishing a first grayscale histogram of the first region, the first region being centered on a first pixel; b) adding a disparity value to the coordinate value of the first region in a first direction to obtain the coordinates of a second region in the second image, and establishing a second grayscale histogram of the second region; c) calculating a first mean squared difference of each row or column of the first grayscale histogram and the second grayscale histogram; d) increasing the disparity value by a predetermined step and repeating steps b and c until the obtained mean squared difference increases, to obtain a first minimum mean squared difference.
  • the processing unit is further configured to: repeat, for each pixel in the first image, steps a, b, c, d, e, and f to obtain each of the first images a disparity value of one pixel; and, based on a disparity value of each pixel in the first image, a disparity table between the first image and the second image is obtained.
• In an example, the processing unit is further configured to: scale the first area to a predetermined size to obtain a third area; repeat the above steps a, b, c and d based on the third area to obtain a second minimum mean squared difference; compare the first minimum mean squared difference with the second minimum mean squared difference; and, in the case where the second minimum mean squared difference is less than the first minimum mean squared difference, determine the second minimum mean squared difference as the first minimum mean squared difference.
• In an example, the processing unit is further configured to: scale the first area to obtain a fourth area, wherein the size of the fourth area is greater than the size of the first area, and the size of the third area is smaller than the size of the first area; repeat the above steps a, b, c and d based on the fourth area to obtain a third minimum mean squared difference; compare the first minimum mean squared difference, the second minimum mean squared difference and the third minimum mean squared difference; and determine the smallest of the first minimum mean squared difference, the second minimum mean squared difference and the third minimum mean squared difference as the first minimum mean squared difference.
  • the first direction is the row direction or the column direction of the image.
  • the processing unit is further used to scale the first image and the second image to the same size before the step a.
  • the processing unit is further configured to convert the first image and the second image into images of the same color format before the step a.
• In an example, the processing unit is further configured to: acquire original image data information from each camera of the dual camera before step a; and convert the acquired original images into the first image and the second image suitable for display processing using an interpolation algorithm.
  • the processing unit is further configured to convert the first image and the second image into a first gray image and a second gray image before the step a; and, according to the required The first grayscale image and the second grayscale image are respectively scaled to the parallax map size by the parallax map size.
  • the processing unit is further configured to synthesize the first image and the second image into a three-dimensional image based on the parallax table.
  • FIG. 6 is a schematic flow chart showing the operation of the dual camera module according to the first preferred embodiment of the present invention.
• As shown in FIG. 6, in operation, the motor code and the distance parameter are first corrected.
• Next, original image data information, that is, a RAW map, is acquired from each camera.
• The images are then scaled to the target size.
• Next, the disparity value of each pixel is calculated.
• A BMP depth map is then established based on the disparity value of each pixel.
• Finally, the left and right images are synthesized according to the depth map, thereby completing image synthesis.
• According to still another aspect of the present invention, an electronic device includes a dual camera module, and the dual camera module includes: a first camera for acquiring a first image; a second camera for acquiring a second image; and a processing unit configured to calculate a disparity value between pixels of the first image and the second image, specifically comprising: a) selecting a first region in the first image, and establishing a first grayscale histogram of the first region, the first region being centered on a first pixel; b) adding a disparity value to the coordinate value of the first region in a first direction to obtain the coordinates of a second region in the second image, and establishing a second grayscale histogram of the second region; c) calculating a first mean squared difference of each row or column of the first grayscale histogram and the second grayscale histogram; d) increasing the disparity value by a predetermined step and repeating steps b and c until the obtained mean squared difference increases, to obtain a first minimum mean squared difference; e) determining a first disparity value corresponding to the first minimum mean squared difference as the disparity value between the pixels of the first image and the second image.
  • the processing unit is further configured to: repeat, for each pixel in the first image, steps a, b, c, d, e, and f to obtain each pixel in the first image And a disparity table between the first image and the second image based on a disparity value of each pixel in the first image.
• In an example, the processing unit is further configured to: scale the first area to a predetermined size to obtain a third area; repeat the above steps a, b, c and d based on the third area to obtain a second minimum mean squared difference; compare the first minimum mean squared difference with the second minimum mean squared difference; and, in the case where the second minimum mean squared difference is less than the first minimum mean squared difference, determine the second minimum mean squared difference as the first minimum mean squared difference.
  • the processing unit is further configured to: scale the first area to a fourth area, wherein the size of the fourth area is larger than the size of the first area, And the size of the third area is smaller than the size of the first area; repeating the above steps a, b, c and d based on the fourth area to obtain a third minimum mean square difference value; comparing the first minimum mean square difference value, the a second minimum mean squared difference and the third minimum mean squared difference; and, a smallest one of the first minimum mean squared difference, the second minimum mean squared difference, and the third minimum mean squared difference Determined as the first minimum mean squared difference.
  • the first direction is a row direction or a column direction of an image.
  • the processing unit is further configured to scale the first image and the second image to the same size before the step a.
  • the processing unit is further configured to convert the first image and the second image into images of the same color format before the step a.
• In an example, the processing unit is further configured to: acquire original image data information from each camera of the dual camera before step a; and convert the acquired original images into the first image and the second image suitable for display processing using an interpolation algorithm.
  • the processing unit is further configured to convert the first image and the second image into a first grayscale image and a second grayscale image before the step a; and, according to the required disparity map The size, the first grayscale image and the second grayscale image are respectively scaled to the parallax map size.
  • the processing unit is further configured to synthesize the first image and the second image into a three-dimensional image based on the disparity table.
  • FIG. 7 is a schematic block diagram of an electronic device in accordance with a first preferred embodiment of the present invention.
  • the electronic device 200 according to the first preferred embodiment of the present invention includes a dual camera module 210 that can acquire a first image and a second image.
• In addition, the electronic device 200 can include a processor 220 configured to calculate a disparity value between pixels of the first image and the second image and to perform image synthesis based on the disparity value; that is, the processor 220 can integrate the function of the processing unit 130 of the dual camera module described above.
  • the processor 220 includes, for example, a computer, a microprocessor, an integrated circuit, or a programmable logic device.
  • the electronic device 200 may further include a memory 230 for storing original image data or processed image data.
  • the memory 230 can include volatile memory such as static random access memory (S-RAM) and dynamic random access memory (D-RAM), and non-volatile memory such as flash memory, read only memory (ROM). And erasable programmable read only memory (EPROM) and electrically erasable programmable read only memory (EEPROM).
• The electronic device of the first preferred embodiment of the present invention may be any of various electronic devices including a dual camera module, including but not limited to a smart phone, a tablet personal computer (PC), a mobile phone, a video phone, an e-book reader, a desktop PC, a laptop PC, a netbook PC, a personal digital assistant (PDA), a portable multimedia player (PMP), an MP3 player, a mobile medical device, a camera, a wearable device (e.g., a head mounted device (HMD), electronic clothes, an electronic bracelet, an electronic necklace, an electronic accessory, an electronic tattoo or a smart watch), and so on.
  • the parallax can be quickly calculated without correcting the image.
• The parallax calculation method according to the first preferred embodiment of the present invention, and the dual camera module and the electronic device to which the parallax calculation method is applied, can perform the calculation even when the brightness difference between the two images is large, the colors are inconsistent, and the pictures of the two images are not relatively flat, and obtain relatively stable results.
• The parallax calculation method according to the first preferred embodiment of the present invention, and the dual camera module and the electronic device using the parallax calculation method, have strong compatibility and good test results, and can save the correction time of one of the cameras of the dual camera module, which is convenient for the user.
• FIG. 8 is a schematic flow chart of a distance parameter calculation method according to a second preferred embodiment of the present invention. As shown in FIG. 8, the distance parameter calculation method according to the second preferred embodiment of the present invention is used to calculate a distance parameter related to the disparity value between the first image and the second image captured by the dual camera module, and specifically includes: S1010, establishing a relationship between the distance parameter and the disparity value, wherein the relationship is a sum of products of at least two disparity terms and at least two corresponding coefficients, and each disparity term is a power of the disparity value; S1020, photographing the subject with the dual camera module at at least two predetermined distances, and calculating at least two disparity values between the first image and the second image of the subject; and S1030, calculating the at least two corresponding coefficients based on the at least two predetermined distances and the at least two disparity values, thereby determining the relationship.
• For the calculation of the disparity value, a sum of absolute differences (SAD) algorithm can be employed, which finds the difference for a single pixel point in a region of interest (ROI) in an image.
  • the second image has a parallax in the horizontal direction with respect to the first image.
• For a first pixel in the first image, assuming its coordinates are (x, y), the coordinate position of the same pixel in the second image is horizontally translated by a certain distance relative to the pixel in the first image, i.e., (x + Δx, y). The distance Δx is the disparity value of the first pixel between the first image and the second image, also referred to as the disparity value of the first pixel.
• Specifically, the grayscale histogram of a specific region centered on the pixel is scanned in the first image and the second image, and the difference between the two grayscale histograms is compared to determine the disparity value of the pixel. That is, scanning starts from the point with the same coordinates in the second image; the closer the scanned area is to the position of the pixel in the second image, the smaller the difference between the corresponding areas in the two images, and the minimum value appears at the position of the pixel in the second image. After the scanned area passes the position of the pixel in the second image, the difference between the corresponding areas in the two images will continue to increase.
  • the lowest point of the curve represents the disparity value of the pixel between the first image and the second image.
• It should be noted that in the above calculation, the mean squared error of the difference between each row or column of the first gray histogram and the second gray histogram is calculated, rather than the difference of a single pixel.
  • the pixel-by-pixel difference calculation method can also be employed in the case where the computing power is sufficiently strong.
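The scanning procedure described above can be sketched as follows. This is a simplified illustration, not the patented implementation: the window size, the number of histogram bins, and the function names are assumptions, and the whole window histogram is compared by a single mean squared error rather than row by row:

```python
import numpy as np

def window_hist(img, cx, cy, half=8, bins=32):
    """Grayscale histogram of the (2*half+1)-pixel square window centered on (cx, cy)."""
    win = img[cy - half:cy + half + 1, cx - half:cx + half + 1]
    hist, _ = np.histogram(win, bins=bins, range=(0, 256))
    return hist.astype(np.float64)

def pixel_disparity(left, right, cx, cy, max_disp=64, step=1, half=8):
    """Scan the right image in the row direction, comparing window histograms
    by mean squared error, and return the disparity at which the error stops
    decreasing (the inflection point described in the text)."""
    ref = window_hist(left, cx, cy, half)
    best_d, best_mse = 0, np.inf
    for d in range(0, max_disp + 1, step):
        if cx + d + half >= right.shape[1]:
            break
        mse = np.mean((ref - window_hist(right, cx + d, cy, half)) ** 2)
        if mse < best_mse:
            best_d, best_mse = d, mse
        elif mse > best_mse:
            break  # the error has started to rise: the minimum has been passed
    return best_d
```

For a right image that is simply the left image shifted horizontally, the mean squared error falls to its minimum at the true disparity and rises immediately afterwards, which is the inflection behaviour the method relies on.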
  • the above exemplary parallax calculation method can perform calculation in a state where the luminance difference between the two images is large, the colors are inconsistent, and the pictures of the two images are not relatively flat, and a relatively stable result is obtained.
  • Figure 3 is a schematic illustration of a parallax table in accordance with the present invention.
  • the gradation of the pixel in the disparity table is used to represent the disparity of the pixel, and the larger the gradation value, the higher the disparity of the pixel.
  • the largest gray value in FIG. 3 may indicate that the parallax of the pixel is infinity, and the smallest gray value may indicate that the parallax of the pixel is zero.
  • the disparity table shown in FIG. 3 is a schematic diagram for visually expressing the parallax, which is actually not accurate enough.
• The disparity table established according to the second preferred embodiment of the present invention should be in the form of a table recording the specific disparity value of each pixel, thereby accurately indicating the disparity value corresponding to each pixel.
  • both the initial disparity value x i and the step size d for increasing the disparity value can be selected by the user.
  • the initial disparity value x i can be set to a certain ratio of the spacing between the two cameras, such as 50%, 60%. , 80%, and so on.
  • the step size d is usually set to one pixel.
• In order to speed up the calculation, the step size d can first be set to a larger value, for example, 10 pixels, and the scan is performed at this step size to find the inflection point of the calculated mean squared error.
• Since the minimum of the actual mean squared error may appear either to the left or to the right of the minimum obtained at this coarse step, a fine scan can then be performed in steps of one pixel over the 20-pixel interval centered on the disparity value corresponding to the coarse minimum, that is, from 10 pixels before it to 10 pixels after it, thereby determining the exact position at which the minimum mean squared error occurs.
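The coarse-then-fine search can be sketched as below, with the per-disparity mean squared error abstracted into a cost function (the helper name and default values are illustrative only):

```python
def coarse_to_fine_min(cost, max_disp=64, coarse_step=10):
    """Locate the disparity minimizing cost(d): first scan with a large step,
    then rescan one pixel at a time over the interval extending one coarse
    step on either side of the coarse minimum, since the true minimum may
    lie on either side of it."""
    coarse = min(range(0, max_disp + 1, coarse_step), key=cost)
    lo = max(0, coarse - coarse_step)
    hi = min(max_disp, coarse + coarse_step)
    return min(range(lo, hi + 1), key=cost)
```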
  • FIG. 10 is a schematic flow chart of another example of a method of calculating a disparity value according to a second preferred embodiment of the present invention.
• As shown in FIG. 10, a method for calculating a disparity value according to the second preferred embodiment of the present invention includes: S3010, selecting a first region in the first image, and establishing a first grayscale histogram of the first region, wherein the first region is centered on a first pixel; S3020, adding a disparity value to the coordinate value of the first region in a first direction to obtain the coordinates of a second region in the second image, and establishing a second grayscale histogram of the second region; S3030, calculating a first mean squared difference of each row or column of the first grayscale histogram and the second grayscale histogram; S3040, increasing the disparity value by a predetermined step and repeating steps S3020 and S3030 until the obtained mean squared difference increases, to obtain a first minimum mean squared difference D1; S3050, reducing the first area by a predetermined size to obtain a third area, and enlarging it by the predetermined size to obtain a fourth area; S3060, repeating steps S3010 to S3040 based on the third area and the fourth area, respectively, to obtain a second minimum mean squared difference D2 and a third minimum mean squared difference D3; S3070, comparing the first minimum mean squared difference D1, the second minimum mean squared difference D2 and the third minimum mean squared difference D3; and S3080, determining the smallest of the first minimum mean squared difference D1, the second minimum mean squared difference D2 and the third minimum mean squared difference D3 as the final first minimum mean squared difference, the disparity value corresponding to which is the disparity value of the first pixel.
• That is, in the above example, the window size of the region is scaled and the calculation is repeated, and the smallest mean squared difference value is selected to determine the final disparity value. In this way, the accuracy of the disparity value calculation is improved.
  • the step of scaling the window size may be omitted, thereby realizing fast calculation of the disparity value.
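The selection of the smallest minimum among the window sizes can be sketched as follows, with the single-window search abstracted into a callable (the names and window half-sizes are illustrative, not from the specification):

```python
def best_disparity_multiscale(disparity_search, window_halves=(8, 16, 32)):
    """Run a single-window disparity search at several window half-sizes and
    keep the (disparity, min_mse) pair with the smallest minimum mean squared
    error. `disparity_search(half)` must return such a pair."""
    return min((disparity_search(h) for h in window_halves), key=lambda r: r[1])
```

When speed matters more than accuracy, this wrapper is simply skipped and the single-window result is used directly, as the text notes.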
  • the first direction is the row direction or the column direction of the image.
  • the two cameras are usually arranged horizontally, and thus the parallax between the two images is usually in the horizontal direction.
  • the second preferred embodiment of the present invention is not limited thereto.
• If the two cameras are arranged vertically, the parallax between the two images will be in the vertical direction, and thus the scanning direction should be the column direction of the image. Apart from the different scanning directions, the specific calculation process is the same for the row direction and the column direction, and is therefore not described again.
• Before the calculation, the method further includes: scaling the first image and the second image to the same size.
  • the first image and the second image are preferably scaled to the same size prior to a particular calculation process. For example, if the size of the first image is larger than the second image, the second image may be enlarged to the size of the first image and then calculated.
  • the above-described exemplary parallax calculation method is not affected by the specific size of the image, in the case where the sizes of the first image and the second image are the same, it is not necessary to scale the image, thereby speeding up the processing rate.
• When the sizes of the first image and the second image differ, the coordinates need to be converted.
• Suppose the first image has a width W1 and a height H1, and the second image has a width W2 and a height H2.
• Then, for the coordinates (x1, y1) in the first image, the coordinates (x2, y2) of the corresponding pixel in the second image should satisfy: x2 = x1 · W2 / W1 and y2 = y1 · H2 / H1.
• Before the calculation, the method further includes converting the first image and the second image into images of the same color format.
  • the first image is a color image and the second image is a black and white image, and so on.
• In this case, the first image and the second image are preferably converted into images of the same color format, such as images in the RGB color format, before the specific calculation process.
• In the case where the first image and the second image are themselves images of the same color format, such as images in the RGB color format, it is not necessary to convert them again, and the subsequent calculation process can be performed directly, thereby speeding up the processing rate.
  • original image data information can be acquired by image processing software, which is transmitted from the bottom layer of the image sensor of each camera of the dual camera, and is in units of frames.
  • the original image data information can accurately reproduce the image information acquired by the image sensor of the camera, but may not be suitable for image processing.
• Therefore, the acquired original images are converted, using an interpolation algorithm, into images suitable for display processing, for example, 32-bit BMP images suitable for display processing by a computer.
• Before the calculation, the method further comprises: converting the first image and the second image into a first grayscale image and a second grayscale image; and, according to the required disparity map size, scaling the first grayscale image and the second grayscale image respectively to the disparity map size.
• For example, the first grayscale image L1 and the second grayscale image R1 are first reduced to a smaller first grayscale image L2 and second grayscale image R2, and the calculation of the disparity value is performed on the first grayscale image L2 and the second grayscale image R2. This is because scaling an image affects the disparity value, so the disparity value of a scaled image cannot be applied to the original-size image.
  • the parallax can be quickly calculated without correcting the image.
  • the above exemplary parallax calculation method can perform calculation in a state where the luminance difference between the two images is large, the colors are inconsistent, and the pictures of the two images are not relatively flat, and a relatively stable result is obtained.
  • the above exemplary parallax calculation method has strong compatibility, good test results, and can save the correction time of one of the cameras of the dual camera module, and is convenient for the user to use.
• In an example, the method further includes: photographing a subject with the dual camera module at a first distance, and calculating a first disparity value of the subject between the first image and the second image; and introducing the first disparity value into the relationship to obtain the value of the first distance.
• That is, the disparity value of the subject between the first image and the second image can also be calculated by employing the above-described exemplary disparity calculation method, and the specific value of the distance parameter of the dual camera module can then be obtained according to the relationship.
• In an example, the distance parameter is the depth of field of the subject, and the relationship is: Y = A / X + B   (1)
  • Y is the distance parameter
  • X is the disparity value
  • a and B are coefficients.
• In binocular stereo vision, the depth of field of the subject, that is, the distance between the subject and the dual camera module, is inversely related to the disparity value: Z = f · T / (x l − x r)   (2)
  • Z is the distance from the subject to the dual camera module
  • f is the focal length of the dual camera module
• T is the baseline, that is, the distance between the optical centers of the two cameras
  • x l and x r are in the left and right images, respectively The coordinates of the subject.
• The depth of field of the subject thus has an inverse relationship with the disparity value, so the relationship between the depth of field and the disparity value can be expressed by expression (1), in which the coefficient A corresponds to f · T in expression (2), and B is a deviation value for correction.
  • the subjects are photographed at 15 cm and 35 cm, respectively, and the corresponding two disparity values are calculated with the focus clear. After that, the two distance values and the two disparity values are respectively taken into the expression (1), thereby solving the coefficients A and B.
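The two-shot calibration can be sketched as follows, taking the relationship in the form Y = A / X + B of expression (1); the function names and the sample numbers in the example are illustrative, not measured values:

```python
def fit_inverse_model(x1, y1, x2, y2):
    """Solve Y = A / X + B from two (disparity, distance) calibration pairs,
    e.g. shots of the subject taken at 15 cm and 35 cm."""
    a = (y1 - y2) / (1.0 / x1 - 1.0 / x2)
    b = y1 - a / x1
    return a, b

def distance_from_disparity(x, a, b):
    """Depth of field (distance to the subject) predicted from a disparity."""
    return a / x + b
```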
  • the depth of field of the subject can be calculated based on the disparity value of the subject between the first image and the second image.
• However, the focal length f changes as the motor focuses at different depths of field, and thus the value calculated at far focus may have a certain error.
• In another example, the distance parameter is the depth of field of the subject, and the relationship is: Y = A1·X + A2·X² + … + An·Xⁿ + B   (3)
  • Y is the distance parameter
  • X is the disparity value
  • a 1 , A 2 , ..., A n and B are coefficients
  • n is a natural number greater than or equal to 2.
• That is, in order to reduce the error described above, the distance parameter calculation method establishes a polynomial in multiple powers of the disparity value when calculating the depth of field, as shown in expression (3) above.
• Here, the exponent n in expression (3) is preferably no greater than 7, because it has been experimentally shown that a polynomial up to the seventh power of the disparity value can represent the depth of field of the subject relatively accurately. In this case, the coefficients are A1, A2, ..., A7 and B. Therefore, it is necessary to photograph the subject with the dual camera module at 8 distances and to calculate the corresponding 8 disparity values; the 8 distance values and 8 disparity values are then brought into expression (4), and the coefficients A1, A2, ..., A7 and B are calculated.
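A least-squares sketch of this polynomial calibration, using NumPy's `polyfit` as a stand-in for the curve fitting step (the helper names, default degree, and the sample data in the example are illustrative assumptions):

```python
import numpy as np

def fit_poly_model(disparities, distances, degree=7):
    """Fit Y = A1*X + ... + An*X**n + B by least squares; with degree+1
    samples the system is solved exactly, with more samples it is a fitted
    curve. Returns coefficients from the highest power down to the constant B."""
    return np.polyfit(disparities, distances, degree)

def depth_from_disparity(x, coeffs):
    """Evaluate the fitted polynomial at disparity x."""
    return np.polyval(coeffs, x)
```

Note that a degree-7 fit on only 8 points can be numerically ill-conditioned; scaling the disparities or collecting more samples improves the conditioning in practice.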
  • the at least two predetermined distances are respectively n+1 distances, and the n+1 distances range from 7 cm to 200 cm.
  • the interval between two adjacent distances of the n+1 distances is 10 cm.
  • the step of determining the relationship specifically includes: fitting, by a quadratic fitting method, the curve of the sum of products of the at least two disparity terms and the at least two corresponding coefficients, to determine the relationship.
  • the depth of field of the subject is calculated based on the multi-term polynomial of the disparity value.
  • the range of distances at which the subject is photographed is set between 7 cm and 200 cm, with a photograph taken every 10 cm within that range.
  • a quadratic fitting method is used to fit the multi-term curve, thereby accurately expressing the relationship between the depth of field of the subject and the disparity value as a curve.
  • the distance parameter is a motor code value of the dual camera module, and the relationship is Y = A×X + B, where:
  • Y is the distance parameter
  • X is the disparity value
  • A and B are coefficients.
  • the at least two predetermined distances are 15 cm and 35 cm, respectively.
  • the motor code value can also be calculated.
  • the motor code value is a value for controlling the driving of the motor, that is, the distance the motor moves from its initial position.
  • the motor code value is centered on zero; positive and negative values indicate movement toward and away from the subject, respectively.
  • the motor code value is inversely proportional to the distance of the subject.
  • the distance of the subject is inversely proportional to the disparity value.
  • the two disparity values of the subject between the first image and the second image are calculated, and the disparity values and distance values are substituted into expression (5), thereby obtaining the relationship between the motor code value and the disparity value.
  • the motor code value can then be calculated from the disparity value of the subject between the first image and the second image, and the motor is moved based on the motor code value for fast focusing.
  • the moving distance of the motor is very limited; therefore, in the specific focusing process, expression (5) can be invoked at near focus for calculation, while at far focus a fixed value can be written directly.
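A minimal sketch of this fast-focus scheme, assuming the linear form Y = A×X + B of expression (5). All numbers below — the two calibration samples, the far-focus code, and the near/far disparity threshold — are hypothetical placeholders, not values from the patent.

```python
# Calibrate Y = A * X + B (motor code Y, disparity X) from two shots.
def calibrate_motor_model(x1, y1, x2, y2):
    """Solve the linear motor-code model from two (disparity, code) samples."""
    A = (y1 - y2) / (x1 - x2)
    B = y1 - A * x1
    return A, B

# Hypothetical samples taken at 15 cm and 35 cm.
A, B = calibrate_motor_model(40.0, 320.0, 16.0, 80.0)

FAR_FOCUS_CODE = 0          # fixed code written directly at far focus
FAR_FOCUS_DISPARITY = 5.0   # hypothetical threshold separating near/far focus

def motor_code(disparity):
    """Motor code for fast focusing: linear model near, fixed value far."""
    if disparity < FAR_FOCUS_DISPARITY:  # far focus: skip the calculation
        return FAR_FOCUS_CODE
    return A * disparity + B
```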
  • the coefficient value may be stored in an operation processor or a storage unit.
  • the entire expression containing the coefficients may be stored in the storage unit, and the expression is called from the storage unit for calculation when the distance parameter needs to be calculated.
  • the distance parameter calculation method according to the second preferred embodiment of the present invention calculates the distance parameter based on the disparity value, the process is simple, saves time, and has relatively good dark state focus stability.
  • the fast focus technique according to the second preferred embodiment of the present invention has better dark-state focus stability than the phase detection autofocus (PDAF) technology on the Qualcomm platform.
  • a dual camera module including: a first camera for acquiring a first image; a second camera for acquiring a second image; and a processing unit configured to calculate a distance parameter related to the dual camera module based on a disparity value between the first image and the second image, where the processing unit is specifically configured to: establish a relationship between the distance parameter and the disparity value, the relationship being a sum of products of at least two disparity terms and at least two corresponding coefficients, each disparity term being a power of the disparity value; photograph a subject with the dual camera module at at least two predetermined distances, and calculate at least two disparity values between the first image and the second image; and calculate the at least two corresponding coefficients based on the at least two predetermined distances and the at least two disparity values, thereby determining the relationship.
  • a dual camera module 1000 includes: a first camera 1100 for acquiring a first image; a second camera 1200 for acquiring a second image; and a processing unit 1300 configured to calculate a distance parameter related to the dual camera module 1000 based on a disparity value between the first image acquired by the first camera 1100 and the second image acquired by the second camera 1200; the processing unit 1300 is specifically configured to calculate the at least two corresponding coefficients from the at least two disparity values to determine the relationship.
  • the first camera and the second camera photograph a subject at a first distance; and the processing unit is further configured to: calculate a first disparity value of the subject between the first image and the second image; and substitute the first disparity value into the relationship to obtain the value of the first distance.
  • the at least two predetermined distances are 15 cm and 35 cm, respectively.
  • the at least two predetermined distances are respectively n+1 distances, and the n+1 distances range from 7 cm to 200 cm.
  • the interval between two adjacent distances of the n+1 distances is 10 cm.
  • the determining of the relationship by the processing unit specifically includes: using a quadratic fitting method to fit the curve of the sum of products of the at least two disparity terms and the at least two corresponding coefficients, to determine the relationship.
  • the at least two predetermined distances are 15 cm and 35 cm, respectively.
  • the dual camera module further includes: a control unit configured to drive the motor of the dual camera module based on the motor code value, to move the first camera and the second camera.
  • a storage unit is configured to store the at least two corresponding coefficients.
  • FIG. 12 is a schematic flow chart showing the operation of a dual camera module according to a second preferred embodiment of the present invention.
  • the motor code and the distance parameter are first calibrated.
  • the raw image data (i.e., a RAW image) is acquired and converted into a BMP image suitable for computer processing.
  • the disparity value of the subject is calculated.
  • the depth of field of the subject is calculated.
  • the position of the motor is calculated.
  • an electronic device includes a dual camera module, and the dual camera module includes: a first camera for acquiring a first image; a second camera for acquiring a second image; and a processing unit configured to calculate a distance parameter related to the dual camera module based on a disparity value between the first image and the second image, where the processing unit is specifically configured to: establish a relationship between the distance parameter and the disparity value, the relationship being a sum of products of at least two disparity terms and at least two corresponding coefficients, each disparity term being a power of the disparity value;
  • at at least two predetermined distances, the dual camera module photographs a subject, and at least two disparity values of the subject between the first image and the second image are calculated; and, based on the at least two predetermined distances and the at least two disparity values, the at least two corresponding coefficients are calculated to determine the relationship.
  • the first camera and the second camera photograph a subject at a first distance; and the processing unit is further configured to: calculate a first disparity value of the subject between the first image and the second image; and substitute the first disparity value into the relationship to obtain the value of the first distance.
  • the at least two predetermined distances are 15 cm and 35 cm, respectively.
  • the at least two predetermined distances are respectively n+1 distances, and the n+1 distances range from 7 cm to 200 cm.
  • the interval between two adjacent distances of the n+1 distances is 10 cm.
  • the determining of the relationship by the processing unit specifically includes: using a quadratic fitting method to fit the curve of the sum of products of the at least two disparity terms and the at least two corresponding coefficients, to determine the relationship.
  • the at least two predetermined distances are 15 cm and 35 cm, respectively.
  • the dual camera module further includes: a control unit configured to drive the motor of the dual camera module based on the motor code value, to move the first camera and the second camera.
  • a storage unit is configured to store the at least two corresponding coefficients.
  • FIG. 13 is a schematic block diagram of an electronic device in accordance with a second preferred embodiment of the present invention.
  • an electronic device 2000 according to a second preferred embodiment of the present invention includes a dual camera module 2100 that can acquire a first image and a second image.
  • the electronic device 2000 can include a processor 2200 configured to calculate a distance parameter related to the dual camera module based on a disparity value between the first image and the second image; that is, the processor 2200 can integrate the functions of the processing unit 1300 of the dual camera module.
  • the processor 2200 includes, for example, a computer, a microprocessor, an integrated circuit, or a programmable logic device.
  • the electronic device 2000 may further include a memory 2300 for storing a coefficient value or a relation itself expressing a relationship between the distance parameter and the disparity value.
  • the memory 2300 can include volatile memory such as static random access memory (SRAM) and dynamic random access memory (DRAM), and non-volatile memory such as flash memory, read-only memory (ROM), erasable programmable read-only memory (EPROM) and electrically erasable programmable read-only memory (EEPROM).
  • the electronic device of the second preferred embodiment of the present invention may be any of various electronic devices including a dual camera module, including but not limited to a smartphone, a tablet personal computer (PC), a mobile phone, a video telephone, an e-book reader, a desktop PC, a laptop PC, a netbook PC, a personal digital assistant (PDA), a portable multimedia player (PMP), an MP3 player, a mobile medical device, a camera, or a wearable device (e.g., a head-mounted device (HMD), electronic clothes, an electronic bracelet, an electronic necklace, an electronic accessory, an electronic tattoo or a smart watch), and so on.
  • the processor and the memory in the electronic device and the processing unit and the storage unit in the dual camera module can be used complementarily to complete the distance parameter calculation process according to the second preferred embodiment of the present invention.
  • the distance parameter calculation process according to the second preferred embodiment of the present invention may also be performed entirely by the dual camera module, or entirely by the processor and the memory of the electronic device; the second preferred embodiment of the present invention is not intended to impose any restriction in this respect.
  • the dual camera module according to the second preferred embodiment of the present invention may perform no image processing after acquiring the first image through the first camera and the second image through the second camera, and may instead transfer the data to a processor of the electronic device for processing.
  • the distance parameter calculation method according to the present invention, and the dual camera module and electronic device applying it, can calculate the distance parameter based on the disparity value; the process is simple, saves time, and has relatively good dark-state focus stability.


Abstract

The present invention provides a disparity calculation method and a distance parameter calculation method, as well as a dual camera module and an electronic device applying them. The disparity calculation method includes: a) selecting, in a first image, a first region centered on a first pixel and building a first grayscale histogram; b) adding a disparity value to the coordinate values of the first region in a first direction to obtain the coordinates of a second region in a second image, and building a second grayscale histogram; c) calculating a first mean square deviation of the per-row or per-column differences between the first grayscale histogram and the second grayscale histogram; d) increasing the disparity value by a predetermined step and repeating steps b and c to obtain a first minimum mean square deviation; e) determining a first disparity value corresponding to the first minimum mean square deviation; and f) taking the first disparity value as the depth-of-field value of the first pixel. With the disparity calculation method, dual camera module and electronic device according to the present invention, depth of field can be calculated quickly without rectifying the images.

Description

Disparity and Distance Parameter Calculation Methods, and Dual Camera Module and Electronic Device

Technical Field
The present invention relates to the field of image processing, and in particular to disparity and distance parameter calculation methods for a dual camera module, and to a dual camera module and an electronic device applying these methods.
Background Art
At present, more and more mobile phones adopt a dual-camera configuration. Dual cameras offer more shooting possibilities without increasing the thickness of the module.
Usually, the two side-by-side cameras in a dual-camera configuration are not identical: one is generally a wide-angle lens and the other an optical zoom lens. Smartphone cameras typically use digital zoom, which enlarges the central crop of the original image with an obvious loss of quality, whereas optical zoom keeps the picture sharp while zooming in, that is, lossless zoom. A dual-camera setup can better meet users' shooting needs by switching between lenses of different focal lengths, achieving lossless zoom and the best image quality.
In addition, dual cameras can effectively improve shooting in weak light: the images of two cameras with different aperture parameters are compared and adjusted to the values closest to the real scene, effectively suppressing noise. Moreover, two small cameras can come close to the effect of one large camera; since the thickness of a phone cannot accommodate a high-end lens, dual cameras balance the conflict between image quality and module thickness.
Furthermore, a more widespread function of dual cameras is 3D shooting: synthesizing the two sets of pictures can also produce a better depth-of-field effect and capture fast-moving objects.
However, since the dual-camera configuration places high demands on the algorithms, only a handful of well-performing algorithms can currently be applied to mobile phones, and the preliminary rectification is rather complex. In addition, the need to rectify the two captured images greatly reduces the processing speed.
Therefore, improved image processing algorithms for dual-camera configurations are needed.
Summary of the Invention
In view of the above defects and deficiencies of the prior art, an object of the present invention is to provide a disparity calculation method that can calculate disparity quickly without rectifying the images, and a dual camera module and an electronic device applying the method.
Another object of the present invention, in view of the above defects and deficiencies of the prior art, is to provide a distance parameter calculation method that enables fast ranging or fast focusing, and a dual camera module and an electronic device applying the method.
According to one aspect of the present invention, a disparity calculation method is provided for calculating a disparity value between pixels of a first image and a second image, including: a) selecting a first region in the first image and building a first grayscale histogram of the first region, the first region being centered on a first pixel; b) adding a reference disparity value to the coordinate values of the first region in a first direction to obtain the coordinates of a second region in the second image, and building a second grayscale histogram of the second region; c) calculating a first mean square deviation of the per-row or per-column differences between the first grayscale histogram and the second grayscale histogram; d) increasing the reference disparity value by a predetermined step and repeating steps b and c until the current first mean square deviation exceeds the previous one, and determining the previous first mean square deviation as the first minimum mean square deviation; e) determining a first disparity value corresponding to the first minimum mean square deviation; and f) taking the first disparity value as the disparity value of the first pixel.
The above disparity calculation method further includes: repeating steps a, b, c, d, e and f for each pixel in the first image to obtain the disparity value of each pixel in the first image; and obtaining a disparity table between the first image and the second image based on the disparity value of each pixel in the first image.
In the above disparity calculation method, after step d and before step e, the method further includes: scaling the first region by a predetermined size into a third region; repeating steps a, b, c and d based on the third region to obtain a second minimum mean square deviation; comparing the first minimum mean square deviation with the second minimum mean square deviation; and, when the second minimum mean square deviation is smaller than the first minimum mean square deviation, determining the second mean square deviation as the first minimum mean square deviation.
In the above disparity calculation method, after step d and before step e, the method further includes: scaling the first region by a predetermined size into a fourth region, where the fourth region is larger than the first region and the third region is smaller than the first region; repeating steps a, b, c and d based on the fourth region to obtain a third minimum mean square deviation; comparing the first, second and third minimum mean square deviations; and determining the smallest of the first, second and third minimum mean square deviations as the first minimum mean square deviation.
In the above disparity calculation method, the first direction is the row direction or the column direction of the image.
In the above disparity calculation method, before step a, the method further includes: scaling the first image and the second image to the same size.
In the above disparity calculation method, before step a, the method further includes: converting the first image and the second image into images of the same color format.
In the above disparity calculation method, before step a, the method further includes: acquiring the raw image data of each camera from the dual cameras; and converting the acquired raw images into the first image and the second image suitable for display processing using an interpolation algorithm.
In the above disparity calculation method, before step a, the method further includes: converting the first image and the second image into a first grayscale image and a second grayscale image; and scaling the first grayscale image and the second grayscale image to the required disparity map size.
The above disparity calculation method further includes: synthesizing the first image and the second image into a three-dimensional image based on the disparity table.
According to another aspect of the present invention, a dual camera module is provided, including: a first camera for acquiring a first image; a second camera for acquiring a second image; and a processing unit for calculating a disparity value between pixels of the first image and the second image, specifically by: a) selecting a first region in the first image and building a first grayscale histogram of the first region, the first region being centered on a first pixel; b) adding a reference disparity value to the coordinate values of the first region in a first direction to obtain the coordinates of a second region in the second image, and building a second grayscale histogram of the second region; c) calculating a first mean square deviation of the per-row or per-column differences between the first grayscale histogram and the second grayscale histogram; d) increasing the reference disparity value by a predetermined step and repeating steps b and c until the current first mean square deviation exceeds the previous one, and determining the previous first mean square deviation as the first minimum mean square deviation; e) determining a first disparity value corresponding to the first minimum mean square deviation; and f) taking the first disparity value as the disparity value of the first pixel.
In the above dual camera module, the processing unit is further configured to: repeat steps a, b, c, d, e and f for each pixel in the first image to obtain the disparity value of each pixel in the first image; and obtain a disparity table between the first image and the second image based on the disparity value of each pixel in the first image.
In the above dual camera module, after step d and before step e, the processing unit is further configured to: scale the first region by a predetermined size into a third region; repeat steps a, b, c and d based on the third region to obtain a second minimum mean square deviation; compare the first minimum mean square deviation with the second minimum mean square deviation; and, when the second minimum mean square deviation is smaller than the first minimum mean square deviation, determine the second mean square deviation as the first minimum mean square deviation.
In the above dual camera module, after step d and before step e, the processing unit is further configured to: scale the first region by a predetermined size into a fourth region, where the fourth region is larger than the first region and the third region is smaller than the first region; repeat steps a, b, c and d based on the fourth region to obtain a third minimum mean square deviation; compare the first, second and third minimum mean square deviations; and determine the smallest of the three as the first minimum mean square deviation.
In the above dual camera module, the first direction is the row direction or the column direction of the image.
In the above dual camera module, before step a, the processing unit is further configured to: scale the first image and the second image to the same size.
In the above dual camera module, before step a, the processing unit is further configured to: convert the first image and the second image into images of the same color format.
In the above dual camera module, before step a, the processing unit is further configured to: acquire the raw image data of each camera from the dual cameras; and convert the acquired raw images into the first image and the second image suitable for display processing using an interpolation algorithm.
In the above dual camera module, before step a, the processing unit is further configured to: convert the first image and the second image into a first grayscale image and a second grayscale image; and scale the first grayscale image and the second grayscale image to the required disparity map size.
In the above dual camera module, the processing unit is further configured to: synthesize the first image and the second image into a three-dimensional image based on the disparity table.
According to yet another aspect of the present invention, an electronic device including the above dual camera module is provided.
With the disparity calculation method according to the present invention, and the dual camera module and electronic device applying it, disparity can be calculated quickly without rectifying the images.
The disparity calculation method according to the present invention, and the dual camera module and electronic device applying it, can perform the calculation and obtain relatively stable results even when the brightness of the two images differs considerably, their colors are inconsistent, or the two pictures are not very flat relative to each other.
The disparity calculation method according to the present invention, and the dual camera module and electronic device applying it, have strong compatibility and good test results, and can save the rectification time of one of the cameras of the dual camera module, which is convenient for users.
According to one aspect of the present invention, a distance parameter calculation method is provided for calculating a distance parameter related to a dual camera module based on a disparity value between a first image and a second image captured by the dual camera module, the method including: establishing a relational expression between the distance parameter and the disparity value, the expression being a sum of products of at least two disparity terms and at least two corresponding coefficients, each disparity term being a power of the disparity value; photographing a subject with the dual camera module at at least two predetermined distances and calculating at least two disparity values of the subject between the first image and the second image; and calculating the at least two corresponding coefficients based on the at least two predetermined distances and the at least two disparity values, thereby determining the expression.
The above distance parameter calculation method further includes: photographing a subject with the dual camera module at a first distance and calculating a first disparity value of the subject between the first image and the second image; and substituting the first disparity value into the expression to obtain the value of the first distance.
In the above distance parameter calculation method, the distance parameter is the depth of field of the subject, and the expression is Y = A×X^(-1) + B, where Y is the distance parameter, X is the disparity value, and A and B are the coefficients.
In the above distance parameter calculation method, the at least two predetermined distances are 15 cm and 35 cm, respectively.
In the above distance parameter calculation method, the distance parameter is the depth of field of the subject, and the expression is Y = A1×X^n + A2×X^(n-1) + … + A(n-1)×X^2 + An×X + B, where Y is the distance parameter, X is the disparity value, A1, A2, …, An and B are the coefficients, and n is a natural number greater than or equal to 2.
In the above distance parameter calculation method, the at least two predetermined distances are n+1 distances, and the n+1 distances range from 7 cm to 200 cm.
In the above distance parameter calculation method, the interval between two adjacent distances of the n+1 distances is 10 cm.
In the above distance parameter calculation method, the step of determining the expression specifically includes: fitting, by a quadratic fitting method, the curve of the sum of products of the at least two disparity terms and the at least two corresponding coefficients, to determine the expression.
In the above distance parameter calculation method, the distance parameter is a motor code value of the dual camera module, and the expression is Y = A×X + B, where Y is the distance parameter, X is the disparity value, and A and B are the coefficients.
In the above distance parameter calculation method, the at least two predetermined distances are 15 cm and 35 cm, respectively.
According to another aspect of the present invention, a dual camera module is provided, including: a first camera for acquiring a first image; a second camera for acquiring a second image; and a processing unit for calculating a distance parameter related to the dual camera module based on a disparity value between the first image and the second image, the processing unit being specifically configured to: establish a relational expression between the distance parameter and the disparity value, the expression being a sum of products of at least two disparity terms and at least two corresponding coefficients, each disparity term being a power of the disparity value; photograph a subject with the dual camera module at at least two predetermined distances and calculate at least two disparity values of the subject between the first image and the second image; and calculate the at least two corresponding coefficients based on the at least two predetermined distances and the at least two disparity values, thereby determining the expression.
In the above dual camera module, the first camera and the second camera photograph a subject at a first distance; and the processing unit is further configured to: calculate a first disparity value of the subject between the first image and the second image; and substitute the first disparity value into the expression to obtain the value of the first distance.
In the above dual camera module, the distance parameter is the depth of field of the subject, and the expression is Y = A×X^(-1) + B, where Y is the distance parameter, X is the disparity value, and A and B are the coefficients.
In the above dual camera module, the at least two predetermined distances are 15 cm and 35 cm, respectively.
In the above dual camera module, the distance parameter is the depth of field of the subject, and the expression is Y = A1×X^n + A2×X^(n-1) + … + A(n-1)×X^2 + An×X + B, where Y is the distance parameter, X is the disparity value, A1, A2, …, An and B are the coefficients, and n is a natural number greater than or equal to 2.
In the above dual camera module, the at least two predetermined distances are n+1 distances, and the n+1 distances range from 7 cm to 200 cm.
In the above dual camera module, the interval between two adjacent distances of the n+1 distances is 10 cm.
In the above dual camera module, the processing unit determining the expression specifically includes: fitting, by a quadratic fitting method, the curve of the sum of products of the at least two disparity terms and the at least two corresponding coefficients, to determine the expression.
In the above dual camera module, the distance parameter is a motor code value of the dual camera module, and the expression is Y = A×X + B, where Y is the distance parameter, X is the disparity value, and A and B are the coefficients.
In the above dual camera module, the at least two predetermined distances are 15 cm and 35 cm, respectively.
The above dual camera module further includes: a control unit for driving the motor of the dual camera module based on the motor code value, to move the first camera and the second camera.
The above dual camera module further includes: a storage unit for storing the at least two corresponding coefficients.
According to yet another aspect of the present invention, an electronic device including the above dual camera module is provided.
With the distance parameter calculation method according to the present invention, and the dual camera module and electronic device applying it, fast ranging or fast focusing can be achieved.
The distance parameter calculation method according to the present invention, and the dual camera module and electronic device applying it, can calculate the distance parameter based on the disparity value; the process is simple, saves time, and has relatively good dark-state focus stability.
Brief Description of the Drawings
FIG. 1 is a schematic flow chart of a disparity calculation method according to the first preferred embodiment of the present invention;
FIG. 2 is a schematic flow chart of another example of the disparity calculation method according to the first preferred embodiment of the present invention;
FIG. 3 is a schematic diagram of a disparity table according to the first and second preferred embodiments of the present invention;
FIG. 4 is a schematic flow chart of yet another example of the disparity calculation method according to the first preferred embodiment of the present invention;
FIG. 5 is a schematic block diagram of a dual camera module according to the first preferred embodiment of the present invention;
FIG. 6 is a schematic flow chart of the working process of the dual camera module according to the first preferred embodiment of the present invention;
FIG. 7 is a schematic block diagram of an electronic device according to the first preferred embodiment of the present invention;
FIG. 8 is a schematic flow chart of a distance parameter calculation method according to the second preferred embodiment of the present invention;
FIG. 9 is a schematic flow chart of an example of a method of calculating a disparity value according to the second preferred embodiment of the present invention;
FIG. 10 is a schematic flow chart of another example of the method of calculating a disparity value according to the second preferred embodiment of the present invention;
FIG. 11 is a schematic block diagram of a dual camera module according to the second preferred embodiment of the present invention;
FIG. 12 is a schematic flow chart of the working process of the dual camera module according to the second preferred embodiment of the present invention;
FIG. 13 is a schematic block diagram of an electronic device according to the second preferred embodiment of the present invention.
Detailed Description of the Embodiments
The following description discloses the present invention to enable those skilled in the art to practice it. The preferred embodiments described below are given only as examples, and other obvious variations will occur to those skilled in the art. The basic principles of the present invention defined in the following description may be applied to other embodiments, variations, improvements, equivalents and other technical solutions that do not depart from the spirit and scope of the present invention.
Those skilled in the art should understand that, in the disclosure of the present invention, orientations or positional relationships indicated by terms such as "longitudinal", "transverse", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner" and "outer" are based on the orientations or positional relationships shown in the drawings; they are used only to facilitate and simplify the description of the present invention, and do not indicate or imply that the referenced devices or elements must have a particular orientation or be constructed and operated in a particular orientation. The above terms therefore should not be understood as limiting the present invention.
It should be understood that the term "a" or "an" is to be read as "at least one" or "one or more"; that is, in one embodiment the number of an element may be one, while in other embodiments the number may be plural, and the term "a" should not be understood as limiting the number.
The terms and words used in the following specification and claims are not limited to their literal meanings, but are used by the inventors only to enable a clear and consistent understanding of the present invention. It will therefore be apparent to those skilled in the art that the following descriptions of the various embodiments of the present invention are provided for illustration only, and not for the purpose of limiting the invention as defined by the appended claims and their equivalents.
Although ordinal numbers such as "first" and "second" will be used to describe various components, those components are not limited thereby. The terms are used only to distinguish one component from another. For example, a first component could be termed a second component, and likewise a second component could be termed a first component, without departing from the teachings of the inventive concept. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
The terminology used herein is for the purpose of describing the various embodiments only and is not intended to be limiting. As used herein, the singular forms are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will further be understood that the terms "comprise" and/or "have", when used in this specification, specify the presence of the stated features, numbers, steps, operations, components, elements or combinations thereof, and do not preclude the presence or addition of one or more other features, numbers, steps, operations, components, elements or groups thereof.
The terms used herein, including technical and scientific terms, have the same meanings as those commonly understood by those skilled in the art, unless defined otherwise. It should be understood that terms defined in commonly used dictionaries have meanings consistent with their meanings in the prior art.
The present invention is further described in detail below with reference to the accompanying drawings and specific embodiments.
The dual-camera field involves calculating, for each pixel in the main image, its deviation in the corresponding sub-image, i.e. its disparity value. The currently popular disparity algorithm is the sum of absolute differences (SAD), which computes differences for individual pixels within a region of interest (ROI) of the image. However, such methods place high demands on the images: the two pictures must be flat with respect to the other axis, and the results are poor when the brightness of the two pictures is inconsistent.
Therefore, according to one aspect of the first preferred embodiment of the present invention, a disparity calculation method is provided for calculating a disparity value between pixels of a first image and a second image, including: a) selecting a first region in the first image and building a first grayscale histogram of the first region, the first region being centered on a first pixel; b) adding a reference disparity value to the coordinate values of the first region in a first direction to obtain the coordinates of a second region in the second image, and building a second grayscale histogram of the second region; c) calculating a first mean square deviation of the per-row or per-column differences between the first grayscale histogram and the second grayscale histogram; d) increasing the reference disparity value by a predetermined step and repeating steps b and c until the current first mean square deviation exceeds the previous one, and determining the previous first mean square deviation as the first minimum mean square deviation; e) determining a first disparity value corresponding to the first minimum mean square deviation; and f) taking the first disparity value as the disparity value of the first pixel.
FIG. 1 is a schematic flow chart of the disparity calculation method according to the first preferred embodiment of the present invention. As shown in FIG. 1, the method includes: S101, selecting a first region in the first image and building a first grayscale histogram of the first region, the first region being centered on a first pixel; S102, adding a reference disparity value to the coordinate values of the first region in a first direction to obtain the coordinates of a second region in the second image, and building a second grayscale histogram of the second region; S103, calculating a first mean square deviation of the per-row or per-column differences between the first and second grayscale histograms; S104, increasing the reference disparity value by a predetermined step and repeating steps S102 and S103 until the current first mean square deviation exceeds the previous one, and determining the previous first mean square deviation as the first minimum mean square deviation; S105, determining a first disparity value corresponding to the first minimum mean square deviation; and S106, taking the first disparity value as the disparity value of the first pixel.
For a dual camera module, because there is a spacing between the two cameras, disparity exists between the captured first and second images. Generally, when the two cameras are arranged side by side horizontally, the second image has a horizontal disparity relative to the first image. For example, for a first pixel in the first image with coordinates (x, y), the coordinate position of the same pixel in the second image is the coordinate position in the first image translated horizontally by a certain distance, i.e. (x+Δx, y). This distance Δx is the disparity value of the first pixel between the first image and the second image, also called the disparity value of the first pixel. In the disparity calculation method according to the first preferred embodiment of the present invention, the disparity value of the pixel is determined by scanning, in the first and second images, the grayscale histograms of a region centered on the pixel, and comparing the differences between the two grayscale histograms.
Here, when computing the difference of a certain region between the first image and the second image, the difference is smallest when the region in the first image corresponds to the region in the second image. That is, starting the scan from the point of the same coordinates in the second image, the closer the scanned region gets to the position of the pixel in the second image, the smaller the difference between the corresponding regions in the two images, with the minimum occurring at the position of the pixel in the second image. As the scanned region passes the pixel's position in the second image, the difference between the corresponding regions grows again. Thus, by computing the difference between corresponding regions while increasing the abscissa of the scanned region, one obtains a curve of difference values that first decreases and then increases with distance. The lowest point of the curve therefore indicates the disparity value of the pixel between the first and second images.
In addition, those skilled in the art will understand that although in the first preferred embodiment of the present invention the difference between corresponding regions of the two images is determined by building and comparing grayscale histograms of those regions, other ways of comparing the corresponding regions may also be used. Regardless of the comparison method, the resulting difference-versus-distance curve follows the same shape of first decreasing and then increasing with distance, so the pixel's disparity value can be determined from the lowest point of the curve.
In the disparity calculation method according to the first preferred embodiment of the present invention, in order to lower the image-quality requirements of the comparison, the mean square deviation of the per-row or per-column differences between the first and second grayscale histograms is calculated rather than per-pixel differences. Those skilled in the art will understand, however, that with sufficient computing power a pixel-by-pixel difference calculation may also be used.
FIG. 2 is a schematic flow chart of another example of the disparity calculation method according to the first preferred embodiment of the present invention. As shown in FIG. 2, the method includes: S201, selecting a first region in the first image and building a first grayscale histogram of the first region, the first region being centered on a first pixel; S202, adding a reference disparity value xi to the coordinate values of the first region in a first direction to obtain the coordinates of a second region in the second image, and building a second grayscale histogram of the second region; S203, calculating a first mean square deviation Δxi of the per-row or per-column differences between the first and second grayscale histograms; S204, increasing the reference disparity value by a predetermined step, i.e. xi+1 = xi + d, and repeating steps S202 and S203; S205, determining whether the obtained mean square deviation is smaller than the previous one, i.e. whether Δxi+1 < Δxi: when Δxi+1 < Δxi, the mean-square-deviation curve is still descending and the minimum has not yet been reached, whereas if Δxi+1 > Δxi, the inflection point has been passed and Δxi is the first minimum mean square deviation; S206, determining the first disparity value xi corresponding to the first minimum mean square deviation; and S207, taking the first disparity value xi as the disparity value of the first pixel.
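By way of illustration only, the scan of steps S201 to S207 can be sketched as follows. For simplicity the sketch compares whole-window 256-bin grayscale histograms by their mean squared difference (the per-row/per-column variant described above is omitted); the window half-size, initial disparity and step are free parameters, not values fixed by the method.

```python
import numpy as np

def region_histogram(img, cx, cy, half):
    """256-bin grayscale histogram of the window centered at (cx, cy)."""
    patch = img[cy - half:cy + half + 1, cx - half:cx + half + 1]
    return np.bincount(patch.ravel(), minlength=256).astype(float)

def pixel_disparity(img1, img2, cx, cy, half=7, x0=0, step=1):
    """Scan disparities until the histogram MSE passes its minimum (S204/S205)."""
    h1 = region_histogram(img1, cx, cy, half)
    prev, x = None, x0
    while cx + x + half + 1 <= img2.shape[1]:
        h2 = region_histogram(img2, cx + x, cy, half)
        mse = np.mean((h1 - h2) ** 2)
        if prev is not None and mse > prev:  # past the inflection point
            return x - step                  # previous disparity was the minimum
        prev, x = mse, x + step
    return x - step
```

For a pixel whose surroundings appear 3 pixels to the right in the second image, the scan's cost falls until disparity 3 and rises afterwards, so the function returns 3.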
That is, the disparity calculation method shown in FIG. 2 is basically the same as that of FIG. 1, except that steps S204 and S205 concretely implement step S104 of FIG. 1.
With the disparity calculation method according to the first preferred embodiment of the present invention, the calculation can be carried out, and relatively stable results obtained, even when the brightness of the two images differs considerably, their colors are inconsistent, or the two pictures are not very flat relative to each other.
The above disparity calculation method further includes: repeating steps a, b, c, d, e and f for each pixel in the first image to obtain the disparity value of each pixel in the first image; and obtaining a disparity table between the first image and the second image based on the disparity value of each pixel in the first image.
That is, in the disparity calculation method according to the first preferred embodiment of the present invention, after the disparity value of a certain pixel is obtained, the same method is applied to all pixels of the whole image, so as to obtain the disparity value of each pixel in the first image. Specifically, for the disparity calculation method shown in FIG. 1, steps S101 to S106 may be repeated for each pixel in a row-scan-then-column-scan manner, and for the method shown in FIG. 2, steps S201 to S207 are repeated. Having computed the disparity value of each pixel of the first image, a disparity table between the first image and the second image can be built.
FIG. 3 is a schematic diagram of the disparity table according to the first preferred embodiment of the present invention. As shown in FIG. 3, for each pixel in the first image, the grayscale of the pixel in the disparity table represents its disparity, a larger grayscale value indicating a higher disparity. For example, the largest grayscale value in FIG. 3 may indicate that the pixel's disparity is at infinity, while the smallest may indicate zero disparity. Of course, those skilled in the art will understand that the disparity table of FIG. 3 is a schematic visualization and is not actually precise. The disparity table built according to the first preferred embodiment of the present invention should be in the form of a table of the first specific disparity value corresponding to each pixel, thereby precisely representing the disparity value of each pixel.
In the disparity calculation method according to the first preferred embodiment of the present invention, both the initial disparity value xi and the step d used to increase the disparity value can be chosen by the user. For example, since the disparity value in a dual camera module is generally based on the spacing between the two cameras, the initial disparity value xi can be set to a certain proportion of that spacing, e.g. 50%, 60%, 80%, and so on. In addition, to guarantee scanning precision, the step d is usually set to one pixel.
Of course, the disparity calculation method according to the first preferred embodiment of the present invention may also scan coarsely first and then finely. Specifically, the step d may first be set to a larger value, e.g. 10 pixels, and the scan performed with that step to find the inflection point of the computed mean square deviation. At that point, because the step interval is 10 pixels, the true minimum of the mean square deviation may lie on the curve either to the left or to the right of the minimum just found. Therefore, starting from the disparity value preceding the one corresponding to the minimum just found, a fine scan with a step of 1 pixel can be performed within an interval of 20 pixels to determine the precise position of the minimum mean square deviation.
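The coarse-then-fine scan described above can be sketched as below, as an illustration only: a coarse pass brackets the minimum, then the 20-pixel bracket is rescanned with a 1-pixel step. `cost` is any disparity-cost function (such as the histogram mean-square difference described earlier); the quadratic used in the test is merely a stand-in.

```python
def coarse_to_fine_min(cost, x0=0, coarse=10, fine=1, x_max=200):
    """Find the disparity minimizing cost(): coarse 10-px pass, then 1-px pass."""
    best_x, best_c = x0, cost(x0)
    for x in range(x0 + coarse, x_max + 1, coarse):  # coarse pass
        c = cost(x)
        if c > best_c:        # past the inflection point: minimum is bracketed
            break
        best_x, best_c = x, c
    lo = max(x0, best_x - coarse)                    # 20-pixel bracket
    for x in range(lo, best_x + coarse + 1, fine):   # fine pass
        c = cost(x)
        if c < best_c:
            best_x, best_c = x, c
    return best_x
```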
In the above disparity calculation method, after step d and before step e, the method further includes: scaling the first region by a predetermined size into a third region; repeating steps a, b, c and d based on the third region to obtain a second minimum mean square deviation; comparing the first minimum mean square deviation with the second minimum mean square deviation; and, when the second minimum mean square deviation is smaller than the first minimum mean square deviation, determining the second mean square deviation as the first minimum mean square deviation.
In the above disparity calculation method, after step d and before step e, the method may further include: scaling the first region by a predetermined size into a fourth region, where the fourth region is larger than the first region and the third region is smaller than the first region; repeating steps a, b, c and d based on the fourth region to obtain a third minimum mean square deviation; comparing the first, second and third minimum mean square deviations; and determining the smallest of the three as the first minimum mean square deviation.
FIG. 4 is a schematic flow chart of yet another example of the disparity calculation method according to the first preferred embodiment of the present invention. As shown in FIG. 4, the method includes: S301, selecting a first region in the first image and building a first grayscale histogram of the first region, the first region being centered on a first pixel; S302, adding a disparity value to the coordinate values of the first region in a first direction to obtain the coordinates of a second region in the second image, and building a second grayscale histogram of the second region; S303, calculating a first mean square deviation of the per-row or per-column differences between the first and second grayscale histograms; S304, increasing the disparity value by a predetermined step and repeating steps S302 and S303 until the obtained mean square deviation increases, to obtain a first minimum mean square deviation D1; S305, shrinking the first region by a predetermined size into a third region and enlarging it by a predetermined size into a fourth region; S306, repeating steps S301 to S304 based on the third region and the fourth region, respectively, to obtain a second minimum mean square deviation D2 and a third minimum mean square deviation D3; S307, comparing D1, D2 and D3; S308, determining the smallest of D1, D2 and D3 as the first minimum mean square deviation, i.e. D1 = min(D1, D2, D3); S309, determining the first disparity value corresponding to the first minimum mean square deviation; and S310, taking the first disparity value as the disparity value of the first pixel.
In the disparity calculation method according to the first preferred embodiment of the present invention shown in FIG. 4, to raise the confidence of the computed disparity value, the calculation is performed after scaling the window size of the region, and the smallest mean square deviation is selected as the final result used to determine the disparity value. This improves the precision of the disparity calculation. However, since the amount of computation increases, the window-scaling step may be omitted when high precision is not required, so as to achieve fast disparity calculation.
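The multi-window check of steps S305 to S308 can be sketched as below, as an illustration only. `scan(half)` is assumed to be a caller-supplied function returning the pair (minimum cost, disparity) of a scan run with the given window half-size; the dictionary in the test is demo data, not measurements.

```python
def best_of_windows(scan, halves=(5, 7, 9)):
    """Run the scan at several window sizes; keep the disparity whose
    minimum cost (mean square deviation) is lowest, per steps S305-S308."""
    results = [scan(h) for h in halves]  # list of (min_cost, disparity) pairs
    return min(results)[1]               # disparity of the overall lowest cost
```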
In the above disparity calculation method, the first direction is the row direction or the column direction of the image.
As noted above, in a dual camera module the two cameras are usually arranged horizontally, so the disparity between the two images is usually horizontal. However, the first preferred embodiment of the present invention is not limited thereto: in a dual camera module with two cameras arranged vertically, the disparity between the two images will be vertical, and the scanning direction should accordingly be the column direction of the image. Apart from the scanning direction, the specific calculation process is the same for the row and column directions, and is not repeated here to avoid redundancy.
In the above disparity calculation method, before step a, the method further includes: scaling the first image and the second image to the same size.
For the accuracy of the image comparison, in the disparity calculation method according to the first preferred embodiment of the present invention, the first and second images are preferably scaled to the same size before the specific calculation. For example, if the first image is larger than the second, the second image can be enlarged to the size of the first before the calculation. Moreover, since the disparity calculation method according to the first preferred embodiment of the present invention is not affected by the specific image size, no scaling is needed when the first and second images are already the same size, which speeds up processing.
Here, those skilled in the art will understand that when the first and second images differ in size and are not scaled to the same size, the coordinates must be converted. For example, suppose the first image has width W1 and height H1, and the second image has width W2 and height H2. Then for pixel coordinates (x1, y1) in the first image, the coordinates (x2, y2) of the corresponding pixel in the second image should satisfy:
x2 = W2/W1 × x1
y2 = H2/H1 × y1
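The coordinate conversion above, expressed as a small helper for the case where the two images are left at different sizes (W1, H1) and (W2, H2):

```python
def map_coords(x1, y1, w1, h1, w2, h2):
    """Map pixel (x1, y1) of the first image to the corresponding pixel of
    the second image: x2 = W2/W1 * x1, y2 = H2/H1 * y1."""
    return w2 / w1 * x1, h2 / h1 * y1
```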
In the above disparity calculation method, before step a, the method further includes: converting the first image and the second image into images of the same color format.
In a dual camera module, the images captured by the two cameras may be inconsistent in color. For example, the first image may be a color image while the second is black and white, and so on. In that case, for the accuracy of the image comparison, in the disparity calculation method according to the first preferred embodiment of the present invention the first and second images are preferably converted into images of the same color format, e.g. the RGB color format, before the specific calculation. Of course, those skilled in the art will understand that the first and second images may also both be converted into grayscale images; for example, an RGB image can be converted to grayscale by Y = (R+G+B)/3. Naturally, if the first and second images are already in the same color format, e.g. the RGB color format, it is not necessary to convert both to grayscale; the subsequent calculation can proceed directly, speeding up processing.
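The Y = (R+G+B)/3 conversion mentioned above, applied to an RGB array; the intermediate 16-bit sum avoids 8-bit overflow, and integer division keeps the result in the 8-bit range.

```python
import numpy as np

def to_gray(rgb):
    """Average the three channels of an (H, W, 3) uint8 RGB image."""
    return (rgb.astype(np.uint16).sum(axis=2) // 3).astype(np.uint8)
```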
In the above disparity calculation method, before step a, the method further includes: acquiring the raw image data of each camera from the dual cameras; and converting the acquired raw images into the first image and the second image suitable for display processing using an interpolation algorithm.
Specifically, in the disparity calculation method according to the first preferred embodiment of the present invention, the raw image data can be acquired by image processing software; it is transferred frame by frame from the underlying image sensor of each camera of the dual cameras. The raw image data can precisely reproduce the image information acquired by the camera's image sensor, but may not be suitable for image processing. Therefore, in the disparity calculation method according to the first preferred embodiment of the present invention, an interpolation algorithm is used to convert the acquired raw images into images suitable for display processing, e.g. a 32-bit BMP image suitable for display processing on a computer.
In the above disparity calculation method, before step a, the method further includes: converting the first image and the second image into a first grayscale image and a second grayscale image; and scaling the first grayscale image and the second grayscale image to the required disparity map size.
That is, if the required disparity map differs in size from the original images, in the disparity calculation method according to the first preferred embodiment of the present invention the first and second grayscale images must first be scaled to the required size before the disparity values are calculated. For example, when a smaller disparity map is needed, the first grayscale image L1 and the second grayscale image R1 are first reduced to small-size grayscale images L2 and R2, and the disparity value is then calculated for each pixel of L2 and R2. This is because scaling an image affects the disparity values, so the disparity values of a scaled image cannot be applied to the image at its original size.
The above disparity calculation method further includes: synthesizing the first image and the second image into a three-dimensional image based on the disparity table.
After the disparity table between the first image and the second image is obtained, the first image, the second image and the disparity table, each single-channel data, can be combined into three-channel image data for output, for further processing by a processor, e.g. image synthesis on this basis to synthesize a three-dimensional image. Of course, those skilled in the art will understand that the further processing is not limited to synthesizing a three-dimensional image; other image processing may be performed based on the first image, the second image and the disparity table, and the first preferred embodiment of the present invention is not intended to impose any restriction in this respect.
In this way, with the disparity calculation method according to the first preferred embodiment of the present invention, disparity can be calculated quickly without rectifying the images.
Moreover, the disparity calculation method according to the first preferred embodiment of the present invention can perform the calculation, and obtain relatively stable results, even when the brightness of the two images differs considerably, their colors are inconsistent, or the two pictures are not very flat relative to each other.
In addition, the disparity calculation method according to the first preferred embodiment of the present invention has strong compatibility and good test results, and can save the rectification time of one of the cameras of the dual camera module, which is convenient for users.
According to another aspect of the first preferred embodiment of the present invention, a dual camera module is provided, including: a first camera for acquiring a first image; a second camera for acquiring a second image; and a processing unit for calculating a disparity value between pixels of the first image and the second image, specifically by: a) selecting a first region in the first image and building a first grayscale histogram of the first region, the first region being centered on a first pixel; b) adding a disparity value to the coordinate values of the first region in a first direction to obtain the coordinates of a second region in the second image, and building a second grayscale histogram of the second region; c) calculating a first mean square deviation of the per-row or per-column differences between the first and second grayscale histograms; d) increasing the disparity value by a predetermined step and repeating steps b and c until the obtained mean square deviation increases, to obtain a first minimum mean square deviation; e) determining a first disparity value corresponding to the first minimum mean square deviation; and f) taking the first disparity value as the disparity value of the first pixel.
FIG. 5 is a schematic block diagram of the dual camera module according to the first preferred embodiment of the present invention. As shown in FIG. 5, the dual camera module 100 according to the first preferred embodiment of the present invention includes: a first camera 110 for acquiring a first image; a second camera 120 for acquiring a second image; and a processing unit 130 for calculating a disparity value between pixels of the first image acquired by the first camera 110 and the second image acquired by the second camera 120, specifically by steps a) through f) as described above.
In the above dual camera module, the processing unit is further configured to: repeat steps a, b, c, d, e and f for each pixel in the first image to obtain the disparity value of each pixel in the first image; and obtain a disparity table between the first image and the second image based on the disparity value of each pixel in the first image.
In the above dual camera module, after step d and before step e, the processing unit is further configured to: scale the first region by a predetermined size into a third region; repeat steps a, b, c and d based on the third region to obtain a second minimum mean square deviation; compare the first minimum mean square deviation with the second minimum mean square deviation; and, when the second minimum mean square deviation is smaller than the first minimum mean square deviation, determine the second mean square deviation as the first minimum mean square deviation.
In the above dual camera module, after step d and before step e, the processing unit is further configured to: scale the first region by a predetermined size into a fourth region, where the fourth region is larger than the first region and the third region is smaller than the first region; repeat steps a, b, c and d based on the fourth region to obtain a third minimum mean square deviation; compare the first, second and third minimum mean square deviations; and determine the smallest of the three as the first minimum mean square deviation.
In the above dual camera module, the first direction is the row direction or the column direction of the image.
In the above dual camera module, before step a, the processing unit is further configured to: scale the first image and the second image to the same size.
In the above dual camera module, before step a, the processing unit is further configured to: convert the first image and the second image into images of the same color format.
In the above dual camera module, before step a, the processing unit is further configured to: acquire the raw image data of each camera from the dual cameras; and convert the acquired raw images into the first image and the second image suitable for display processing using an interpolation algorithm.
In the above dual camera module, before step a, the processing unit is further configured to: convert the first image and the second image into a first grayscale image and a second grayscale image; and scale the first grayscale image and the second grayscale image to the required disparity map size.
In the above dual camera module, the processing unit is further configured to: synthesize the first image and the second image into a three-dimensional image based on the disparity table.
Here, those skilled in the art will understand that the other details of the above dual camera module according to the first preferred embodiment of the present invention are identical to the corresponding details of the disparity calculation method according to the first preferred embodiment described earlier, and are not repeated here to avoid redundancy.
FIG. 6 is a schematic flow chart of the working process of the dual camera module according to the first preferred embodiment of the present invention. As shown in FIG. 6, after the working process starts, in S401 the motor code and the distance parameter are first calibrated. Then, in S402, the raw image data, i.e. a RAW image, is acquired and converted into a BMP image suitable for computer processing. Then, in S403, the images are scaled to the target size. In S404, the disparity value of each pixel is calculated. In S405, a BMP depth map is built based on the disparity value of each pixel. Finally, in S406, the left and right captured images are synthesized according to the depth map, completing the image synthesis.
According to yet another aspect of the present invention, an electronic device is provided that includes a dual camera module, the dual camera module including: a first camera for acquiring a first image; a second camera for acquiring a second image; and a processing unit for calculating a disparity value between pixels of the first image and the second image, specifically by steps a) through f) as described above for the dual camera module.
In the above electronic device, the processing unit is further configured to: repeat steps a, b, c, d, e and f for each pixel in the first image to obtain the disparity value of each pixel in the first image; and obtain a disparity table between the first image and the second image based on those disparity values.
In the above electronic device, after step d and before step e, the processing unit is further configured to: scale the first region by a predetermined size into a third region; repeat steps a, b, c and d based on the third region to obtain a second minimum mean square deviation; compare the first and second minimum mean square deviations; and, when the second minimum mean square deviation is smaller than the first, determine the second mean square deviation as the first minimum mean square deviation.
In the above electronic device, after step d and before step e, the processing unit is further configured to: scale the first region by a predetermined size into a fourth region, where the fourth region is larger than the first region and the third region is smaller than the first region; repeat steps a, b, c and d based on the fourth region to obtain a third minimum mean square deviation; compare the first, second and third minimum mean square deviations; and determine the smallest of the three as the first minimum mean square deviation.
In the above electronic device, the first direction is the row direction or the column direction of the image.
In the above electronic device, before step a, the processing unit is further configured to: scale the first image and the second image to the same size.
In the above electronic device, before step a, the processing unit is further configured to: convert the first image and the second image into images of the same color format.
In the above electronic device, before step a, the processing unit is further configured to: acquire the raw image data of each camera from the dual cameras; and convert the acquired raw images into the first image and the second image suitable for display processing using an interpolation algorithm.
In the above electronic device, before step a, the processing unit is further configured to: convert the first image and the second image into a first grayscale image and a second grayscale image; and scale them to the required disparity map size.
In the above electronic device, the processing unit is further configured to: synthesize the first image and the second image into a three-dimensional image based on the disparity table.
FIG. 7 is a schematic block diagram of the electronic device according to the first preferred embodiment of the present invention. As shown in FIG. 7, the electronic device 200 according to the first preferred embodiment of the present invention includes a dual camera module 210 that can acquire a first image and a second image. The electronic device 200 can include a processor 220 for calculating the disparity value between pixels of the first and second images and performing image synthesis based on the disparity values, i.e. the processor 220 can integrate the functions of the processing unit 130 of the dual camera module. The processor 220 includes, for example, a computer, a microprocessor, an integrated circuit or a programmable logic device. The electronic device 200 may further include a memory 230 for storing raw image data or processed image data. The memory 230 can include volatile memory such as static random access memory (SRAM) and dynamic random access memory (DRAM), and non-volatile memory such as flash memory, read-only memory (ROM), erasable programmable read-only memory (EPROM) and electrically erasable programmable read-only memory (EEPROM).
Here, the specific details of the image processing performed by the processor are identical to the corresponding details of the disparity calculation method according to the first preferred embodiment described earlier, and are not repeated here to avoid redundancy.
The electronic device of the first preferred embodiment of the present invention may be any of various electronic devices including a dual camera module, including but not limited to a smartphone, a tablet personal computer (PC), a mobile phone, a video telephone, an e-book reader, a desktop PC, a laptop PC, a netbook PC, a personal digital assistant (PDA), a portable multimedia player (PMP), an MP3 player, a mobile medical device, a camera, or a wearable device (e.g., a head-mounted device (HMD), electronic clothes, an electronic bracelet, an electronic necklace, an electronic accessory, an electronic tattoo or a smart watch), and so on.
With the disparity calculation method according to the first preferred embodiment of the present invention, and the dual camera module and electronic device applying it, disparity can be calculated quickly without rectifying the images.
Moreover, the disparity calculation method according to the first preferred embodiment of the present invention, and the dual camera module and electronic device applying it, can perform the calculation, and obtain relatively stable results, even when the brightness of the two images differs considerably, their colors are inconsistent, or the two pictures are not very flat relative to each other.
In addition, the disparity calculation method according to the first preferred embodiment of the present invention, and the dual camera module and electronic device applying it, have strong compatibility and good test results, and can save the rectification time of one of the cameras of the dual camera module, which is convenient for users.
图8是根据本发明第二较佳实施例的距离参数计算方法的示意性流程图。如图8所示,根据本发明第二较佳实施例的距离参数计算方法用于基于双摄像头模组所拍摄的第一图像和第二图像之间的视差值来计算与该双摄像头模组有关的 距离参数,且该方法具体包括:S1010,建立距离参数与视差值的关系式,该关系式是至少两个视差项与至少两个相应系数的乘积之和,且该视差项为所述视差值的幂;S1020,在至少两个预定距离以该双摄像头模组拍摄被摄体,并计算该被摄体在第一图像和第二图像之间的至少两个视差值;和S1030,基于该至少两个预定距离和该至少两个视差值计算该至少两个相应系数,从而确定该关系式。
这里,本领域技术人员可以理解,可以使用多种方法来计算双摄像头模组所拍摄的第一图像和第二图像之间的视差值。例如,可以采用绝对差值和(SAD)算法,其针对图像中感兴趣区域(ROI)中的单个像素点求差值。但是,这类方法对于图像的要求较高,需要两个画面相对于另外一个轴平整,且在两个画面的亮度不一致的情况下效果较差。
图9是根据本发明第二较佳实施例的计算视差值的方法的实例的示意性流程图。如图9所示,根据本发明第二较佳实施例的计算视差值的方法包括:S2010,在第一图像中选择第一区域,建立该第一区域的第一灰度直方图,其中该第一区域以第一像素为中心;S2020,将该第一区域的坐标值在第一方向上加上参考视差值xi以得到该第二图像中的第二区域的坐标,建立该第二区域的第二灰度直方图;S2030,计算第一灰度直方图与第二灰度直方图的每行或者每列的差值的第一均方差Δxi;S2040,以预定步长增大该参考视差值,即xi+1=xi+d,并重复步骤S2020和S2030;S2050,确定所得到的均方差是否小于前一均方差,即确定是否Δxi+1<Δxi,当Δxi+1<Δxi时,说明均方差曲线仍处于下降阶段,还未获得最小均方差,而如果Δxi+1>Δxi,则说明已经经过了均方差的拐点,Δxi即为第一最小均方差值;S2060,确定与第一最小均方差值对应的第一视差值xi;和S2070,将该第一视差值xi作为该第一像素的视差值。
对于双摄像头模组来说,由于两个摄像头之间均在间距,因而在所拍摄出的第一图像和第二图像之间存在视差。通常来说,当两个摄像头在水平方向并排排列时,第二图像相对于第一图像具有在水平方向的视差。举例来说,对于第一图像中的第一像素,假设其坐标为(x,y),则同一像素在第二图像中的坐标位置是该像素在第一图像中的坐标位置水平平移一特定距离,即(x+Δx,y)。该距离Δx就是第一像素在第一图像和第二图像之间的视差值,也被称为该第一像素的视差值。在上述示例性视差计算方法中,是通过在第一图像和第二图像中扫描以该像素为中心的特定区域的灰度直方图,并比较两个灰度直方图之间的差异来确定该 像素的视差值。
这里,当计算第一图像和第二图像中某一区域的差异时,当第一图像中的区域与第二图像中的区域相对应时,两者之间的差异最小。也就是说,通过从第二图像中相同坐标的点开始进行扫描,则扫描区域越接近第二图像中的该像素的位置,两个图像中相应区域之间的差异越小,而最小值就出现在第二图像中该像素的位置处。随着扫描区域经过了该像素在第二图像中的位置,两个图像中相应区域之间的差异又会继续变大。因而,通过在增大扫描区域的横坐标的值的同时,计算两个图像中相应区域之间的差异,可以得到一条差异值随着距离的增大先减小后增大的曲线。这样,曲线的最低点就表示该像素在第一图像和第二图像之间的视差值。
在上述示例性视差计算方法中,为了降低图像的比较过程中对于图像质量的要求,计算第一灰度直方图与第二灰度直方图的每行或者每列的差值的均方差,而不是计算单个像素的差值。但是,本领域技术人员可以理解,在计算能力足够强大的情况下,也可以采用逐像素差值计算的方式。
上述示例性视差计算方法可以在两个图像的亮度差距较大、颜色不一致以及两个图像的画面不是很相对平整的状态下进行计算,并得到相对稳定的结果。
在上述示例性视差计算方法中,在得到了某一像素点的视差值之后,通过相同方法对整幅图像中的所有像素点进行计算,从而获得第一图像中的每一像素的视差值。具体地说,对于如图9所示的视差计算方法,重复步骤S2010到S2070。这样,通过计算得到第一图像中的每一像素的视差值,就可以建立第一图像和第二图像之间的视差表。
图3是根据本发明视差表的示意图。如图3所示,对于第一图像中的每一像素,使用视差表中像素的灰度表示该像素的视差,并且灰度值越大,表明该像素的视差越高。例如,图3中最大的灰度值可以表示该像素的视差为无穷远,而最小的灰度值可以表示该像素的视差为零。当然,本领域技术人员可以理解,图3所示的视差表是为了直观表示视差的示意图,实际上并不够精确。根据本发明第二较佳实施例建立的视差表应该是对应于每个像素的第一特定视差值的表格形式,从而精确地表示出每一像素所对应的视差值。
在上述示例性视差计算方法中,初始视差值xi和用于增大视差值的步长d都可以由用户选择。例如,由于双摄像头模组中的视差值通常基于两个摄像头之 间的间距,可以将初始视差值xi设定为两个摄像头之间的间距的一定比例,比如50%,60%,80%,等等。另外,为了保证扫描的精确性,通常将步长d设置为一个像素。
当然,在上述示例性视差计算方法中,也可以采用先粗略扫描再精细扫描的方式。具体来说,可以首先将步长d设置为较大值,例如10个像素,并以该步长来进行扫描,从而找到所计算出的均方差的拐点。但是此时,由于步长的间隔为10个像素,在曲线上,实际的均方差的最小值可能出现在此时得到的均方差最小值的左侧,也可能出现在右侧。因而,可以从此时得到的最小均方差所对应的视差值的前一视差值开始,在20个像素的间隔内以1个像素的步长进行精细扫描,从而确定最小均方差出现的精确位置。
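上述先粗略后精细的两遍扫描可以概括为如下草图(假设cost(x)返回视差x处的均方差;函数名与步长取值均为示例性假设):

```python
import numpy as np

def coarse_to_fine_min(cost, x_max, coarse_step=10):
    """两遍扫描:先以coarse_step为步长粗扫找到均方差的粗略最小值位置,
    再从其前一粗略采样点起,在2*coarse_step的区间内以1像素步长精扫。"""
    xs = np.arange(0, x_max + 1, coarse_step)
    coarse_costs = [cost(int(x)) for x in xs]
    x_c = int(xs[int(np.argmin(coarse_costs))])   # 粗扫得到的最小值位置
    lo = max(0, x_c - coarse_step)                # 从前一视差值开始精扫
    hi = min(x_max, x_c + coarse_step)
    fine = list(range(lo, hi + 1))
    fine_costs = [cost(x) for x in fine]
    return fine[int(np.argmin(fine_costs))]
```

粗扫时实际最小值可能落在粗扫最小值的左侧或右侧,因此精扫区间覆盖其两侧各一个粗步长。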
图10是根据本发明第二较佳实施例的计算视差值的方法的另一实例的示意性流程图。如图10所示,根据本发明第二较佳实施例的计算视差值的方法包括:S3010,在第一图像中选择第一区域,建立该第一区域的第一灰度直方图,其中该第一区域以第一像素为中心;S3020,将该第一区域的坐标值在第一方向上加上视差值以得到该第二图像中的第二区域的坐标,建立该第二区域的第二灰度直方图;S3030,计算第一灰度直方图与第二灰度直方图的每行或者每列的差值的第一均方差;S3040,以预定步长增大该视差值,并重复步骤S3020和S3030,直到所获得的均方差值增大为止,以获得第一最小均方差值D1;S3050,将第一区域缩小预定尺寸为第三区域,并放大预定尺寸为第四区域;S3060,基于第三区域和第四区域分别重复步骤S3010到S3040,以获得第二最小均方差值D2和第三最小均方差值D3;S3070,比较该第一最小均方差值D1、该第二最小均方差值D2和该第三最小均方差值D3;S3080,将该第一最小均方差值D1、该第二最小均方差值D2和该第三最小均方差值D3中最小的一个确定为该第一最小均方差值,即D1=min(D1,D2,D3);S3090,确定与第一最小均方差值对应的第一视差值;和S3100,将该第一视差值作为该第一像素的视差值。
在如图10所示的示例性视差计算方法中,为了提高所计算出的视差值的置信度,将区域的窗口大小进行缩放后进行计算,并选择最小的均方差值作为用于确定视差值的最终结果。这样,改进了视差值计算的精确性。但是,由于增大了计算量,在对于视差值的精确性要求不高的情况下,也可以省略上述缩放区域窗口大小的步骤,从而实现视差值的快速计算。
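图10中通过缩放窗口尺寸来提高置信度的做法,可以概括为如下草图(假设scan(size)返回给定窗口尺寸下扫描得到的(最小均方差, 对应视差)二元组;函数名与窗口尺寸均为示例性假设):

```python
def multiwindow_disparity(scan, sizes=(7, 9, 11)):
    """分别以缩小、原始、放大三种窗口尺寸各扫描一次,
    取最小均方差所对应的视差作为最终结果(即步骤S3080: D1=min(D1,D2,D3))。"""
    results = [scan(s) for s in sizes]            # [(最小均方差, 视差), ...]
    best_mse, best_disp = min(results, key=lambda r: r[0])
    return best_disp
```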
在上述示例性视差计算方法中,第一方向是图像的行方向或者列方向。
如上所述,在双摄像头模组中,两个摄像头通常为水平排列,因而两个图像之间的视差通常在水平方向上。但是,本发明第二较佳实施例并不仅限于此,在具有垂直方向上排列的两个摄像头的双摄像头模组中,两个图像之间的视差将是在垂直方向上,因而扫描方向也应该为图像的列方向。除了扫描的方向不同外,具体的计算过程在行方向和列方向的情况下均相同,因而为了避免冗余并不再赘述。
在上述示例性视差计算方法中,在计算之前进一步包括:将第一图像和第二图像缩放为相同尺寸。
为了图像比较的准确度,在具体的计算过程之前,优选地将第一图像和第二图像缩放为相同尺寸。例如,如果第一图像的尺寸大于第二图像,则可以将第二图像放大成第一图像的尺寸,然后进行计算。此外,由于上述示例性视差计算方法并不会受到图像的具体尺寸的影响,在第一图像与第二图像的尺寸相同的情况下,不需要再对图像进行缩放,从而加快处理速率。
这里,本领域技术人员可以理解,在第一图像和第二图像的尺寸不同的情况下,如果不将第一图像和第二图像缩放为相同尺寸,则需要对坐标进行转换。例如,假设第一图像的宽度为W_1,高度为H_1,第二图像的宽度为W_2,高度为H_2。则对于第一图像中的像素坐标(x_1,y_1),在第二图像中的相应像素的坐标(x_2,y_2)应当满足:
x_2=W_2/W_1×x_1
y_2=H_2/H_1×y_1
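上述坐标转换可以写成如下示意性的小函数(函数名为示例性假设):

```python
def map_pixel(x1, y1, w1, h1, w2, h2):
    """在不缩放图像本身的情况下,把宽W1、高H1图像中的像素坐标
    映射到宽W2、高H2图像中的相应坐标:x2=W2/W1*x1, y2=H2/H1*y1。"""
    return w2 / w1 * x1, h2 / h1 * y1
```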
在上述示例性视差计算方法中,在计算之前进一步包括:将第一图像和第二图像转换为同一彩色格式的图像。
在双摄像头模组中,会存在两个摄像头拍摄的图像的颜色不一致的情况。例如,第一图像为彩色图像,而第二图像为黑白图像,等等。在这种情况下,为了图像比较的准确度,在根据本发明第二较佳实施例的视差计算方法中,在具体的计算过程之前,优选地将第一图像和第二图像转换为同一彩色格式的图像,例如RGB彩色格式的图像。当然,本领域技术人员可以理解,也可以将第一图像和第二图像都转换为灰度图像,例如,对于RGB图像,通过Y=(R+G+B)/3转换为灰度图像。当然,如果第一图像和第二图像本身就是同一彩色格式的图像,例如RGB彩色格式的图像,则不需要再进行格式转换,而是可以直接进行后续计算过程,以加快处理速率。
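按Y=(R+G+B)/3进行的灰度转换可以写成如下草图(假设输入为H×W×3的RGB数组;函数名为示例性假设):

```python
import numpy as np

def to_gray(rgb):
    """三通道取平均的灰度转换:Y = (R + G + B) / 3,
    用于在视差计算之前把两幅图像统一为同一格式。"""
    return rgb.astype(np.float64).mean(axis=-1)
```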
在上述示例性视差计算方法中,在计算之前进一步包括:从双摄像头获取每一摄像头的原始图像数据信息;和,使用差值运算法将所获取的原始图像转换为适于显示处理的所述第一图像和所述第二图像。
具体来说,在上述示例性视差计算方法中,可以通过图像处理软件获取原始图像数据信息,该原始图像数据信息是从双摄像头的每一摄像头的图像传感器底层传输的,并以帧为单位。该原始图像数据信息可以精确地再现摄像头的图像传感器所获取的图像信息,但是可能并不适于图像处理。因而,在上述示例性视差计算方法中,使用差值运算法将所获取的原始图像转换为适于显示处理的图像,例如适于计算机的显示处理的32位BMP图。
在上述示例性视差计算方法中,在计算之前进一步包括:将第一图像和第二图像转换为第一灰度图像和第二灰度图像;和,根据所需的视差图尺寸,将第一灰度图像和第二灰度图像分别缩放为视差图尺寸。
也就是说,如果所需的视差图的尺寸与原始图像不同,在上述示例性视差计算方法中,需要首先根据所需的视差图的尺寸将第一灰度图像和第二灰度图像缩放到所需的尺寸,然后再进行视差值的计算。例如,当需要较小的视差图尺寸时,首先将第一灰度图像L1和第二灰度图像R1缩小到小尺寸的第一灰度图像L2和第二灰度图像R2,再针对第一灰度图像L2和第二灰度图像R2中的每个像素进行视差值的计算。这是因为图像在进行缩放后,会对视差值产生影响,因而缩放后的图像的视差值不能应用于原尺寸的图像。
这样,通过上述示例性视差计算方法,可以在不对图像进行校正的情况下快速计算视差。
并且,上述示例性视差计算方法可以在两个图像的亮度差距较大、颜色不一致以及两个图像的画面不是很相对平整的状态下进行计算,并得到相对稳定的结果。
此外,上述示例性视差计算方法的兼容性强,测试结果较好,且可以节省对双摄像头模组的其中一个摄像头的校正时间,便于用户使用。
这样,通过在预定距离拍摄被摄体并记录距离值,并通过采用上述示例性视差计算方法计算出第一图像和第二图像中被摄体的视差值,就可以推导出用于表达距离参数与视差值之间的关系式的各个相应系数。
在上述距离参数计算方法中,进一步包括:在第一距离以该双摄像头模组拍摄被摄体,并计算该被摄体在第一图像和第二图像之间的第一视差值;和,将该第一视差值带入该关系式,以求得该第一距离的数值。
在确定了用于表达距离参数与视差值之间关系的关系式之后,当以双摄像头模组拍摄被摄体时,同样通过采用上述示例性视差计算方法计算出第一图像和第二图像中被摄体的视差值,就可以根据该关系式得到该双摄像头模组的距离参数的具体数值。
在上述距离参数计算方法中,该距离参数是该被摄体的景深,且该关系式为:
Y=A×X^(-1)+B                         (1)
其中,Y是距离参数,X是视差值,且A和B是系数。
根据双摄像头三角公式,被摄体的景深,即被摄体到双摄像头模组的距离和视差值之间具有反比关系:
Z=(f×T)/(x_l-x_r)                   (2)
其中,Z是被摄体到双摄像头模组的距离,f是双摄像头模组的焦距,T是两个摄像头的光心之间的距离,x_l和x_r分别是左图像和右图像中被摄体的横坐标。
因此,通过表达式(2)可以看出,被摄体的景深与视差值之间具有反比关系,所以可以将被摄体的景深与视差值之间的关系以表达式(1)来表示,系数A表示表达式(2)中的f×T,而B作为偏差值来对结果进行校正。
这样,在确定了上述表达式(1)之后,分别在15cm和35cm处拍摄被摄体,在对焦清楚的情况下计算相应的两个视差值。之后,将两个距离值和两个视差值分别带入表达式(1),从而求解出系数A和B。
在确定表达式(1)中的系数A和B之后,在后续拍摄过程中,就可以基于被摄体在第一图像和第二图像之间的视差值来计算被摄体的景深。
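由两次标定求解系数A和B的过程可以用如下示意性代码表示(其中的视差与距离数值均为虚构示例,仅用于说明代入与求解的步骤):

```python
import numpy as np

def fit_inverse_model(disparities, distances):
    """由两组(视差, 距离)标定数据解出表达式(1) Y=A*X^(-1)+B 中的系数A和B:
    把两组数据代入后得到关于A、B的二元一次方程组,直接求解。"""
    M = np.array([[1.0 / x, 1.0] for x in disparities])
    A, B = np.linalg.solve(M, np.asarray(distances, dtype=np.float64))
    return float(A), float(B)

def depth_from_disparity(x, A, B):
    """标定完成后,由视差值反推被摄体的景深。"""
    return A / x + B
```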
上述方法虽然计算简单,但随着马达在不同景深处对焦,焦距f会发生变化,因而在远焦时的计算值会有一定误差。
在上述距离参数计算方法中,该距离参数是被摄体的景深,且该关系式为
Y=A_1×X^n+A_2×X^(n-1)+…+A_(n-1)×X^2+A_n×X+B              (3)
其中,Y是距离参数,X是视差值,A_1,A_2,…,A_n和B是系数,且n是大于等于2的自然数。
针对上述误差,根据本发明第二较佳实施例的距离参数计算方法在计算景深时,建立视差值的多次方项的多项式,如上述表达式(3)所示。这里,表达式(3)中的指数n优选地不大于7,因为实验证明,视差值的7次方多项式已能够比较精确地表示被摄体的景深值。
在n=7的情况下,上述表达式(3)转换为:
Y=A_1×X^7+A_2×X^6+A_3×X^5+A_4×X^4+A_5×X^3+A_6×X^2+A_7×X+B        (4)
在上述表达式(4)中,系数为A_1,A_2,…,A_7和B,因而,需要分别在8个距离以双摄像头模组拍摄被摄体,并计算出相应的8个视差值,从而将8个距离值和8个视差值带入表达式(4),计算出系数A_1,A_2,…,A_7和B。
在上述距离参数计算方法中,该至少两个预定距离分别为n+1个距离,且该n+1个距离的范围在7cm到200cm之间。
在上述距离参数计算方法中,该n+1个距离中相邻两个距离之间的间隔为10cm。
在上述距离参数计算方法中,所述确定关系式的步骤具体包括:使用二次拟合法拟合该至少两个视差项与至少两个相应系数的乘积之和的二元曲线,以确定关系式。
如上所述,在基于视差值的多次方项的多项式计算被摄体的景深值的情况下,需要在多于两个的距离拍摄被摄体,并计算相应的视差值。优选地,为了提高关系曲线的精度,将拍摄被摄体的距离的范围确定为在7cm到200cm之间,并且每两个距离之间相隔10cm进行拍摄。在记录拍摄的每个点的信息的情况下,使用二次拟合法拟合一条多次方的二元曲线,从而以曲线精确表示被摄体的景深值和视差值之间的关系。
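这种多项式关系可以通过最小二乘拟合得到,例如下面的草图(这里以NumPy的最小二乘多项式拟合作示意,代替文中的"二次拟合法";标定数据应为在7cm到200cm范围内每隔10cm采集的距离-视差对,代码与测试中的数值均为虚构示例):

```python
import numpy as np

def fit_depth_polynomial(disparities, depths, degree=7):
    """以最小二乘拟合表达式(3):景深为视差的degree次多项式,
    返回可直接调用的多项式模型,用于由视差值计算景深值。"""
    coeffs = np.polyfit(np.asarray(disparities, dtype=np.float64),
                        np.asarray(depths, dtype=np.float64), degree)
    return np.poly1d(coeffs)
```

采样点数量应不少于degree+1个;采样点越多,对焦距变化引入的误差的兼容性越好。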
上述方法虽然复杂度较高,但是由于可以兼容焦距的误差,可以显著提高景深值的计算精度。
在上述距离参数计算方法中,该距离参数是双摄像头模组的马达代码值,且关系式为:
Y=A×X+B                                        (5)
其中,Y是距离参数,X是视差值,且A和B是系数。
在上述距离参数计算方法中,该至少两个预定距离分别为15cm和35cm。
通过采用根据本发明第二较佳实施例的距离参数计算方法,除了可以计算被摄体的景深值之外,还可以计算马达代码值。在双摄像头模组中,马达代码值是用于控制马达的驱动的值,即将马达从初始位置移动的距离。并且,马达代码值以零为中心,正值和负值分别表示向靠近被摄体的方向和远离被摄体的方向移动的距离。根据马达-距离曲线,马达代码值与被摄体的距离成反比关系,而依据上述表达式(1)和(2),被摄体的距离与视差值成反比关系。因而,可以得到马达代码值与视差值的上述关系式,即可以通过视差值的一次曲线来计算马达代码值。
基于相同原理,在15cm和35cm处对被摄体成像之后,计算被摄体在第一图像和第二图像之间的两个视差值,并将视差值和距离值带入表达式(5),从而得到马达代码值和视差值之间的关系式。
接下来,当以双摄像头模组对被摄体成像时,就可以根据被摄体在第一图像和第二图像之间的视差值来计算马达代码值,并基于马达代码值移动马达,以实现快速对焦。
另外,由于双摄像头模组的尺寸限制,马达的移动距离非常有限,因而在具体的对焦过程中,可以在近焦处调用表达式(5)进行计算,而在远焦处直接写入远焦值。
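马达代码值的线性标定,以及近焦处调用关系式、远焦处直接写入固定值的策略,可以概括为如下草图(标定数值与近焦阈值均为假设示例):

```python
def fit_motor_line(x1, y1, x2, y2):
    """由15cm与35cm两次标定的(视差, 马达代码值)数据,
    解出表达式(5) Y=A*X+B 中的系数A和B。"""
    A = (y2 - y1) / (x2 - x1)
    B = y1 - A * x1
    return A, B

def motor_code(x, A, B, near_threshold=5.0, far_code=0.0):
    """近焦处按线性关系式计算马达代码值;
    视差很小(被摄体很远)时直接写入固定的远焦值。"""
    return A * x + B if x >= near_threshold else far_code
```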
在根据本发明第二较佳实施例的距离参数计算方法中,在确定相应系数之后,可以将系数值存储在操作处理器或者存储单元中。或者,也可以将包含系数的整个表达式存储在存储单元中,并在需要计算距离参数时从存储单元调用该表达式以进行计算。
这样,通过根据本发明第二较佳实施例的距离参数计算方法,可以实现快速测距或者快速对焦。
并且,根据本发明第二较佳实施例的距离参数计算方法通过基于视差值来计算距离参数,过程简单,节省时间,并具有相对较好的暗态对焦稳定性。例如,根据本发明第二较佳实施例的快速对焦技术与高通平台端的相位检测自动对焦技术(PDAF)对比,具有更好的暗态对焦稳定性。
根据本发明第二较佳实施例的另一方面,提供了一种双摄像头模组,包括:第一摄像头,用于获取第一图像;第二摄像头,用于获取第二图像;和,处理单元,用于基于该第一图像和该第二图像之间的视差值来计算与该双摄像头模组有关的距离参数,该处理单元具体用于:建立该距离参数与该视差值的关系式,该关系式是至少两个视差项与至少两个相应系数的乘积之和,且该视差项为该视差值的幂;在至少两个预定距离以该双摄像头模组拍摄被摄体,并计算该被摄体在第一图像和第二图像之间的至少两个视差值;和,基于该至少两个预定距离和该至少两个视差值计算该至少两个相应系数,从而确定该关系式。
图11是根据本发明第二较佳实施例的双摄像头模组的示意性框图。如图11所示,根据本发明第二较佳实施例的双摄像头模组1000包括:第一摄像头1100,用于获取第一图像;第二摄像头1200,用于获取第二图像;和,处理单元1300,用于基于第一摄像头1100所获取的第一图像和第二摄像头1200所获取的第二图像之间的视差值来计算与双摄像头模组1000有关的距离参数,处理单元1300具体用于:建立该距离参数与该视差值的关系式,该关系式是至少两个视差项与至少两个相应系数的乘积之和,且该视差项为该视差值的幂;在至少两个预定距离以该双摄像头模组拍摄被摄体,并计算该被摄体在第一图像和第二图像之间的至少两个视差值;和,基于该至少两个预定距离和该至少两个视差值计算该至少两个相应系数,从而确定该关系式。
在上述双摄像头模组中,该第一摄像头和该第二摄像头在第一距离拍摄被摄体;和,该处理单元进一步用于:计算该被摄体在第一图像和第二图像之间的第一视差值;和,将该第一视差值带入该关系式,以求得该第一距离的数值。
在上述双摄像头模组中,该距离参数是该被摄体的景深,且该关系式为Y=A×X^(-1)+B;其中,Y是该距离参数,X是该视差值,且A和B是该系数。
在上述双摄像头模组中,该至少两个预定距离分别为15cm和35cm。
在上述双摄像头模组中,该距离参数是该被摄体的景深,且该关系式为Y=A_1×X^n+A_2×X^(n-1)+…+A_(n-1)×X^2+A_n×X+B;其中,Y是该距离参数,X是该视差值,A_1,A_2,…,A_n和B是该系数,且n是大于等于2的自然数。
在上述双摄像头模组中,该至少两个预定距离分别为n+1个距离,且该n+1个距离的范围在7cm到200cm之间。
在上述双摄像头模组中,该n+1个距离中相邻两个距离之间的间隔为10cm。
在上述双摄像头模组中,该处理单元确定该关系式具体包括:使用二次拟合法拟合该至少两个视差项与至少两个相应系数的乘积之和的二元曲线,以确定该关系式。
在上述双摄像头模组中,该距离参数是该双摄像头模组的马达代码值,且该关系式为Y=A×X+B;其中,Y是该距离参数,X是该视差值,且A和B是该系数。
在上述双摄像头模组中,该至少两个预定距离分别为15cm和35cm。
在上述双摄像头模组中,进一步包括:控制单元,用于基于该马达代码值驱动该双摄像头模组的马达,以移动该第一摄像头和该第二摄像头。
在上述双摄像头模组中,进一步包括:存储单元,用于存储该至少两个相应系数。
这里,本领域技术人员可以理解,上述根据本发明第二较佳实施例的双摄像头模组中的其他细节与之前所述的根据本发明第二较佳实施例的距离参数计算方法中的相应细节完全相同,为了避免冗余便不再赘述。
图12是根据本发明第二较佳实施例的双摄像头模组的工作过程的示意性流程图。如图12所示,在工作过程开始后,在S4010,首先校正马达代码和距离参数。之后,在S4020,获取原始图像数据信息,即RAW图,并转换为适于计算机处理的BMP图。之后,在S4030,计算被摄体的视差值。在S4040,计算被摄体的景深值。在S4050,计算马达所在的位置。
根据本发明的又一方面,提供了一种电子设备,该电子设备包括双摄像头模组,且该双摄像头模组包括:第一摄像头,用于获取第一图像;第二摄像头,用于获取第二图像;和,处理单元,用于基于该第一图像和该第二图像之间的视差值来计算与该双摄像头模组有关的距离参数,该处理单元具体用于:建立该距离参数与该视差值的关系式,该关系式是至少两个视差项与至少两个相应系数的乘积之和,且该视差项为该视差值的幂;在至少两个预定距离以该双摄像头模组拍摄被摄体,并计算该被摄体在第一图像和第二图像之间的至少两个视差值;和,基于该至少两个预定距离和该至少两个视差值计算该至少两个相应系数,从而确定该关系式。
在上述双摄像头模组中,该第一摄像头和该第二摄像头在第一距离拍摄被摄体;和,该处理单元进一步用于:计算该被摄体在第一图像和第二图像之间的第一视差值;和,将该第一视差值带入该关系式,以求得该第一距离的数值。
在上述双摄像头模组中,该距离参数是该被摄体的景深,且该关系式为Y=A×X^(-1)+B;其中,Y是该距离参数,X是该视差值,且A和B是该系数。
在上述双摄像头模组中,该至少两个预定距离分别为15cm和35cm。
在上述双摄像头模组中,该距离参数是该被摄体的景深,且该关系式为Y=A_1×X^n+A_2×X^(n-1)+…+A_(n-1)×X^2+A_n×X+B;其中,Y是该距离参数,X是该视差值,A_1,A_2,…,A_n和B是该系数,且n是大于等于2的自然数。
在上述双摄像头模组中,该至少两个预定距离分别为n+1个距离,且该n+1个距离的范围在7cm到200cm之间。
在上述双摄像头模组中,该n+1个距离中相邻两个距离之间的间隔为10cm。
在上述双摄像头模组中,该处理单元确定该关系式具体包括:使用二次拟合法拟合该至少两个视差项与至少两个相应系数的乘积之和的二元曲线,以确定该关系式。
在上述双摄像头模组中,该距离参数是该双摄像头模组的马达代码值,且该关系式为Y=A×X+B;其中,Y是该距离参数,X是该视差值,且A和B是该系数。
在上述双摄像头模组中,该至少两个预定距离分别为15cm和35cm。
在上述双摄像头模组中,进一步包括:控制单元,用于基于该马达代码值驱动该双摄像头模组的马达,以移动该第一摄像头和该第二摄像头。
在上述双摄像头模组中,进一步包括:存储单元,用于存储该至少两个相应系数。
图13是根据本发明第二较佳实施例的电子设备的示意性框图。如图13所示,根据本发明第二较佳实施例的电子设备2000包括双摄像头模组2100,该双摄像头模组2100可以获取第一图像和第二图像。并且,电子设备2000可以包括一处理器2200,用于基于该第一图像和该第二图像之间的视差值来计算与该双摄像头模组有关的距离参数,即能够集成上述双摄像头模组的所述处理单元1300的功能。该处理器2200例如包括计算机、微处理器、集成电路或者可编程逻辑器件。此外,电子设备2000还可以进一步包括一存储器2300,用于存储表达距离参数和视差值之间的关系的关系式的系数值或者关系式本身。该存储器2300可以包括易失性存储器,比如静态随机存取存储器(S-RAM)和动态随机存取存储器(D-RAM),以及非易失性存储器,比如闪存存储器、只读存储器(ROM)、可擦可编程只读存储器(EPROM)和电可擦可编程只读存储器(EEPROM)。
这里,处理器所进行的图像处理的具体细节与之前所述的根据本发明第二较佳实施例的视差计算方法中的相应细节完全相同,为了避免冗余便不再赘述。
根据本发明第二较佳实施例的电子设备可以是包括双摄像头模组的各种电子设备,包括但不限于智能电话、平板个人计算机(PC)、移动电话、视频电话、电子书阅读器、桌面PC、膝上型PC、上网本PC、个人数字助理(PDA)、便携式多媒体播放器(PMP)、MP3播放器、移动医疗装置、相机、可穿戴装置(例如,头戴装置(HMD)、电子衣服、电子手链、电子项链、电子配件、电子文身或者智能手表),等等。
这里,本领域技术人员可以理解,电子设备中的处理器和存储器以及双摄像头模组中的处理单元和存储单元可以互补地使用,以完成根据本发明第二较佳实施例的距离参数计算过程。此外,根据本发明第二较佳实施例的距离参数计算过程也可以完全由双摄像头模组完成,或者完全由电子设备的处理器和存储器完成,本发明第二较佳实施例并不意在对此进行任何限制。
也就是说,根据本发明第二较佳实施例的双摄像头模组可以在通过第一摄像头获取第一图像并通过第二摄像头获取第二图像之后,并不进行图像处理的过程,而是将数据传送到电子设备的处理器进行处理。
通过根据本发明的距离参数计算方法,以及应用该距离参数计算方法的双摄像头模组和电子设备,可以实现快速测距或者快速对焦。
根据本发明的距离参数计算方法,以及应用该距离参数计算方法的双摄像头模组和电子设备可以基于视差值来计算距离参数,过程简单,节省时间,并具有相对较好的暗态对焦稳定性。
本领域的技术人员应理解,上述描述及附图中所示的本发明的实施例只作为举例而并不限制本发明。本发明的目的已经完整并有效地实现。本发明的功能及结构原理已在实施例中展示和说明,在没有背离所述原理下,本发明的实施方式可以有任何变形或修改。

Claims (44)

  1. 一种视差计算方法,用于计算第一图像和第二图像的像素之间的视差值,包括:
    a)在第一图像中选择第一区域,建立所述第一区域的第一灰度直方图,所述第一区域以第一像素为中心;
    b)将所述第一区域的坐标值在第一方向上加上参考视差值以得到所述第二图像中的第二区域的坐标,建立所述第二区域的第二灰度直方图;
    c)计算第一灰度直方图与第二灰度直方图的每行或者每列的差值的第一均方差;
    d)以预定步长增大所述参考视差值,并重复步骤b和c,直到当前第一均方差大于前一第一均方差为止,并将前一第一均方差确定为第一最小均方差值;
    e)确定与第一最小均方差值对应的第一视差值;和
    f)将所述第一视差值作为所述第一像素的视差值。
  2. 根据权利要求1所述的视差计算方法,其特征在于,进一步包括:
    对于所述第一图像中的每一像素,重复所述步骤a,b,c,d,e和f,以获得所述第一图像中的每一像素的视差值;和
    基于所述第一图像中的每一像素的视差值,得到所述第一图像和所述第二图像之间的视差表。
  3. 根据权利要求1所述的视差计算方法,其特征在于,在步骤d之后,步骤e之前进一步包括:
    将所述第一区域缩放预定尺寸为第三区域;
    基于第三区域重复所述步骤a,b,c和d,以获得第二最小均方差值;
    比较所述第一最小均方差值与所述第二最小均方差值;和
    在所述第二最小均方差值小于第一最小均方差值的情况下,将所述第二最小均方差值确定为所述第一最小均方差值。
  4. 根据权利要求3所述的视差计算方法,其特征在于,在步骤d之后,步骤e之前进一步包括:
    将所述第一区域缩放预定尺寸为第四区域,其中所述第四区域的尺寸大于第一区域的尺寸,且所述第三区域的尺寸小于第一区域的尺寸;
    基于第四区域重复所述步骤a,b,c和d,以获得第三最小均方差值;
    比较所述第一最小均方差值、所述第二最小均方差值和所述第三最小均方差值;和
    将所述第一最小均方差值、所述第二最小均方差值和所述第三最小均方差值中最小的一个确定为所述第一最小均方差值。
  5. 根据权利要求1所述的视差计算方法,其特征在于,所述第一方向是图像的行方向或者列方向。
  6. 根据权利要求1所述的视差计算方法,其特征在于,在步骤a之前进一步包括:
    将所述第一图像和所述第二图像缩放为相同尺寸。
  7. 根据权利要求1所述的视差计算方法,其特征在于,在步骤a之前进一步包括:
    将所述第一图像和所述第二图像转换为同一彩色格式的图像。
  8. 根据权利要求1所述的视差计算方法,其特征在于,在步骤a之前进一步包括:
    从双摄像头获取每一摄像头的原始图像数据信息;和
    使用差值运算法将所获取的原始图像转换为适于显示处理的所述第一图像和所述第二图像。
  9. 根据权利要求1所述的视差计算方法,其特征在于,在步骤a之前进一步包括:
    将所述第一图像和所述第二图像转换为第一灰度图像和第二灰度图像;和
    根据所需的视差图尺寸,将第一灰度图像和第二灰度图像分别缩放为所述视差图尺寸。
  10. 根据权利要求2所述的视差计算方法,其特征在于,进一步包括:
    基于所述视差表将所述第一图像和所述第二图像合成为三维图像。
  11. 一种距离参数计算方法,用于基于双摄像头模组所拍摄的第一图像和第二图像之间的视差值来计算与所述双摄像头模组有关的距离参数,所述方法包括:
    建立所述距离参数与所述视差值的关系式,所述关系式是至少两个视差项与至少两个相应系数的乘积之和,且所述视差项为所述视差值的幂;
    在至少两个预定距离以所述双摄像头模组拍摄被摄体,并计算所述被摄体在第一图像和第二图像之间的至少两个视差值;和
    基于所述至少两个预定距离和所述至少两个视差值计算所述至少两个相应系数,从而确定所述关系式。
  12. 根据权利要求11所述的距离参数计算方法,其特征在于,进一步包括:
    在第一距离以所述双摄像头模组拍摄被摄体,并计算所述被摄体在第一图像和第二图像之间的第一视差值;和
    将所述第一视差值带入所述关系式,以求得所述第一距离的数值。
  13. 根据权利要求12所述的距离参数计算方法,其特征在于,
    所述距离参数是所述被摄体的景深,且所述关系式为Y=A×X^(-1)+B;
    其中,Y是所述距离参数,X是所述视差值,且A和B是所述系数。
  14. 根据权利要求13所述的距离参数计算方法,其特征在于,
    所述至少两个预定距离分别为15cm和35cm。
  15. 根据权利要求12所述的距离参数计算方法,其特征在于,
    所述距离参数是所述被摄体的景深,且所述关系式为Y=A_1×X^n+A_2×X^(n-1)+…+A_(n-1)×X^2+A_n×X+B;
    其中,Y是所述距离参数,X是所述视差值,A_1,A_2,…,A_n和B是所述系数,且n是大于等于2的自然数。
  16. 根据权利要求15所述的距离参数计算方法,其特征在于,
    所述至少两个预定距离分别为n+1个距离,且所述n+1个距离的范围在7cm到200cm之间。
  17. 根据权利要求16所述的距离参数计算方法,其特征在于,
    所述n+1个距离中相邻两个距离之间的间隔为10cm。
  18. 根据权利要求17所述的距离参数计算方法,其特征在于,所述确定所述关系式的步骤具体包括:
    使用二次拟合法拟合所述至少两个视差项与至少两个相应系数的乘积之和的二元曲线,以确定所述关系式。
  19. 根据权利要求12所述的距离参数计算方法,其特征在于,
    所述距离参数是所述双摄像头模组的马达代码值,且所述关系式为Y=A×X+B;
    其中,Y是所述距离参数,X是所述视差值,且A和B是所述系数。
  20. 根据权利要求19所述的距离参数计算方法,其特征在于,
    所述至少两个预定距离分别为15cm和35cm。
  21. 一种双摄像头模组,包括:
    第一摄像头,用于获取第一图像;
    第二摄像头,用于获取第二图像;和
    处理单元,用于计算第一图像和第二图像的像素之间的视差值,具体包括:
    a)在第一图像中选择第一区域,建立所述第一区域的第一灰度直方图,所述第一区域以第一像素为中心;
    b)将所述第一区域的坐标值在第一方向上加上参考视差值以得到所述第二图像中的第二区域的坐标,建立所述第二区域的第二灰度直方图;
    c)计算第一灰度直方图与第二灰度直方图的每行或者每列的差值的第一均方差;
    d)以预定步长增大所述参考视差值,并重复步骤b和c,直到当前第一均方差大于前一第一均方差为止,并将前一第一均方差确定为第一最小均方差值;
    e)确定与第一最小均方差值对应的第一视差值;和
    f)将所述第一视差值作为所述第一像素的视差值。
  22. 根据权利要求21所述的双摄像头模组,其特征在于,所述处理单元进一步用于:
    对于所述第一图像中的每一像素,重复所述步骤a,b,c,d,e和f,以获得所述第一图像中的每一像素的视差值;和
    基于所述第一图像中的每一像素的视差值,得到所述第一图像和所述第二图像之间的视差表。
  23. 根据权利要求21所述的双摄像头模组,其特征在于,所述处理单元在步骤d之后,步骤e之前进一步用于:
    将所述第一区域缩放预定尺寸为第三区域;
    基于第三区域重复所述步骤a,b,c和d,以获得第二最小均方差值;
    比较所述第一最小均方差值与所述第二最小均方差值;和
    在所述第二最小均方差值小于第一最小均方差值的情况下,将所述第二最小均方差值确定为所述第一最小均方差值。
  24. 根据权利要求23所述的双摄像头模组,其特征在于,所述处理单元在步骤d之后,步骤e之前进一步用于:
    将所述第一区域缩放预定尺寸为第四区域,其中所述第四区域的尺寸大于第一区域的尺寸,且所述第三区域的尺寸小于第一区域的尺寸;
    基于第四区域重复所述步骤a,b,c和d,以获得第三最小均方差值;
    比较所述第一最小均方差值、所述第二最小均方差值和所述第三最小均方差值;和
    将所述第一最小均方差值、所述第二最小均方差值和所述第三最小均方差值中最小的一个确定为所述第一最小均方差值。
  25. 根据权利要求21所述的双摄像头模组,其特征在于,所述第一方向是图像的行方向或者列方向。
  26. 根据权利要求21所述的双摄像头模组,其特征在于,所述处理单元在步骤a之前进一步用于:
    将所述第一图像和所述第二图像缩放为相同尺寸。
  27. 根据权利要求21所述的双摄像头模组,其特征在于,所述处理单元在步骤a之前进一步用于:
    将所述第一图像和所述第二图像转换为同一彩色格式的图像。
  28. 根据权利要求21所述的双摄像头模组,其特征在于,所述处理单元在步骤a之前进一步用于:
    从双摄像头获取每一摄像头的原始图像数据信息;和
    使用差值运算法将所获取的原始图像转换为适于显示处理的所述第一图像和所述第二图像。
  29. 根据权利要求21所述的双摄像头模组,其特征在于,所述处理单元在步骤a之前进一步用于:
    将所述第一图像和所述第二图像转换为第一灰度图像和第二灰度图像;和
    根据所需的视差图尺寸,将第一灰度图像和第二灰度图像分别缩放为所述视差图尺寸。
  30. 根据权利要求22所述的双摄像头模组,其特征在于,所述处理单元进一步用于:
    基于所述视差表将所述第一图像和所述第二图像合成为三维图像。
  31. 一种电子设备,包括根据权利要求21-30中任意一项所述的双摄像头模组。
  32. 一种双摄像头模组,包括:
    第一摄像头,用于获取第一图像;
    第二摄像头,用于获取第二图像;和
    处理单元,用于基于所述第一图像和所述第二图像之间的视差值来计算与所述双摄像头模组有关的距离参数,所述处理单元具体用于:
    建立所述距离参数与所述视差值的关系式,所述关系式是至少两个视差项与至少两个相应系数的乘积之和,且所述视差项为所述视差值的幂;
    在至少两个预定距离以所述双摄像头模组拍摄被摄体,并计算所述被摄体在第一图像和第二图像之间的至少两个视差值;和
    基于所述至少两个预定距离和所述至少两个视差值计算所述至少两个相应系数,从而确定所述关系式。
  33. 根据权利要求32所述的双摄像头模组,其特征在于,
    所述第一摄像头和所述第二摄像头在第一距离拍摄被摄体;和
    所述处理单元进一步用于:
    计算所述被摄体在第一图像和第二图像之间的第一视差值;和
    将所述第一视差值带入所述关系式,以求得所述第一距离的数值。
  34. 根据权利要求33所述的双摄像头模组,其特征在于,
    所述距离参数是所述被摄体的景深,且所述关系式为Y=A×X^(-1)+B;
    其中,Y是所述距离参数,X是所述视差值,且A和B是所述系数。
  35. 根据权利要求34所述的双摄像头模组,其特征在于,
    所述至少两个预定距离分别为15cm和35cm。
  36. 根据权利要求33所述的双摄像头模组,其特征在于,
    所述距离参数是所述被摄体的景深,且所述关系式为Y=A_1×X^n+A_2×X^(n-1)+…+A_(n-1)×X^2+A_n×X+B;
    其中,Y是所述距离参数,X是所述视差值,A_1,A_2,…,A_n和B是所述系数,且n是大于等于2的自然数。
  37. 根据权利要求36所述的双摄像头模组,其特征在于,
    所述至少两个预定距离分别为n+1个距离,且所述n+1个距离的范围在7cm到200cm之间。
  38. 根据权利要求37所述的双摄像头模组,其特征在于,
    所述n+1个距离中相邻两个距离之间的间隔为10cm。
  39. 根据权利要求38所述的双摄像头模组,其特征在于,所述处理单元确定所述关系式具体包括:
    使用二次拟合法拟合所述至少两个视差项与至少两个相应系数的乘积之和的二元曲线,以确定所述关系式。
  40. 根据权利要求33所述的双摄像头模组,其特征在于,
    所述距离参数是所述双摄像头模组的马达代码值,且所述关系式为Y=A×X+B;
    其中,Y是所述距离参数,X是所述视差值,且A和B是所述系数。
  41. 根据权利要求40所述的双摄像头模组,其特征在于,
    所述至少两个预定距离分别为15cm和35cm。
  42. 根据权利要求40所述的双摄像头模组,其特征在于,进一步包括:
    控制单元,用于基于所述马达代码值驱动所述双摄像头模组的马达,以移动所述第一摄像头和所述第二摄像头。
  43. 根据权利要求33所述的双摄像头模组,其特征在于,进一步包括:
    存储单元,用于存储所述至少两个相应系数。
  44. 一种电子设备,包括根据权利要求32-43中任意一项所述的双摄像头模组。
PCT/CN2017/109086 2016-11-04 2017-11-02 视差与距离参数计算方法及双摄像头模组和电子设备 WO2018082604A1 (zh)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201610971031.0 2016-11-04
CN201610971337.6 2016-11-04
CN201610971031.0A CN108024051B (zh) 2016-11-04 2016-11-04 距离参数计算方法,双摄像头模组和电子设备
CN201610971337.6A CN108377376B (zh) 2016-11-04 2016-11-04 视差计算方法,双摄像头模组和电子设备

Publications (1)

Publication Number Publication Date
WO2018082604A1 (zh)

Family

ID=62076725

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/109086 WO2018082604A1 (zh) 2016-11-04 2017-11-02 视差与距离参数计算方法及双摄像头模组和电子设备

Country Status (1)

Country Link
WO (1) WO2018082604A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113012241A (zh) * 2021-04-28 2021-06-22 歌尔股份有限公司 双目摄像头的视差检测方法、装置、电子设备及存储介质
CN115082563A (zh) * 2021-03-15 2022-09-20 北京小米移动软件有限公司 一种图像处理方法、装置、电子设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101262619A (zh) * 2008-03-30 2008-09-10 深圳华为通信技术有限公司 视差获取方法和装置
CN101710423A (zh) * 2009-12-07 2010-05-19 青岛海信网络科技股份有限公司 一种立体图像的匹配搜索方法
CN102333234A (zh) * 2011-10-28 2012-01-25 清华大学 一种双目立体视频状态信息的监测方法及装置
EP2482560A2 (en) * 2011-01-26 2012-08-01 Kabushiki Kaisha Toshiba Video display apparatus and video display method
CN103581650A (zh) * 2013-10-21 2014-02-12 四川长虹电器股份有限公司 双目3d视频转多目3d视频的方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17866975

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17866975

Country of ref document: EP

Kind code of ref document: A1