WO2020107995A1 - Imaging method and apparatus, electronic device, computer-readable storage medium - Google Patents

Imaging method and apparatus, electronic device, computer-readable storage medium Download PDF

Info

Publication number
WO2020107995A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
point spread
spread function
function
acquiring
Prior art date
Application number
PCT/CN2019/104413
Other languages
English (en)
French (fr)
Inventor
王会朝
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司
Publication of WO2020107995A1 publication Critical patent/WO2020107995A1/zh

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/73: Deblurring; Sharpening
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/90: Determination of colour characteristics

Definitions

  • The present application relates to the field of imaging technology, and in particular to an imaging method, apparatus, electronic device, and computer-readable storage medium.
  • The optical imaging system in an electronic device projects the subject onto the photosensitive surface of the sensor and obtains RGB (Red, Green, Blue) three-channel response values through the photoelectric effect.
  • The RGB three-channel response values then undergo autofocus, automatic exposure, automatic white balance, and color space conversion to produce the final image.
  • However, traditional imaging methods are affected by aberrations, resulting in blurred images.
  • Embodiments of the present application provide an imaging method, apparatus, electronic device, and computer-readable storage medium.
  • An imaging method includes: acquiring a first image; converting the first image into a second image containing a luminance channel through color space conversion; and acquiring a point spread function, and performing an inverse point spread function transformation on the second image using the point spread function to obtain a target image.
  • An imaging device includes:
  • an image acquisition module configured to acquire a first image;
  • an image conversion module configured to convert the first image into a second image containing a luminance channel through color space conversion; and
  • an inverse transformation module configured to acquire a point spread function and perform an inverse point spread function transformation on the second image using the point spread function to obtain a target image.
  • An electronic device includes a memory and a processor, with a computer program stored in the memory.
  • When the computer program is executed by the processor, the processor is caused to perform the steps of the imaging method described above.
  • The imaging method, apparatus, electronic device, and computer-readable storage medium in these embodiments convert the acquired first image into a second image containing a luminance channel through color space conversion, then acquire a point spread function and perform an inverse point spread function transformation on the second image to obtain a target image. The inverse point spread function transformation effectively removes image edge blur and improves image clarity.
  • FIG. 1 is a diagram of an application environment of an imaging method in an embodiment;
  • FIG. 2 is a flowchart of an imaging method in an embodiment;
  • FIG. 3 is a schematic diagram of the principle by which an original image diffuses into a circle of confusion in an embodiment;
  • FIG. 4 is a flowchart of an imaging method in another embodiment;
  • FIG. 5 is a structural block diagram of an imaging device in an embodiment;
  • FIG. 6 is a structural block diagram of an imaging device in another embodiment;
  • FIG. 7 is a block diagram of the internal structure of an electronic device in an embodiment;
  • FIG. 8 is a schematic diagram of an image processing circuit in an embodiment.
  • The terms “first”, “second”, etc. used in this application may be used herein to describe various elements, but these elements are not limited by these terms; the terms are only used to distinguish one element from another.
  • For example, without departing from the scope of this application, the first image may be referred to as the second image, and similarly, the second image may be referred to as the first image. Both the first image and the second image are images, but they are not the same image.
  • FIG. 1 is a schematic diagram of the application environment of an imaging method in an embodiment.
  • The application environment includes an electronic device 110.
  • An optical imaging system is installed on the electronic device 110; the subject is photographed through the optical imaging system and projected onto the photosensitive surface of the image sensor, and the image sensor outputs RGB (Red, Green, Blue) values through the photoelectric effect.
  • After autofocus, automatic white balance, and demosaicing correction, the RGB values yield an RGB image that is properly focused and whose brightness and color match human perception; the RGB image is converted into a YUV image through a color space conversion matrix, and an inverse point spread function transformation is applied to the YUV image to obtain the target image.
  • In a YUV image, Y denotes luminance (luma), that is, the grayscale value, while U and V denote chrominance (chroma), which describe the color and saturation of the image and specify the color of a pixel.
  • Performing the inverse point spread function transformation on the YUV image effectively removes image edge blur and improves image clarity.
  • The electronic device 110 may be a smartphone, a personal computer, a tablet computer, a personal digital assistant, a camera, a wearable device, or the like equipped with an optical imaging system such as a camera.
  • FIG. 2 is a flowchart of an imaging method in an embodiment.
  • The imaging method in this embodiment is described using the electronic device 110 in FIG. 1 as an example. As shown in FIG. 2, the imaging method includes steps 202 to 206.
  • Step 202: Acquire the first image.
  • The first image may be an RGB image.
  • Specifically, the electronic device images the subject through an optical imaging system (such as a lens) and projects it onto the photosensitive surface of the image sensor.
  • The image sensor outputs RGB three-channel response values through the photoelectric effect, and an RGB image is obtained after processing such as autofocus, automatic exposure, and automatic white balance.
  • Step 204: Convert the first image into a second image containing a luminance channel through color space conversion.
  • Color space conversion means converting an image from one color space to another.
  • The second image is an image containing a luminance channel.
  • A luminance channel means that the image format contains a luminance signal component.
  • The electronic device converts the first image into a second image containing a luminance channel through color space conversion.
  • The second image may be a YUV image, HSV image, YCrCb image, YIQ image, HIS image, or the like.
  • In HSV, H represents hue, S represents saturation, and V represents value (brightness).
  • In YCrCb, Y represents luminance (luma), Cr represents hue, and Cb represents saturation.
  • In YIQ, Y stands for luminance, I stands for in-phase (with colors ranging from orange to cyan), and Q stands for quadrature-phase.
  • In HIS, H represents hue, S represents saturation, and I represents luminance.
  • Optionally, the electronic device converts the RGB image into a YUV image through color space conversion.
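For illustration only (the publication contains no code), the color space conversion of step 204 can be sketched in Python; the BT.601 coefficients below are an assumption, since the text does not specify which YUV variant its conversion matrix uses:

```python
import numpy as np

# BT.601 full-range RGB -> YUV coefficients (an assumption of this sketch;
# the publication does not specify which YUV variant is used).
RGB_TO_YUV = np.array([
    [ 0.299,    0.587,    0.114  ],  # Y: luminance (grayscale value)
    [-0.14713, -0.28886,  0.436  ],  # U: blue-difference chrominance
    [ 0.615,   -0.51499, -0.10001],  # V: red-difference chrominance
])

def rgb_to_yuv(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image (floats in [0, 1]) to YUV."""
    return rgb @ RGB_TO_YUV.T
```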
  • Step 206: Acquire a point spread function, and use it to perform an inverse point spread function transformation on the second image to obtain a target image.
  • After passing through the optical system, a point light source forms an enlarged circle of confusion due to aberration and diffraction, which can be described by a point spread function (PSF).
  • Image edge blurring can be regarded as the convolution of a two-dimensional point spread function with the original image.
  • As shown in FIG. 3, the diffused blurred image f(x, y) is computed with formula (1):
  f(x, y) = ∫∫ PSF(α, β) g(x - α, y - β) dα dβ + n(x, y)        (1)
  • where g(x, y) is the original image data, PSF(α, β) is the point spread function, n(x, y) is noise, α ranges from 0 to x, β ranges from 0 to y, and (x, y) are the coordinates of a pixel in the image.
  • Taking the Fourier transform of both sides of formula (1) gives formula (2):
  F(u, v) = PSF(u, v) G(u, v) + N(u, v)        (2)
  • Approximating the finite noise as negligible, the recovered image with the PSF removed is obtained as in formula (3):
  G(u, v) ≈ F(u, v) / PSF(u, v)        (3)
  • The inverse point spread function transformation refers to dividing the image by the point spread function, or dividing by the point spread function and then multiplying by a coefficient.
  • The coefficient may be 0.5, 0.6, 0.9, 1, 1.1, etc., but is not limited to these values.
  • Specifically, the electronic device acquires a point spread function and uses it to perform an inverse point spread function transformation on the second image to obtain the target image.
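A minimal sketch of what such an inverse transformation could look like in the frequency domain, following formula (3) and assuming a known, single-channel PSF; the eps regularization is an addition of this sketch rather than something the publication specifies, since a plain inverse filter amplifies noise wherever PSF(u, v) is near zero:

```python
import numpy as np

def inverse_psf_transform(blurred: np.ndarray, psf: np.ndarray,
                          eps: float = 1e-3) -> np.ndarray:
    """Recover an image by dividing out the PSF in the frequency domain,
    per formula (3): G(u, v) = F(u, v) / PSF(u, v).

    The eps term regularizes the division; it is an assumption of this
    sketch and is not specified in the publication.
    """
    otf = np.fft.fft2(psf, s=blurred.shape)          # PSF(u, v)
    f = np.fft.fft2(blurred)                         # F(u, v)
    g = f * np.conj(otf) / (np.abs(otf) ** 2 + eps)  # regularized division
    return np.real(np.fft.ifft2(g))
```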
  • In this embodiment, the acquired first image is converted into a second image containing a luminance channel through color space conversion; a point spread function is then acquired, and an inverse point spread function transformation is applied to the second image to obtain a target image.
  • The inverse point spread function transformation effectively removes image edge blur and improves image clarity.
  • Acquiring the first image includes: receiving a shooting instruction, acquiring RGB three-channel response values according to the shooting instruction, performing autofocus, automatic exposure, and automatic white balance processing on the RGB three-channel response values to obtain an RGB image, and taking this RGB image as the first image.
  • A shooting instruction is an instruction generated when the user touches or taps a shooting control on the camera shooting interface. After receiving the shooting instruction, the electronic device projects the subject onto the photosensitive surface of the image sensor through the optical imaging system.
  • The image sensor outputs RGB three-channel response values through the photoelectric effect.
  • The RGB three-channel response values undergo autofocus, automatic exposure, and automatic white balance processing to obtain an RGB image, which is used as the first image. Acquiring the first image directly according to the shooting instruction and then processing it ensures that the image is processed in real time.
  • In an embodiment, the above imaging method further includes: acquiring an image edge area in the second image.
  • In that case, obtaining a point spread function and using it to perform the inverse point spread function transformation on the second image to obtain the target image includes: acquiring a point spread function, and using it to perform the inverse point spread function transformation on the image edge area in the second image to obtain the target image.
  • Because the point spread function is applied only to the image edge area of the second image, the amount of data to process is small.
  • In an embodiment, obtaining a point spread function and using it to perform the inverse point spread function transformation on the second image to obtain a target image includes: dividing the second image into a preset number of blocks; acquiring the point spread function corresponding to each block of the second image; performing the inverse point spread function transformation on each block according to its corresponding point spread function to obtain the target sub-image corresponding to each block; and synthesizing the target sub-images to obtain the target image.
  • The preset number can be set as needed, for example 2x2 blocks or 3x3 blocks.
  • A corresponding point spread function can be calculated for each block.
  • After the electronic device obtains the point spread function corresponding to each block of the second image, the inverse point spread function transformation is performed on each block with its corresponding point spread function to obtain the target sub-image for that block.
  • The target sub-images are synthesized to obtain the target image. By dividing the second image into multiple blocks and applying the corresponding point spread function to each block, the calculation is more accurate and image clarity improves further.
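A sketch of this block-wise variant, reusing the inverse_psf_transform helper sketched above; the grid layout and the hard block boundaries are assumptions (a production implementation would likely blend or overlap blocks to avoid visible seams):

```python
import numpy as np

def blockwise_inverse_psf(image: np.ndarray, psfs: list) -> np.ndarray:
    """Deconvolve each block of a single-channel image with that block's
    own PSF and reassemble the target image from the target sub-images.

    `psfs` is a grid (list of lists) of per-block PSFs, e.g. 2x2 or 3x3.
    """
    rows, cols = len(psfs), len(psfs[0])
    h, w = image.shape
    out = np.empty_like(image, dtype=float)
    for i in range(rows):
        for j in range(cols):
            ys = slice(i * h // rows, (i + 1) * h // rows)
            xs = slice(j * w // cols, (j + 1) * w // cols)
            out[ys, xs] = inverse_psf_transform(image[ys, xs], psfs[i][j])
    return out
```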
  • In an embodiment, obtaining the point spread function includes: obtaining a reference image used for calibration and a point-diffused sample image captured of the reference image; dividing the sample image and the reference image into corresponding preset numbers of blocks; and using each block of the sample image together with the corresponding block of the reference image to calculate the point spread function for that block.
  • The reference image used for calibration may be a standard resolution test card.
  • The camera of the electronic device shoots the reference image to obtain the diffused sample image.
  • The sample image is the convolution of the reference image with the point spread function.
  • Using formula (3), the sample image is divided by the reference image (in the frequency domain) to obtain the point spread function.
  • The sample image and the reference image are divided into corresponding preset numbers of blocks, so that each block of the sample image corresponds to a block of the reference image.
  • A block of the sample image and the corresponding block of the reference image are used to calculate the point spread function of that block.
  • Specifically, a standard resolution test card used for calibration can be selected, and a single-lens reflex camera is used to capture an image with very little dispersion as the reference image.
  • The reference image can be considered to have almost no PSF diffusion; an electronic device with the optical imaging system then shoots the test card at the same distance to obtain a sample image with diffused edges.
  • A block of the reference image is compared with the corresponding block of the sample image; the MTF at different positions is calculated from the spatial frequency and contrast of the line pairs, and a two-dimensional inverse Fourier transform of MTF(u, v) yields PSF(x, y).
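An illustrative sketch of this calibration step: per formula (3) the sample block is the reference block convolved with the PSF, so the PSF follows from a regularized frequency-domain division. The eps term and the energy normalization are assumptions of this sketch:

```python
import numpy as np

def estimate_psf(sample: np.ndarray, reference: np.ndarray,
                 eps: float = 1e-3) -> np.ndarray:
    """Estimate a block's PSF from a calibration pair,
    where sample = reference convolved with the PSF."""
    s = np.fft.fft2(sample)                         # S(u, v)
    r = np.fft.fft2(reference)                      # R(u, v)
    otf = s * np.conj(r) / (np.abs(r) ** 2 + eps)   # PSF(u, v) = S / R
    psf = np.real(np.fft.ifft2(otf))
    return psf / psf.sum()                          # normalize energy to 1
```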
  • In an embodiment, obtaining a point spread function includes: acquiring a black-and-white slanted-edge image and fitting it to obtain the distribution function of the knife edge; calculating an edge spread function from the distribution function of the knife edge; differentiating the edge spread function to obtain a line spread function; and performing a Fourier transform on the line spread function to obtain the point spread function.
  • The PSF(u, v) obtained by the two-dimensional Fourier transform of the point spread function PSF(x, y) of point-source imaging can be regarded as the modulation transfer function MTF(u, v) of the optical imaging system, where MTF stands for Modulation Transfer Function. Therefore, performing the inverse PSF transformation requires solving for the MTF of the optical imaging system.
  • The electronic device scans the selected black-and-white slanted-edge image line by line.
  • When the average gray value of the low-reflectivity and high-reflectivity targets on the two sides of the edge equals the gray value of a pixel, or lies between the gray values of two adjacent pixels, that position is the position of the knife edge.
  • The distribution function of the knife edge is obtained by fitting; the edge spread function is fitted from the knife-edge distribution function; the line spread function is obtained by differentiating the edge spread function; and a Fourier transform of the line spread function yields the modulation transfer function, which is used as the point spread function.
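A compact sketch of the slanted-edge pipeline just described (an ESF fitted across the knife edge, differentiated to an LSF, then Fourier-transformed); the error-function edge model and the use of SciPy are assumptions of this sketch, not details from the publication:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def edge_model(x, x0, sigma, lo, hi):
    """Assumed error-function model of the edge spread function (ESF)."""
    return lo + (hi - lo) * 0.5 * (1.0 + erf((x - x0) / (sigma * np.sqrt(2))))

def mtf_from_edge_profile(x: np.ndarray, esf_samples: np.ndarray) -> np.ndarray:
    """Fit the ESF, differentiate it to a line spread function (LSF),
    and Fourier-transform the LSF to obtain a 1-D transfer function."""
    p0 = (x.mean(), 1.0, esf_samples.min(), esf_samples.max())
    params, _ = curve_fit(edge_model, x, esf_samples, p0=p0)
    esf = edge_model(x, *params)
    lsf = np.gradient(esf, x)            # LSF = d(ESF)/dx
    lsf = lsf / lsf.sum()                # normalize
    return np.abs(np.fft.rfft(lsf))      # |FT(LSF)| gives the MTF
```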
  • In an embodiment, the above imaging method further includes: acquiring a reference image used for calibration and a point-diffused sample image captured of the reference image, and calculating the point spread function from the sample image and the reference image.
  • The reference image used for calibration may be a standard resolution test card.
  • The camera of the electronic device shoots the reference image to obtain the diffused sample image.
  • The sample image is the convolution of the reference image with the point spread function. Using formula (3), the sample image is divided by the reference image to obtain the point spread function.
  • Specifically, a standard resolution test card for calibration can be selected and photographed to obtain an image with very little dispersion as the reference image.
  • The reference image can be considered to have almost no PSF diffusion; an electronic device with the optical imaging system then shoots the test card at the same distance to obtain a sample image with diffused edges.
  • The reference image and the sample image are compared; the MTF at different positions is calculated from the spatial frequency and contrast of the line pairs, and a two-dimensional inverse Fourier transform of MTF(u, v) yields PSF(x, y).
  • In an embodiment, the above imaging method further includes performing noise reduction and sharpening on the target image to obtain a corrected image, where the filter radius is reduced during sharpening.
  • A nonlinear denoising algorithm, such as a bilateral filter, can be used for noise reduction.
  • Sharpening compensates the contours of the image and enhances its edges and gray-level transitions so that the image becomes clearer.
  • Sharpening is generally divided into spatial-domain and frequency-domain processing. After the target image has undergone the inverse point spread function transformation, a sharpening algorithm can be applied; reducing the filter radius lowers the strength of the sharpening algorithm, which effectively alleviates the white-fringing phenomenon and improves image quality.
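A sketch of this correction stage using OpenCV; the library choice and all parameter values are assumptions, since the publication names only a bilateral filter and a reduced filter radius. The small Gaussian radius in the unsharp mask is what lowers the sharpening strength:

```python
import cv2
import numpy as np

def denoise_and_sharpen(target: np.ndarray, radius: int = 1,
                        amount: float = 0.5) -> np.ndarray:
    """Bilateral denoising followed by an unsharp mask whose Gaussian
    radius is kept small to reduce sharpening strength (easing white
    fringing). Parameter values are illustrative."""
    img = target.astype(np.float32)
    denoised = cv2.bilateralFilter(img, d=5, sigmaColor=50, sigmaSpace=50)
    ksize = 2 * radius + 1                          # reduced filter radius
    blurred = cv2.GaussianBlur(denoised, (ksize, ksize), 0)
    return cv2.addWeighted(denoised, 1.0 + amount, blurred, -amount, 0)
```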
  • As shown in FIG. 4, in another embodiment an imaging method includes:
  • Step 402: Project the subject onto the photosensitive surface of the image sensor through the optical imaging system.
  • Step 404: The image sensor outputs RGB three-channel response values through the photoelectric effect.
  • Step 406: Process the RGB three-channel response values through autofocus, automatic exposure, and automatic white balance to obtain an RGB image.
  • The RGB image is the first image.
  • Step 408: Convert the RGB image into a YUV image through color space conversion.
  • The YUV image is the second image.
  • Step 410: Perform the inverse point spread function transformation on the YUV image to obtain the target image.
  • Specifically, the inverse point spread function transformation is performed on the Y channel of the YUV image to obtain the target image.
  • Step 412: Perform noise reduction and sharpening on the target image to obtain a corrected image.
  • The sharpening strength can be reduced during this step.
  • By converting the RGB image obtained through the optical imaging system into a YUV image and performing the inverse point spread function transformation on the YUV image, the circle of confusion at image edges can be effectively removed and image clarity improved, while the sharpening process effectively alleviates the white-fringing phenomenon and improves image quality.
  • The above imaging method can also be used to reduce the thickness of an optical module, for example by reducing the distance between two lenses.
  • A thinner optical module using this imaging method can achieve the same imaging effect as the original module.
  • The imaging method is described below with reference to a detailed embodiment; an illustrative code sketch composing these steps follows the list. An imaging method includes steps (1) to (8):
  • (1) Project the subject onto the photosensitive surface of the image sensor through the optical imaging system.
  • (2) The image sensor outputs RGB three-channel response values through the photoelectric effect.
  • (3) Process the RGB three-channel response values through autofocus, automatic exposure, and automatic white balance to obtain an RGB image, and use the RGB image as the first image.
  • (4) Convert the first image into a second image through color space conversion.
  • (5) Divide the second image into a preset number of blocks.
  • (6) Acquire the point spread function corresponding to each block of the second image.
  • (7) Perform the inverse point spread function transformation on each corresponding block of the second image using the point spread function of that block to obtain the target sub-image for each block, and synthesize the target sub-images into the target image.
  • (8) Perform noise reduction and sharpening on the target image to obtain a corrected image, where the filter radius is reduced during sharpening.
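Composing the sketches above, a hypothetical end-to-end run of steps (4) to (7) on synthetic data might look as follows (step (8) would then apply denoise_and_sharpen); the 2x2 grid, array sizes, and shared test PSF are all illustrative assumptions:

```python
import numpy as np

rgb = np.random.rand(256, 256, 3)              # stand-in for steps (1)-(3)
reference = np.random.rand(128, 128)           # calibration reference block
true_psf = np.full((5, 5), 1 / 25.0)           # known test PSF (box blur)
sample = np.real(np.fft.ifft2(np.fft.fft2(reference) *
                              np.fft.fft2(true_psf, s=reference.shape)))

yuv = rgb_to_yuv(rgb)                          # step (4)
psf = estimate_psf(sample, reference)          # steps (5)-(6), one block shown
psfs = [[psf, psf], [psf, psf]]                # reuse across a 2x2 grid
yuv[..., 0] = blockwise_inverse_psf(yuv[..., 0], psfs)  # step (7): Y channel
```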
  • The imaging method in this embodiment converts the acquired first image into a second image containing a luminance channel through color space conversion and performs the inverse point spread function transformation on each block of the second image according to that block's point spread function to obtain the target image. Performing the inverse transformation block by block is more accurate, effectively removes image edge blur, and improves image clarity; reducing the strength of the sharpening process effectively alleviates the white-fringing phenomenon and improves image quality.
  • Although the steps in the flowcharts of FIG. 2 and FIG. 4 are displayed in the order indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they may be executed in other orders. Moreover, at least some of the steps in FIG. 2 and FIG. 4 may include multiple sub-steps or stages, which are not necessarily executed at the same time but may be executed at different times; their execution order is not necessarily sequential, and they may be executed in turn or alternately with at least part of other steps or of the sub-steps or stages of other steps.
  • As shown in FIG. 5, an imaging device includes an image acquisition module 502, an image conversion module 504, and an inverse transform module 506, wherein:
  • The image acquisition module 502 is used to acquire the first image.
  • The image conversion module 504 is used to convert the first image into a second image containing a luminance channel through color space conversion.
  • The inverse transform module 506 is used to obtain a point spread function and use it to perform an inverse point spread function transformation on the second image to obtain a target image.
  • The imaging device in this embodiment converts the acquired first image into a second image containing a luminance channel through color space conversion, then acquires a point spread function and performs an inverse point spread function transformation on the second image to obtain a target image.
  • The inverse point spread function transformation effectively removes image edge blur and improves image clarity.
  • In an embodiment, in addition to the image acquisition module 502, the image conversion module 504, and the inverse transform module 506, the imaging device further includes an area determination module 508, a correction module 510, and a calculation module 512.
  • The area determination module 508 is used to obtain an image edge area in the second image.
  • The inverse transform module 506 is also used to obtain a point spread function and use it to perform the inverse point spread function transformation on the image edge area in the second image to obtain a target image.
  • The correction module 510 is used to perform noise reduction and sharpening on the target image to obtain a corrected image, where the filter radius is reduced during sharpening.
  • The calculation module 512 is used to obtain a reference image for calibration and a point-diffused sample image captured of the reference image, and to calculate the point spread function from the sample image and the reference image.
  • In an embodiment, the area determination module 508 is used to divide the second image into a preset number of blocks.
  • The inverse transform module 506 is also used to obtain the point spread function corresponding to each block of the second image and to perform the inverse point spread function transformation on each corresponding block according to that block's point spread function, obtaining the target sub-image for each block and synthesizing the target sub-images into the target image.
  • The above imaging device may further include a relationship establishing module.
  • The relationship establishing module is used to obtain a reference image for calibration and a point-diffused sample image captured of the reference image; divide the sample image and the reference image into corresponding preset numbers of blocks; calculate, from each block of the sample image and the corresponding block of the reference image, the point spread function corresponding to that block; and establish the correspondence between each block and its point spread function.
  • The division of the modules in the above imaging device is for illustration only. In other embodiments, the imaging device may be divided into different modules as needed to complete all or part of its functions.
  • As shown in FIG. 7, the electronic device includes a processor and a memory connected by a system bus.
  • The processor provides computing and control capabilities to support the operation of the entire electronic device.
  • The memory may include a non-volatile storage medium and internal memory.
  • The non-volatile storage medium stores an operating system and computer programs.
  • The computer program can be executed by the processor to implement an imaging method provided by the following embodiments.
  • The internal memory provides a cached operating environment for the operating system and computer programs in the non-volatile storage medium.
  • The electronic device may be a mobile phone, a tablet computer, a personal digital assistant, or a wearable device.
  • The implementation of each module in the imaging apparatus may be in the form of a computer program.
  • The computer program can run on a terminal or a server.
  • The program modules constituted by the computer program may be stored in the memory of the terminal or server; when the computer program is executed by a processor, the steps of the methods described in the embodiments of the present application are implemented.
  • An embodiment of the present application also provides an electronic device.
  • The above electronic device includes an image processing circuit, which may be implemented by hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline.
  • FIG. 8 is a schematic diagram of an image processing circuit in an embodiment. As shown in FIG. 8, for ease of explanation, only the aspects of the image processing technology related to the embodiments of the present application are shown.
  • As shown in FIG. 8, the image processing circuit includes a first ISP processor 830, a second ISP processor 840, and control logic 850.
  • The first camera 810 includes one or more first lenses 812 and a first image sensor 814.
  • The first image sensor 814 may include a color filter array (such as a Bayer filter); it may acquire the light intensity and wavelength information captured by each imaging pixel of the first image sensor 814 and provide a set of image data that can be processed by the first ISP processor 830.
  • The second camera 820 includes one or more second lenses 822 and a second image sensor 824.
  • The second image sensor 824 may include a color filter array (such as a Bayer filter); it may acquire the light intensity and wavelength information captured by each imaging pixel of the second image sensor 824 and provide a set of image data that can be processed by the second ISP processor 840.
  • The first image collected by the first camera 810 is transmitted to the first ISP processor 830 for processing.
  • After the first ISP processor 830 processes the first image, statistical data of the first image (such as image brightness, contrast, color, etc.) may be sent to the control logic 850; the control logic 850 can determine control parameters of the first camera 810 according to the statistical data, so that the first camera 810 can perform autofocus, automatic exposure, and other operations according to the control parameters.
  • After being processed by the first ISP processor 830, the first image can be stored in the image memory 860, and the first ISP processor 830 can also read the image stored in the image memory 860 for processing.
  • In addition, after being processed by the first ISP processor 830, the first image can be sent directly to the display 870 for display; the display 870 can also read the image in the image memory 860 for display.
  • The first ISP processor 830 processes image data pixel by pixel in a variety of formats.
  • For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the first ISP processor 830 may perform one or more image processing operations on the image data and collect statistical information about the image data.
  • The image processing operations may be performed with the same or different bit-depth precision.
  • The image memory 860 may be part of a memory device, a storage device, or a separate dedicated memory within the electronic device, and may include a DMA (Direct Memory Access) feature.
  • Upon receiving data from the first image sensor 814 interface, the first ISP processor 830 may perform one or more image processing operations, such as temporal filtering.
  • The processed image data can be sent to the image memory 860 for additional processing before being displayed.
  • The first ISP processor 830 receives the processed data from the image memory 860 and performs image data processing on it in the RGB and YCbCr color spaces.
  • The image data processed by the first ISP processor 830 may be output to the display 870 for viewing by the user and/or further processed by a graphics engine or GPU (Graphics Processing Unit).
  • In addition, the output of the first ISP processor 830 may also be sent to the image memory 860, and the display 870 may read image data from the image memory 860.
  • In an embodiment, the image memory 860 may be configured to implement one or more frame buffers.
  • The statistical data determined by the first ISP processor 830 may be sent to the control logic 850.
  • The statistical data may include first image sensor 814 statistics such as automatic exposure, automatic white balance, autofocus, flicker detection, black level compensation, and first lens 812 shading correction.
  • The control logic 850 may include a processor and/or a microcontroller that executes one or more routines (such as firmware).
  • Based on the received statistical data, the one or more routines may determine control parameters of the first camera 810 and control parameters of the first ISP processor 830.
  • The control parameters of the first camera 810 may include gain, integration time for exposure control, anti-shake parameters, flash control parameters, first lens 812 control parameters (for example, focal length for focusing or zooming), or a combination of these parameters.
  • The ISP control parameters may include gain levels and color correction matrices used for automatic white balance and color adjustment (for example, during RGB processing), and first lens 812 shading correction parameters.
  • Similarly, the second image collected by the second camera 820 is transmitted to the second ISP processor 840 for processing.
  • After the second ISP processor 840 processes the second image, statistical data of the second image (such as image brightness, contrast, color, etc.) may be sent to the control logic 850; the control logic 850 can determine control parameters of the second camera 820 according to the statistical data, so that the second camera 820 can perform autofocus, automatic exposure, and other operations according to the control parameters.
  • After being processed by the second ISP processor 840, the second image can be stored in the image memory 860, and the second ISP processor 840 can also read the image stored in the image memory 860 for processing.
  • In addition, after being processed by the second ISP processor 840, the second image can be sent directly to the display 870 for display; the display 870 can also read the image in the image memory 860 for display.
  • The second camera 820 and the second ISP processor 840 may also implement the processing procedures described for the first camera 810 and the first ISP processor 830.
  • The embodiments of the present application also provide a computer-readable storage medium.
  • One or more non-volatile computer-readable storage media containing computer-executable instructions, which when executed by one or more processors, cause the processors to perform the steps of the imaging method.
  • a computer program product containing instructions that, when run on a computer, causes the computer to perform an imaging method.
  • Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM), which acts as external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)

Abstract

An imaging method includes: acquiring a first image; converting the first image into a second image containing a luminance channel through color space conversion; and acquiring a point spread function, and performing an inverse point spread function transformation on the second image using the point spread function to obtain a target image.

Description

Imaging method and apparatus, electronic device, computer-readable storage medium
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to the Chinese patent application with application number 201811419394.9, entitled "Imaging method and apparatus, electronic device, computer-readable storage medium", filed with the China Patent Office on November 26, 2018, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present application relates to the field of imaging technology, and in particular to an imaging method, apparatus, electronic device, and computer-readable storage medium.
BACKGROUND
With the development of optical imaging technology, various electronic devices with optical imaging systems have appeared. The optical imaging system in an electronic device projects the subject onto the photosensitive surface of the sensor and obtains RGB (Red, Green, Blue) three-channel response values through the photoelectric effect; the RGB three-channel response values then undergo autofocus, automatic exposure, automatic white balance, color space conversion, and other processing to produce the final image. However, traditional imaging methods are affected by aberrations, resulting in blurred images.
SUMMARY
Embodiments of the present application provide an imaging method, apparatus, electronic device, and computer-readable storage medium.
An imaging method includes:
acquiring a first image;
converting the first image into a second image containing a luminance channel through color space conversion; and
acquiring a point spread function, and performing an inverse point spread function transformation on the second image using the point spread function to obtain a target image.
An imaging device includes:
an image acquisition module configured to acquire a first image;
an image conversion module configured to convert the first image into a second image containing a luminance channel through color space conversion; and
an inverse transformation module configured to acquire a point spread function and perform an inverse point spread function transformation on the second image using the point spread function to obtain a target image.
An electronic device includes a memory and a processor. The memory stores a computer program which, when executed by the processor, causes the processor to perform the following steps:
acquiring a first image;
converting the first image into a second image containing a luminance channel through color space conversion; and
acquiring a point spread function, and performing an inverse point spread function transformation on the second image using the point spread function to obtain a target image.
A computer-readable storage medium has a computer program stored thereon which, when executed by a processor, implements the following steps:
acquiring a first image;
converting the first image into a second image containing a luminance channel through color space conversion; and
acquiring a point spread function, and performing an inverse point spread function transformation on the second image using the point spread function to obtain a target image.
The imaging method, apparatus, electronic device, and computer-readable storage medium in these embodiments convert the acquired first image into a second image containing a luminance channel through color space conversion, then acquire a point spread function and perform an inverse point spread function transformation on the second image to obtain a target image; the inverse point spread function transformation effectively removes image edge blur and improves image clarity.
BRIEF DESCRIPTION OF THE DRAWINGS
To describe the technical solutions in the embodiments of the present application or the prior art more clearly, the accompanying drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are merely some embodiments of the present application; a person of ordinary skill in the art may derive other drawings from these drawings without creative effort.
FIG. 1 is a diagram of an application environment of an imaging method in an embodiment;
FIG. 2 is a flowchart of an imaging method in an embodiment;
FIG. 3 is a schematic diagram of the principle by which an original image diffuses into a circle of confusion in an embodiment;
FIG. 4 is a flowchart of an imaging method in another embodiment;
FIG. 5 is a structural block diagram of an imaging device in an embodiment;
FIG. 6 is a structural block diagram of an imaging device in another embodiment;
FIG. 7 is a block diagram of the internal structure of an electronic device in an embodiment;
FIG. 8 is a schematic diagram of an image processing circuit in an embodiment.
DETAILED DESCRIPTION
To make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the present application and not to limit it.
It can be understood that the terms "first", "second", etc. used in this application may be used herein to describe various elements, but these elements are not limited by these terms; the terms are only used to distinguish one element from another. For example, without departing from the scope of this application, the first image may be referred to as the second image, and similarly, the second image may be referred to as the first image. Both the first image and the second image are images, but they are not the same image.
FIG. 1 is a schematic diagram of the application environment of an imaging method in an embodiment. As shown in FIG. 1, the application environment includes an electronic device 110. An optical imaging system is installed on the electronic device 110; the subject is photographed through the optical imaging system and projected onto the photosensitive surface of the image sensor, and the image sensor outputs RGB (Red, Green, Blue) values through the photoelectric effect. After autofocus, automatic white balance, and demosaicing correction, the RGB values yield an RGB image that is properly focused and whose brightness and color match human perception; the RGB image is converted into a YUV image through a color space conversion matrix, and an inverse point spread function transformation is applied to the YUV image to obtain the target image. In a YUV image, Y denotes luminance (luma), that is, the grayscale value; U and V denote chrominance (chroma), which describe the color and saturation of the image and specify the color of a pixel. Performing the inverse point spread function transformation on the YUV image effectively removes image edge blur and improves image clarity. The electronic device 110 may be a smartphone, personal computer, tablet computer, personal digital assistant, camera, wearable device, or the like equipped with an optical imaging system such as a camera.
FIG. 2 is a flowchart of an imaging method in an embodiment. The imaging method in this embodiment is described as running on the electronic device 110 in FIG. 1. As shown in FIG. 2, the imaging method includes steps 202 to 206.
Step 202: Acquire a first image.
The first image may be an RGB image.
Specifically, the electronic device images the subject through an optical imaging system (such as a lens) and projects it onto the photosensitive surface of the image sensor; the image sensor outputs RGB three-channel response values through the photoelectric effect, and an RGB image is obtained after processing such as autofocus, automatic exposure, and automatic white balance.
Step 204: Convert the first image into a second image containing a luminance channel through color space conversion.
Color space conversion means converting an image from one color space to another. The second image is an image containing a luminance channel; a luminance channel means that the image format contains a luminance signal component.
Specifically, the electronic device converts the first image into a second image containing a luminance channel through color space conversion. The second image may be a YUV image, HSV image, YCrCb image, YIQ image, HIS image, or the like. In HSV, H represents hue, S represents saturation, and V represents value (brightness). In YCrCb, Y represents luminance (luma), Cr represents hue, and Cb represents saturation. In YIQ, Y stands for luminance, I stands for in-phase (with colors ranging from orange to cyan), and Q stands for quadrature-phase. In HIS, H represents hue, S represents saturation, and I represents luminance.
Optionally, the electronic device converts the RGB image into a YUV image through color space conversion.
Step 206: Acquire a point spread function, and use the point spread function to perform an inverse point spread function transformation on the second image to obtain a target image.
After passing through the optical system, a point light source forms an enlarged circle of confusion due to aberration and diffraction; this circle of confusion can be described by a point spread function (PSF). Image edge blurring can be regarded as the convolution of a two-dimensional point spread function with the original image. As shown in FIG. 3, the diffused blurred image f(x, y) is computed with formula (1):
f(x, y) = ∫∫ PSF(α, β) g(x - α, y - β) dα dβ + n(x, y)        Formula (1)
where g(x, y) is the original image data, PSF(α, β) is the point spread function, n(x, y) is noise, α ranges from 0 to x, β ranges from 0 to y, and (x, y) are the coordinates of a pixel in the image.
Taking the Fourier transform of both sides of formula (1) gives formula (2):
F(u, v) = PSF(u, v) G(u, v) + N(u, v)        Formula (2)
Approximating the finite noise as negligible, the recovered image with the PSF removed is obtained as in formula (3):
G(u, v) ≈ F(u, v) / PSF(u, v)        Formula (3)
The inverse point spread function transformation refers to dividing the image by the point spread function, or dividing by the point spread function and then multiplying by a coefficient; the coefficient may be 0.5, 0.6, 0.9, 1, 1.1, etc., but is not limited to these values.
Specifically, the electronic device acquires a point spread function and uses it to perform an inverse point spread function transformation on the second image to obtain the target image.
The imaging method in this embodiment converts the acquired first image into a second image containing a luminance channel through color space conversion, then acquires a point spread function and performs an inverse point spread function transformation on the second image to obtain a target image; performing the inverse point spread function transformation on the image effectively removes image edge blur and improves image clarity.
In an embodiment, acquiring the first image includes: receiving a shooting instruction, acquiring RGB three-channel response values according to the shooting instruction, performing autofocus, automatic exposure, and automatic white balance processing on the RGB three-channel response values to obtain an RGB image, and taking the RGB image as the first image.
Specifically, a shooting instruction is an instruction generated when the user touches or taps a shooting control on the camera shooting interface of the electronic device. After receiving the shooting instruction, the electronic device projects the subject onto the photosensitive surface of the image sensor through the optical imaging system; the image sensor outputs RGB three-channel response values through the photoelectric effect, and the RGB three-channel response values undergo autofocus, automatic exposure, and automatic white balance processing to obtain an RGB image, which is taken as the first image. Acquiring the first image directly according to the shooting instruction and then processing it ensures that the image is processed in real time.
In an embodiment, the above imaging method further includes: acquiring an image edge area in the second image.
In that case, acquiring a point spread function and performing the inverse point spread function transformation on the second image using the point spread function to obtain the target image includes: acquiring a point spread function, and performing the inverse point spread function transformation on the image edge area in the second image using the point spread function to obtain the target image. Specifically, the point spread function may be applied only to the image edge area of the second image, so the amount of data to process is small.
In an embodiment, acquiring a point spread function and performing the inverse point spread function transformation on the second image using the point spread function to obtain a target image includes: dividing the second image into a preset number of blocks; acquiring the point spread function corresponding to each block of the second image; and performing the inverse point spread function transformation on each corresponding block according to that block's point spread function to obtain the target sub-image corresponding to each block, synthesizing the target sub-images to obtain the target image.
The preset number can be set as needed, for example 2x2 blocks or 3x3 blocks. A corresponding point spread function can be calculated for each block. After the electronic device obtains the point spread function corresponding to each block of the second image, the inverse point spread function transformation is performed on each block with its corresponding point spread function to obtain the target sub-image for each block, and the target sub-images are synthesized into the target image.
By dividing the second image into multiple blocks and performing the inverse point spread function transformation on each block with its corresponding point spread function, the calculation is more accurate and image clarity improves further.
In an embodiment, obtaining the point spread function includes: obtaining a reference image used for calibration and a point-diffused sample image captured of the reference image; dividing the sample image and the reference image into corresponding preset numbers of blocks; and using each block of the sample image together with the corresponding block of the reference image to calculate the point spread function corresponding to that block.
The reference image used for calibration may be a standard resolution test card. The camera of the electronic device shoots the reference image to obtain the diffused sample image. The sample image is the convolution of the reference image with the point spread function. Using formula (3), the sample image is divided by the reference image to obtain the point spread function. The sample image and the reference image are divided into corresponding preset numbers of blocks, so that each block of the sample image corresponds to a block of the reference image.
The point spread function of a block is calculated from that block of the sample image and the corresponding block of the reference image.
Specifically, a standard resolution test card used for calibration can be selected, and a single-lens reflex camera is used to capture an image with very little dispersion as the reference image. The reference image can be considered to have almost no PSF diffusion; an electronic device with the optical imaging system then shoots the test card at the same distance to obtain a sample image with diffused edges. A block of the reference image is compared with the corresponding block of the sample image; the MTF at different positions is calculated from the spatial frequency and contrast of the line pairs, and a two-dimensional inverse Fourier transform of MTF(u, v) yields PSF(x, y).
In an embodiment, obtaining a point spread function includes: acquiring a black-and-white slanted-edge image and fitting it to obtain the distribution function of the knife edge; calculating an edge spread function from the distribution function of the knife edge, differentiating the edge spread function to obtain a line spread function, and performing a Fourier transform on the line spread function to obtain the point spread function.
The PSF(u, v) obtained by the two-dimensional Fourier transform of the point spread function PSF(x, y) of point-source imaging can be regarded as the modulation transfer function MTF(u, v) of the optical imaging system, where MTF stands for Modulation Transfer Function. Therefore, performing the inverse PSF transformation requires solving for the modulation transfer function MTF of the optical imaging system.
Specifically, the electronic device scans the selected black-and-white slanted-edge image line by line. When the average gray value of the low-reflectivity and high-reflectivity targets on the two sides of the edge equals the gray value of a pixel, or lies between the gray values of two adjacent pixels, that position is the position of the knife edge. The distribution function of the knife edge is obtained by fitting; the edge spread function is fitted from the knife-edge distribution function; the line spread function is obtained by differentiating the edge spread function; and a Fourier transform of the line spread function yields the modulation transfer function, which is used as the point spread function.
In an embodiment, the above imaging method further includes: acquiring a reference image used for calibration and a point-diffused sample image captured of the reference image, and calculating the point spread function from the sample image and the reference image.
The reference image used for calibration may be a standard resolution test card. The camera of the electronic device shoots the reference image to obtain the diffused sample image. The sample image is the convolution of the reference image with the point spread function. Using formula (3), the sample image is divided by the reference image to obtain the point spread function.
Specifically, a standard resolution test card for calibration can be selected and photographed to obtain an image with very little dispersion as the reference image. The reference image can be considered to have almost no PSF diffusion; an electronic device with the optical imaging system then shoots the test card at the same distance to obtain a sample image with diffused edges. The reference image and the sample image are compared; the MTF at different positions is calculated from the spatial frequency and contrast of the line pairs, and a two-dimensional inverse Fourier transform of MTF(u, v) yields PSF(x, y).
In an embodiment, the above imaging method further includes: performing noise reduction and sharpening on the target image to obtain a corrected image, where the filter radius is reduced during sharpening.
Specifically, a nonlinear denoising algorithm, such as a bilateral filter, can be used for noise reduction. Sharpening compensates the contours of the image and enhances its edges and gray-level transitions so that the image becomes clearer. Sharpening is generally divided into spatial-domain and frequency-domain processing. After the target image has undergone the inverse point spread function transformation, a sharpening algorithm can be applied; reducing the filter radius lowers the strength of the sharpening algorithm, which effectively alleviates the white-fringing phenomenon and improves image quality.
FIG. 4 is a flowchart of an imaging method in another embodiment. As shown in FIG. 4, an imaging method includes:
Step 402: Project the subject onto the photosensitive surface of the image sensor through the optical imaging system.
Step 404: The image sensor outputs RGB three-channel response values through the photoelectric effect.
Step 406: Process the RGB three-channel response values through autofocus, automatic exposure, and automatic white balance to obtain an RGB image.
Specifically, the RGB image is the first image.
Step 408: Convert the RGB image into a YUV image through color space conversion.
Specifically, the YUV image is the second image.
Step 410: Perform the inverse point spread function transformation on the YUV image to obtain the target image.
Specifically, the inverse point spread function transformation is performed on the Y channel of the YUV image to obtain the target image.
Step 412: Perform noise reduction and sharpening on the target image to obtain a corrected image.
Specifically, the sharpening strength can be reduced.
In the above imaging method, by converting the RGB image obtained through the optical imaging system into a YUV image and performing the inverse point spread function transformation on the YUV image, the circle of confusion at image edges can be effectively removed and image clarity improved, while the sharpening process effectively alleviates the white-fringing phenomenon and improves image quality.
The above imaging method can also be used to reduce the thickness of an optical module, for example by reducing the distance between two lenses. A thinner optical module using this imaging method can achieve the same imaging effect as the original module.
The imaging method is described below with reference to a detailed embodiment. An imaging method includes steps (1) to (8).
(1) Project the subject onto the photosensitive surface of the image sensor through the optical imaging system.
(2) The image sensor outputs RGB three-channel response values through the photoelectric effect.
(3) Process the RGB three-channel response values through autofocus, automatic exposure, and automatic white balance to obtain an RGB image, and take the RGB image as the first image.
(4) Convert the first image into a second image through color space conversion.
(5) Divide the second image into a preset number of blocks.
(6) Acquire the point spread function corresponding to each block of the second image.
(7) Perform the inverse point spread function transformation on each corresponding block of the second image using the point spread function of that block to obtain the target sub-image of each block, and synthesize the target sub-images into the target image.
(8) Perform noise reduction and sharpening on the target image to obtain a corrected image, where the filter radius is reduced during sharpening.
The imaging method in this embodiment converts the acquired first image into a second image containing a luminance channel through color space conversion and performs the inverse point spread function transformation on each corresponding block of the second image according to that block's point spread function to obtain the target image. Performing the inverse transformation block by block is more accurate, effectively removes image edge blur, and improves image clarity; reducing the strength of the sharpening process effectively alleviates the white-fringing phenomenon and improves image quality.
It should be understood that although the steps in the flowcharts of FIG. 2 and FIG. 4 are displayed in the order indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they may be executed in other orders. Moreover, at least some of the steps in FIG. 2 and FIG. 4 may include multiple sub-steps or stages, which are not necessarily executed at the same time but may be executed at different times; their execution order is not necessarily sequential, and they may be executed in turn or alternately with at least part of other steps or of the sub-steps or stages of other steps.
FIG. 5 is a structural block diagram of an imaging device in an embodiment. As shown in FIG. 5, an imaging device includes an image acquisition module 502, an image conversion module 504, and an inverse transformation module 506, wherein:
the image acquisition module 502 is used to acquire a first image;
the image conversion module 504 is used to convert the first image into a second image containing a luminance channel through color space conversion; and
the inverse transformation module 506 is used to acquire a point spread function and perform an inverse point spread function transformation on the second image using the point spread function to obtain a target image.
The imaging device in this embodiment converts the acquired first image into a second image containing a luminance channel through color space conversion, then acquires a point spread function and performs an inverse point spread function transformation on the second image to obtain a target image; the inverse point spread function transformation effectively removes image edge blur and improves image clarity.
In an embodiment, as shown in FIG. 6, in addition to the image acquisition module 502, the image conversion module 504, and the inverse transformation module 506, the imaging device further includes an area determination module 508, a correction module 510, and a calculation module 512.
The area determination module 508 is used to acquire an image edge area in the second image.
The inverse transformation module 506 is also used to acquire a point spread function and perform the inverse point spread function transformation on the image edge area in the second image using the point spread function to obtain the target image.
The correction module 510 is used to perform noise reduction and sharpening on the target image to obtain a corrected image, where the filter radius is reduced during sharpening.
The calculation module 512 is used to obtain a reference image for calibration and a point-diffused sample image captured of the reference image, and to calculate the point spread function from the sample image and the reference image.
In an embodiment, the area determination module 508 is used to divide the second image into a preset number of blocks. The inverse transformation module 506 is also used to acquire the point spread function corresponding to each block of the second image and to perform the inverse point spread function transformation on each corresponding block according to that block's point spread function, obtaining the target sub-image of each block and synthesizing the target sub-images into the target image.
The above imaging device may further include a relationship establishing module, which is used to obtain a reference image for calibration and a point-diffused sample image captured of the reference image; divide the sample image and the reference image into corresponding preset numbers of blocks; calculate, from each block of the sample image and the corresponding block of the reference image, the point spread function corresponding to that block; and establish the correspondence between each block and its point spread function.
The division of the modules in the above imaging device is for illustration only; in other embodiments, the imaging device may be divided into different modules as needed to complete all or part of its functions.
FIG. 7 is a schematic diagram of the internal structure of an electronic device in an embodiment. As shown in FIG. 7, the electronic device includes a processor and a memory connected by a system bus. The processor provides computing and control capabilities to support the operation of the entire electronic device. The memory may include a non-volatile storage medium and internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program can be executed by the processor to implement the imaging method provided in the embodiments below. The internal memory provides a cached operating environment for the operating system and computer program in the non-volatile storage medium. The electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or the like.
The implementation of each module in the imaging apparatus provided in the embodiments of the present application may be in the form of a computer program. The computer program can run on a terminal or a server. The program modules constituted by the computer program may be stored in the memory of the terminal or server. When the computer program is executed by a processor, the steps of the methods described in the embodiments of the present application are implemented.
Embodiments of the present application also provide an electronic device. The above electronic device includes an image processing circuit, which may be implemented by hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. FIG. 8 is a schematic diagram of an image processing circuit in an embodiment. As shown in FIG. 8, for ease of explanation, only the aspects of the image processing technology related to the embodiments of the present application are shown.
As shown in FIG. 8, the image processing circuit includes a first ISP processor 830, a second ISP processor 840, and control logic 850. The first camera 810 includes one or more first lenses 812 and a first image sensor 814. The first image sensor 814 may include a color filter array (such as a Bayer filter); it may acquire the light intensity and wavelength information captured by each imaging pixel of the first image sensor 814 and provide a set of image data that can be processed by the first ISP processor 830. The second camera 820 includes one or more second lenses 822 and a second image sensor 824. The second image sensor 824 may include a color filter array (such as a Bayer filter); it may acquire the light intensity and wavelength information captured by each imaging pixel of the second image sensor 824 and provide a set of image data that can be processed by the second ISP processor 840.
The first image collected by the first camera 810 is transmitted to the first ISP processor 830 for processing. After the first ISP processor 830 processes the first image, statistical data of the first image (such as image brightness, contrast, color, etc.) may be sent to the control logic 850; the control logic 850 can determine control parameters of the first camera 810 according to the statistical data, so that the first camera 810 can perform autofocus, automatic exposure, and other operations according to the control parameters. After being processed by the first ISP processor 830, the first image can be stored in the image memory 860, and the first ISP processor 830 can also read the image stored in the image memory 860 for processing. In addition, after being processed by the first ISP processor 830, the first image can be sent directly to the display 870 for display; the display 870 can also read the image in the image memory 860 for display.
The first ISP processor 830 processes image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the first ISP processor 830 may perform one or more image processing operations on the image data and collect statistical information about the image data. The image processing operations may be performed with the same or different bit-depth precision.
The image memory 860 may be part of a memory device, a storage device, or a separate dedicated memory within the electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving data from the first image sensor 814 interface, the first ISP processor 830 may perform one or more image processing operations, such as temporal filtering. The processed image data can be sent to the image memory 860 for additional processing before being displayed. The first ISP processor 830 receives the processed data from the image memory 860 and performs image data processing on it in the RGB and YCbCr color spaces. The image data processed by the first ISP processor 830 may be output to the display 870 for viewing by the user and/or further processed by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the first ISP processor 830 may also be sent to the image memory 860, and the display 870 may read image data from the image memory 860. In an embodiment, the image memory 860 may be configured to implement one or more frame buffers.
The statistical data determined by the first ISP processor 830 may be sent to the control logic 850. For example, the statistical data may include first image sensor 814 statistics such as automatic exposure, automatic white balance, autofocus, flicker detection, black level compensation, and first lens 812 shading correction. The control logic 850 may include a processor and/or a microcontroller that executes one or more routines (such as firmware); based on the received statistical data, the one or more routines may determine control parameters of the first camera 810 and control parameters of the first ISP processor 830. For example, the control parameters of the first camera 810 may include gain, integration time for exposure control, anti-shake parameters, flash control parameters, first lens 812 control parameters (for example, focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices used for automatic white balance and color adjustment (for example, during RGB processing), and first lens 812 shading correction parameters.
Similarly, the second image collected by the second camera 820 is transmitted to the second ISP processor 840 for processing. After the second ISP processor 840 processes the second image, statistical data of the second image (such as image brightness, contrast, color, etc.) may be sent to the control logic 850; the control logic 850 can determine control parameters of the second camera 820 according to the statistical data, so that the second camera 820 can perform autofocus, automatic exposure, and other operations according to the control parameters. After being processed by the second ISP processor 840, the second image can be stored in the image memory 860, and the second ISP processor 840 can also read the image stored in the image memory 860 for processing. In addition, after being processed by the second ISP processor 840, the second image can be sent directly to the display 870 for display; the display 870 can also read the image in the image memory 860 for display. The second camera 820 and the second ISP processor 840 may also implement the processing procedures described for the first camera 810 and the first ISP processor 830.
The following are the steps for implementing the imaging method using the image processing technology in FIG. 8:
Embodiments of the present application also provide a computer-readable storage medium: one or more non-volatile computer-readable storage media containing computer-executable instructions which, when executed by one or more processors, cause the processors to perform the steps of the imaging method.
A computer program product containing instructions which, when run on a computer, causes the computer to perform the imaging method.
Any reference to memory, storage, a database, or other media used in the embodiments of the present application may include non-volatile and/or volatile memory. Suitable non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM), which serves as external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above embodiments express only several implementations of the present application, and their descriptions are specific and detailed, but they should not be construed as limiting the patent scope of the present application. It should be noted that a person of ordinary skill in the art may make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (17)

  1. An imaging method, comprising:
    acquiring a first image;
    converting the first image into a second image containing a luminance channel through color space conversion; and
    acquiring a point spread function, and performing an inverse point spread function transformation on the second image using the point spread function to obtain a target image.
  2. The method according to claim 1, wherein the method further comprises:
    acquiring an image edge area in the second image;
    wherein the acquiring a point spread function and performing an inverse point spread function transformation on the second image using the point spread function to obtain a target image comprises:
    acquiring a point spread function, and performing the inverse point spread function transformation on the image edge area in the second image using the point spread function to obtain the target image.
  3. The method according to claim 1, wherein the acquiring a point spread function and performing an inverse point spread function transformation on the second image using the point spread function to obtain a target image comprises:
    dividing the second image into a preset number of blocks;
    acquiring the point spread function corresponding to each block of the second image; and
    performing the inverse point spread function transformation on each corresponding block according to the point spread function corresponding to that block to obtain a target sub-image corresponding to each block, and synthesizing the target sub-images to obtain the target image.
  4. The method according to any one of claims 1 to 3, wherein the acquiring a point spread function comprises:
    acquiring a black-and-white slanted-edge image, and fitting the black-and-white slanted-edge image to obtain a distribution function of the knife edge; and
    calculating an edge spread function from the distribution function of the knife edge, differentiating the edge spread function to obtain a line spread function, and performing a Fourier transform on the line spread function to obtain the point spread function.
  5. The method according to claim 1 or 2, wherein the method further comprises:
    performing noise reduction and sharpening on the target image to obtain a corrected image;
    wherein a filter radius is reduced in the sharpening.
  6. The method according to claim 1 or 2, wherein the acquiring a point spread function comprises:
    acquiring a reference image used for calibration, and a point-diffused sample image captured of the reference image; and
    calculating the point spread function from the sample image and the reference image.
  7. The method according to claim 3, wherein the acquiring a point spread function comprises:
    acquiring a reference image used for calibration, and a point-diffused sample image captured of the reference image;
    dividing the sample image and the reference image into corresponding preset numbers of blocks; and
    calculating, from each block of the sample image and the corresponding block of the reference image, the point spread function corresponding to that block.
  8. An imaging device, comprising:
    an image acquisition module configured to acquire a first image;
    an image conversion module configured to convert the first image into a second image containing a luminance channel through color space conversion; and
    an inverse transformation module configured to acquire a point spread function and perform an inverse point spread function transformation on the second image using the point spread function to obtain a target image.
  9. An electronic device, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to implement the following steps:
    acquiring a first image;
    converting the first image into a second image containing a luminance channel through color space conversion; and
    acquiring a point spread function, and performing an inverse point spread function transformation on the second image using the point spread function to obtain a target image.
  10. The electronic device according to claim 9, wherein the processor is further configured to implement:
    acquiring an image edge area in the second image;
    wherein the acquiring a point spread function and performing an inverse point spread function transformation on the second image using the point spread function to obtain a target image comprises:
    acquiring a point spread function, and performing the inverse point spread function transformation on the image edge area in the second image using the point spread function to obtain the target image.
  11. The electronic device according to claim 9, wherein the processor is further configured to implement:
    acquiring an image edge area in the second image;
    wherein the acquiring a point spread function and performing an inverse point spread function transformation on the second image using the point spread function to obtain a target image comprises:
    acquiring a point spread function, and performing the inverse point spread function transformation on the image edge area in the second image using the point spread function to obtain the target image.
  12. The electronic device according to claim 9, wherein the acquiring a point spread function and performing an inverse point spread function transformation on the second image using the point spread function to obtain a target image comprises:
    dividing the second image into a preset number of blocks;
    acquiring the point spread function corresponding to each block of the second image; and
    performing the inverse point spread function transformation on each corresponding block according to the point spread function corresponding to that block to obtain a target sub-image corresponding to each block, and synthesizing the target sub-images to obtain the target image.
  13. The electronic device according to any one of claims 9 to 12, wherein the acquiring a point spread function comprises:
    acquiring a black-and-white slanted-edge image, and fitting the black-and-white slanted-edge image to obtain a distribution function of the knife edge; and
    calculating an edge spread function from the distribution function of the knife edge, differentiating the edge spread function to obtain a line spread function, and performing a Fourier transform on the line spread function to obtain the point spread function.
  14. The electronic device according to claim 9 or 10, wherein the processor is further configured to implement:
    performing noise reduction and sharpening on the target image to obtain a corrected image;
    wherein a filter radius is reduced in the sharpening.
  15. The electronic device according to claim 9 or 10, wherein the acquiring a point spread function comprises:
    acquiring a reference image used for calibration, and a point-diffused sample image captured of the reference image; and
    calculating the point spread function from the sample image and the reference image.
  16. The electronic device according to claim 11, wherein the acquiring a point spread function comprises:
    acquiring a reference image used for calibration, and a point-diffused sample image captured of the reference image;
    dividing the sample image and the reference image into corresponding preset numbers of blocks; and
    calculating, from each block of the sample image and the corresponding block of the reference image, the point spread function corresponding to that block.
  17. A computer-readable storage medium on which a computer program is stored, wherein, when the computer program is executed by a processor, the steps of the method according to any one of claims 1 to 7 are implemented.
PCT/CN2019/104413 2018-11-26 2019-09-04 Imaging method and apparatus, electronic device, computer-readable storage medium WO2020107995A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811419394.9A CN109410152A (zh) 2018-11-26 2018-11-26 Imaging method and apparatus, electronic device, computer-readable storage medium
CN201811419394.9 2018-11-26

Publications (1)

Publication Number Publication Date
WO2020107995A1 true WO2020107995A1 (zh) 2020-06-04

Family

ID=65455620

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/104413 WO2020107995A1 (zh) 2018-11-26 2019-09-04 Imaging method and apparatus, electronic device, computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN109410152A (zh)
WO (1) WO2020107995A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115243017A (zh) * 2022-08-03 2022-10-25 上海研鼎信息技术有限公司 Method and device for improving image quality

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109410152A (zh) * 2018-11-26 2019-03-01 Oppo广东移动通信有限公司 Imaging method and apparatus, electronic device, computer-readable storage medium
CN111246053B (zh) * 2020-01-22 2022-07-12 维沃移动通信有限公司 Image processing method and electronic device
CN112598594A (zh) * 2020-12-24 2021-04-02 Oppo(重庆)智能科技有限公司 Color consistency correction method and related apparatus
CN113132626B (zh) 2021-03-26 2022-05-31 联想(北京)有限公司 Image processing method and electronic device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080002909A1 (en) * 2006-07-03 2008-01-03 Siemens Corporate Research, Inc. Reconstructing Blurred High Resolution Images
CN101460975A (zh) * 2006-04-03 2009-06-17 全视Cdm光学有限公司 Optical imaging systems and methods employing nonlinear and/or spatially varying image processing
CN103430058A (zh) * 2011-03-22 2013-12-04 皇家飞利浦有限公司 Camera system comprising a camera, camera, method of operating a camera, and method for deconvoluting a recorded image
CN107979715A (zh) * 2016-10-21 2018-05-01 南昌欧菲光电技术有限公司 Camera device
CN109410152A (zh) * 2018-11-26 2019-03-01 Oppo广东移动通信有限公司 Imaging method and apparatus, electronic device, computer-readable storage medium

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101489035A (zh) * 2008-01-16 2009-07-22 三洋电机株式会社 Imaging apparatus and shake correction method
WO2010104969A1 (en) * 2009-03-11 2010-09-16 Zoran Corporation Estimation of point spread functions from motion-blurred images
US20110044554A1 (en) * 2009-08-21 2011-02-24 Konica Minolta Systems Laboratory, Inc. Adaptive deblurring for camera-based document image processing
WO2012041492A1 (en) * 2010-09-28 2012-04-05 MAX-PLANCK-Gesellschaft zur Förderung der Wissenschaften e.V. Method and device for recovering a digital image from a sequence of observed digital images
WO2015064264A1 (ja) * 2013-10-31 2015-05-07 富士フイルム株式会社 Image processing device, imaging device, parameter generation method, image processing method, and program
CN104732578B (zh) * 2015-03-10 2019-01-29 山东科技大学 Building texture optimization method based on oblique photography
CN107392882A (zh) * 2017-07-30 2017-11-24 湖南鸣腾智能科技有限公司 Method for iteratively optimizing initial PSF values of a simple lens based on corner detection
CN108305230A (zh) * 2018-01-31 2018-07-20 上海康斐信息技术有限公司 Comprehensive blurred image processing method and system
CN108830805A (zh) * 2018-05-25 2018-11-16 北京小米移动软件有限公司 Image processing method and apparatus, readable storage medium, and electronic device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101460975A (zh) * 2006-04-03 2009-06-17 全视Cdm光学有限公司 Optical imaging systems and methods employing nonlinear and/or spatially varying image processing
US20080002909A1 (en) * 2006-07-03 2008-01-03 Siemens Corporate Research, Inc. Reconstructing Blurred High Resolution Images
CN103430058A (zh) * 2011-03-22 2013-12-04 皇家飞利浦有限公司 Camera system comprising a camera, camera, method of operating a camera, and method for deconvoluting a recorded image
CN107979715A (zh) * 2016-10-21 2018-05-01 南昌欧菲光电技术有限公司 Camera device
CN109410152A (zh) * 2018-11-26 2019-03-01 Oppo广东移动通信有限公司 Imaging method and apparatus, electronic device, computer-readable storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115243017A (zh) * 2022-08-03 2022-10-25 上海研鼎信息技术有限公司 Method and device for improving image quality
CN115243017B (zh) 2024-06-07 Method and device for improving image quality

Also Published As

Publication number Publication date
CN109410152A (zh) 2019-03-01

Similar Documents

Publication Publication Date Title
WO2020107995A1 (zh) Imaging method and apparatus, electronic device, computer-readable storage medium
US8941762B2 (en) Image processing apparatus and image pickup apparatus using the same
CN111986129B (zh) HDR image generation method, device, and storage medium based on multi-camera image fusion
US11431915B2 (en) Image acquisition method, electronic device, and non-transitory computer readable storage medium
JP7369175B2 (ja) Image processing apparatus, control method therefor, and program
JP6999802B2 (ja) Method and apparatus for dual-camera-based imaging
US11258960B2 (en) Imaging device, information processing system, program, image processing method
CN107395991B (zh) Image synthesis method and apparatus, computer-readable storage medium, and computer device
US20170134634A1 (en) Photographing apparatus, method of controlling the same, and computer-readable recording medium
CN109685853B (zh) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN108989699B (zh) Image synthesis method and apparatus, imaging device, electronic device, and computer-readable storage medium
WO2019105254A1 (zh) Background blurring method, apparatus, and device
WO2018152977A1 (zh) Image noise reduction method, terminal, and computer storage medium
WO2019029573A1 (zh) Image blurring method, computer-readable storage medium, and computer device
WO2020029679A1 (zh) Control method and apparatus, imaging device, electronic device, and readable storage medium
CN109559352B (zh) Camera calibration method and apparatus, electronic device, and computer-readable storage medium
CN110852956A (zh) Enhancement method for high dynamic range images
CN107682611B (zh) Focusing method and apparatus, computer-readable storage medium, and electronic device
WO2012008116A1 (en) Image processing apparatus, image processing method, and program
CN109584311B (zh) Camera calibration method and apparatus, electronic device, and computer-readable storage medium
CN110807735A (zh) Image processing method and apparatus, terminal device, and computer-readable storage medium
CN109697737B (zh) Camera calibration method and apparatus, electronic device, and computer-readable storage medium
JP5843599B2 (ja) Image processing apparatus, imaging apparatus, and method thereof
CN110782400A (zh) Adaptive illumination uniformity implementation method and apparatus
CN109672810B (zh) Image processing device, image processing method, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19890012

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19890012

Country of ref document: EP

Kind code of ref document: A1