WO2022134718A1 - Image processing method, chip and electronic device - Google Patents

Image processing method, chip and electronic device

Info

Publication number
WO2022134718A1
WO2022134718A1 · PCT/CN2021/121574 · CN2021121574W
Authority
WO
WIPO (PCT)
Prior art keywords
image, edge, information, blurred, blurring
Prior art date
Application number
PCT/CN2021/121574
Other languages
English (en)
French (fr)
Inventor
朱文波 (Zhu Wenbo)
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp., Ltd. (Oppo广东移动通信有限公司)
Application filed by Guangdong Oppo Mobile Telecommunications Corp., Ltd. (Oppo广东移动通信有限公司)
Priority to EP21908719.4A (published as EP4266250A4)
Publication of WO2022134718A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G06T5/77 Retouching; Inpainting; Scratch removal
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation
    • G06T7/13 Edge detection
    • G06T7/50 Depth or shape recovery
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20004 Adaptive image processing
    • G06T2207/20012 Locally adaptive
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20172 Image enhancement details
    • G06T2207/20192 Edge enhancement; Edge preservation
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose

Definitions

  • the present application relates to the technical field of image processing, and in particular, to an image processing method, a chip and an electronic device.
  • Image bokeh, also known as background bokeh or background blurring, refers to making the depth of field shallower so that focus falls on the subject while the background is softened.
  • In the related art there are two ways of image blurring, namely blurring based on a single camera and blurring based on two cameras, but neither method achieves an ideal blurring effect.
  • the present application aims to solve one of the technical problems in the related art at least to a certain extent. Therefore, the first objective of the present application is to propose an image processing method capable of improving image blurring effect.
  • the second purpose of this application is to provide an image processing chip.
  • the third objective of the present application is to provide an electronic device.
  • An embodiment of the first aspect of the present application proposes an image processing method, which includes the following steps: obtaining a RAW image; processing the RAW image to obtain depth information and edge information of the image content; performing blurring processing on the RAW image according to the depth information and correcting the edge information; and processing the blurred image according to the corrected edge information to obtain the final blurred image.
  • In this way, depth information and edge information of the image content are obtained by processing the acquired RAW image; the RAW image is blurred according to the depth information while the edge information is corrected; and the blurred image is then processed according to the corrected edge information to obtain the final blurred image.
  • Because a RAW image contains more image information than a processed image, performing edge detection on it yields a better edge detection result; correcting that result with the depth information makes it more accurate still, and reprocessing the blurred image according to the corrected result brings the blurring closer to the ideal effect, effectively improving the image blurring quality.
  • An embodiment of the second aspect of the present application proposes an image processing chip, including: a first chip, a second chip, and a connection interface connecting the first chip and the second chip, wherein the first chip is used to obtain a RAW image, process the RAW image to obtain depth information and edge information of the image content, and correct the edge information according to the depth information; and the second chip is used to blur the RAW image according to the depth information and to process the blurred image according to the corrected edge information to obtain the final blurred image.
  • In this way, the first chip obtains the RAW image, processes it to obtain depth information and edge information of the image content, and corrects the edge information according to the depth information; the second chip blurs the RAW image according to the depth information and processes the blurred image according to the corrected edge information to obtain the final blurred image.
  • As with the method aspect, using the information-rich RAW image for edge detection, correcting the detection result with the depth information, and reprocessing the blurred image according to the corrected result brings the blurring effect closer to the ideal and effectively improves it.
  • the depth information is acquired by the first chip, which is beneficial for the first chip to obtain a better edge detection effect by using the depth information.
  • a third aspect of the present application provides an electronic device, including a display and the above-mentioned image processing chip, the display is connected to the image processing chip, and is used for displaying a blurred image processed by the image processing chip.
  • FIG. 1 is an application scenario diagram of the image processing method in one embodiment;
  • FIG. 2 is a schematic flowchart of an image processing method in one embodiment;
  • FIG. 3 is a schematic flowchart of depth information acquisition in one embodiment;
  • FIGS. 5a-5b show an image containing noise and the blurred image obtained by applying prior-art blurring processing to it;
  • FIG. 6 shows a blurred image obtained by applying prior-art blurring processing to an image containing a region of uniform color;
  • FIG. 8 is a schematic flowchart of an image processing method in another embodiment;
  • FIG. 9 is a structural diagram of an image processing chip in one embodiment;
  • FIG. 10 is a structural diagram of an image processing chip in another embodiment;
  • FIG. 11 is a structural diagram of an image processing chip in yet another embodiment;
  • FIG. 12 is a block diagram of an electronic device in one embodiment.
  • the image processing method provided in this application can be applied to the electronic device as shown in FIG. 1 .
  • The electronic device includes a front-end image signal processing chip and a back-end application processing chip. The front-end image signal processing chip is used to obtain a RAW image, process it to obtain depth information and edge information of the image content, blur the RAW image according to the depth information, and correct the edge information; the back-end application processing chip is mainly used to process the blurred image according to the corrected edge information to obtain the final blurred image.
  • the electronic device can be a mobile phone, a tablet computer, a personal computer, a smart camera, a camera, or other devices with a photographing or video recording function.
  • an image processing method is provided. Taking the method applied to the electronic device shown in FIG. 1 as an example, the following steps may be included:
  • Step S202 acquiring a RAW image.
  • a RAW image refers to an image based on the RAW format.
  • The RAW format is an image format whose name literally means "unprocessed".
  • A RAW image can be understood as the original image data produced when an image collector, such as a CMOS or CCD image sensor, converts the received light signal into a digital signal.
  • RAW images can be acquired through an image acquisition device such as a CMOS or CCD image sensor, and passed to the front-end image signal processing chip for processing.
  • Step S204 the RAW image is processed to obtain depth information and edge information of the image content.
  • After receiving the RAW image, the front-end image signal processing chip performs a depth calculation on it to obtain the depth information.
  • the depth information is important information used to describe the three-dimensional image and the three-dimensional scene, and specifically refers to the distance information between the image collector and the shooting area.
  • Depth information can be obtained in several ways: from two images captured by two image collectors at the same moment; from two images captured by one image collector at different moments; or from a single image captured by one image collector.
  • the first method is used as an example for description below. Referring to FIG. 3 , the following steps may be included:
  • Step S302 acquiring the first RAW image and the second RAW image acquired by the two image collectors at the same moment.
  • the two image collectors can be binocular cameras.
  • Before step S302, the two image collectors, that is, the binocular camera, need to be calibrated to obtain the internal and external parameters of the binocular camera and thereby determine the mapping relationship between the image coordinate system, the camera coordinate system, and the three-dimensional world coordinate system.
  • the internal parameters include focal length, imaging origin, etc.
  • the external parameters include the relative position relationship of the binocular camera, such as rotation matrix, translation vector, etc.
  • The calibration method based on a single planar checkerboard can be used to determine the mapping relationship between the image coordinate system, the camera coordinate system, and the three-dimensional world coordinate system; this method offers high calibration accuracy and stability while remaining simple and placing low demands on the precision of the calibration target.
  • Before step S302, the binocular camera also needs to be rectified so as to improve performance. Ideally the two image planes of the binocular camera are coplanar with parallel optical axes, but installation errors and the like mean this ideal cannot be met exactly, so the camera pair must be corrected: specifically, the two RAW images can be undistorted and warped according to the internal and external parameters of the binocular camera so that they approximate the ideal geometry, reducing the complexity of the subsequent pixel matching.
  • Step S304, match the pixels of the first RAW image and the second RAW image according to a preset stereo matching algorithm, and calculate the disparity between matched pixels to obtain the depth information.
  • Stereo matching essentially solves the problem of matching pixels across multiple images of the same object taken from different spatial positions or at different times. Stereo matching algorithms address this problem; they include, but are not limited to, the SAD (sum of absolute differences) algorithm, the STAD (sum of truncated absolute differences) algorithm, and the SSD (sum of squared differences) algorithm.
  • Specifically, according to the mapping relationship between the image coordinate system, the camera coordinate system, and the three-dimensional world coordinate system obtained by pre-calibration, the projection matrix of each pixel in each matched pixel group under the corresponding camera coordinate system can be determined; the coordinate position of each pixel in the corresponding camera coordinate system is then derived from the projection matrix, and from that coordinate position the distance of each pixel from the origin of the corresponding camera coordinate system is determined.
  • In this way, the depth information of the RAW image can be determined.
  • Specifically, depth = focal length of the binocular camera × center distance (baseline) of the binocular camera / disparity, where the depth obtained is the perpendicular distance from the point in space to the baseline of the binocular camera. In this way, depth information can be computed from the two images obtained by the binocular camera through such an algorithm.
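As an illustration of the matching and triangulation steps just described, the following pure-NumPy sketch matches a rectified image pair with a brute-force SAD search and converts the resulting disparity to depth via depth = focal length × baseline / disparity. The window size, search range, and per-pixel loop are illustrative simplifications, not the patent's implementation.

```python
import numpy as np

def sad_disparity(left, right, max_disp=8, win=2):
    """Disparity map by SAD block matching: for each left-image pixel, try
    shifts d = 0..max_disp-1 of the right image and keep the shift with the
    smallest sum of absolute differences over a (2*win+1)^2 window."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int64)
    L = left.astype(np.float64)
    R = right.astype(np.float64)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - win), min(h, y + win + 1)
            x0, x1 = max(0, x - win), min(w, x + win + 1)
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x0 + 1)):
                # compare the left patch with the right patch shifted left by d
                cost = np.abs(L[y0:y1, x0:x1] - R[y0:y1, x0 - d:x1 - d]).sum()
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp

def depth_from_disparity(disp, focal_px, baseline_m):
    """depth = focal length x baseline / disparity (rectified pinhole pair)."""
    with np.errstate(divide="ignore"):
        return np.where(disp > 0, focal_px * baseline_m / disp, np.inf)
```

A real implementation would vectorize the cost volume and add sub-pixel refinement, but the winner-takes-all structure is the same.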
  • After receiving the RAW image, the front-end image signal processing chip also performs edge detection on the image content based on the RAW image to obtain edge information of the RAW image.
  • the so-called edge refers to the set of pixels around which the gray value of the pixels changes sharply, and is the most basic feature of the image.
  • Edges can be roughly divided into two types: step edges, where the gray values of the pixels on the two sides of the edge differ markedly; and roof edges, where the edge lies at the turning point of a gray-value change that first rises and then falls.
  • Edges mainly exist between objects and objects, objects and backgrounds, regions and regions (including different colors).
  • the internal features or properties of the regions separated by edges are consistent, while the internal features or properties of different regions are different.
  • Edge detection exploits differences between the object and the background in certain image features, such as gray value, color, or texture; in effect, edge detection locates the positions where these image features change.
  • a preset edge detection operator and algorithm can be used to perform edge detection on the image content of the RAW image to obtain edge information of the RAW image.
  • Usable edge detection operators and algorithms include, for example, the Laplacian, Roberts, Sobel, LoG (Laplacian of Gaussian), Kirsch, and Prewitt operators, as well as multi-scale wavelet edge detection algorithms.
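As a concrete example of one of the operators listed, here is a minimal NumPy implementation of Sobel gradient-magnitude edge detection; the threshold value is an arbitrary illustrative choice.

```python
import numpy as np

def sobel_edges(img, threshold=100.0):
    """Boolean edge map from the Sobel gradient magnitude."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    ky = kx.T
    pad = np.pad(img.astype(np.float64), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    # accumulate the 3x3 correlation one kernel tap at a time
    for i in range(3):
        for j in range(3):
            patch = pad[i:i + h, j:j + w]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy) >= threshold
```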
  • processing the RAW image to obtain edge information of the image content includes: performing image component extraction on the RAW image to obtain a single component image; and performing edge detection on the image content on the single component image to obtain edge information.
  • Specifically, the front-end image signal processing chip can first extract a certain image component from the RAW image (for example, the grayscale component) to obtain a single-component image (for example, a grayscale image), and then perform edge detection of the image content on that single-component image to determine the edge information of the image content in the RAW image.
  • In other words, edge information may be obtained by performing edge detection directly on the RAW image, or on a single-component image derived from the RAW image. The former gives a better raw detection result, especially for edges within areas of the same color; but since the edge detection result, that is, the edge information, is corrected later anyway, the latter ultimately achieves the same effect while requiring less computation, making image processing faster.
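A minimal sketch of the component-extraction step: pulling the green samples out of a Bayer mosaic yields a smaller single-component image on which edge detection can then run, matching the "less computation" point above. The RGGB layout assumed here is an illustrative choice; actual sensor layouts vary.

```python
import numpy as np

def extract_green(raw_bayer):
    """Half-resolution green-component image from an RGGB Bayer mosaic:
    average the two green samples in each 2x2 cell."""
    g1 = raw_bayer[0::2, 1::2]   # green sample on the red rows
    g2 = raw_bayer[1::2, 0::2]   # green sample on the blue rows
    return (g1.astype(np.float64) + g2.astype(np.float64)) / 2.0
```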
  • Step S206 performing blurring processing on the RAW image and correcting edge information according to the depth information.
  • After the front-end image signal processing chip has calculated the depth information of the RAW image, the RAW image can be blurred according to that depth information.
  • In the related art, the single-camera approach mostly achieves blurring by blurring the area outside the focus region, or by using deep learning for background segmentation: for example, a dataset is used to train a neural network to obtain a neural network model, the model performs background segmentation, and the objects to keep in sharp focus and the background to blur are then determined accordingly.
  • The two-camera approach mostly uses the two images captured by a binocular camera to construct a depth image, infers the front-to-back relationships of the image content from it, and controls the degree of blurring accordingly; the disparity between the binocular camera's views can be used to calculate the depth of each pixel.
  • any method in the prior art can be used to perform blurring processing on the RAW image, which is not specifically limited here.
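As one illustrative possibility for depth-driven blurring, the following NumPy sketch grows the blur radius stepwise with distance from the focal plane. The box blur, the two-step radius schedule, and the `band` parameter are all assumptions made for illustration, not the patent's method.

```python
import numpy as np

def box_blur(img, radius):
    """Average over a (2*radius+1)^2 neighborhood with edge padding."""
    pad = np.pad(img.astype(np.float64), radius, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w))
    k = 2 * radius + 1
    for i in range(k):
        for j in range(k):
            out += pad[i:i + h, j:j + w]
    return out / (k * k)

def depth_blur(img, depth, focus_depth, band=1.0):
    """Leave pixels within `band` of the focal plane sharp; blur the rest,
    with a larger radius the farther their depth is from the focal plane."""
    dist = np.abs(depth - focus_depth)
    radius_map = np.minimum((dist / band).astype(int), 2)
    out = img.astype(np.float64).copy()
    for r in (1, 2):
        mask = radius_map == r
        if mask.any():
            out[mask] = box_blur(img, r)[mask]
    return out
```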
  • After the front-end image signal processing chip calculates the depth information of the RAW image, it also corrects the edge information according to that depth information, so as to obtain more accurate edge information of the image content.
  • As shown in FIGS. 5a-5b, when the image contains noise the edge detection result degrades badly and can even be erroneous, because after the edge-detection transform the noise becomes similar to a real object edge, so the true edges cannot be separated out; as shown in FIG. 6, in a region of uniform color even a real edge may be indistinguishable, because the transformed values on its two sides are similar, so the edge cannot be detected.
  • depth information is also used to correct the edge detection result, that is, edge information, so as to obtain more accurate edge information of the image content.
  • In one embodiment, modifying the edge information according to the depth information includes: acquiring the depth information of the pixels corresponding to the edge information and of the pixels surrounding them; and, based on these depth values, correcting the edge information using the similarity and gradient of the depth along the edge.
  • the edge of the object is corrected by utilizing the similarity and gradient characteristics of the depth information of the edge of the object, so as to obtain more accurate edge information of the image content.
  • In one embodiment, modifying the edge information includes culling pixels whose depth is inconsistent with that of the other edge pixels. That is, edge pixels whose depth is too large or too small are removed from the edge information, so that the corrected edge information is more accurate and closer to the ideal.
  • Specifically, the depth information of the pixels corresponding to an object edge can be obtained first, and it can then be judged whether the depths of these pixels are consistent (for example, identical, or differing by less than a small value); inconsistent pixels are eliminated from the edge information, making the corrected edge information cleaner and closer to the ideal. For example, in a noisy image the transformed noise resembles a transformed object edge and may be mistaken for edge information, producing false edge detections; but the depth of the noise differs from the depth of the object edge, so it can be eliminated from the edge information on the basis of that depth difference, yielding more accurate edge information.
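A minimal sketch of this culling step: edge pixels whose depth deviates too far from the depth of the rest of the edge set are discarded. Using the median of all edge pixels as the reference, with a fixed tolerance, is a simplifying assumption about what "inconsistent" means.

```python
import numpy as np

def cull_false_edges(edge_mask, depth, tol=0.5):
    """Drop edge pixels whose depth differs from the median edge depth by
    more than `tol` -- the 'too large or too small' culling described above."""
    ys, xs = np.nonzero(edge_mask)
    if len(ys) == 0:
        return edge_mask
    ref = np.median(depth[ys, xs])
    keep = np.abs(depth[ys, xs] - ref) <= tol
    cleaned = np.zeros_like(edge_mask)
    cleaned[ys[keep], xs[keep]] = True
    return cleaned
```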
  • In one embodiment, modifying the edge information includes determining the depth information of the surrounding pixels and supplementing the edge information accordingly: pixels among the surrounding pixels whose depth is consistent with the depth of the pixels corresponding to the edge information are added to the edge information, so that the corrected edge information is more complete and closer to the ideal.
  • Specifically, the depth information of the pixels corresponding to the object edge and of the pixels around it can be obtained first; it is then judged whether the depth of each surrounding pixel is consistent with the depth of the edge pixels (for example, identical, or differing by less than a small value). If consistent, the surrounding pixel is added to the edge information; if not, it is left out, making the corrected edge information more complete and closer to the ideal.
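A sketch of the complementary completion step: non-edge pixels adjacent to an edge pixel whose depth matches that neighbour's depth are added to the edge set. A single 8-neighbour pass with a fixed tolerance is an illustrative simplification; a real implementation might iterate.

```python
import numpy as np

def complete_edges(edge_mask, depth, tol=0.5):
    """Add non-edge pixels 8-adjacent to an edge pixel when their depth
    matches that neighbour's depth within `tol` (the 'supplement from
    surrounding pixels' step described above)."""
    h, w = edge_mask.shape
    out = edge_mask.copy()
    ys, xs = np.nonzero(edge_mask)
    for y, x in zip(ys, xs):
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and not edge_mask[ny, nx]:
                    if abs(depth[ny, nx] - depth[y, x]) <= tol:
                        out[ny, nx] = True
    return out
```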
  • Traditional edge detection is based on color (distinguishing by changes in gray value), whereas the depth along a real object edge is consistent or gradually varying, unlike in non-edge areas; that is, the depth near an edge (especially in the extension areas at the two ends of the edge) changes slowly or stays close in value, and does not jump the way it does away from the edge. It is therefore possible to modify the traditional edge detection result in combination with the depth information, obtaining more accurate edge information and a more complete detection result.
  • When revising edge detection results based on depth information, attention can be paid to symmetrical detection results lying at the same depth or on the same focal plane, and the results can be corrected using the depth information of the whole image. Edge completion follows the gradient principle: in a region of uniform brightness, for example, edge recognition based on a single component alone will miss some edges, but combining it with the depth information allows those edges to be supplemented.
  • the blurring processing of the RAW image can also be implemented by the back-end application processing chip, and the setting can be selected according to actual needs, which is not limited here.
  • The edge detection and its correction, however, need to be performed in the front-end image signal processing chip, so that the chip can use the depth information it computes to correct the edge information and obtain a better edge detection effect.
  • step S208 the blurred image is processed according to the corrected edge information to obtain a blurred image.
  • Specifically, the back-end application processing chip can process the blurred image according to the corrected edge information to obtain the final blurred image, reducing missed blurring (leakage) and false blurring while making the blurred image more natural, effectively improving the image blurring effect.
  • In one embodiment, processing the blurred image according to the corrected edge information includes: performing blurring-result detection on the blurred image according to the corrected edge information to obtain the abnormally blurred areas; and correcting those abnormally blurred areas to obtain the final blurred image.
  • edge recognition can be performed on the blurred image according to the corrected edge information, and the blurring results of the edge area can be compared to determine whether blurring has been performed and whether it has been blurred incorrectly.
  • The abnormal areas are then corrected: for example, areas blurred too strongly or too weakly are adjusted, and areas that escaped blurring are blurred again, so that a more accurate blurred image is finally obtained.
  • In one embodiment, performing blurring-result detection on the blurred image according to the corrected edge information to obtain the abnormally blurred areas includes: based on the edge information, determining missed-blurring areas by comparing whether the values of corresponding pixels have changed between the RAW image and the blurred image; and, based on the edge information, determining falsely blurred and insufficiently blurred areas by comparing the magnitude of the change in pixel values between the RAW image and the blurred image. That is, whether a pixel's value changed at all indicates whether blurring was missed, while how much it changed indicates whether blurring was incorrect or insufficient.
  • Detection of the blurring result relies mainly on the principle that areas at the same depth should be blurred to the same degree: their theoretical blurring degree is identical, so deviations from it can be detected in the image.
  • the change of the blurring degree before and after the blurring can be specifically inferred by comparing the degree of change of the value of the pixel point before and after the blurring, so as to determine whether there is a false blurring.
  • Missed (leakage) blurring means that no blurring was applied to an image area that should have been blurred. This often occurs in small segmented areas (areas with large differences in depth information), such as gaps between objects or corners, so the similarity principle of depth information can also be used here: by comparing the depth on the two sides of an edge with whether that part of the image was actually processed, missed blurring can be detected and handled in follow-up processing. Note that the edge judgment can concentrate on key areas determined from the focal plane, such as the gap at a person's arm, where the depth of the background differs greatly from the depth of the plane the person occupies, so whether this difference is reflected in the blurring is easy to check.
  • In addition, blurring-result detection can be performed according to the ratio by which the values of corresponding pixels changed from the RAW image to the blurred image; based on the detection result, the image areas on the same focal plane that still require blurring are given proportional pixel attenuation according to that change ratio, that is, re-blurring, so as to make the blurring effect more ideal.
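The before/after comparison described above can be sketched as follows; the depth-band test for "should be blurred" and the thresholds are illustrative assumptions, not the patent's criteria.

```python
import numpy as np

def detect_blur_anomalies(raw, blurred, depth, focus_depth, band=1.0, eps=1e-6):
    """Flag leakage (off-focus pixels that did not change) and false
    blurring (in-focus pixels that did change) by comparing pixel values
    before and after the blurring pass."""
    changed = np.abs(blurred.astype(np.float64) - raw.astype(np.float64)) > eps
    should_blur = np.abs(depth - focus_depth) > band
    leaked = should_blur & ~changed          # missed blurring
    false_blur = ~should_blur & changed      # blurred the in-focus subject
    return leaked, false_blur
```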
  • the image processing method may include the following steps:
  • Step S802, start the camera.
  • Step S804, the image collector outputs a RAW image.
  • Step S806, calculate depth information from the RAW image to obtain the depth information.
  • Step S808, perform blurring processing on the RAW image according to the depth information.
  • Step S810, separate the image components to obtain a single-component image.
  • Step S812, perform edge detection on the image content of the single-component image to obtain edge information.
  • Step S814, correct the edge information according to the depth information.
  • Step S816, perform blurring-result detection on the blurred image according to the corrected edge information.
  • Step S818, perform blur correction on the blurred image to obtain the final blurred image.
  • Step S820, post-process the blurred image.
  • Step S822, output the processed image.
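The S802-S822 flow can be strung together as a toy pipeline. Every stage below is a deliberately trivial stand-in (a synthetic two-plane depth map, a mean-value "blur", a gradient-threshold edge detector); it is intended only to show how the stages hand data to each other, not to reflect the patent's actual implementations.

```python
import numpy as np

def compute_depth(raw):
    """S806 stand-in: pretend the left half is near (1.0), the right far (5.0)."""
    cols_far = np.arange(raw.shape[1]) >= raw.shape[1] // 2
    return np.where(cols_far, 5.0, 1.0) * np.ones(raw.shape)

def blur_far(raw, depth, focus=1.0):
    """S808 stand-in: flatten every off-focus pixel to the regional mean."""
    out = raw.astype(np.float64).copy()
    far = depth != focus
    out[far] = out[far].mean()
    return out

def detect_edges(img, thresh=10.0):
    """S810-S812 stand-in: horizontal gradient threshold on one component."""
    g = np.abs(np.diff(img.astype(np.float64), axis=1, prepend=img[:, :1]))
    return g > thresh

def correct_edges(edges, depth):
    """S814 stand-in: keep only edges coinciding with a depth change."""
    dstep = np.abs(np.diff(depth, axis=1, prepend=depth[:, :1])) > 0
    return edges & dstep

def pipeline(raw):
    depth = compute_depth(raw)                          # S806
    blurred = blur_far(raw, depth)                      # S808
    edges = correct_edges(detect_edges(raw), depth)     # S810-S814
    # S816-S818 stand-in: re-blur any off-focus pixel left unchanged
    leaked = (depth != 1.0) & np.isclose(blurred, raw)
    blurred[leaked] = blurred[depth != 1.0].mean()
    return blurred, edges                               # S820-S822: output
```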
  • In this embodiment, the front-end image signal processing chip's early access to the image data and its fast computation are exploited to quickly analyze the RAW image, perform edge detection, and obtain the edge information of the image content; edge recognition is then applied to the blurred image according to that edge information, the blurring results in the edge areas are checked (whether blurring was applied, and whether it was applied incorrectly), and on the basis of this comparison the abnormally blurred areas (such as missed and false blurring) are corrected.
  • the image blur effect is closer to the ideal blur effect, thereby improving the user's shooting experience.
  • Because the depth information is used to supplement the edge detection results, those results are also more accurate than the results of traditional edge detection.
  • It should be understood that although the steps in the flowcharts of FIGS. 2-3 and 8 are displayed in the order indicated by the arrows, they are not necessarily executed in that order; unless explicitly stated herein, their execution order is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in FIGS. 2-3 and 8 may comprise multiple sub-steps or stages; these are not necessarily executed at the same moment but may be executed at different times, and their order is not necessarily sequential: they may be performed in turn or alternately with other steps, or with at least part of the sub-steps or stages of other steps.
  • depth information and edge information of the image content are obtained by processing the acquired RAW image, and the RAW image is blurred according to the depth information and the edge information is corrected, and The edge information of the blurred image is processed to obtain a blurred image.
  • edge detection of the image content exploits the fact that a RAW image contains more image information, and the edge detection result is corrected using the depth information, making it more accurate; the blurred image is then reprocessed according to the corrected edge detection result, bringing the blurring effect closer to the ideal and effectively improving the image blurring effect.
  • an image processing chip includes a first chip 10, a second chip 20, and a connection interface 30 connecting the first chip 10 and the second chip 20.
  • the first chip 10 is used for acquiring a RAW image, processing the RAW image to acquire depth information and edge information of the image content, and correcting the edge information according to the depth information;
  • the second chip 20 is used for blurring the RAW image according to the depth information, and for processing the blurred image according to the corrected edge information to obtain the final blurred image.
  • connection interface 30 may include one or more of MIPI (Mobile Industry Processor Interface) and PCIe (Peripheral Component Interconnect Express, a high-speed serial computer expansion bus standard).
  • the first chip 10 includes a depth information computation module 11 and an image edge recognition and correction module 12, wherein the depth information computation module 11 is used for processing the RAW image to obtain the depth information;
  • the image edge recognition and correction module 12 is used for processing the RAW image to obtain the edge information of the image content, and for correcting the edge information according to the depth information.
  • the first chip 10 further includes an image component extraction module 13, which extracts an image component from the RAW image to obtain a single-component image; the image edge recognition and correction module 12 performs edge detection on the image content of the single-component image to obtain the edge information.
  • the image edge recognition and correction module 12 is specifically configured to: obtain the depth information of the pixels corresponding to the edge information and of those pixels' surrounding pixels, and correct the edge information based on the similarity and gradualness of edge depth information, according to the depth information of the edge pixels and their surrounding pixels.
  • the second chip 20 includes an image processing module 21, configured to blur the RAW image according to the depth information and to process the blurred image according to the corrected edge information to obtain the final blurred image.
  • the image processing module 21 is specifically configured to: detect the blurring result of the blurred image according to the corrected edge information to obtain abnormally blurred areas, and correct those areas to obtain the final blurred image.
  • the image processing module 21 is specifically configured to: determine missed-blur areas by checking, based on the edge information, whether the values of corresponding pixels changed between the RAW image and the blurred image; and determine falsely blurred and insufficiently blurred areas by comparing, based on the edge information, the magnitude of the change in pixel values between the RAW image and the blurred image.
  • the first chip 10 further includes an image blurring processing module 14 for blurring the RAW image according to the depth information, so that the second chip 20 can process the blurred image according to the corrected edge information to obtain the final blurred image.
  • each module in the above-mentioned image processing chip can be implemented in whole or in part by software, hardware, or combinations thereof.
  • a RAW image is obtained through the first chip, which processes it to obtain depth information and edge information of the image content and corrects the edge information according to the depth information; the second chip blurs the RAW image according to the depth information and processes the blurred image according to the corrected edge information to obtain the final blurred image.
  • the edge detection of the image content is carried out by using the characteristics of the RAW image containing more image information to obtain the edge detection result, and the edge detection result is corrected in combination with the depth information to make the edge detection result more accurate, and then according to the corrected edge detection result Reprocessing the blurred image makes the blurring effect closer to the ideal effect and effectively improves the image blurring effect.
  • the depth information is acquired by the first chip, which helps the first chip use the depth information to achieve a better edge detection result.
  • an electronic device including a display 100 and the aforementioned image processing chip 200 .
  • the display 100 is connected to the image processing chip 200 and is used for displaying the blurred images processed by the image processing chip 200.
  • Nonvolatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in various forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), etc.


Abstract

This application relates to an image processing method, a chip, and an electronic device. The image processing method includes the following steps: acquiring a RAW image; processing the RAW image to obtain depth information and edge information of the image content; blurring the RAW image and correcting the edge information according to the depth information; and processing the blurred image according to the corrected edge information to obtain a final blurred image. Edge detection of the image content exploits the fact that a RAW image contains more image information, and the edge detection result is corrected using the depth information, making it more accurate; the blurred image is then reprocessed according to the corrected edge detection result, bringing the blurring effect closer to the ideal and effectively improving the image blurring effect.

Description

Image processing method, chip, and electronic device
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to Chinese Patent Application No. 202011548396.5, filed on December 24, 2020 and entitled "Image processing method, chip, and electronic device", the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
This application relates to the field of image processing technology, and in particular to an image processing method, a chip, and an electronic device.
BACKGROUND
Image blurring, also known as background blurring, refers to making the depth of field shallower so that focus is concentrated on the subject. There are currently two approaches to image blurring: blurring based on a single camera and blurring based on two cameras; with either approach the blurring effect is not ideal.
SUMMARY
This application aims to solve, at least to some extent, one of the technical problems in the related art. To this end, a first object of this application is to provide an image processing method capable of improving the image blurring effect.
A second object of this application is to provide an image processing chip.
A third object of this application is to provide an electronic device.
To achieve the above objects, an embodiment of the first aspect of this application provides an image processing method, including the following steps: acquiring a RAW image; processing the RAW image to obtain depth information and edge information of the image content; blurring the RAW image and correcting the edge information according to the depth information; and processing the blurred image according to the corrected edge information to obtain a final blurred image.
According to the image processing method of the embodiments of this application, the acquired RAW image is processed to obtain depth information and edge information of the image content; the RAW image is blurred and the edge information corrected according to the depth information; and the blurred image is processed according to the corrected edge information to obtain the final blurred image. Edge detection of the image content exploits the fact that a RAW image contains more image information, and the edge detection result is corrected using the depth information, making it more accurate; the blurred image is then reprocessed according to the corrected result, bringing the blurring effect closer to the ideal and effectively improving the image blurring effect.
To achieve the above objects, an embodiment of the second aspect of this application provides an image processing chip, including a first chip, a second chip, and a connection interface connecting the first chip and the second chip. The first chip acquires a RAW image, processes it to obtain depth information and edge information of the image content, and corrects the edge information according to the depth information; the second chip blurs the RAW image according to the depth information and processes the blurred image according to the corrected edge information to obtain the final blurred image.
According to the image processing chip of the embodiments of this application, the first chip acquires a RAW image, processes it to obtain depth information and edge information of the image content, and corrects the edge information according to the depth information; the second chip blurs the RAW image according to the depth information and processes the blurred image according to the corrected edge information to obtain the final blurred image. Edge detection of the image content exploits the fact that a RAW image contains more image information, and the edge detection result is corrected using the depth information, making it more accurate; the blurred image is then reprocessed according to the corrected result, bringing the blurring effect closer to the ideal and effectively improving the image blurring effect. Moreover, the depth information is acquired by the first chip, which helps the first chip use it to achieve a better edge detection result.
To achieve the above objects, an embodiment of the third aspect of this application provides an electronic device including a display and the above image processing chip; the display is connected to the image processing chip and is used for displaying the blurred images processed by the image processing chip.
According to the electronic device of the embodiments of this application, the above image processing chip can produce blurred images whose blurring effect is closer to the ideal, effectively improving the image blurring effect.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a diagram of an application scenario of an image processing method in one embodiment;
FIG. 2 is a schematic flowchart of an image processing method in one embodiment;
FIG. 3 is a schematic flowchart of depth information acquisition in one embodiment;
FIG. 4 is a schematic diagram of the principle of image blurring;
FIGS. 5a-5b show an image with noise and the blurred image obtained by blurring the noisy image with the prior art;
FIG. 6 shows an image with a color-uniform region and the blurred image obtained by blurring it with the prior art;
FIG. 7 is a blurred image obtained by blurring with the prior art;
FIG. 8 is a schematic flowchart of an image processing method in another embodiment;
FIG. 9 is a structural diagram of an image processing chip in one embodiment;
FIG. 10 is a structural diagram of an image processing chip in another embodiment;
FIG. 11 is a structural diagram of an image processing chip in yet another embodiment;
FIG. 12 is a structural diagram of an electronic device in one embodiment.
DETAILED DESCRIPTION
To make the objects, technical solutions, and advantages of this application clearer, this application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain this application, not to limit it.
The image processing method provided by this application can be applied to the electronic device shown in FIG. 1. The electronic device includes a front-end image signal processing chip and a back-end application processing chip. The front-end image signal processing chip acquires a RAW image, processes it to obtain depth information and edge information of the image content, and, according to the depth information, blurs the RAW image and corrects the edge information; the back-end application processing chip mainly processes the blurred image according to the corrected edge information to obtain the final blurred image. The electronic device may be any device with photo or video capability, such as a mobile phone, a tablet computer, a personal computer, a smart camera, or a webcam.
In one embodiment, referring to FIG. 2, an image processing method is provided. Taking the method applied to the electronic device shown in FIG. 1 as an example, it may include the following steps:
Step S202: acquire a RAW image.
In this application, a RAW image is an image in the RAW format, a picture format whose name means "unprocessed"; a RAW image can thus be understood as the raw image data produced when an image collector, such as a CMOS or CCD image sensor, converts the captured light signal into a digital signal. In practice, a RAW image can be acquired by such an image collector and passed to the front-end image signal processing chip for processing.
Step S204: process the RAW image to obtain depth information and edge information of the image content.
After receiving the RAW image, the front-end signal processing chip computes depth information based on it. Depth information is key information for describing three-dimensional images and scenes; specifically, it is the distance between the image collector and the captured region.
When obtaining depth information from a RAW image, one approach computes it from two images captured at the same moment by two image collectors; another computes it from two images captured at different moments by one image collector, or from a single image captured by one image collector.
The first approach is described below as an example. Referring to FIG. 3, it may include the following steps:
Step S302: acquire a first RAW image and a second RAW image captured at the same moment by two image collectors. In practice, the two image collectors may be a binocular camera.
It should be noted that, before step S302 is executed, the two image collectors also need to be calibrated, i.e., the binocular camera is calibrated to obtain its intrinsic and extrinsic parameters and thereby determine the mappings among the image coordinate system, the camera coordinate system, and the three-dimensional world coordinate system. The intrinsic parameters include the focal length, the imaging origin, etc.; the extrinsic parameters include the relative pose of the two cameras, such as the rotation matrix and translation vector. After the intrinsic and extrinsic parameters are obtained, a calibration method based on a single planar checkerboard can be used to determine the mappings among the three coordinate systems; this method not only has high calibration accuracy and stability but is also simple and places low accuracy requirements on the calibration target.
Before step S302 is executed, the two image collectors also need to be rectified, i.e., the binocular camera is rectified to improve shooting performance. It can be understood that, ideally, the two image planes of the binocular camera are both parallel to the optical axis, but installation errors and the like make this requirement impossible to satisfy, so the binocular camera must be rectified. Specifically, distortion can be removed from the two RAW images according to the intrinsic and extrinsic parameters of the binocular camera, so as to satisfy the above requirement as far as possible and reduce the complexity of subsequent pixel matching.
Step S304: match the pixels of the first RAW image and the second RAW image according to a preset stereo matching algorithm, and compute the disparity between matched pixels to obtain the depth information.
Stereo matching essentially solves the problem of finding, across multiple images of the same object taken at different spatial positions or different times, the corresponding matched pixels. Stereo matching algorithms address this problem and include, but are not limited to, the SAD (Sum of Absolute Differences) algorithm, the STAD (Sum of Truncated Absolute Differences) algorithm, and the SSD (Sum of Squared Differences) algorithm.
After the pixels of the first and second RAW images are matched into pixel groups according to the preset stereo matching algorithm, the disparity between the pixels in each group is computed, and the depth information of the RAW image is derived from the disparity. For example, from the pre-calibrated mappings among the image, camera, and world coordinate systems, the projection matrix of each pixel in each pixel group in the corresponding camera coordinate system can be determined; from the projection matrix, each pixel's coordinate position in that coordinate system; from the coordinate position, each pixel's distance to the center of the camera coordinate system; and from those distances, the disparity of each pixel group, from which the depth information of the RAW image can be determined. For example, depth = (focal length of the binocular camera × center distance, i.e. baseline, of the binocular camera) / disparity, the depth being the perpendicular distance from a point in space to the line connecting the centers of the two cameras. Thus, depth information can be computed with a suitable algorithm from the two images obtained by a binocular camera.
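The SAD matching and depth = focal length × baseline / disparity relation above can be sketched with a toy one-scanline block matcher. The window size, search range, and scene values below are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def sad_disparity(left_row, right_row, window=3, max_disp=8):
    """Per-pixel disparity along one rectified scanline using SAD matching.
    A feature at x in the left row is searched for at x - d in the right row."""
    left, right = np.asarray(left_row, float), np.asarray(right_row, float)
    half = window // 2
    disp = np.zeros(len(left))
    for x in range(half, len(left) - half):
        patch = left[x - half:x + half + 1]
        costs = []
        for d in range(min(max_disp, x - half) + 1):
            cand = right[x - d - half:x - d + half + 1]
            costs.append(np.abs(patch - cand).sum())  # sum of absolute differences
        disp[x] = int(np.argmin(costs))
    return disp

def depth_from_disparity(disp, focal_px, baseline_m):
    """depth = focal length * baseline / disparity (infinite where unmatched)."""
    disp = np.asarray(disp, float)
    depth = np.full(disp.shape, np.inf)
    depth[disp > 0] = focal_px * baseline_m / disp[disp > 0]
    return depth

# A bright feature at index 6 of the left row appears at index 4 of the right row,
# i.e. a true disparity of 2 pixels.
left = np.array([0, 0, 0, 0, 1, 5, 9, 5, 1, 0, 0, 0], float)
right = np.roll(left, -2)
disp = sad_disparity(left, right)
depth = depth_from_disparity(disp, focal_px=1000.0, baseline_m=0.05)
```

With a 1000 px focal length and a 5 cm baseline, the 2 px disparity maps to 1000 × 0.05 / 2 = 25 m; real matchers of course work on 2-D windows with sub-pixel refinement and occlusion handling.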
It should be noted that the above approach is only illustrative; in practice any other existing approach may be used, and no limitation is imposed here.
After receiving the RAW image, the front-end signal processing chip also performs edge detection on the image content to obtain the edge information of the RAW image. An edge is the set of pixels at which the gray values of surrounding pixels change sharply; it is the most basic feature of an image. Edges fall roughly into two types: step edges, where the gray values on the two sides of the edge differ markedly; and roof edges, where the edge lies at the turning point of gray values that rise and then fall. Edges exist mainly between objects, between an object and the background, and between regions (including regions of different colors). The internal features or attributes of the regions an edge separates are consistent, while those of different regions differ. Edge detection exploits the differences between object and background in some image feature, such as gray value, color, or texture; it is in effect the detection of where image features change.
When performing edge detection, preset edge detection operators and algorithms may be applied to the RAW image to obtain the edge information of its image content. For example, the Laplacian operator, Roberts operator, Sobel operator, LoG (Laplacian of Gaussian) operator, Kirsch operator, or Prewitt operator, or a multi-scale wavelet edge detection algorithm, may be used.
In one embodiment, processing the RAW image to obtain the edge information of the image content includes: extracting an image component from the RAW image to obtain a single-component image; and performing image-content edge detection on the single-component image to obtain the edge information.
After receiving the RAW image, the front-end image signal processing chip may first extract a certain image component (for example the grayscale component) to obtain a single-component image (for example a grayscale image), and then perform edge detection of the image content on the single-component image to determine the edge information of the image content of the RAW image. The Laplacian, Roberts, Sobel, LoG (Laplacian of Gaussian), Kirsch, or Prewitt operator, or a multi-scale wavelet edge detection algorithm, may be used for this detection.
It should be noted that, in this application, the edge information may be obtained either by edge detection on the RAW image itself or by edge detection on a single-component image derived from the RAW image. Although the former contains more data and gives a better edge detection result, especially for edges of the same color, the edge detection result is subsequently corrected in any case, so the latter ultimately achieves the same effect with less computation and therefore faster image processing.
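As a sketch of this single-component path, the snippet below pulls one green sample per 2×2 cell out of a Bayer mosaic (an RGGB layout assumed for illustration) and runs a Sobel gradient on the result; the kernel choice, "valid" border handling, and threshold are illustrative, not from the source:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
SOBEL_Y = SOBEL_X.T

def extract_green(raw_rggb):
    """One green sample per 2x2 cell of an RGGB mosaic (illustrative
    stand-in for the 'single component extraction' step)."""
    return np.asarray(raw_rggb, float)[0::2, 1::2]

def sobel_edge_map(single_component, thresh):
    """Binary edge map from the Sobel gradient magnitude ('valid' 3x3 window)."""
    img = np.asarray(single_component, float)
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            win = img[i:i + 3, j:j + 3]
            gx[i, j] = (win * SOBEL_X).sum()
            gy[i, j] = (win * SOBEL_Y).sum()
    return np.hypot(gx, gy) > thresh

# A mosaic whose right half is brighter: the vertical step is the only edge.
raw = np.zeros((10, 12))
raw[:, 6:] = 10.0
edges = sobel_edge_map(extract_green(raw), thresh=20.0)
```

The edge map fires only at the two columns straddling the brightness step, which is the behaviour the single-component detection above relies on.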
Step S206: according to the depth information, blur the RAW image and correct the edge information.
After computing the depth information from the RAW image, the front-end image signal processing chip may blur the RAW image according to the depth information.
Two blurring approaches are currently in common use, one based on a single camera and the other on a binocular camera. Referring to FIG. 4, the former mostly applies a blurring algorithm to the regions outside the focused subject, or uses deep learning to estimate the distance of each region from the focal plane, so that blurring is applied according to distance from the focal plane. In this application, any existing approach may be used to blur the RAW image; no limitation is imposed here.
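A minimal way to express "blur according to distance from the focal plane" is a per-pixel blur radius that grows with that distance, which is what FIG. 4's circle of confusion describes. The linear mapping, strength, and clamp below are illustrative assumptions:

```python
import numpy as np

def blur_radius_map(depth, focal_depth, strength=2.0, max_radius=8.0):
    """Blur radius per pixel: zero on the focal plane, growing linearly with
    distance from it, clamped at max_radius. strength and max_radius are
    illustrative parameters, not values from the source."""
    dist = np.abs(np.asarray(depth, float) - focal_depth)
    return np.clip(strength * dist, 0.0, max_radius)

# Focal plane at 2 m: on-plane pixels stay sharp, far pixels saturate the clamp.
radii = blur_radius_map(np.array([2.0, 3.0, 4.0, 10.0]), focal_depth=2.0)
```

A renderer would then convolve each pixel with a kernel of the corresponding radius, giving the "closer to the focal plane, sharper" behaviour the text describes.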
After computing the depth information from the RAW image, the front-end image signal processing chip also corrects the edge information according to the depth information, so as to obtain more accurate edge information of the image content.
Specifically, referring to FIGS. 5a-5b, when the image contains noise, the edge detection result degrades badly and may even be wrong, because after the transform the noise becomes similar to transformed object edges, so that the edges cannot be detected. Referring to FIG. 6, in a region of uniform color, even where an edge exists, it cannot be distinguished because the results after the color transform are similar, so the edge cannot be detected. For this reason, in this application, after edge detection is performed on the RAW image to obtain the edge information, the depth information is also used to correct the edge detection result, i.e., the edge information, to obtain more accurate edge information of the image content.
In some embodiments, correcting the edge information according to the depth information includes: obtaining the depth information of the pixels corresponding to the edge information and of the surrounding pixels of those pixels; and correcting the edge information based on the similarity and gradualness of edge depth information, according to the depth information of the edge pixels and their surrounding pixels.
That is, the fact that the depth information along an object edge is similar or changes gradually is used to correct the object edge, so as to obtain more accurate edge information of the image content.
As a specific example, correcting the edge information based on the similarity and gradualness of edge depth information includes: removing, from the pixels corresponding to the edge information, those whose depth information is inconsistent with that of the other pixels. In other words, edge pixels whose depth is too large or too small are removed from the edge information, making the corrected edge information more accurate and closer to the ideal.
Specifically, the depth information of the pixels corresponding to the object edge can be obtained first and then checked for consistency (e.g., equal, or differing by less than a small value); if inconsistent, the pixels with large differences are removed from the edge information, making the corrected edge information more complete and closer to the ideal. For example, in a noisy image, noise becomes similar to transformed object edges after the transform and may be taken as edge information, causing false edge detection; but the depth information of the noise differs from that of the object edge, so on the basis of that difference the noise can be removed from the edge information, yielding more accurate edge information.
As another specific example, correcting the edge information based on the similarity and gradualness of edge depth information includes: when the depth information of surrounding pixels is determined to be consistent with that of the pixels corresponding to the edge information, supplementing the edge information with those surrounding pixels. In other words, surrounding pixels whose depth matches the depth of the edge pixels are added to the edge information, making the corrected edge information more accurate and closer to the ideal.
Specifically, the depth information of the pixels corresponding to the object edge and of the pixels surrounding the edge can be obtained first, and their consistency checked (e.g., equal, or differing by less than a small value): if consistent, the surrounding pixels are added to the edge information; if not, they are not added, making the corrected edge information more complete and closer to the ideal. For example, in a region of uniform color, traditional edge detection, which is based on color (distinguishing by gray-value changes), may miss edges or detect them falsely, whereas the depth information of a true object edge is consistent or gradual, unlike that of non-edge regions: near the edge (especially in the regions extending from both ends of edges obtained by traditional processing), the depth changes slowly or stays close, without the jumps seen in regions away from the edge. Traditional edge detection results can therefore be corrected with the depth information to obtain more accurate edge information and a more complete edge detection result.
In practice, when correcting the edge detection result based on depth information, attention may be paid to symmetric edge detection results at the same depth or focal plane: the result is corrected using the depth information of the whole image, and edges are completed according to the principle that edge depth information is similar and gradual. For example, in a region of uniform brightness, edge recognition based on a single component alone may fail for some edges, but completing it with the depth information can recover them.
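The removal rule above can be sketched by comparing each detected edge pixel's depth to the edge's typical depth. The global median and the 20% tolerance are illustrative simplifications; a real implementation would test similarity and gradualness locally along the edge rather than globally:

```python
import numpy as np

def prune_edges_by_depth(edge_mask, depth, tol=0.2):
    """Drop edge pixels whose depth deviates from the edge's median depth by
    more than tol (relative) -- e.g. noise pixels mis-detected as edges."""
    edge_mask = np.asarray(edge_mask, bool)
    depth = np.asarray(depth, float)
    ref = np.median(depth[edge_mask])          # robust typical edge depth
    consistent = np.abs(depth - ref) <= tol * ref
    return edge_mask & consistent

# A horizontal edge at depth ~2 m with one noise pixel reporting 5 m.
mask = np.zeros((5, 5), bool)
mask[2, :] = True
depth = np.full((5, 5), 2.0)
depth[2, 2] = 5.0
cleaned = prune_edges_by_depth(mask, depth)
```

The outlier pixel is removed while the depth-consistent edge pixels survive; the complementary supplement rule would add neighbouring pixels whose depth matches `ref` back into the mask.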
It should be noted that, in the above examples, the blurring of the RAW image may also be performed by the back-end application processing chip, chosen according to actual needs without limitation here. The depth information computation, however, needs to be performed in the front-end image signal processing chip, so that the front-end chip can use the depth information to correct the edge information and obtain a better edge detection result.
Step S208: process the blurred image according to the corrected edge information to obtain the final blurred image.
Referring to FIG. 4, in blurring an image, the farther a point is from the focal plane, in theory, the larger its circle of confusion and the blurrier its imaging; that is, the closer to the focal plane, the sharper the imaging. The ideal blurring effect is that everything outside the depth of field is blurred, but to differing degrees: closer to the focal plane is sharper, farther from it is blurrier.
In the actual blurring process, however, false blurring or missed blurring easily occurs, making the effect unsatisfactory, especially in small gap regions. As shown in FIG. 7, although the person and the background are clearly distinguished in the blurred image, the person's edges still have notable problems: the gap between the tree and the legs (inside the dashed box) is clearly not blurred, i.e., blurring was missed. Moreover, in theory the degree of blurring should increase gradually from near to far, but in this image everything outside the person is blurred to the same degree. For this reason, in this application, the back-end application processing chip can process the blurred image according to the corrected edge information to obtain the final blurred image, reducing missed and false blurring while making the blurred image more natural and effectively improving the image blurring effect.
In one embodiment, processing the blurred image according to the corrected edge information to obtain the final blurred image includes: detecting the blurring result of the blurred image according to the corrected edge information to obtain abnormally blurred areas; and correcting the abnormally blurred areas to obtain the final blurred image.
That is, edge recognition can be performed on the blurred image according to the corrected edge information, and the blurring results of the edge areas compared to determine whether blurring was applied and whether it was applied falsely; the missed-blur and false-blur areas are then corrected, for example by adjusting areas blurred too strongly or too weakly and re-blurring missed areas, finally yielding a more accurate blurred image.
In one embodiment, detecting the blurring result of the blurred image according to the corrected edge information to obtain abnormally blurred areas includes: determining missed-blur areas by checking, based on the edge information, whether the values of corresponding pixels changed between the RAW image and the blurred image; and determining falsely blurred and insufficiently blurred areas by comparing, based on the edge information, the magnitude of the change in pixel values between the RAW image and the blurred image.
That is, combined with the edge information, whether the values of corresponding pixels changed before and after blurring can be compared to detect missed blurring; and comparing the magnitude of the pixel changes before and after blurring within an image region of the same depth can detect false or insufficient blurring.
Specifically, the detection of the blurring result is based mainly on the principle that the degree of blurring at the same depth is uniform: for object edges at the same depth, the degree of blurring should in theory be the same, which in the image appears as the change in blurriness before and after blurring; an inference can be made by statistically comparing how much pixel values changed before and after blurring, thereby determining whether false blurring occurred. Missed blurring, i.e., image regions that should have been blurred but were not, often occurs in small segmented regions (regions with large depth differences), such as gaps between objects or corners; so, again using the principle of depth similarity, comparing the depth information on both sides of an edge and checking whether the image was processed can detect whether blurring was missed, after which corresponding follow-up processing is performed. Note that the edge judgment can focus on key areas determined by the focal plane, for example the gap in the middle of a person's arm (the background and the person's plane differ substantially in depth; is this reflected in the blurring?).
In practice, the blurring result can be checked according to the proportional change of the pixel values in the blurred image relative to those in the RAW image; when the check shows that blur correction of the blurred image is needed, the pixels in image regions on the same focal plane are attenuated in equal proportion according to that change, i.e., re-blurred, making the blurring effect more ideal.
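A toy version of the missed-blur check and its repair: pixels inside the region that should have been blurred whose value did not change are flagged, then re-blurred with a box average. The mask, threshold, and 3×3 kernel are illustrative assumptions standing in for the proportional re-blurring described above:

```python
import numpy as np

def detect_missed_blur(raw, blurred, background_mask, eps=1e-6):
    """Flag background pixels whose value did not change after blurring --
    the 'leak' case, e.g. a narrow gap the blur pass skipped."""
    unchanged = np.abs(np.asarray(raw, float) - np.asarray(blurred, float)) < eps
    return np.asarray(background_mask, bool) & unchanged

def reblur(image, missed_mask):
    """Crude repair: replace each missed pixel with its 3x3 neighbourhood mean."""
    img = np.asarray(image, float)
    out = img.copy()
    h, w = img.shape
    for y, x in zip(*np.nonzero(missed_mask)):
        out[y, x] = img[max(0, y - 1):min(h, y + 2),
                        max(0, x - 1):min(w, x + 2)].mean()
    return out

raw = np.arange(16.0).reshape(4, 4)
blurred = raw + 2.0          # pretend the blur pass shifted every value...
blurred[1, 1] = raw[1, 1]    # ...except one pixel it missed
bg = np.ones((4, 4), bool)   # whole frame should have been blurred
missed = detect_missed_blur(raw, blurred, bg)
repaired = reblur(blurred, missed)
```

Only the untouched pixel is flagged, and only it is modified by the repair step; a production pipeline would restrict the mask to same-depth regions and scale the re-blur to match the surrounding blur strength.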
As a specific example, referring to FIG. 8, the image processing method may include the following steps:
Step S802: start the camera.
Step S804: the image collector outputs a RAW image.
Step S806: compute depth information from the RAW image.
Step S808: blur the RAW image according to the depth information.
Step S810: separate image components to obtain a single-component image.
Step S812: perform image-content edge detection on the single-component image to obtain edge information.
Step S814: correct the edge information according to the depth information.
Step S816: detect the blurring result of the blurred image according to the corrected edge information.
Step S818: apply blur correction to the blurred image to obtain the final blurred image.
Step S820: post-process the blurred image.
Step S822: output the processed image.
In the above embodiments, the front-end image signal processing chip, which receives the image data first and computes quickly, is used to rapidly identify and analyze the RAW image: it performs edge detection to obtain edge information of the image content, performs edge recognition on the blurred image according to that edge information, compares the blurring results of the edge areas (whether blurring was applied, and whether it was applied falsely), and corrects abnormally blurred areas (such as missed-blur and false-blur areas) based on the comparison, bringing the image blurring effect closer to the ideal and improving the user's shooting experience. At the same time, because the depth information supplements the edge detection results, those results are also more accurate than the ones obtained by traditional edge detection.
It should be understood that, although the steps in the flowcharts of FIGS. 2-3 and 8 are displayed sequentially as indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, their execution order is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in FIGS. 2-3 and 8 may include multiple sub-steps or stages, which are not necessarily completed at the same time but may be executed at different times; their execution order is also not necessarily sequential, and they may be performed in turn or alternately with other steps, or with at least part of the sub-steps or stages of other steps.
According to the image processing method of the embodiments of this application, the acquired RAW image is processed to obtain depth information and edge information of the image content; the RAW image is blurred and the edge information corrected according to the depth information; and the blurred image is processed according to the corrected edge information to obtain the final blurred image. Edge detection of the image content exploits the fact that a RAW image contains more image information, and the edge detection result is corrected using the depth information, making it more accurate; the blurred image is then reprocessed according to the corrected result, bringing the blurring effect closer to the ideal and effectively improving the image blurring effect.
In one embodiment, an image processing chip is provided. Referring to FIG. 9, the image processing chip includes a first chip 10, a second chip 20, and a connection interface 30 connecting the first chip 10 and the second chip 20. The first chip 10 acquires a RAW image, processes it to obtain depth information and edge information of the image content, and corrects the edge information according to the depth information; the second chip 20 blurs the RAW image according to the depth information and processes the blurred image according to the corrected edge information to obtain the final blurred image.
In one embodiment, the connection interface 30 may include one or more of MIPI (Mobile Industry Processor Interface) and PCIe (Peripheral Component Interconnect Express, a high-speed serial computer expansion bus standard).
In one embodiment, referring to FIG. 10, the first chip 10 includes a depth information computation module 11 and an image edge recognition and correction module 12. The depth information computation module 11 processes the RAW image to obtain the depth information; the image edge recognition and correction module 12 processes the RAW image to obtain the edge information of the image content and corrects the edge information according to the depth information.
In one embodiment, referring to FIG. 10, the first chip 10 further includes an image component extraction module 13, which extracts an image component from the RAW image to obtain a single-component image; the image edge recognition and correction module 12 performs image-content edge detection on the single-component image to obtain the edge information.
In one embodiment, the image edge recognition and correction module 12 is specifically configured to: obtain the depth information of the pixels corresponding to the edge information and of the surrounding pixels of those pixels, and correct the edge information based on the similarity and gradualness of edge depth information, according to the depth information of the edge pixels and their surrounding pixels.
In one embodiment, the second chip 20 includes an image processing module 21, configured to blur the RAW image according to the depth information and to process the blurred image according to the corrected edge information to obtain the final blurred image.
In one embodiment, the image processing module 21 is specifically configured to: detect the blurring result of the blurred image according to the corrected edge information to obtain abnormally blurred areas, and correct those areas to obtain the final blurred image.
In one embodiment, the image processing module 21 is specifically configured to: determine missed-blur areas by checking, based on the edge information, whether the values of corresponding pixels changed between the RAW image and the blurred image; and determine falsely blurred and insufficiently blurred areas by comparing, based on the edge information, the magnitude of the change in pixel values between the RAW image and the blurred image.
In one embodiment, referring to FIG. 11, the first chip 10 further includes an image blurring processing module 14, which blurs the RAW image according to the depth information so that the second chip 20 can process the blurred image according to the corrected edge information to obtain the final blurred image.
For specific limitations on the image processing chip, reference may be made to the limitations on the image processing method above, which are not repeated here. Each module in the above image processing chip may be implemented in whole or in part by software, hardware, or a combination thereof.
According to the image processing chip of the embodiments of this application, the first chip acquires a RAW image, processes it to obtain depth information and edge information of the image content, and corrects the edge information according to the depth information; the second chip blurs the RAW image according to the depth information and processes the blurred image according to the corrected edge information to obtain the final blurred image. Edge detection of the image content exploits the fact that a RAW image contains more image information, and the edge detection result is corrected using the depth information, making it more accurate; the blurred image is then reprocessed according to the corrected result, bringing the blurring effect closer to the ideal and effectively improving the image blurring effect. Moreover, the depth information is acquired by the first chip, which helps the first chip use it to achieve a better edge detection result.
In one embodiment, referring to FIG. 12, an electronic device is provided, including a display 100 and the aforementioned image processing chip 200. The display 100 is connected to the image processing chip 200 and is used for displaying the blurred images processed by the image processing chip 200.
According to the electronic device of the embodiments of this application, the above image processing chip can produce blurred images whose blurring effect is closer to the ideal, effectively improving the image blurring effect.
A person of ordinary skill in the art can understand that all or part of the flow of the above method embodiments can be completed by a computer program instructing related hardware. The computer program can be stored in a non-volatile computer-readable storage medium, and when executed may include the flows of the above method embodiments. Any reference to memory, storage, a database, or other media used in the embodiments provided by this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of this application, and their description is specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be pointed out that a person of ordinary skill in the art can make several modifications and improvements without departing from the concept of this application, all of which fall within the scope of protection of this application. The scope of protection of this patent shall therefore be subject to the appended claims.

Claims (13)

  1. An image processing method, characterized by comprising the following steps:
    acquiring a RAW image;
    processing the RAW image to obtain depth information and edge information of image content;
    blurring the RAW image and correcting the edge information according to the depth information; and
    processing the blurred image according to the corrected edge information to obtain a blurred image.
  2. The image processing method according to claim 1, characterized in that processing the RAW image to obtain the edge information of the image content comprises:
    extracting an image component from the RAW image to obtain a single-component image; and
    performing image-content edge detection on the single-component image to obtain the edge information.
  3. The image processing method according to claim 1, characterized in that correcting the edge information according to the depth information comprises:
    obtaining depth information of pixels corresponding to the edge information and of surrounding pixels of those pixels; and
    correcting the edge information based on the similarity and gradualness of edge depth information, according to the depth information of the pixels corresponding to the edge information and of the surrounding pixels.
  4. The image processing method according to any one of claims 1-3, characterized in that processing the blurred image according to the corrected edge information to obtain the blurred image comprises:
    detecting the blurring result of the blurred image according to the corrected edge information to obtain abnormally blurred areas; and
    correcting the abnormally blurred areas to obtain the blurred image.
  5. The image processing method according to claim 4, characterized in that detecting the blurring result of the blurred image according to the corrected edge information to obtain abnormally blurred areas comprises:
    determining missed-blur areas by comparing, based on the edge information, whether values of corresponding pixels changed between the RAW image and the blurred image; and
    determining falsely blurred areas and insufficiently blurred areas by comparing, based on the edge information, the magnitude of the change in pixel values between the RAW image and the blurred image.
  6. An image processing chip, characterized by comprising: a first chip, a second chip, and a connection interface connecting the first chip and the second chip, wherein
    the first chip is configured to acquire a RAW image, process the RAW image to obtain depth information and edge information of image content, and correct the edge information according to the depth information; and
    the second chip is configured to blur the RAW image according to the depth information, and process the blurred image according to the corrected edge information to obtain a blurred image.
  7. The image processing chip according to claim 6, characterized in that the first chip comprises:
    a depth information computation module configured to process the RAW image to obtain the depth information; and
    an image edge recognition and correction module configured to process the RAW image to obtain the edge information of the image content, and to correct the edge information according to the depth information.
  8. The image processing chip according to claim 7, characterized in that the first chip further comprises:
    an image component extraction module configured to extract an image component from the RAW image to obtain a single-component image;
    the image edge recognition and correction module being configured to perform image-content edge detection on the single-component image to obtain the edge information.
  9. The image processing chip according to claim 7, characterized in that the image edge recognition and correction module is specifically configured to: obtain depth information of pixels corresponding to the edge information and of surrounding pixels of those pixels, and correct the edge information based on the similarity and gradualness of edge depth information, according to the depth information of the pixels corresponding to the edge information and of the surrounding pixels.
  10. The image processing chip according to any one of claims 6-9, characterized in that the second chip comprises an image processing module specifically configured to: detect the blurring result of the blurred image according to the corrected edge information to obtain abnormally blurred areas, and correct the abnormally blurred areas to obtain the blurred image.
  11. The image processing chip according to claim 10, characterized in that the image processing module is specifically configured to:
    determine missed-blur areas by comparing, based on the edge information, whether values of corresponding pixels changed between the RAW image and the blurred image; and
    determine falsely blurred areas and insufficiently blurred areas by comparing, based on the edge information, the magnitude of the change in pixel values between the RAW image and the blurred image.
  12. The image processing chip according to claim 7, characterized in that the first chip further comprises:
    an image blurring processing module configured to blur the RAW image according to the depth information, so that the second chip processes the blurred image according to the corrected edge information to obtain a blurred image.
  13. An electronic device, characterized by comprising a display and the image processing chip according to any one of claims 6-12, the display being connected to the image processing chip and configured to display the blurred images processed by the image processing chip.
PCT/CN2021/121574 2020-12-24 2021-09-29 Image processing method, chip, and electronic device WO2022134718A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP21908719.4A EP4266250A4 (en) 2020-12-24 2021-09-29 IMAGE PROCESSING METHOD AND CHIP, AND ELECTRONIC DEVICE

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011548396.5A 2020-12-24 2020-12-24 Image processing method, chip, and electronic device
CN202011548396.5 2020-12-24

Publications (1)

Publication Number Publication Date
WO2022134718A1 true WO2022134718A1 (zh) 2022-06-30

Family

ID=75544116

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/121574 WO2022134718A1 (zh) 2020-12-24 2021-09-29 图像处理方法、芯片及电子装置

Country Status (3)

Country Link
EP (1) EP4266250A4 (zh)
CN (1) CN112712536B (zh)
WO (1) WO2022134718A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117974417A (zh) * 2024-03-28 2024-05-03 腾讯科技(深圳)有限公司 AI chip, electronic device, and image processing method

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112712536B (zh) * 2020-12-24 2024-04-30 Oppo广东移动通信有限公司 Image processing method, chip, and electronic device
CN114339071A (zh) * 2021-12-28 2022-04-12 维沃移动通信有限公司 Image processing circuit, image processing method, and electronic device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012132277A1 (ja) * 2011-03-28 2012-10-04 パナソニック株式会社 Imaging device and live-view image display method
CN105245774A (zh) * 2015-09-15 2016-01-13 努比亚技术有限公司 Picture processing method and terminal
CN107578418A (zh) * 2017-09-08 2018-01-12 华中科技大学 Indoor scene contour detection method fusing color and depth information
CN107977940A (zh) * 2017-11-30 2018-05-01 广东欧珀移动通信有限公司 Background blurring processing method, apparatus, and device
CN111553940A (zh) * 2020-05-19 2020-08-18 上海海栎创微电子有限公司 Depth-map portrait edge optimization method and processing device
CN112712536A (zh) * 2020-12-24 2021-04-27 Oppo广东移动通信有限公司 Image processing method, chip, and electronic device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4266250A4 *


Also Published As

Publication number Publication date
CN112712536A (zh) 2021-04-27
CN112712536B (zh) 2024-04-30
EP4266250A4 (en) 2024-06-05
EP4266250A1 (en) 2023-10-25

Similar Documents

Publication Publication Date Title
WO2021017811A1 (zh) Image processing method and apparatus, electronic device, and computer-readable storage medium
WO2022134718A1 (zh) Image processing method, chip, and electronic device
JP7027537B2 (ja) Image processing method and apparatus, electronic device, and computer-readable storage medium
US11457138B2 (en) Method and device for image processing, method for training object detection model
WO2021022983A1 (zh) Image processing method and apparatus, electronic device, and computer-readable storage medium
JP6961797B2 (ja) Method and apparatus for blurring a preview photograph, and storage medium
WO2019105262A1 (zh) Background blurring processing method, apparatus, and device
CN109712192B (zh) Camera module calibration method and apparatus, electronic device, and computer-readable storage medium
WO2014044126A1 (zh) Coordinate acquisition device, real-time three-dimensional reconstruction system and method, and stereoscopic interactive device
JP6577703B2 (ja) Image processing apparatus, image processing method, program, and storage medium
US20210099646A1 (en) Method and Apparatus for Detecting Subject, Electronic Device, and Computer Readable Storage Medium
WO2018082389A1 (zh) Skin color detection method, apparatus, and terminal
CN111160232A (zh) Frontal face reconstruction method, apparatus, and system
US10346709B2 (en) Object detecting method and object detecting apparatus
CN113822942B (zh) Method for measuring object size with a monocular camera based on a two-dimensional code
CN109559353B (zh) Camera module calibration method and apparatus, electronic device, and computer-readable storage medium
CN114693760A (zh) Image correction method, apparatus, and system, and electronic device
CN112261292B (zh) Image acquisition method, terminal, chip, and storage medium
US20240022702A1 (en) Foldable electronic device for multi-view image capture
WO2022127491A1 (zh) Image processing method and apparatus, storage medium, and terminal
JP2018160024A (ja) Image processing apparatus, image processing method, and program
TW201820261A Image synthesis method for synthesizing a person
CN113723465B Improved feature extraction method and image stitching method based thereon
KR102112784B1 Apparatus and method for correcting distorted images
US20240231485A1 (en) Eyeball tracking method and virtual reality device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21908719

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2021908719

Country of ref document: EP

Effective date: 20230721