WO2019105261A1 - Background blur processing method, apparatus and device - Google Patents

Background blur processing method, apparatus and device

Info

Publication number
WO2019105261A1
WO2019105261A1 (PCT/CN2018/116475)
Authority
WO
WIPO (PCT)
Prior art keywords
blurring
sub-regions
main image
Prior art date
Application number
PCT/CN2018/116475
Other languages
English (en)
French (fr)
Inventor
欧阳丹
谭国辉
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司
Publication of WO2019105261A1

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/04Context-preserving transformations, e.g. by using an importance map
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image

Definitions

  • the present application relates to the field of electronic devices, and in particular, to a background blur processing method, apparatus, and terminal device.
  • the present application provides a background blur processing method, apparatus, and device, so as to solve the technical problem in the prior art that the depth of field beyond a certain distance cannot be accurately calculated, so that blurring of distant regions of the image cannot be performed according to depth of field, resulting in a poor visual blurring effect.
  • an embodiment of the present application provides a background blurring processing method, including: acquiring a main image captured by a main camera and a sub-image captured by a sub-camera, and acquiring depth information of the main image according to the main image and the sub-image; determining an original blurring intensity of different sub-regions in the background area of the main image according to the depth information and the in-focus area; determining a distribution orientation of the different sub-regions according to a display manner of the main image, and determining a blurring weight of the different sub-regions according to a weight-setting policy corresponding to the distribution orientation; determining a target blurring intensity of the different sub-regions according to the original blurring intensity of the different sub-regions and the corresponding blurring weights; and blurring the background area of the main image according to the target blurring intensity of the different sub-regions.
  • another embodiment of the present application provides a background blur processing apparatus, including: a first acquiring module, configured to acquire a main image captured by a main camera and a sub-image captured by a sub-camera, and to acquire depth information of the main image according to the main image and the sub-image; a first determining module, configured to determine an original blurring intensity of different sub-regions in the background area of the main image according to the depth information and the focus area; a second determining module, configured to determine a distribution orientation of the different sub-regions according to a display manner of the main image, and to determine a blurring weight of the different sub-regions according to a weight-setting policy corresponding to the distribution orientation; a third determining module, configured to determine a target blurring intensity of the different sub-regions according to the original blurring intensity of the different sub-regions and the corresponding blurring weights; and a processing module, configured to blur the background area of the main image according to the target blurring intensity of the different sub-regions.
  • a further embodiment of the present application provides a computer device including a memory and a processor, wherein the memory stores computer-readable instructions which, when executed by the processor, cause the processor to perform the background blurring processing method described in the above embodiments of the present application.
  • a further embodiment of the present application provides a non-transitory computer readable storage medium having a computer program stored thereon, which when executed by a processor, implements a background blurring processing method as described in the above embodiments of the present application.
  • FIG. 1 is a flow chart of a background blurring processing method according to an embodiment of the present application.
  • FIG. 2 is a schematic diagram of the triangulation principle according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a dual camera viewing angle coverage according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of acquiring a depth of field of a dual camera according to an embodiment of the present application
  • FIG. 5(a) is a schematic diagram showing division of a plurality of sub-areas in a background area of a main image according to an embodiment of the present application;
  • FIG. 5(b) is a schematic diagram showing division of a plurality of sub-areas in a background area of a main image according to another embodiment of the present application;
  • FIG. 6 is a flow chart of a background blurring processing method according to a specific embodiment of the present application;
  • FIG. 7 is a schematic structural diagram of a background blur processing apparatus according to an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a background blur processing apparatus according to another embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a background blurring processing apparatus according to still another embodiment of the present application.
  • FIG. 10 is a schematic diagram of an image processing circuit according to another embodiment of the present application.
  • the execution body of the background blur processing method and apparatus of the embodiment of the present application may be a terminal device, where the terminal device may be a hardware device with a dual camera such as a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or the like.
  • the wearable device can be a smart bracelet, a smart watch, smart glasses, and the like.
  • to solve the above technical problem, the present application provides a background blurring processing method that, based on the positional relationship of the regions corresponding to different depths of field, controls those regions to be blurred with different intensities; thus, even when the depth information cannot be obtained accurately, the regions corresponding to different depths of field still receive blurring of appropriate intensity.
  • FIG. 1 is a flowchart of a background blur processing method according to an embodiment of the present application. As shown in FIG. 1, the method includes the following steps:
  • Step 101 Acquire a main image captured by the main camera and a sub image captured by the sub camera, and acquire depth information of the main image according to the main image and the sub image.
  • after focusing on the photographed subject, the range of spatial depth before and behind the focus area within which the human eye allows sharp imaging is the depth of field.
  • it should be noted that, in practice, the human eye resolves depth mainly through binocular vision, which is the same principle by which dual cameras resolve depth of field: it relies mainly on the triangulation principle shown in FIG. 2.
  • in FIG. 2, the imaged object is drawn in actual space, together with the positions O_R and O_T of the two cameras and the focal planes of the two cameras. The focal plane lies at a distance f from the plane where the two cameras are located; the two cameras image at the focal plane position, yielding two captured images.
  • P and P' are the positions of the same object in the two captured images, where the distance from point P to the left boundary of its captured image is X_R, and the distance from point P' to the left boundary of its captured image is X_T. O_R and O_T are the two cameras, which lie in the same plane at a distance B from each other.
  • based on the triangulation principle, the distance Z between the object in FIG. 2 and the plane of the two cameras satisfies B / Z = (B - (X_R - X_T)) / (Z - f), from which it can be derived that Z = B · f / (X_R - X_T) = B · f / d.
  • here d is the difference between the positions of the same object in the two captured images. Since B and f are constant values, the distance Z of the object can be determined from d.
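  • As an illustrative sketch (not part of the original disclosure), the relation Z = B·f/d can be applied to a disparity map as follows, assuming a rectified, parallel camera pair; all function and parameter names are illustrative:

```python
import numpy as np

def depth_from_disparity(disparity, baseline_m, focal_px):
    """Convert a disparity map to depth via Z = B * f / d.

    disparity: pixel offsets d = X_R - X_T between the two views.
    baseline_m: distance B between the camera centers, in meters.
    focal_px: focal length f expressed in pixels.
    Pixels with non-positive disparity are marked as unknown (inf).
    """
    d = np.asarray(disparity, dtype=np.float64)
    depth = np.full(d.shape, np.inf)
    valid = d > 0
    depth[valid] = baseline_m * focal_px / d[valid]
    return depth

# A 10 px disparity with a 25 mm baseline and a 1000 px focal length:
print(depth_from_disparity(np.array([[10.0]]), 0.025, 1000.0))  # ~2.5 m
```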
  • it should be emphasized that the above formula is based on two identical parallel cameras, but actual use raises many issues; for example, in FIG. 2 some part of the scene can never be covered by both cameras, so in practice the FOVs of the two cameras are designed differently for the depth calculation. The main camera captures the main image of the actual photograph, while the sub-image obtained by the sub-camera is mainly used as a reference for calculating depth. Based on the above analysis, the FOV of the sub-camera is generally larger than that of the main camera, but even so, as shown in FIG. 3, nearby objects may still not appear simultaneously in the images acquired by the two cameras, and the computable depth-of-field range must be adjusted accordingly.
  • for example, as shown in FIG. 4, a map of the point-wise differences between the main image acquired by the main camera and the sub-image acquired by the sub-camera is calculated and represented as a disparity map; this map records the displacement differences of the same points on the two images, and since that displacement is directly proportional to Z in triangulation, the disparity map is often used directly as the depth map.
  • Step 102 Determine an original blurring intensity of different sub-regions in the background area of the main image according to the depth of field information and the focus area.
  • it can be understood that the range of clear imaging before the focus area is the foreground depth of field and its corresponding area is the foreground area, while the range of clear imaging behind the focus area is the background depth of field and its corresponding area is the background area.
  • in this example, the background area is divided into a plurality of different sub-regions along the horizontal direction, where the size and shape of each sub-region can be adjusted according to application requirements and may be the same or different across sub-regions.
  • the depth of field at the position of each sub-region closest to the focus area and the depth of field at the position farthest from the focus area are obtained and averaged; the averaged depth is taken as the average depth information of the corresponding sub-region, and the original blurring intensity is determined from this average depth information, where higher average depth information yields greater original blurring intensity.
  • the depth-of-field interval of each sub-region can be adjusted according to application requirements, and the intervals of different sub-regions may be the same or different. As shown in FIG. 5(a), the sub-regions may follow the shape of the background area and differ in size; or, for the convenience of further blurring processing, as shown in FIG. 5(b), the sub-regions may be horizontal strips of the same width as the image, distributed from the bottom to the top of the image.
  • it should be emphasized that if the nearest and farthest depths of a sub-region relative to the focus area cannot be obtained because of the limited accuracy of depth calculation, the average depth of the sub-region can instead be calculated from the probability distribution of the depth values obtained within it.
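  • As an illustrative sketch (not part of the original disclosure), the per-sub-region average depth and original blurring intensity described above might be computed as follows; the normalization and the NaN fallback are assumptions made for the example:

```python
import numpy as np

def original_blur_intensity(region_depths, focus_depth, max_intensity=1.0):
    """Average the nearest and farthest depth of each sub-region and map
    the distance from the focus area to an original blurring intensity
    (a larger average distance yields a larger intensity)."""
    averages = []
    for depths in region_depths:
        d = np.asarray(depths, dtype=np.float64)
        d = d[np.isfinite(d)]
        if d.size == 0:
            # depth unavailable: a probability-distribution estimate
            # could be substituted here, as the text suggests
            averages.append(np.nan)
            continue
        averages.append((d.min() + d.max()) / 2.0)
    dist = np.abs(np.asarray(averages) - focus_depth)
    scale = np.nanmax(dist)
    if not np.isfinite(scale) or scale == 0:
        scale = 1.0
    return max_intensity * dist / scale

# Two sub-regions, the second much farther from the focus area:
print(original_blur_intensity([[2.0, 3.0], [5.0, 9.0]], focus_depth=1.5))
```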
  • Step 103 Determine a distribution orientation of different sub-regions according to a display manner of the main image, and determine a blur weight of the different sub-regions according to a weight setting policy corresponding to the distribution orientation.
  • the display manner includes the display orientation of the photographed subject, its proportion relative to the entire image, and the like.
  • it can be understood that, in a first photographing scene, the bottom of the image is the ground, the top of the image is the sky, and the photographed subject is located on the ground; therefore, the closer a sub-region of the background area of the main image is to the top of the image, the less related it is to the current subject, and the closer it is to the bottom of the image, the more related it is.
  • alternatively, in a second photographing scene, the right side of the image is a portrait and the left side is a beach or the sea; if the current photographing mode is portrait mode, the closer a sub-region is to the left side of the image, the less related it is to the current subject.
  • therefore, in the embodiments of the present application, the distribution orientation of the different sub-regions may be determined according to the display manner of the main image, and the blurring weights of the different sub-regions may then be determined according to the weight-setting policy corresponding to that distribution orientation. For the first scene above, sub-regions closer to the upper direction are less related to the subject in the main image and sub-regions closer to the lower direction are more related, so the blurring weights of the different sub-regions are determined according to a top-to-bottom decreasing-weight strategy. Here the up-down orientation does not merely mean the traditional physical up and down; as analyzed above, it depends on the display manner of the main image: if the main image is displayed vertically, the up-down orientation denotes the physical up and down, and if it is displayed horizontally, it denotes the left-right direction.
  • in that case, sub-regions closer to the top of the image have greater blurring weights and sub-regions near the bottom have smaller blurring weights, which makes the final blurring effect more natural and closer to the real optical defocus effect.
  • in practical applications, the display manner of the main image may be inferred in different ways depending on the application scenario; for example, the orientation of the terminal device may be obtained by detecting its gyroscope information, from which the display manner of the main image at the time of shooting is inferred.
  • the relationship between distribution orientation and blurring effect may be obtained in advance from a large number of experiments, and a correspondence between distribution orientations and weight-setting policies established and stored from it, so that after the display manner of the main image is determined, this correspondence is queried to determine the applicable weight-setting policy.
  • in a first manner, a linear distribution curve, or a nonlinear distribution curve, containing the mapping between the positioning coordinates of a sub-region and its blurring weight is set in advance, where the positioning coordinate may be the coordinate of any point in the central area of the sub-region; the positioning coordinates of the different sub-regions are then acquired, and the preset linear or nonlinear weight-distribution curve is queried with them to determine the blurring weights of the different sub-regions.
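  • As an illustrative sketch (not part of the original disclosure), a top-to-bottom decreasing weight curve over sub-region positioning coordinates could look like this; the linear and power-law curves are assumed stand-ins for the experimentally tuned curves described above:

```python
import numpy as np

def blur_weights(anchor_ys, image_height, curve="linear", gamma=2.0):
    """Map each sub-region's positioning coordinate (a point in its central
    area, y = 0 at the top of the displayed image) to a blurring weight.
    Regions near the top, far from the subject, get weights near 1."""
    t = 1.0 - np.asarray(anchor_ys, dtype=np.float64) / float(image_height)
    return t if curve == "linear" else t ** gamma

# Three horizontal strips in an 800 px tall image, top strip weighted most:
print(blur_weights([100, 400, 700], image_height=800))  # [0.875 0.5 0.125]
```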
  • in a second manner, the correspondence between the display direction of an image and the blurring weights of sub-regions in different orientations is learned from a large amount of experimental data, and a deep neural network model is constructed from the learning result; the display direction of the main image and the orientations of the different sub-regions are input to the model, and the blurring weight of each corresponding sub-region is obtained from its output.
  • Step 104 Determine target blurring strengths of different sub-regions according to original blurring strengths of different sub-regions and corresponding blurring weights.
  • Step 105 Perform blurring processing on the background area of the main image according to the target blurring intensity of different sub-areas.
  • depending on the application scenario, the target blurring intensities of the different sub-regions may be determined from the original blurring intensities and corresponding blurring weights in different ways, including but not limited to the following:
  • in a first manner, the target blurring intensity of each sub-region is determined as the product of its original blurring intensity and the corresponding blurring weight.
  • for example, if two sub-regions with original blurring intensities a and b have blurring weights of 80% and 90% respectively, then a*80% and b*90% serve as the target blurring intensities of those two sub-regions.
  • in a second manner, to make the blurring of adjacent sub-regions blend more naturally, the blurring weights of adjacent sub-regions whose original blurring intensities differ greatly are pulled closer to some degree. For adjacent sub-regions 1 and 2, if the original blurring intensity of sub-region 1 is larger than that of sub-region 2, the blurring weight of sub-region 1 is multiplied by a coefficient smaller than 1, where the larger sub-region 1's original intensity is relative to sub-region 2's, the smaller the coefficient; the product of sub-region 1's original intensity, blurring weight, and coefficient is then taken as sub-region 1's target blurring intensity, and the product of sub-region 2's original intensity and blurring weight as sub-region 2's target blurring intensity. Alternatively, the blurring weight of sub-region 2 is multiplied by a coefficient greater than 1, where the larger sub-region 1's original intensity is relative to sub-region 2's, the larger the coefficient; the product of sub-region 2's original intensity, blurring weight, and coefficient is then taken as sub-region 2's target blurring intensity, and the product of sub-region 1's original intensity and blurring weight as sub-region 1's target blurring intensity.
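  • As an illustrative sketch (not part of the original disclosure), the two manners above could be combined as follows; the first variant of the pulling-closer rule is shown, and the gap-to-coefficient rate k is an assumed tuning knob, not a value from the text:

```python
import numpy as np

def target_blur_intensity(original, weights, smooth=True, k=0.15):
    """Target intensity = original intensity * weight (manner one); when
    'smooth' is set, the stronger of two adjacent sub-regions has its
    weight scaled by a coefficient below 1 that shrinks as the intensity
    gap grows (manner two, first variant)."""
    original = np.asarray(original, dtype=np.float64)
    w = np.asarray(weights, dtype=np.float64).copy()
    if smooth:
        for i in range(len(original) - 1):
            gap = original[i] - original[i + 1]
            if gap > 0:      # region i blurs harder: damp its weight
                w[i] *= max(0.0, 1.0 - k * gap)
            elif gap < 0:    # region i+1 blurs harder: damp its weight
                w[i + 1] *= max(0.0, 1.0 + k * gap)
    return original * w

print(target_blur_intensity([0.9, 0.3], [0.9, 0.8]))  # [0.7371 0.24]
```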
  • the manner of blurring the background area of the main image according to the target blurring intensities of the different sub-regions also includes, but is not limited to, the following: as one possible implementation, the blurring coefficient of each pixel in the background area is determined according to the target blurring intensity of its sub-region and the depth information of the pixels in that sub-region, and Gaussian blurring is applied to the background area according to the per-pixel blurring coefficients to generate a blurred photograph, achieving the effect that the greater the background depth, the stronger the blurring.
  • of course, if the depth information of each pixel cannot be obtained precisely, the depth information of the sub-region may be acquired in the manner described in step 102.
  • as shown in FIG. 6, the main image captured by the main camera and the sub-image captured by the sub-camera are acquired, and the depth information of the main image is acquired according to the main image and the sub-image; the original blurring intensities of the different sub-regions in the background area of the main image are then determined according to the depth information and the focus area, each original blurring intensity is multiplied by the blurring weight of the sub-region it belongs to in order to obtain the target blurring intensity of the corresponding sub-region, and the background area of the main image is blurred according to the target blurring intensities to obtain the blurred image.
  • continuing with FIG. 6, the display direction of the main image may be inferred from the gyroscope information of the terminal device; the up-down orientations of the different sub-regions are then determined according to the display direction of the main image, and the blurring weights of the different sub-regions are determined according to the top-to-bottom decreasing-weight strategy.
  • based on the above embodiments, besides dividing the background area into sub-regions in the direction parallel to the in-focus area, the background area may also be divided into different sub-regions along the vertical direction (the direction perpendicular to the in-focus area).
  • in this case, the background area is divided into a plurality of different sub-regions according to the magnitude of the depth of field; for each sub-region, the depth at the position closest to the focus area and the depth at the position farthest from it are obtained and averaged, the averaged value is taken as the average depth information of the corresponding sub-region, and the original blurring intensity is then determined from the average depth information, where the higher the average depth information, the greater the original blurring intensity.
  • it should be emphasized that if the nearest and farthest depths of a sub-region relative to the focus area cannot be obtained because of the limited accuracy of depth calculation, the average depth of the sub-region may be computed from the probability distribution of the depth values obtained within it; or, if the depth information of one or more sub-regions far from the focus area is obtained inaccurately, their average depth information may be extrapolated from the trend of the more accurately obtained average depth information of the sub-regions nearer to the focus area.
  • the blurring weight of each sub-area can be calculated in the same manner as the above-described example to obtain the target blurring intensity of each sub-area for blurring processing.
  • in summary, the background blurring processing method of the present application acquires the main image captured by the main camera and the sub-image captured by the sub-camera, acquires the depth information of the main image according to them, determines the original blurring intensities of the different sub-regions in the background area of the main image according to the depth information and the focus area, determines the distribution orientation of the different sub-regions according to the display manner of the main image and their blurring weights according to the corresponding weight-setting policy, then determines the target blurring intensities of the different sub-regions from the original intensities and corresponding weights, and finally blurs the background area of the main image according to the target blurring intensities, achieving a blurring effect that is more natural and closer to real optical defocus.
  • FIG. 7 is a schematic structural diagram of a background blur processing apparatus according to an embodiment of the present application. As shown in FIG. 7, the background blur processing apparatus includes a first acquiring module 100, a first determining module 200, a second determining module 300, a third determining module 400, and a processing module 500.
  • the first obtaining module 100 is configured to acquire a main image captured by the main camera and a sub image captured by the sub camera, and acquire depth information of the main image according to the main image and the sub image.
  • the first determining module 200 is configured to determine an original blurring intensity of different sub-regions in the background area of the main image according to the depth of field information and the focus area.
  • the first determining module 200 includes a first determining unit 210, a first acquiring unit 220, and a second acquiring unit 230, where the first determining unit 210 is configured to determine, according to the depth information and the focus area, first depth information of the foreground area and second depth information of the background area of the main image.
  • the first obtaining unit 220 is configured to acquire, according to the second depth information, average depth information of different sub-regions in the background area of the main image.
  • the second obtaining unit 230 is configured to acquire the original blurring intensity of different sub-regions according to the first depth information and the average depth information of different sub-regions.
  • the second determining module 300 is configured to determine a distributed orientation of different sub-regions according to a display manner of the main image, and determine a blurring weight of the different sub-regions according to a weight setting policy corresponding to the distributed orientation.
  • the second determining module 300 includes a third obtaining unit 310 and a second determining unit 320.
  • the third acquiring unit 310 is configured to acquire the positioning coordinates of the different sub-regions, and the second determining unit 320 is configured to determine the blurring weights of the different sub-regions by querying, with the positioning coordinates, the preset linear or nonlinear weight-distribution curve.
  • the processing module 500 is configured to perform a blurring process on the background area of the main image according to the target blurring intensity of different sub-regions.
  • the division of the modules in the background blur processing device is for illustrative purposes only; in other embodiments, the background blur processing device may be divided into different modules as needed to complete all or part of its functions.
  • in summary, the background blur processing device of the present application acquires the main image captured by the main camera and the sub-image captured by the sub-camera, acquires the depth information of the main image according to them, determines the original blurring intensities of the different sub-regions in the background area according to the depth information and the focus area, determines the distribution orientation of the sub-regions according to the display manner of the main image and their blurring weights according to the corresponding weight-setting policy, determines the target blurring intensities from the original intensities and weights, and blurs the main image background area according to the target blurring intensities of the different sub-regions, achieving a blurring effect that is more natural and closer to real optical defocus.
  • the image processing circuit includes an ISP processor 1040 and a control logic 1050.
  • the image data captured by imaging device 1010 is first processed by ISP processor 1040, which analyzes the image data to capture image statistical information that may be used to determine and/or control one or more control parameters of imaging device 1010.
  • the imaging device 1010 (camera) may include a camera having one or more lenses 1012 and an image sensor 1014; to implement the background blurring method of the present application, the imaging device 1010 includes two sets of cameras, and, with continued reference to FIG. 10, the imaging device 1010 can capture scene images simultaneously with a main camera and a sub-camera.
  • the image sensor 1014 may include a color filter array (e.g., a Bayer filter); the image sensor 1014 can acquire the light intensity and wavelength information captured by each of its imaging pixels and provide a set of raw image data that can be processed by the ISP processor 1040.
  • the sensor 1020 can provide raw image data to the ISP processor 1040 based on the sensor 1020 interface type, and the ISP processor 1040 can calculate depth information and the like from the raw image data acquired by the image sensor 1014 in the main camera and the raw image data acquired by the image sensor 1014 in the sub-camera, both provided by the sensor 1020.
  • the sensor 1020 interface may be an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of the above.
  • the ISP processor 1040 processes the raw image data pixel by pixel in a variety of formats.
  • each image pixel can have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 1040 can perform one or more image processing operations on the raw image data, collecting statistical information about the image data. Among them, image processing operations can be performed with the same or different bit depth precision.
  • ISP processor 1040 can also receive pixel data from image memory 1030. For example, raw pixel data is sent from the sensor 1020 interface to the image memory 1030, and the raw pixel data in the image memory 1030 is then provided to the ISP processor 1040 for processing.
  • Image memory 1030 can be part of a memory device, a storage device, or a separate dedicated memory within an electronic device, and can include DMA (Direct Memory Access) features.
  • ISP processor 1040 When receiving raw image data from sensor 1020 interface or from image memory 1030, ISP processor 1040 can perform one or more image processing operations, such as time domain filtering.
  • the processed image data can be sent to image memory 1030 for additional processing prior to being displayed.
  • the ISP processor 1040 receives the processed data from the image memory 1030 and performs image data processing in the original domain and in the RGB and YCbCr color spaces.
  • the processed image data may be output to display 1070 for viewing by a user and/or further processed by a graphics engine or a GPU (Graphics Processing Unit). Additionally, the output of ISP processor 1040 can also be sent to image memory 1030, and display 1070 can read image data from image memory 1030.
  • image memory 1030 can be configured to implement one or more frame buffers.
  • the statistics determined by the ISP processor 1040 can be sent to the control logic 1050 unit.
  • the statistical data may include image sensor 1014 statistical information such as auto exposure, auto white balance, auto focus, flicker detection, black level compensation, lens 1012 shading correction, and the like.
  • the control logic 1050 can include a processor and/or a microcontroller that executes one or more routines, such as firmware; the one or more routines can determine the control parameters of the imaging device 1010 and the ISP control parameters based on the received statistical data.
  • the control parameters may include sensor 1020 control parameters (eg, gain, integration time for exposure control), camera flash control parameters, lens 1012 control parameters (eg, focus or zoom focal length), or a combination of these parameters.
  • the ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (eg, during RGB processing), as well as lens 1012 shading correction parameters.
  • the steps of implementing the background blurring processing method with the image processing technique of FIG. 10 are: acquiring the main image captured by the main camera and the sub-image captured by the sub-camera, and acquiring the depth information of the main image according to them; determining the original blurring intensities of different sub-regions in the background area of the main image according to the depth information and the focus area; determining the distribution orientation of the different sub-regions according to the display manner of the main image, and their blurring weights according to the weight-setting policy corresponding to that orientation; determining the target blurring intensities of the different sub-regions from the original intensities and corresponding weights; and blurring the main image background area according to the target blurring intensities of the different sub-regions.
  • the present application also proposes a non-transitory computer readable storage medium that enables execution of a background blurring processing method as in the above embodiment when instructions in the storage medium are executed by a processor.
  • first and second are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated.
  • features defining “first” or “second” may include at least one of the features, either explicitly or implicitly.
  • the meaning of "a plurality” is at least two, such as two, three, etc., unless specifically defined otherwise.
  • a "computer-readable medium” can be any apparatus that can contain, store, communicate, propagate, or transport a program for use in an instruction execution system, apparatus, or device, or in conjunction with the instruction execution system, apparatus, or device.
  • computer-readable media include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CD-ROM).
  • the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then storing it in a computer memory.
  • portions of the application can be implemented in hardware, software, firmware, or a combination thereof.
  • multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system.
  • for example, if implemented in hardware, as in another embodiment, any one or a combination of the following techniques well known in the art may be used: a discrete logic circuit with logic gates for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), and the like.
  • each functional unit in each embodiment of the present application may be integrated into one processing module, or each unit may exist physically separately, or two or more units may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • the integrated modules, if implemented in the form of software functional modules and sold or used as stand-alone products, may also be stored in a computer readable storage medium.
  • the above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like. Although the embodiments of the present application have been shown and described above, it is to be understood that the above embodiments are exemplary and are not to be construed as limiting the scope of the present application; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The present application discloses a background blur processing method, apparatus, and device. The method includes: acquiring a main image captured by a main camera and a sub-image captured by a sub-camera, and acquiring depth-of-field information of the main image according to the main image and the sub-image; determining original blurring intensities of different sub-regions in the background area of the main image according to the depth information and the focus area; determining a distribution orientation of the different sub-regions according to the display manner of the main image, and determining blurring weights of the different sub-regions according to the weight-setting policy corresponding to the distribution orientation; determining target blurring intensities of the different sub-regions according to the original blurring intensities of the different sub-regions and the corresponding blurring weights; and blurring the background area of the main image according to the target blurring intensities of the different sub-regions. A blurring effect that is more natural and closer to real optical defocus is thus achieved.

Description

Background blur processing method, apparatus and device
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to Chinese Patent Application No. 201711242134.4, entitled "Background blur processing method, apparatus and device", filed on November 30, 2017 by Guangdong OPPO Mobile Telecommunications Co., Ltd.
TECHNICAL FIELD
The present application relates to the field of electronic devices, and in particular to a background blur processing method, apparatus, and terminal device.
BACKGROUND
With advances in the manufacturing technology of terminal devices such as smartphones, many current terminal devices use a dual-camera system: depth-of-field information is computed from two images acquired simultaneously by the two cameras and then used for blurring processing. In blurring, users usually expect the effect to be closer to real optical defocus, i.e., the greater the depth of field, the stronger the blurring. However, the current accuracy of depth calculation is limited, and the depth of field beyond a certain distance may not be computed accurately; consequently, blurring of more distant regions of the image cannot be performed according to depth, and the visual effect of the blurring is poor.
SUMMARY
The present application provides a background blur processing method, apparatus, and device, to solve the technical problem in the prior art that, because the depth of field beyond a certain distance cannot be computed accurately, blurring of more distant regions of the image cannot be performed according to depth, resulting in a poor visual blurring effect.
An embodiment of the present application provides a background blurring processing method, including: acquiring a main image captured by a main camera and a sub-image captured by a sub-camera, and acquiring depth information of the main image according to the main image and the sub-image; determining original blurring intensities of different sub-regions in the background area of the main image according to the depth information and the focus area; determining a distribution orientation of the different sub-regions according to the display manner of the main image, and determining blurring weights of the different sub-regions according to the weight-setting policy corresponding to the distribution orientation; determining target blurring intensities of the different sub-regions according to the original blurring intensities of the different sub-regions and the corresponding blurring weights; and blurring the background area of the main image according to the target blurring intensities of the different sub-regions.
Another embodiment of the present application provides a background blur processing apparatus, including: a first acquiring module, configured to acquire a main image captured by a main camera and a sub-image captured by a sub-camera, and to acquire depth information of the main image according to the main image and the sub-image; a first determining module, configured to determine original blurring intensities of different sub-regions in the background area of the main image according to the depth information and the focus area; a second determining module, configured to determine a distribution orientation of the different sub-regions according to the display manner of the main image, and to determine blurring weights of the different sub-regions according to the weight-setting policy corresponding to the distribution orientation; a third determining module, configured to determine target blurring intensities of the different sub-regions according to the original blurring intensities of the different sub-regions and the corresponding blurring weights; and a processing module, configured to blur the background area of the main image according to the target blurring intensities of the different sub-regions.
A further embodiment of the present application provides a computer device, including a memory and a processor, wherein the memory stores computer-readable instructions which, when executed by the processor, cause the processor to perform the background blurring processing method described in the above embodiments of the present application.
Yet another embodiment of the present application provides a non-transitory computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the background blurring processing method described in the above embodiments of the present application.
The technical solutions provided in the embodiments of the present application may have the following beneficial effects:
The main image captured by the main camera and the sub-image captured by the sub-camera are acquired, and the depth information of the main image is acquired according to them; the original blurring intensities of different sub-regions in the background area of the main image are determined according to the depth information and the focus area; the distribution orientation of the different sub-regions is determined according to the display manner of the main image, and the blurring weights of the different sub-regions are determined according to the weight-setting policy corresponding to the distribution orientation; the target blurring intensities of the different sub-regions are then determined from the original blurring intensities and the corresponding blurring weights; finally, the background area of the main image is blurred according to the target blurring intensities of the different sub-regions. A blurring effect that is more natural and closer to real optical defocus is thus achieved.
BRIEF DESCRIPTION OF THE DRAWINGS
To explain the technical solutions in the embodiments of the present disclosure more clearly, the drawings needed in the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present disclosure, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of a background blurring processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of the triangulation principle according to an embodiment of the present application;
FIG. 3 is a schematic diagram of the viewing-angle coverage of dual cameras according to an embodiment of the present application;
FIG. 4 is a schematic diagram of depth-of-field acquisition with dual cameras according to an embodiment of the present application;
FIG. 5(a) is a schematic diagram of the division of a plurality of sub-regions in the background area of a main image according to an embodiment of the present application;
FIG. 5(b) is a schematic diagram of the division of a plurality of sub-regions in the background area of a main image according to another embodiment of the present application;
FIG. 6 is a flowchart of a background blurring processing method according to a specific embodiment of the present application;
FIG. 7 is a schematic structural diagram of a background blur processing apparatus according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a background blur processing apparatus according to another embodiment of the present application;
FIG. 9 is a schematic structural diagram of a background blur processing apparatus according to yet another embodiment of the present application; and
FIG. 10 is a schematic diagram of an image processing circuit according to another embodiment of the present application.
DETAILED DESCRIPTION
Embodiments of the present application are described in detail below, and examples of the embodiments are illustrated in the accompanying drawings, where identical or similar reference numerals throughout denote identical or similar elements or elements having identical or similar functions. The embodiments described below with reference to the drawings are exemplary and intended to explain the present application; they are not to be construed as limiting the present application.
The background blur processing method, apparatus, and terminal device of the embodiments of the present application are described below with reference to the drawings.
The execution body of the background blur processing method and apparatus of the embodiments of the present application may be a terminal device, where the terminal device may be a hardware device with dual cameras such as a mobile phone, a tablet computer, a personal digital assistant, or a wearable device. The wearable device may be a smart bracelet, a smart watch, smart glasses, and the like.
From the above analysis, in the prior art blurring is performed according to depth information; therefore, when the depth information cannot be obtained accurately because of precision limitations, the intended blurring intensity of the corresponding region cannot be realized, which degrades the blurring effect of the image.
To solve this technical problem, the present application provides a background blurring processing method that, based on the positional relationship between blurring intensity and the regions corresponding to different depths of field, controls the regions corresponding to different depths of field to be blurred with different intensities; thus, even when the depth information cannot be acquired precisely, the regions corresponding to different depths of field still receive blurring of appropriate intensity.
FIG. 1 is a flowchart of a background blurring processing method according to an embodiment of the present application. As shown in FIG. 1, the method includes the following steps:
Step 101: acquire the main image captured by the main camera and the sub-image captured by the sub-camera, and acquire the depth information of the main image according to the main image and the sub-image.
Here, after focusing on the photographed subject, the range of spatial depth before and behind the focus area within which the human eye allows sharp imaging is the depth of field.
It should be noted that, in practice, the human eye resolves depth mainly through binocular vision, which is the same principle by which dual cameras resolve depth of field; it relies mainly on the triangulation principle shown in FIG. 2. In FIG. 2, the imaged object is drawn in actual space together with the positions O_R and O_T of the two cameras and the focal planes of the two cameras; the focal plane is at a distance f from the plane where the two cameras are located, and the two cameras image at the focal plane position, thereby obtaining two captured images.
P and P' are the positions of the same object in the two captured images, where the distance from point P to the left boundary of its captured image is X_R, and the distance from point P' to the left boundary of its captured image is X_T. O_R and O_T are the two cameras, which lie in the same plane at a distance B from each other.
Based on the triangulation principle, the distance Z between the object in FIG. 2 and the plane where the two cameras lie satisfies the following relation:

    B / Z = (B - (X_R - X_T)) / (Z - f)

From this it can be derived that

    Z = B * f / (X_R - X_T) = B * f / d

where d is the difference between the positions of the same object in the two captured images. Since B and f are constant values, the distance Z of the object can be determined from d.
It must be emphasized that the above formula is based on two identical parallel cameras, but actual use raises many problems; for example, in the depth calculation with the two cameras in the figure above, some part of the scene can never be covered by both cameras, so in practice the FOVs of the two cameras are designed differently for the depth calculation. The main camera is used to capture the main image of the actual photograph, while the sub-image obtained by the sub-camera is mainly used as a reference for calculating the depth of field. Based on the above analysis, the FOV of the sub-camera is generally larger than that of the main camera, but even so, as shown in FIG. 3, objects at close range may still not appear simultaneously in the images acquired by the two cameras. The adjusted relation for the computable depth-of-field range is given by the following formula:

    (adjusted depth-of-field range formula, reproduced as an image in the original publication)

The depth-of-field range of the main image and the like can then be calculated according to the adjusted formula.
Of course, besides triangulation, other methods can also be used to calculate the depth of the main image; for example, when the main camera and the sub-camera photograph the same scene, the distance between an object in the scene and the cameras is proportional to the displacement difference, posture difference, and the like between the images formed by the main camera and the sub-camera; therefore, in one embodiment of the present application, the above distance Z can be obtained from this proportional relationship.
For example, as shown in FIG. 4, a map of the differences between corresponding points is calculated from the main image acquired by the main camera and the sub-image acquired by the sub-camera, represented here by a disparity map; this map represents the displacement differences of the same points on the two images, but since the displacement difference in triangulation is directly proportional to Z, the disparity map is very often used directly as the depth map.
Step 102: determine the original blurring intensities of different sub-regions in the background area of the main image according to the depth information and the focus area.
It can be understood that, since the range imaged before the focus area is the foreground depth of field and its corresponding area is the foreground area, while the range imaged sharply behind the focus area is the background depth of field and its corresponding area is the background area, the background area of the main image is determined according to the depth information and the focus area; the original blurring intensities of the different sub-regions of the background area are then determined preliminarily, and these original blurring intensities serve as the adjustment baseline for the subsequent blurring of each sub-region.
As one possible implementation, the background area is divided into different sub-regions along the horizontal direction (the direction parallel to the focus area).
First depth information of the foreground area and second depth information of the background area of the main image are determined according to the depth information and the focus area; the average depth information of the different sub-regions of the background area of the main image is acquired according to the second depth information; and the original blurring intensities of the different sub-regions are then acquired according to the first depth information and the average depth information of the different sub-regions.
Specifically, in this example the background area is divided horizontally into a plurality of different sub-regions, where the size and shape of each sub-region can be adjusted according to application requirements and may be the same or different across sub-regions. For each sub-region, the depth at the position closest to the focus area and the depth at the position farthest from the focus area are acquired and averaged; the averaged depth is taken as the average depth information of the corresponding sub-region, and the original blurring intensity is determined according to this average depth information, where the higher the average depth information, the greater the original blurring intensity.
The size of the depth-of-field interval of each sub-region can be adjusted according to application requirements, and the intervals of different sub-regions may be the same or different. As shown in FIG. 5(a), the sub-regions may be divided according to the shape of the background area into a plurality of sub-regions of inconsistent size; or, for the convenience of further blurring processing, as shown in FIG. 5(b), the sub-regions may be horizontal strips with the same width as the image, distributed from the bottom to the top of the image.
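As an illustrative sketch (not part of the original disclosure), the equal-width horizontal strips of FIG. 5(b) could be produced as follows; the mask-based representation is an assumption made for the example:

```python
import numpy as np

def split_background_strips(background_mask, n_strips):
    """Divide the background into n_strips horizontal strips spanning the
    full image width, as in FIG. 5(b); returns one boolean mask per strip,
    restricted to background pixels."""
    h = background_mask.shape[0]
    edges = np.linspace(0, h, n_strips + 1, dtype=int)
    strips = []
    for top, bottom in zip(edges[:-1], edges[1:]):
        m = np.zeros_like(background_mask)
        m[top:bottom, :] = True
        strips.append(m & background_mask)
    return strips

mask = np.ones((8, 6), dtype=bool)  # pretend the whole frame is background
print([s.sum() for s in split_background_strips(mask, 4)])  # [12, 12, 12, 12]
```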
It should be emphasized that if the depth at the position of a sub-region closest to the focus area and the depth at the position farthest from the focus area cannot be acquired because of the limited accuracy of depth calculation, the average depth of that sub-region can be calculated from the probability distribution of the depth values acquired within it.
Step 103: determine the distribution orientation of the different sub-regions according to the display manner of the main image, and determine the blurring weights of the different sub-regions according to the weight-setting policy corresponding to the distribution orientation.
Here, the display manner includes the display orientation of the photographed subject, its proportion relative to the whole image, and the like.
It can be understood that, in a first photographing scene, the bottom of the image is the ground and the top of the image is the sky, while the photographed subject is located on the ground; hence, the closer a sub-region of the background area of the main image is to the top of the image, the less related it is to the current subject, and the closer it is to the bottom of the image, the more related it is. Alternatively, in a second photographing scene, the right side of the image is a portrait and the left side is a beach or the sea; if the current photographing mode is portrait mode, then the closer a sub-region is to the left side of the image, the less related it is to the currently photographed subject, and the closer to the right side, the more related it is. Or again, in a third photographing scene, namely portrait mode where the portrait occupies a large proportion of the whole image and the background area occupies only the four corner positions of the image, the farther a region is from the area where the portrait is located, the less related it is to the current subject.
Therefore, in the embodiments of the present application, the distribution orientation of the different sub-regions can be determined according to the display manner of the main image, and the blurring weights of the different sub-regions can then be determined according to the weight-setting policy corresponding to that distribution orientation. For example, in the first scene above, sub-regions closer to the upper direction are less related to the subject in the main image and sub-regions closer to the lower direction are more related, so the blurring weights of the different sub-regions are determined according to a top-to-bottom decreasing-weight strategy. Here the up-down orientation does not merely mean the traditional physical up and down; referring to the above analysis, it depends on the display manner of the main image: if the main image is displayed vertically, the up-down orientation denotes the physical up and down, and if the main image is displayed horizontally, the up-down orientation denotes the left-right direction.
For example, when the main image is displayed vertically, the photographed subject occupies a relatively small proportion of the whole image, and the up-down orientation is the physical one, sub-regions closer to the top of the picture receive larger blurring weights and sub-regions closer to the bottom receive smaller weights; this makes the final blurring effect more natural and closer to the real optical defocus effect.
In practical applications, different implementations can be used to infer the display manner of the main image depending on the application scenario; for example, the orientation of the terminal device can be obtained by detecting its gyroscope information, from which the display manner of the main image at the time of photographing is inferred.
In implementations of the present application, there are many ways to determine the blurring weights of the different sub-regions according to the weight-setting policy corresponding to the distribution orientation. For example, the relationship between distribution orientation and blurring effect can be obtained in advance from a large number of experiments, and the correspondence between distribution orientations and weight-setting policies established and stored accordingly, so that after the display manner of the main image is determined, this correspondence is queried to determine the corresponding weight-setting policy.
For a clearer explanation, the following takes the determination of the blurring weights of the different sub-regions by the top-to-bottom decreasing-weight strategy as an example:
Manner one:
In this manner, a linear distribution curve, or a nonlinear distribution curve, containing the correspondence between the positioning coordinates of a sub-region and its blurring weight is set in advance, where the positioning coordinate may be the coordinate of any point in the central area of the sub-region; the positioning coordinates of the different sub-regions are then acquired, and the preset linear or nonlinear weight-distribution curve is queried according to the positioning coordinates to determine the blurring weights of the different sub-regions.
Manner two:
The correspondence between the display direction of an image and the blurring weights of sub-regions in different orientations is learned from a large amount of experimental data, and a deep neural network model is constructed from the learning result; the display direction of the main image and the orientations of the different sub-regions are then input to the model, and the blurring weight of each corresponding sub-region is obtained according to the output of the model.
Step 104: determine the target blurring intensities of the different sub-regions according to the original blurring intensities of the different sub-regions and the corresponding blurring weights.
Step 105: blur the background area of the main image according to the target blurring intensities of the different sub-regions.
Specifically, after the original blurring intensities and corresponding blurring weights of the different sub-regions are determined, the target blurring intensities of the different sub-regions are determined from them, and the background area of the main image is blurred according to the target blurring intensities. Since different sub-regions are blurred with different intensities according to the relationship between a sub-region's orientation and the display direction of the main image, blurring of appropriate intensity can be obtained for each sub-region without knowing precise depth information for every sub-region, making the blurring result more natural.
In other words, original blurring intensities determined solely from the preliminarily acquired depth information may deviate from the intensities corresponding to the real optical defocus effect because the depth information was acquired inaccurately; in the embodiments of the present application, the blurring weights of the different sub-regions are further determined according to the display direction of the main image and the orientations of the different sub-regions, and the original blurring intensities are corrected by these blurring weights to determine the target blurring intensities, making the blurring result more natural.
It should be noted that, depending on the application scenario, the target blurring intensities of the different sub-regions may be determined from the original blurring intensities and corresponding blurring weights in different ways, including but not limited to the following:
Manner one:
The target blurring intensity of each sub-region is determined by taking the product of its original blurring intensity and the corresponding blurring weight; for example, if two sub-regions with original blurring intensities a and b have blurring weights of 80% and 90% respectively, then a*80% and b*90% can serve as the target blurring intensities of those two sub-regions.
Manner two:
To make the blurring of adjacent sub-regions blend more naturally, the blurring weights of adjacent sub-regions whose original blurring intensities differ greatly are pulled closer to some degree. For example, for adjacent sub-regions 1 and 2, if the original blurring intensity of sub-region 1 is larger than that of sub-region 2, the blurring weight of sub-region 1 is multiplied by a coefficient smaller than 1, where the larger sub-region 1's original blurring intensity is relative to sub-region 2's, the smaller the coefficient; the product of sub-region 1's original blurring intensity, blurring weight, and coefficient is then taken as sub-region 1's target blurring intensity, and the product of sub-region 2's original blurring intensity and blurring weight as sub-region 2's target blurring intensity. Alternatively, the blurring weight of sub-region 2 is multiplied by a coefficient greater than 1, where the larger sub-region 1's original blurring intensity is relative to sub-region 2's, the larger the coefficient; the product of sub-region 2's original blurring intensity, blurring weight, and coefficient is then taken as sub-region 2's target blurring intensity, and the product of sub-region 1's original blurring intensity and blurring weight as sub-region 1's target blurring intensity.
Further, the manner of blurring the background area of the main image according to the target blurring intensities of the different sub-regions also includes, but is not limited to, the following:
As one possible implementation:
The blurring coefficient of each pixel in the background area is determined according to the target blurring intensities of the different sub-regions and the depth information of the pixels in the different sub-regions, and Gaussian blurring is applied to the background area according to the blurring coefficient of each pixel in the background area to generate a blurred photograph, thereby achieving the effect that the greater the background depth information, the stronger the blurring.
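As an illustrative sketch (not part of the original disclosure), a per-pixel blurring coefficient could drive Gaussian blurring by blending a strongly blurred copy with the original; a production pipeline would more likely blend several blur levels, but the idea is the same, and the kernel size is an assumed parameter:

```python
import cv2
import numpy as np

def blur_background(image, blur_coeff, max_ksize=31):
    """Blur an H x W x 3 image with per-pixel strength.

    blur_coeff: float map in [0, 1]; 0 keeps the pixel sharp (foreground /
    focus area), 1 applies the full Gaussian blur.
    """
    k = max_ksize | 1  # Gaussian kernel size must be odd
    blurred = cv2.GaussianBlur(image, (k, k), 0)
    alpha = np.clip(blur_coeff, 0.0, 1.0).astype(np.float32)[..., None]
    out = image.astype(np.float32) * (1 - alpha) + blurred.astype(np.float32) * alpha
    return out.astype(image.dtype)
```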
Of course, in this embodiment, if the depth information of each pixel cannot be known precisely, the depth information of the sub-region can be acquired in the manner described in step 102.
To describe the background blurring processing of the present application more clearly, an example with a specific application scenario follows:
As shown in FIG. 6, the main image captured by the main camera and the sub-image captured by the sub-camera are acquired, and the depth information of the main image is acquired according to the main image and the sub-image; the original blurring intensities of the different sub-regions in the background area of the main image are then determined according to the depth information and the focus area; each original blurring intensity is multiplied by the blurring weight of the sub-region it belongs to, yielding the target blurring intensity of the corresponding sub-region; and the background area of the main image is blurred according to the target blurring intensity to obtain the blurred image.
Continuing with FIG. 6, the display direction of the main image can be inferred from the gyroscope information of the terminal device; the up-down orientations of the different sub-regions are then determined according to the display direction of the main image, and the blurring weights of the different sub-regions are determined according to the top-to-bottom decreasing-weight strategy.
Based on the above embodiments, besides dividing the background area into sub-regions along the direction parallel to the focus area as above, the background area can also be divided into different sub-regions along the vertical direction (the direction perpendicular to the focus area).
First depth information of the foreground area and second depth information of the background area of the main image are determined according to the depth information and the focus area; the background area is divided into different sub-regions according to the magnitude of the second depth information, where sub-regions closer to the focus area have smaller depth; the average depth information of the different sub-regions of the background area is acquired according to the second depth information; and the original blurring intensities of the different sub-regions are then acquired according to the first depth information and the average depth information of the different sub-regions.
Specifically, in this example the background area is divided into a plurality of different sub-regions according to the magnitude of the depth of field; for each sub-region, the depth at the position closest to the focus area and the depth at the position farthest from the focus area are acquired and averaged, the averaged depth is taken as the average depth information of the corresponding sub-region, and the original blurring intensity is then determined according to the average depth information of the sub-region, where the higher the average depth information, the greater the original blurring intensity.
It should be emphasized that if the depth at a sub-region's position closest to the focus area and the depth at its position farthest from the focus area cannot be acquired because of the limited accuracy of depth calculation, the average depth of the sub-region can be calculated from the probability distribution of the depth values acquired within it; or, if the depth information of one or more sub-regions far from the focus area is acquired inaccurately, the average depth information of those one or more sub-regions can be extrapolated from the trend of the more accurately acquired average depth information of the plurality of sub-regions nearer to the focus area.
Furthermore, in the embodiments of the present application, the blurring weight of each sub-region can be calculated in the same manner as shown in the examples above, and the target blurring intensity of each sub-region obtained for the blurring processing.
In summary, the background blurring processing method of the present application acquires the main image captured by the main camera and the sub-image captured by the sub-camera, acquires the depth information of the main image according to the main image and the sub-image, determines the original blurring intensities of the different sub-regions in the background area of the main image according to the depth information and the focus area, determines the distribution orientation of the different sub-regions according to the display manner of the main image and the blurring weights of the different sub-regions according to the weight-setting policy corresponding to the distribution orientation, then determines the target blurring intensities of the different sub-regions according to the original blurring intensities and the corresponding blurring weights, and finally blurs the background area of the main image according to the target blurring intensities of the different sub-regions. A blurring effect that is more natural and closer to real optical defocus is thus achieved. To implement the above embodiments, the present application also proposes a background blur processing apparatus. FIG. 7 is a schematic structural diagram of a background blur processing apparatus according to an embodiment of the present application; as shown in FIG. 7, the apparatus includes a first acquiring module 100, a first determining module 200, a second determining module 300, a third determining module 400, and a processing module 500.
The first acquiring module 100 is configured to acquire the main image captured by the main camera and the sub-image captured by the sub-camera, and to acquire the depth information of the main image according to the main image and the sub-image.
The first determining module 200 is configured to determine the original blurring intensities of the different sub-regions in the background area of the main image according to the depth information and the focus area.
In one embodiment of the present application, as shown in FIG. 8 and building on FIG. 7, the first determining module 200 includes a first determining unit 210, a first acquiring unit 220, and a second acquiring unit 230.
The first determining unit 210 is configured to determine, according to the depth information and the focus area, first depth information of the foreground area and second depth information of the background area of the main image.
The first acquiring unit 220 is configured to acquire the average depth information of the different sub-regions in the background area of the main image according to the second depth information.
The second acquiring unit 230 is configured to acquire the original blurring intensities of the different sub-regions according to the first depth information and the average depth information of the different sub-regions.
The second determining module 300 is configured to determine the distribution orientation of the different sub-regions according to the display manner of the main image, and to determine the blurring weights of the different sub-regions according to the weight-setting policy corresponding to the distribution orientation.
In one embodiment of the present application, as shown in FIG. 9 and building on FIG. 7, the second determining module 300 includes a third acquiring unit 310 and a second determining unit 320.
The third acquiring unit 310 is configured to acquire the positioning coordinates of the different sub-regions.
The second determining unit 320 is configured to determine the blurring weights of the different sub-regions by querying, according to the positioning coordinates, the preset linear weight-distribution curve or nonlinear weight-distribution curve.
The third determining module 400 is configured to determine the target blurring intensities of the different sub-regions according to the original blurring intensities of the different sub-regions and the corresponding blurring weights.
The processing module 500 is configured to blur the background area of the main image according to the target blurring intensities of the different sub-regions.
It should be noted that the foregoing description of the method embodiments also applies to the apparatus of the embodiments of the present application; the implementation principles are similar and are not repeated here.
The division of the modules in the above background blur processing apparatus is for illustration only; in other embodiments, the background blur processing apparatus may be divided into different modules as needed to complete all or part of its functions.
In summary, the background blur processing apparatus of the present application acquires the main image captured by the main camera and the sub-image captured by the sub-camera, acquires the depth information of the main image according to them, determines the original blurring intensities of the different sub-regions in the background area of the main image according to the depth information and the focus area, determines the distribution orientation of the different sub-regions according to the display manner of the main image and the blurring weights of the different sub-regions according to the weight-setting policy corresponding to the distribution orientation, then determines the target blurring intensities of the different sub-regions according to the original blurring intensities and the corresponding blurring weights, and finally blurs the background area of the main image according to the target blurring intensities of the different sub-regions. A blurring effect that is more natural and closer to real optical defocus is thus achieved.
To implement the above embodiments, the present application also proposes a computer device, where the computer device is any device including a memory storing a computer program and a processor running the computer program, for example a smartphone or a personal computer. The computer device further includes an image processing circuit, which may be implemented using hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. FIG. 10 is a schematic diagram of an image processing circuit in one embodiment. As shown in FIG. 10, for ease of explanation, only the aspects of the image processing technique related to the embodiments of the present application are shown.
As shown in FIG. 10, the image processing circuit includes an ISP processor 1040 and control logic 1050. Image data captured by an imaging device 1010 is first processed by the ISP processor 1040, which analyzes the image data to capture image statistics usable for determining one or more control parameters of the imaging device 1010. The imaging device 1010 (camera) may include a camera having one or more lenses 1012 and an image sensor 1014; to implement the background blurring processing method of the present application, the imaging device 1010 contains two sets of cameras, and, with continued reference to FIG. 10, the imaging device 1010 can capture scene images simultaneously with a main camera and a sub-camera. The image sensor 1014 may include a color filter array (e.g., a Bayer filter); the image sensor 1014 can acquire the light intensity and wavelength information captured by each of its imaging pixels and provide a set of raw image data processable by the ISP processor 1040. A sensor 1020 may provide the raw image data to the ISP processor 1040 based on the sensor 1020 interface type, and the ISP processor 1040 may calculate depth information and the like based on the raw image data acquired by the image sensor 1014 of the main camera and the raw image data acquired by the image sensor 1014 of the sub-camera, both provided by the sensor 1020. The sensor 1020 interface may be an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of the above.
The ISP processor 1040 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits; the ISP processor 1040 may perform one or more image processing operations on the raw image data and collect statistics about the image data, where the image processing operations may be performed at the same or different bit-depth precision.
The ISP processor 1040 may also receive pixel data from an image memory 1030. For example, raw pixel data is sent from the sensor 1020 interface to the image memory 1030, and the raw pixel data in the image memory 1030 is then provided to the ISP processor 1040 for processing. The image memory 1030 may be part of a memory device, a storage device, or a separate dedicated memory within an electronic device, and may include DMA (Direct Memory Access) features.
Upon receiving raw image data from the sensor 1020 interface or from the image memory 1030, the ISP processor 1040 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 1030 for additional processing before being displayed. The ISP processor 1040 receives the processed data from the image memory 1030 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to a display 1070 for viewing by the user and/or for further processing by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the ISP processor 1040 may also be sent to the image memory 1030, and the display 1070 may read image data from the image memory 1030. In one embodiment, the image memory 1030 may be configured to implement one or more frame buffers. Moreover, the output of the ISP processor 1040 may be sent to an encoder/decoder 1060 to encode/decode the image data; the encoded image data may be saved and decompressed before being displayed on the display 1070. The encoder/decoder 1060 may be implemented by a CPU or GPU or a coprocessor.
The statistics determined by the ISP processor 1040 may be sent to the control logic 1050 unit. For example, the statistics may include image sensor 1014 statistics such as auto exposure, auto white balance, auto focus, flicker detection, black level compensation, and lens 1012 shading correction. The control logic 1050 may include a processor and/or microcontroller executing one or more routines (e.g., firmware), which may determine control parameters of the imaging device 1010 and ISP control parameters based on the received statistical data. For example, the control parameters may include sensor 1020 control parameters (e.g., gain, integration time for exposure control), camera flash control parameters, lens 1012 control parameters (e.g., focal length for focusing or zooming), or combinations of these parameters. The ISP control parameters may include gain levels and color correction matrices for auto white balance and color adjustment (e.g., during RGB processing), as well as lens 1012 shading correction parameters.
The following are the steps of implementing the background blurring processing method using the image processing technique of FIG. 10:
Acquire the main image captured by the main camera and the sub-image captured by the sub-camera, and acquire the depth information of the main image according to the main image and the sub-image;
Determine the original blurring intensities of different sub-regions in the background area of the main image according to the depth information and the focus area;
Determine the distribution orientation of the different sub-regions according to the display manner of the main image, and determine the blurring weights of the different sub-regions according to the weight-setting policy corresponding to the distribution orientation;
Determine the target blurring intensities of the different sub-regions according to the original blurring intensities of the different sub-regions and the corresponding blurring weights;
Blur the background area of the main image according to the target blurring intensities of the different sub-regions.
To implement the above embodiments, the present application also proposes a non-transitory computer-readable storage medium; when the instructions in the storage medium are executed by a processor, the background blurring processing method of the above embodiments can be performed.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the described specific features, structures, materials, or characteristics may be combined in a suitable manner in any one or more embodiments or examples. In addition, without contradiction, those skilled in the art may combine different embodiments or examples and features of different embodiments or examples described in this specification.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "a plurality of" means at least two, such as two or three, unless specifically and explicitly defined otherwise.
Any process or method description in a flowchart or otherwise described herein may be understood as representing a module, segment, or portion of code including one or more executable instructions for implementing the steps of a custom logic function or process, and the scope of the preferred embodiments of the present application includes additional implementations in which functions may be performed out of the order shown or discussed, including substantially simultaneously or in reverse order according to the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application belong.
The logic and/or steps represented in the flowcharts or otherwise described herein, for example, may be considered an ordered list of executable instructions for implementing logic functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from the instruction execution system, apparatus, or device). For this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transmit a program for use by, or in connection with, the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then storing it in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or combinations thereof. In the above embodiments, multiple steps or methods may be implemented with software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or a combination of the following techniques well known in the art may be used: a discrete logic circuit with logic gates for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), and the like.
Those of ordinary skill in the art will understand that all or part of the steps carried by the methods of the above embodiments may be completed by instructing the relevant hardware through a program, which may be stored in a computer-readable storage medium, and which, when executed, includes one of or a combination of the steps of the method embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist physically alone, or two or more units may be integrated into one module. The above integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like. Although the embodiments of the present application have been shown and described above, it can be understood that the above embodiments are exemplary and shall not be construed as limiting the present application; within the scope of the present application, those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments.

Claims (15)

  1. A background blurring processing method, comprising:
    acquiring a main image captured by a main camera and a sub-image captured by a sub-camera, and acquiring depth information of the main image according to the main image and the sub-image;
    determining original blurring intensities of different sub-regions in a background area of the main image according to the depth information and a focus area;
    determining a distribution orientation of the different sub-regions according to a display manner of the main image, and determining blurring weights of the different sub-regions according to a weight-setting policy corresponding to the distribution orientation;
    determining target blurring intensities of the different sub-regions according to the original blurring intensities of the different sub-regions and the corresponding blurring weights; and
    blurring the background area of the main image according to the target blurring intensities of the different sub-regions.
  2. The method according to claim 1, wherein determining the original blurring intensities of the different sub-regions in the background area of the main image according to the depth information and the focus area comprises:
    determining first depth information of a foreground area and second depth information of the background area of the main image according to the depth information and the focus area;
    acquiring average depth information of the different sub-regions in the background area of the main image according to the second depth information; and
    acquiring the original blurring intensities of the different sub-regions according to the first depth information and the average depth information of the different sub-regions.
  3. The method according to claim 1 or 2, wherein, when the distribution orientation of the different sub-regions is an up-down orientation, determining the blurring weights of the different sub-regions according to the weight-setting policy corresponding to the distribution orientation comprises:
    determining the blurring weights of the different sub-regions according to a top-to-bottom decreasing-weight strategy.
  4. The method according to claim 3, wherein determining the blurring weights of the different sub-regions according to the top-to-bottom decreasing-weight strategy comprises:
    acquiring positioning coordinates of the different sub-regions; and
    determining the blurring weights of the different sub-regions by querying, according to the positioning coordinates, a preset linear weight-distribution curve or nonlinear weight-distribution curve.
  5. The method according to any one of claims 1-4, wherein determining the target blurring intensities of the different sub-regions according to the original blurring intensities of the different sub-regions and the corresponding blurring weights comprises:
    determining the target blurring intensities of the different sub-regions by obtaining the product of the original blurring intensity of each of the different sub-regions and the corresponding blurring weight.
  6. The method according to any one of claims 1-5, wherein blurring the background area of the main image according to the target blurring intensities of the different sub-regions comprises:
    determining a blurring coefficient of each pixel in the background area according to the target blurring intensities of the different sub-regions and the depth information of the pixels in the different sub-regions; and
    performing Gaussian blurring on the background area according to the blurring coefficient of each pixel in the background area to generate a blurred photograph.
  7. A background blurring processing apparatus, comprising:
    a first acquiring module, configured to acquire a main image captured by a main camera and a sub-image captured by a sub-camera, and to acquire depth information of the main image according to the main image and the sub-image;
    a first determining module, configured to determine original blurring intensities of different sub-regions in a background area of the main image according to the depth information and a focus area;
    a second determining module, configured to determine a distribution orientation of the different sub-regions according to a display manner of the main image, and to determine blurring weights of the different sub-regions according to a weight-setting policy corresponding to the distribution orientation;
    a third determining module, configured to determine target blurring intensities of the different sub-regions according to the original blurring intensities of the different sub-regions and the corresponding blurring weights; and
    a processing module, configured to blur the background area of the main image according to the target blurring intensities of the different sub-regions.
  8. The apparatus according to claim 7, wherein the first determining module comprises:
    a first determining unit, configured to determine, according to the depth information and the focus area, first depth information of a foreground area and second depth information of the background area of the main image;
    a first acquiring unit, configured to acquire average depth information of the different sub-regions in the background area of the main image according to the second depth information; and
    a second acquiring unit, configured to acquire the original blurring intensities of the different sub-regions according to the first depth information and the average depth information of the different sub-regions.
  9. The apparatus according to claim 7 or 8, wherein, when the distribution orientation of the different sub-regions is an up-down orientation, the second determining module is specifically configured to:
    determine the blurring weights of the different sub-regions according to a top-to-bottom decreasing-weight strategy.
  10. The apparatus according to claim 9, wherein the second determining module is specifically configured to:
    acquire positioning coordinates of the different sub-regions; and
    determine the blurring weights of the different sub-regions by querying, according to the positioning coordinates, a preset linear weight-distribution curve or nonlinear weight-distribution curve.
  11. The apparatus according to any one of claims 7-10, wherein the third determining module is specifically configured to:
    determine the target blurring intensities of the different sub-regions by obtaining the product of the original blurring intensity of each of the different sub-regions and the corresponding blurring weight.
  12. The apparatus according to any one of claims 7-11, wherein the processing module is specifically configured to:
    determine a blurring coefficient of each pixel in the background area according to the target blurring intensities of the different sub-regions and the depth information of the pixels in the different sub-regions; and
    perform Gaussian blurring on the background area according to the blurring coefficient of each pixel in the background area to generate a blurred photograph.
  13. A computer device, comprising a memory, a processor, and a computer program stored on the memory and runnable on the processor, wherein the processor, when executing the program, implements the background blurring processing method according to any one of claims 1-6.
  14. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the background blurring processing method according to any one of claims 1-6.
  15. An image processing circuit, comprising: an ISP processor, wherein the ISP processor comprises a main camera and a sub-camera each having an image sensor,
    the ISP processor being configured to acquire, through the interface of the image sensors, the main image captured by the main camera and the sub-image captured by the sub-camera, acquire the depth information of the main image according to the main image and the sub-image, determine the original blurring intensities of different sub-regions in the background area of the main image according to the depth information and the focus area, determine the distribution orientation of the different sub-regions according to the display manner of the main image and the blurring weights of the different sub-regions according to the weight-setting policy corresponding to the distribution orientation, determine the target blurring intensities of the different sub-regions according to the original blurring intensities of the different sub-regions and the corresponding blurring weights, and blur the background area of the main image according to the target blurring intensities of the different sub-regions.
PCT/CN2018/116475 2017-11-30 2018-11-20 Background blur processing method, apparatus and device WO2019105261A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711242134.4 2017-11-30
CN201711242134.4A CN108053363A (zh) 2017-11-30 2017-11-30 Background blur processing method, apparatus and device

Publications (1)

Publication Number Publication Date
WO2019105261A1 true WO2019105261A1 (zh) 2019-06-06

Family

ID=62121994

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/116475 WO2019105261A1 (zh) 2017-11-30 2018-11-20 Background blur processing method, apparatus and device

Country Status (2)

Country Link
CN (1) CN108053363A (zh)
WO (1) WO2019105261A1 (zh)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108053363A (zh) 2017-11-30 2018-05-18 Background blur processing method, apparatus and device
CN110555809B (zh) * 2018-06-04 2022-03-15 Background blurring method based on foreground image and electronic device
CN109191469A (zh) * 2018-08-17 2019-01-11 Automatic image focusing method, apparatus, device, and readable storage medium
CN112889265B (zh) * 2018-11-02 2022-12-09 Depth image processing method, depth image processing apparatus, and electronic apparatus
CN111539960B (zh) * 2019-03-25 2023-10-24 Image processing method and related device
CN112785487B (zh) * 2019-11-06 2023-08-04 Image processing method and apparatus, storage medium, and electronic device
CN114514735B (zh) * 2019-12-09 2023-10-03 Electronic device and method for controlling electronic device
CN112040203B (zh) * 2020-09-02 2022-07-05 Computer storage medium, terminal device, image processing method and apparatus
CN113066001A (zh) * 2021-02-26 2021-07-02 Image processing method and related device
CN114339071A (zh) * 2021-12-28 2022-04-12 Image processing circuit, image processing method, and electronic device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106993112A (zh) * 2017-03-09 2017-07-28 Depth-of-field-based background blurring method and apparatus, and electronic apparatus
CN107395965A (zh) * 2017-07-14 2017-11-24 Image processing method and mobile terminal
CN108053363A (zh) * 2017-11-30 2018-05-18 Background blur processing method, apparatus and device


Also Published As

Publication number Publication date
CN108053363A (zh) 2018-05-18

Similar Documents

Publication Publication Date Title
WO2019105261A1 (zh) Background blur processing method, apparatus and device
WO2019105262A1 (zh) Background blur processing method, apparatus and device
JP7003238B2 (ja) Image processing method, apparatus, and device
CN107945105B (zh) Background blur processing method, apparatus and device
US10757312B2 (en) Method for image-processing and mobile terminal using dual cameras
JP6911192B2 (ja) Image processing method, apparatus and device
US9998650B2 (en) Image processing apparatus and image pickup apparatus for adding blur in an image according to depth map
WO2020259271A1 (zh) Image distortion correction method and apparatus
WO2019109805A1 (zh) Image processing method and apparatus
EP3480784B1 (en) Image processing method, and device
WO2019105214A1 (zh) Image blurring method and apparatus, mobile terminal, and storage medium
KR102143456B1 (ko) Depth information acquisition method and apparatus, and image acquisition device
WO2019105254A1 (zh) Background blur processing method, apparatus and device
CN108154514B (zh) Image processing method, apparatus and device
WO2019105297A1 (zh) Image blurring processing method and apparatus, mobile device, and storage medium
WO2019105260A1 (zh) Depth-of-field acquisition method, apparatus, and device
CN107872631B (zh) Dual-camera-based image photographing method and apparatus, and mobile terminal
JP2016208075A (ja) Image output apparatus, control method therefor, imaging apparatus, and program
US20230033956A1 (en) Estimating depth based on iris size
CN117058183A (zh) Dual-camera-based image processing method and apparatus, electronic device, and storage medium
WO2022011657A1 (zh) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN112866547A (zh) Focusing method and apparatus, electronic device, and computer-readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18884721

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18884721

Country of ref document: EP

Kind code of ref document: A1