WO2019105297A1 - Image blurring processing method and apparatus, mobile device, and storage medium - Google Patents

Image blurring processing method and apparatus, mobile device, and storage medium

Info

Publication number
WO2019105297A1
WO2019105297A1 (PCT/CN2018/117195, CN2018117195W)
Authority
WO
WIPO (PCT)
Prior art keywords
depth
image
current
target image
frame rate
Prior art date
Application number
PCT/CN2018/117195
Other languages
English (en)
French (fr)
Inventor
谭国辉
杜成鹏
Original Assignee
Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Publication of WO2019105297A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80: Camera processing pipelines; Components thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/62: Control of parameters via user interfaces
    • H04N23/95: Computational photography systems, e.g. light-field imaging systems
    • H04N23/951: Computational photography systems by using two or more images to influence resolution, frame rate or aspect ratio
    • H04N23/958: Computational photography systems for extended depth of field imaging
    • H04N23/959: Computational photography systems for extended depth of field imaging by adjusting depth of field during image capture, e.g. maximising or setting range based on scene characteristics

Definitions

  • The present application relates to the field of image processing technologies, and in particular to an image blurring processing method and apparatus, a mobile device, and a storage medium.
  • Typically, while a photograph is being taken, the mobile device housing the imaging device or the photographed subject moves. Because the blurring process requires a depth-of-field calculation, and that calculation is time-consuming, the depth of field must be recomputed whenever the mobile device or the subject moves, and the processing speed of the processor may not keep up with the movement speed of the mobile device or the subject. As a result, the depth of field cannot be determined in time, the blurring effect follows the scene poorly, and the user experience is poor.
  • The present application provides an image blurring processing method and apparatus, a mobile device, and a storage medium.
  • A current depth-of-field calculation frame rate is determined according to the current motion speed of the mobile device, and when the current preview image is determined to be a target image according to that frame rate, the current preview image is blurred according to the depth information of the background region of the target image, which improves the followability of the blurring effect and improves the user experience.
  • An embodiment of the present application provides an image blurring processing method applied to a mobile device including a camera assembly, including: determining a current motion speed of the mobile device; determining a current depth-of-field calculation frame rate according to the current motion speed; determining, according to the depth-of-field calculation frame rate, whether the current preview image is a target image; if so, acquiring depth information of a background region of the target image; and blurring the current preview image according to the depth information.
  • Another embodiment of the present application provides an image blurring processing apparatus applied to a mobile device including a camera assembly, including: a first determining module configured to determine a current motion speed of the mobile device; a second determining module configured to determine a current depth-of-field calculation frame rate according to the current motion speed; a judging module configured to determine, according to the depth-of-field calculation frame rate, whether the current preview image is a target image; a first acquiring module configured to acquire depth information of a background region of the target image when the current preview image is the target image; and a first processing module configured to blur the current preview image according to the depth information.
  • A further embodiment of the present application provides a mobile device including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, the image blurring processing method described in the first aspect is implemented.
  • A further embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements an image blurring processing method as described in the above embodiments of the present application.
  • A further embodiment of the present application provides a computer program which, when executed by a processor, implements an image blurring processing method as described in the above embodiments of the present application.
  • The current depth-of-field calculation frame rate is determined according to the current motion speed of the mobile device, and when the current preview image is determined to be a target image according to that frame rate, the current preview image is blurred according to the depth information of the background region of the target image, which improves the followability of the blurring effect and improves the user experience.
  • FIG. 1 is a flow chart of an image blurring processing method according to an embodiment of the present application.
  • FIG. 2 is a flowchart of an image blurring processing method according to another embodiment of the present application.
  • FIG. 3 is a schematic diagram of an image blurring processing method according to an embodiment of the present application.
  • FIG. 4 is a diagram showing an example of an image blurring processing method according to another embodiment of the present application.
  • FIG. 5 is a flowchart of an image blurring processing method according to another embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of an image blurring processing apparatus according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of an image processing circuit in accordance with one embodiment of the present application.
  • To address the problem in the related art that the processing speed of the processor may not keep up with the movement speed of the mobile device or the photographed subject, so that the depth of field cannot be determined in time, the blurring effect follows poorly, and the user experience is poor, an image blurring processing method is proposed.
  • The image blurring processing method provided by the embodiments of the present application determines the current depth-of-field calculation frame rate according to the current motion speed of the mobile device and, when the current preview image is determined to be a target image according to that frame rate, blurs the current preview image according to the depth information of the background region of the target image, which improves the followability of the blurring effect and improves the user experience.
  • FIG. 1 is a flow chart of an image blurring processing method according to an embodiment of the present application.
  • As shown in FIG. 1, the image blurring processing method is applied to a mobile device including a camera assembly, and the method includes:
  • Step 101: determine the current motion speed of the mobile device.
  • The executor of the image blurring processing method provided by the embodiments of the present application is the image blurring processing apparatus provided by the embodiments of the present application; the apparatus may be configured in a mobile device including a camera assembly to blur captured images.
  • There are many types of mobile devices, such as mobile phones, tablets, and notebook computers.
  • Optionally, the current motion speed of the mobile device may be determined by a sensor provided in the mobile device, such as a gyroscope, an accelerometer, or a speed sensor.
  • Step 102: determine a current depth-of-field calculation frame rate according to the current motion speed.
  • Step 103: determine, according to the depth-of-field calculation frame rate, whether the current preview image is a target image.
  • It can be understood that while the mobile device moves, the camera module continuously captures images, i.e., the captured images form a multi-frame sequence. In the related art, when the captured images are blurred, a depth-of-field calculation must be performed for every frame; because the calculation is time-consuming, the processing speed of the processor may not keep up with the movement speed of the mobile device or the photographed subject while the mobile device moves, so the depth of field cannot be determined in time and the blurring effect follows poorly.
  • To solve this, the depth-of-field calculation need not be performed on every frame captured by the camera assembly. Instead, the current depth-of-field calculation frame rate is determined according to the current motion speed of the mobile device, and according to that frame rate target images are extracted from the captured multi-frame sequence for depth-of-field calculation; for frames other than target images, the depth-of-field result of the most recently extracted target image is used directly. This reduces the time spent on depth-of-field calculation, improves the followability of the blurring effect, and improves the user experience.
  • The depth-of-field calculation frame rate may refer to the frame interval at which target images are extracted from the captured sequence. For example, if the depth-of-field calculation frame rate is 2 and the first extracted target image is the first frame, the second extracted target image is the fourth frame.
  • Optionally, a correspondence between the motion speed of the mobile device and the depth-of-field calculation frame rate may be preset, so that after the current motion speed of the mobile device is determined, the current depth-of-field calculation frame rate may be determined according to the preset correspondence. When setting this correspondence, the principle may be that the faster the motion speed, the larger the corresponding depth-of-field calculation frame rate, i.e., the frame rate is set in proportion to the motion speed.
  • Step 104: if so, acquire depth information of the background region of the target image.
  • Step 105: blur the current preview image according to the depth information.
  • The background region refers to the regions other than the region where the photographed subject is located.
  • Optionally, if the current preview image is a target image, the depth information of the background region of the target image may be acquired, and the blur level is determined according to the depth information so as to blur the current preview image.
  • The background region may contain different people or objects, and the depth data corresponding to different people or objects may differ, so the depth information of the background region may be a single value or a range of values. When the depth information of the background region is a single value, the value may be obtained by averaging the depth data of the background region, or by taking the median of the depth data of the background region.
  • Different depth ranges may be preset to correspond to different blur levels, so that after the depth information of the background region of the target image is determined, the corresponding blur level may be determined according to the determined depth information and the preset correspondence, so as to blur the current preview image.
  • As an optional implementation, a Gaussian kernel function may be used to blur the current preview image.
  • The Gaussian kernel can be regarded as a weight matrix: by using the weight matrix to compute Gaussian blur values for the pixels in the current preview image, the current preview image can be blurred. When computing the Gaussian blur value of a pixel, the pixel to be computed is taken as the center pixel, the weight matrix is used to weight the pixel values of the pixels surrounding the center pixel, and the Gaussian blur value of the pixel is finally obtained.
  • Computing Gaussian blur values for the same pixels with different weight matrices yields different degrees of blurring. The weight matrix is related to the variance of the Gaussian kernel function: the larger the variance, the wider the radial reach of the Gaussian kernel function, the stronger the smoothing, and hence the higher the degree of blur. Therefore, a correspondence between the blur level and the variance of the Gaussian kernel function may be set in advance, so that after the blur level of the target image is determined, the variance of the Gaussian kernel function, and hence the weight matrix, may be determined according to the preset correspondence, and the current preview image may be blurred to the corresponding degree.
  • When the background region of the current preview image is blurred, since the background region may contain different people or objects, the gradient of the depth information of the background region may be large; for example, the depth data of one part of the background region may be very large while that of another part is very small. If the entire background region is blurred at the same blur level, the blurring effect may look unnatural. Therefore, in the embodiments of the present application, the background region may further be divided into different regions, and different levels of blurring may be applied to different regions.
  • Optionally, the background region may be divided into multiple regions according to its depth information, with the span of each region's depth range increasing with the depth position of the region, so that different regions are blurred to different degrees. This makes the blurring effect of the image more natural and closer to an optical focusing effect, and enhances the user's visual experience.
  • In one possible implementation, after target images are extracted from the captured sequence according to the current depth-of-field calculation frame rate, if the current preview image is not a target image, the current preview image may be blurred in either of the following ways.
  • Way 1: blur the current preview image according to the depth information of the target image preceding the current preview image.
  • Optionally, the blur level may be determined according to the depth information of the target image preceding the current preview image, so as to blur the current preview image.
  • For example, suppose the depth-of-field calculation frame rate is 2, so that the first frame, the fourth frame, and so on are extracted as target images. When the current preview image is the first frame, the blur level can be determined from the depth information of the background region of the first frame, and the first frame can be blurred accordingly. When the current preview image is the second frame, which is not a target image, the second frame can be blurred according to the blur level determined from the preceding target image, i.e., from the depth information of the background region of the first frame.
  • Way 2: determine a first blur level according to the current motion speed, and blur the current preview image according to the first blur level.
  • Optionally, a correspondence between motion speed and blur level may be preset, so that when the current preview image is not a target image, the first blur level may be determined according to the current motion speed and the preset correspondence, and the current preview image may be blurred according to the first blur level. When setting this correspondence, the principle may be that the faster the motion speed of the mobile device, the lower the degree of blur of the corresponding blur level, i.e., the degree of blur is set in inverse proportion to the speed of the mobile device.
  • For example, suppose it is preset that a motion speed below 0.5 meters per second (m/s) corresponds to blur level A and a motion speed of 0.5 m/s or more corresponds to blur level B, and that the depth-of-field calculation frame rate is 2, so the first frame, the fourth frame, and so on are extracted as target images. When the current preview image is the second frame and the current motion speed is 0.4 m/s, since the second frame is not a target image, blur level A can be determined according to the current motion speed and the preset correspondence, and the second frame can be blurred at level A.
  • When the current preview image is not a target image, it is also possible to determine a blur level according to the depth information of the target image preceding the current preview image, determine a first blur level according to the current motion speed, and then blur the current preview image according to the lower of the two blur levels.
  • In the related art, blurring the captured images requires a depth-of-field calculation for every frame, which consumes considerable power. In the embodiments of the present application, the current depth-of-field calculation frame rate is determined according to the current motion speed of the mobile device, and target images are extracted from the captured multi-frame sequence according to that frame rate for depth-of-field calculation, which reduces the power consumption of the blurring process.
  • The image blurring processing method provided by the embodiments of the present application determines the current depth-of-field calculation frame rate according to the current motion speed of the mobile device and, when the current preview image is determined to be a target image according to that frame rate, blurs the current preview image according to the depth information of the background region of the target image, which improves the followability of the blurring effect and improves the user experience.
  • From the above analysis, the current depth-of-field calculation frame rate can be determined according to the current motion speed of the mobile device, so that when the current preview image is determined to be a target image according to that frame rate, the current preview image is blurred according to the depth information of the background region of the target image.
  • In one possible implementation, the depth-of-field calculation processing speed of the mobile device may also be taken into account when determining the current depth-of-field calculation frame rate.
  • FIG. 2 is a flow chart of an image blurring processing method according to another embodiment of the present application.
  • As shown in FIG. 2, the image blurring processing method includes:
  • Step 201: determine the current motion speed of the mobile device.
  • Optionally, the current motion speed of the mobile device can be determined by sensors provided in the mobile device, such as a gyroscope, an accelerometer, or a speed sensor.
  • Step 202: determine an initial depth-of-field calculation frame rate according to the depth-of-field calculation processing speed of the mobile device.
  • Optionally, different depth-of-field calculation processing speeds may be preset to correspond to different initial depth-of-field calculation frame rates, so that after the depth-of-field calculation processing speed of the mobile device is determined, the initial depth-of-field calculation frame rate may be determined according to the determined processing speed and the preset correspondence.
  • The depth-of-field calculation processing speed of the mobile device may be determined from the processor performance of the mobile device as shipped; or, since the processing speed of the mobile device's processor may differ depending on the software running on it, the depth-of-field calculation processing speed may also be determined from the usage state of the mobile device. This is not limited here.
  • Step 203: adjust the initial depth-of-field calculation frame rate according to the current motion speed to obtain the current depth-of-field calculation frame rate.
  • Step 204: determine, according to the current depth-of-field calculation frame rate, whether the current preview image is a target image.
  • Optionally, step 203 can be replaced by the following steps:
  • Step 203a: determine whether the current motion speed of the mobile device is greater than a threshold; if so, perform step 203b; otherwise, perform step 203c.
  • Step 203b: increase the initial depth-of-field calculation frame rate.
  • Step 203c: use the initial depth-of-field calculation frame rate as the current depth-of-field calculation frame rate.
  • The threshold can be set as needed.
  • If the current motion speed of the mobile device is greater than the threshold, the initial depth-of-field calculation frame rate may be increased; if the current motion speed is less than or equal to the threshold, the initial depth-of-field calculation frame rate is used as the current depth-of-field calculation frame rate.
  • The amount by which the initial depth-of-field calculation frame rate is increased may be determined from the difference between the current motion speed of the mobile device and the threshold: the larger the difference, the larger the increase; the smaller the difference, the smaller the increase.
  • In this way, the faster the current motion speed of the mobile device, the larger the current depth-of-field calculation frame rate.
  • Step 205: if so, acquire depth information of the background region of the target image.
  • Step 206: blur the current preview image according to the depth information.
  • Optionally, step 205 may include:
  • Step 205a: determine image depth information of the target image according to the target image and the corresponding depth image.
  • The target image is an RGB color image, and the depth image contains the depth information of each person or object in the target image.
  • A depth camera can be used to obtain the depth image; depth cameras include those based on structured-light depth ranging and those based on time-of-flight (TOF) ranging.
  • Since the color information of the target image corresponds one-to-one with the depth information of the depth image, the image depth information of the target image can be acquired from the depth image.
  • Step 205b: determine the background region of the target image according to the image depth information.
  • Optionally, the foremost point of the target image may be obtained from the image depth information; the foremost point corresponds to the leading edge of the subject. Spreading outward from the foremost point, the regions adjoining it whose depth varies continuously are obtained, and these regions are merged with the foremost point into the region where the subject is located; the regions of the target image other than the subject form the background region.
  • Step 205c: determine the depth information of the background region according to the correspondence between the color information of the background region and the depth information of the depth image.
  • In one possible implementation, the target image may contain a portrait.
  • In that case, the following method may be used to determine the background region of the target image and then the depth information of the background region. That is, before the depth information of the background region of the target image is acquired in step 205, the method may further include: performing face recognition on the target image to determine the face region contained in the target image; acquiring depth information of the face region; determining the portrait region according to the current posture of the mobile device and the depth information of the face region; and segmenting the target image according to the portrait region to determine the background region.
  • Optionally, a trained deep learning model may first be used to recognize the face region contained in the target image, and the depth information of the face region may then be determined according to the correspondence between the target image and the depth image.
  • Since the face region includes features such as the nose, eyes, ears, and lips, the depth data corresponding to the individual features differ in the depth image; for example, in a depth image captured with the face directly facing the depth camera, different facial features lie at different depths.
  • Accordingly, the depth information of the face region may be a single value or a range of values; when it is a single value, the value may be obtained by averaging the depth data of the face region, or by taking the median of the depth data of the face region.
  • Since the portrait region contains the face region, i.e., the portrait region and the face region lie within the same depth range, the depth range of the portrait region can be set according to the depth information of the face region once the latter is determined, and the region that falls within that depth range and is connected to the face region can then be extracted to obtain the portrait region.
  • In the camera assembly of the mobile device, the image sensor includes multiple photosensitive cells, each corresponding to one pixel, and the camera assembly is fixed relative to the mobile device; therefore, when the mobile device captures images in different postures, the same point on the photographed subject corresponds to different pixels on the image sensor.
  • For example, suppose the elliptical regions in FIG. 3 and FIG. 4 are the regions occupied by the subject when the mobile terminal captures an image in portrait orientation and in landscape orientation, respectively.
  • In portrait orientation, points a and b on the subject correspond to pixel 10 and pixel 11, respectively; in landscape orientation, points a and b correspond to pixel 11 and pixel 8, respectively.
  • Then, when the region containing point b that falls within a known depth range needs to be extracted, if the mobile device is in portrait orientation, the extraction must proceed, given the positional relationship of points a and b, from pixel 10 toward pixel 11; if the mobile device is in landscape orientation, it must proceed from pixel 11 toward pixel 8. That is, once a region has been determined, when other regions falling within a given depth range need to be extracted, the extraction direction differs with the posture of the mobile device.
  • Therefore, after the depth range of the portrait region is set according to the depth information of the face region, when the region falling within that depth range and connected to the face region is extracted, the current posture of the mobile device can be used to determine in which direction to extract the region that is connected to the face and falls within the set depth range, so that the portrait region is determined more quickly.
  • Optionally, once the portrait region is determined, the target image may be segmented according to the portrait region, the regions other than the portrait region are determined to be the background region, and the depth information of the background region is then determined from the correspondence between the color information of the background region and the depth information of the depth image.
  • Once the depth information of the background region of the target image is determined, the current preview image can be blurred according to the depth information.
  • In the image blurring processing method provided by this embodiment, after the current motion speed of the mobile device is determined, the initial depth-of-field calculation frame rate is determined according to the depth-of-field calculation processing speed of the mobile device and is then adjusted according to the current motion speed to obtain the current depth-of-field calculation frame rate; whether the current preview image is a target image is then determined according to the current frame rate, and if so, the depth information of the background region of the target image is acquired so that the current preview image can be blurred according to the depth information.
  • Thus, the current depth-of-field calculation frame rate is determined according to both the current motion speed of the mobile device and the depth-of-field calculation processing speed of the mobile device, and when the current preview image is determined to be a target image according to that frame rate, the current preview image is blurred according to the depth information of the background region of the target image, which improves the followability of the blurring effect and improves the user experience.
  • From the above analysis, the current depth-of-field calculation frame rate can be determined according to the current motion speed of the mobile device, so that when the current preview image is determined to be a target image according to that frame rate, the blur level is determined according to the depth information of the background region of the target image so as to blur the preview image.
  • In one possible implementation, when the current preview image is a target image, the current motion speed of the mobile device may also be taken into account when determining the blur level used to blur the current preview image.
  • FIG. 5 is a flowchart of an image blurring processing method according to another embodiment of the present application.
  • As shown in FIG. 5, the image blurring processing method includes:
  • Step 301: determine the current motion speed of the mobile device.
  • Step 302: determine a current depth-of-field calculation frame rate according to the current motion speed.
  • Step 303: determine, according to the depth-of-field calculation frame rate, whether the current preview image is a target image.
  • Step 304: if so, acquire depth information of the background region of the target image.
  • Step 305: determine a first blur level according to the current motion speed.
  • Optionally, a correspondence between the motion speed of the mobile device and the blur level may be preset, so that after the current motion speed of the mobile device is determined, the first blur level may be determined according to the preset correspondence. When setting this correspondence, the principle may be that the faster the motion speed, the lower the degree of blur of the corresponding blur level, i.e., the degree of blur is set in inverse proportion to the speed of the mobile device.
  • Step 306: determine a second blur level according to the depth information of the background region of the target image.
  • Optionally, different depth ranges may be preset to correspond to different blur levels, so that after the depth information of the background region of the target image is determined, the second blur level may be determined according to the determined depth information and the preset correspondence.
  • Step 307: blur the preview image according to whichever of the second blur level and the first blur level has the lower degree of blur.
  • Optionally, once the two levels are determined, the preview image may be blurred according to whichever of the second blur level and the first blur level has the lower degree of blur.
  • Alternatively, the second blur level may first be determined according to the depth information of the background region of the target image and then adjusted according to the current motion speed of the mobile device: if the current motion speed of the mobile device is high, the degree of blur of the second blur level is reduced to obtain the final blur level, and the preview image is blurred according to the final blur level.
  • The image blurring processing method provided by this embodiment of the present application reduces the depth-of-field calculation time and the power consumption of the blurring process by extracting target images for depth-of-field calculation, improving the followability of the blurring effect and the user experience.
  • Moreover, the blur level is determined according to both the current motion speed of the mobile device and the depth information of the background region of the target image, and the degree of blur is reduced as the motion speed of the mobile device increases. This reduces the gap in blur between the unblurred subject region and the blurred background region, thereby masking the poor followability of the blurring effect when the mobile device moves.
  • To implement the above embodiments, the present application also proposes an image blurring processing apparatus.
  • FIG. 6 is a schematic structural diagram of an image blurring processing apparatus according to an embodiment of the present application.
  • As shown in FIG. 6, the image blurring processing apparatus is applied to a mobile device including a camera assembly, and includes:
  • a first determining module 41 configured to determine the current motion speed of the mobile device;
  • a second determining module 42 configured to determine a current depth-of-field calculation frame rate according to the current motion speed;
  • a judging module 43 configured to determine, according to the depth-of-field calculation frame rate, whether the current preview image is a target image;
  • a first acquiring module 44 configured to acquire depth information of the background region of the target image when the current preview image is the target image; and
  • a first processing module 45 configured to blur the current preview image according to the depth information.
  • Optionally, the image blurring processing apparatus can perform the image blurring processing method provided by the embodiments of the present application; the apparatus may be configured in a mobile device including a camera assembly to blur captured images.
  • There are many types of mobile devices, such as mobile phones, tablets, and notebook computers; FIG. 6 takes a mobile phone as an example of the mobile device.
  • In an embodiment of the present application, the apparatus further includes:
  • a second processing module configured to blur the current preview image according to the depth information of the target image preceding the current preview image when the current preview image is not the target image;
  • or, a third processing module configured to determine a first blur level according to the current motion speed when the current preview image is not the target image, and blur the current preview image according to the first blur level.
  • In another embodiment, the apparatus further includes:
  • a third determining module configured to determine an initial depth-of-field calculation frame rate according to the depth-of-field calculation processing speed of the mobile device.
  • The second determining module 42 is specifically configured to:
  • adjust the initial depth-of-field calculation frame rate according to the current motion speed to obtain the current depth-of-field calculation frame rate.
  • In another embodiment, the second determining module 42 is further configured to:
  • determine whether the current motion speed of the mobile device is greater than a threshold; if so, increase the initial depth-of-field calculation frame rate; otherwise, use the initial depth-of-field calculation frame rate as the current depth-of-field calculation frame rate.
  • In another embodiment, the target image may contain a portrait, and correspondingly the apparatus may further include:
  • a fourth determining module configured to perform face recognition on the target image and determine the face region contained in the target image;
  • a second acquiring module configured to acquire depth information of the face region;
  • a fifth determining module configured to determine the portrait region according to the current posture of the mobile device and the depth information of the face region; and
  • a sixth determining module configured to segment the target image into regions according to the portrait region and determine the background region.
  • In another embodiment, the apparatus may further include:
  • a seventh determining module configured to determine a first blur level according to the current motion speed;
  • an eighth determining module configured to determine a second blur level according to the depth information of the background region of the target image; and
  • a fourth processing module configured to blur the preview image according to whichever of the second blur level and the first blur level has the lower degree of blur.
  • The division of the modules in the above image blurring processing apparatus is for illustration only; in other embodiments, the image blurring processing apparatus may be divided into different modules as needed to complete all or some of its functions.
  • In summary, the image blurring processing apparatus of the embodiments of the present application determines the current depth-of-field calculation frame rate according to the current motion speed of the mobile device and, when the current preview image is determined to be a target image according to that frame rate, blurs the current preview image according to the depth information of the background region of the target image, which improves the followability of the blurring effect and improves the user experience.
  • To implement the above embodiments, the present application further provides a mobile device including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, the image blurring processing method of the first aspect is implemented.
  • The above mobile device may further include an image processing circuit; the image processing circuit may be implemented with hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline.
  • FIG. 7 is a schematic illustration of the image processing circuit in one embodiment. As shown in FIG. 7, for ease of description, only the aspects of the image processing technique related to the embodiments of the present application are shown.
  • As shown in FIG. 7, the image processing circuit includes an ISP processor 540 and control logic 550.
  • Image data captured by the camera assembly 510 is first processed by the ISP processor 540, which analyzes the image data to capture image statistics that can be used to determine and/or control one or more control parameters of the camera assembly 510.
  • The camera assembly 510 can include a camera having one or more lenses 512 and an image sensor 514.
  • The image sensor 514 may include a color filter array (such as a Bayer filter); the image sensor 514 may acquire the light intensity and wavelength information captured by each of its imaging pixels and provide a set of raw image data that can be processed by the ISP processor 540.
  • The sensor 520 can provide the raw image data to the ISP processor 540 based on the sensor 520 interface type; the sensor 520 interface may be an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of the above.
  • The ISP processor 540 processes the raw image data pixel by pixel in a variety of formats.
  • For example, each image pixel can have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 540 can perform one or more image processing operations on the raw image data and collect statistical information about the image data; the image processing operations can be performed at the same or different bit-depth precision.
  • The ISP processor 540 can also receive pixel data from the image memory 530; for example, raw pixel data is sent from the sensor 520 interface to the image memory 530, and the raw pixel data in the image memory 530 is then provided to the ISP processor 540 for processing.
  • The image memory 530 can be part of a memory device, a storage device, or a separate dedicated memory within an electronic device, and can include a DMA (Direct Memory Access) feature.
  • Upon receiving raw image data from the sensor 520 interface or from the image memory 530, the ISP processor 540 can perform one or more image processing operations, such as temporal filtering.
  • The processed image data can be sent to the image memory 530 for additional processing before being displayed.
  • The ISP processor 540 receives the processed data from the image memory 530 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces.
  • The processed image data can be output to the display 570 for viewing by a user and/or for further processing by a graphics engine or GPU (Graphics Processing Unit); additionally, the output of the ISP processor 540 can be sent to the image memory 530, and the display 570 can read image data from the image memory 530.
  • In one embodiment, the image memory 530 can be configured to implement one or more frame buffers.
  • The output of the ISP processor 540 can also be sent to an encoder/decoder 560 to encode/decode the image data.
  • The encoded image data can be saved and decompressed before being displayed on the display 570 device.
  • The encoder/decoder 560 can be implemented by a CPU, a GPU, or a coprocessor.
  • The statistics determined by the ISP processor 540 can be sent to the control logic 550 unit.
  • For example, the statistics may include image sensor 514 statistics such as auto exposure, auto white balance, auto focus, flicker detection, black level compensation, and lens 512 shading correction.
  • The control logic 550 can include a processor and/or microcontroller that executes one or more routines (such as firmware), and the one or more routines can determine, based on the received statistical data, the control parameters of the camera assembly 510 as well as the ISP control parameters.
  • For example, the control parameters may include sensor 520 control parameters (e.g., gain, integration time for exposure control), camera flash control parameters, lens 512 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters.
  • The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), as well as lens 512 shading correction parameters.
  • To implement the above embodiments, the present application also proposes a computer-readable storage medium; when the instructions in the storage medium are executed by a processor, the image blurring processing method described in the above embodiments can be performed.
  • To implement the above embodiments, the present application also proposes a computer program; when the computer program is executed by a processor, the image blurring processing method described in the above embodiments can be performed.
  • The terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of the technical features indicated.
  • Thus, features defined with "first" or "second" may explicitly or implicitly include at least one such feature.
  • In the description of the present application, "a plurality" means at least two, such as two or three, unless specifically defined otherwise.
  • a "computer-readable medium” can be any apparatus that can contain, store, communicate, propagate, or transport a program for use in an instruction execution system, apparatus, or device, or in conjunction with the instruction execution system, apparatus, or device.
  • computer readable media include the following: electrical connections (electronic devices) having one or more wires, portable computer disk cartridges (magnetic devices), random access memory (RAM), Read only memory (ROM), erasable editable read only memory (EPROM or flash memory), fiber optic devices, and portable compact disk read only memory (CDROM).
  • the computer readable medium may even be a paper or other suitable medium on which the program can be printed, as it may be optically scanned, for example by paper or other medium, followed by editing, interpretation or, if appropriate, other suitable The method is processed to obtain the program electronically and then stored in computer memory.
  • Portions of the present application can be implemented in hardware, software, firmware, or a combination thereof.
  • In the above embodiments, multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system.
  • For example, if implemented in hardware, as in another embodiment, they can be implemented by any one or a combination of the following techniques well known in the art: discrete logic circuits with logic gates for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
  • Each functional unit in the embodiments of the present application may be integrated into one processing module, or each unit may exist physically separately, or two or more units may be integrated into one module.
  • The above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • The integrated modules, if implemented in the form of software functional modules and sold or used as stand-alone products, may also be stored in a computer-readable storage medium.
  • The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like. While the embodiments of the present application have been shown and described above, it is to be understood that the above embodiments are exemplary and are not to be construed as limiting the present application; within the scope of the present application, a person of ordinary skill in the art may make variations, modifications, substitutions, and alterations to the embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The present application provides an image blurring processing method and apparatus, a mobile device, and a storage medium. The image blurring processing method is applied to a mobile device including a camera assembly and includes: determining a current motion speed of the mobile device; determining a current depth-of-field calculation frame rate according to the current motion speed; determining, according to the depth-of-field calculation frame rate, whether the current preview image is a target image; if so, acquiring depth information of a background region of the target image; and blurring the current preview image according to the depth information. This improves the followability of the blurring effect and improves the user experience.

Description

Image blurring processing method and apparatus, mobile device, and storage medium
Cross-Reference to Related Applications
This application claims priority to Chinese Patent Application No. 201711240101.6, entitled "Image blurring processing method and apparatus, and mobile device", filed by Guangdong OPPO Mobile Telecommunications Corp., Ltd. on November 30, 2017.
Technical Field
The present application relates to the field of image processing technologies, and in particular to an image blurring processing method and apparatus, a mobile device, and a storage medium.
Background
With the development of technology, imaging devices such as cameras and video cameras are widely used in people's daily life, work, and study, and play an increasingly important role. When capturing an image with an imaging device, blurring the background region of the photograph is a commonly used technique for making the photographed subject stand out.
Typically, while a photograph is being taken, the mobile device housing the imaging device or the photographed subject moves. Because the blurring process requires a depth-of-field calculation, and that calculation is time-consuming, when the depth of field must be recomputed due to the movement of the mobile device or the subject, the processing speed of the processor may not keep up with the movement speed of the mobile device or the subject. As a result, the depth of field cannot be determined in time, the blurring effect follows poorly, and the user experience is poor.
Summary
The present application provides an image blurring processing method and apparatus, a mobile device, and a storage medium. A current depth-of-field calculation frame rate is determined according to the current motion speed of the mobile device, and when the current preview image is determined to be a target image according to that frame rate, the current preview image is blurred according to the depth information of the background region of the target image, which improves the followability of the blurring effect and improves the user experience.
An embodiment of the present application provides an image blurring processing method applied to a mobile device including a camera assembly, including: determining a current motion speed of the mobile device; determining a current depth-of-field calculation frame rate according to the current motion speed; determining, according to the depth-of-field calculation frame rate, whether the current preview image is a target image; if so, acquiring depth information of a background region of the target image; and blurring the current preview image according to the depth information.
Another embodiment of the present application provides an image blurring processing apparatus applied to a mobile device including a camera assembly, including: a first determining module configured to determine a current motion speed of the mobile device; a second determining module configured to determine a current depth-of-field calculation frame rate according to the current motion speed; a judging module configured to determine, according to the depth-of-field calculation frame rate, whether the current preview image is a target image; a first acquiring module configured to acquire depth information of a background region of the target image when the current preview image is the target image; and a first processing module configured to blur the current preview image according to the depth information.
A further embodiment of the present application provides a mobile device including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, the image blurring processing method described in the first aspect is implemented.
A further embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the image blurring processing method described in the above embodiments of the present application.
A further embodiment of the present application provides a computer program which, when executed by a processor, implements the image blurring processing method described in the above embodiments of the present application.
The technical solutions provided by the embodiments of the present application may have the following beneficial effects:
A current depth-of-field calculation frame rate is determined according to the current motion speed of the mobile device, and when the current preview image is determined to be a target image according to that frame rate, the current preview image is blurred according to the depth information of the background region of the target image, which improves the followability of the blurring effect and improves the user experience.
Brief Description of the Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily understood from the following description of embodiments taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a flowchart of an image blurring processing method according to an embodiment of the present application;
FIG. 2 is a flowchart of an image blurring processing method according to another embodiment of the present application;
FIG. 3 is a schematic diagram of an image blurring processing method according to an embodiment of the present application;
FIG. 4 is an example diagram of an image blurring processing method according to another embodiment of the present application;
FIG. 5 is a flowchart of an image blurring processing method according to another embodiment of the present application;
FIG. 6 is a schematic structural diagram of an image blurring processing apparatus according to an embodiment of the present application; and
FIG. 7 is a schematic diagram of an image processing circuit according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below; examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary, are intended to explain the present application, and should not be construed as limiting the present application.
The embodiments of the present application address the following problem in the related art: when a photograph is taken, the mobile device housing the imaging device or the photographed subject moves; because the blurring process requires a time-consuming depth-of-field calculation, when the depth of field must be recomputed due to such movement, the processor may not keep up with the movement speed of the mobile device or the subject, so the depth of field cannot be determined in time, the blurring effect follows poorly, and the user experience is poor. To address this, an image blurring processing method is proposed.
The image blurring processing method provided by the embodiments of the present application determines a current depth-of-field calculation frame rate according to the current motion speed of the mobile device and, when the current preview image is determined to be a target image according to that frame rate, blurs the current preview image according to the depth information of the background region of the target image, which improves the followability of the blurring effect and improves the user experience.
The image blurring processing method and apparatus, mobile device, and storage medium of the embodiments of the present application are described below with reference to the accompanying drawings.
FIG. 1 is a flowchart of an image blurring processing method according to an embodiment of the present application.
As shown in FIG. 1, the image blurring processing method is applied to a mobile device including a camera assembly, and the method includes:
Step 101: determine the current motion speed of the mobile device.
The executor of the image blurring processing method provided by the embodiments of the present application is the image blurring processing apparatus provided by the embodiments of the present application; the apparatus may be configured in a mobile device including a camera assembly to blur captured images. There are many types of mobile devices, such as mobile phones, tablets, and notebook computers.
Optionally, upon receiving a blurring instruction, the current motion speed of the mobile device may be determined by sensors provided in the mobile device, such as a gyroscope, an accelerometer, or a speed sensor.
Step 102: determine a current depth-of-field calculation frame rate according to the current motion speed.
Step 103: determine, according to the depth-of-field calculation frame rate, whether the current preview image is a target image.
It can be understood that while the mobile device moves, the camera module continuously captures images, i.e., the captured images form a multi-frame sequence. In the related art, when the captured images are blurred, a depth-of-field calculation must be performed for every frame; because the calculation is time-consuming, the processing speed of the processor may not keep up with the movement speed of the mobile device or the photographed subject while the mobile device moves, so the depth of field cannot be determined in time and the blurring effect follows poorly.
To solve the above problem, in the embodiments of the present application, the depth-of-field calculation need not be performed on every frame captured by the camera assembly. Instead, the current depth-of-field calculation frame rate is determined according to the current motion speed of the mobile device, and according to that frame rate target images are extracted from the captured multi-frame sequence for depth-of-field calculation; for frames other than target images, the depth-of-field result of the most recently extracted target image is used directly. This reduces the time spent on depth-of-field calculation, improves the followability of the blurring effect, and improves the user experience.
The depth-of-field calculation frame rate may refer to the frame interval at which target images are extracted from the captured sequence. For example, if the depth-of-field calculation frame rate is 2 and the first extracted target image is the first frame, the second extracted target image is the fourth frame.
Optionally, a correspondence between the motion speed of the mobile device and the depth-of-field calculation frame rate may be preset, so that after the current motion speed of the mobile device is determined, the current depth-of-field calculation frame rate may be determined according to the preset correspondence.
It should be noted that when setting the correspondence between the motion speed of the mobile device and the depth-of-field calculation frame rate, the principle may be that the faster the motion speed, the larger the corresponding depth-of-field calculation frame rate, i.e., the frame rate is set in proportion to the motion speed.
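As a concrete illustration of steps 102 and 103, the following is a minimal sketch in Python. The speed-to-frame-rate table, the threshold values, and the helper names are illustrative assumptions of this sketch, not values or identifiers taken from the patent:

```python
# Minimal sketch of steps 102-103: map the current motion speed to a
# depth-of-field calculation frame rate, then decide whether the current
# preview frame is a "target image" (a frame on which depth of field is
# actually computed). Table values below are illustrative assumptions.

SPEED_TO_FRAME_RATE = [   # (max speed in m/s, depth calculation frame rate)
    (0.2, 0),             # nearly still: compute depth on every frame
    (0.5, 1),             # slow motion: skip 1 frame between target images
    (float("inf"), 2),    # fast motion: skip 2 frames between target images
]

def depth_calc_frame_rate(speed_mps: float) -> int:
    """Step 102: pick the frame interval from the preset correspondence."""
    for max_speed, rate in SPEED_TO_FRAME_RATE:
        if speed_mps < max_speed:
            return rate
    return SPEED_TO_FRAME_RATE[-1][1]

def is_target_image(frame_index: int, rate: int) -> bool:
    """Step 103: with interval `rate`, frames 1, 2 + rate, 3 + 2 * rate, ...
    (1-based) are target images; e.g. rate 2 selects frames 1, 4, 7."""
    return (frame_index - 1) % (rate + 1) == 0

# Example: at 0.4 m/s the rate is 1, so frames 1, 3, 5, 7 are target images.
assert [i for i in range(1, 8)
        if is_target_image(i, depth_calc_frame_rate(0.4))] == [1, 3, 5, 7]
```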
Step 104: if so, acquire depth information of the background region of the target image.
Step 105: blur the current preview image according to the depth information.
The background region refers to the regions other than the region where the photographed subject is located.
Optionally, after target images are extracted from the captured sequence according to the current depth-of-field calculation frame rate, if the current preview image is a target image, the depth information of the background region of the target image may be acquired, the blur level may be determined according to the depth information, and the current preview image may be blurred according to the blur level.
The process of determining the depth information of the background region of the target image is described in the following embodiments and is not introduced here.
It should be noted that the background region may contain different people or objects, and the depth data corresponding to different people or objects may differ, so the depth information of the background region may be a single value or a range of values. When the depth information of the background region is a single value, the value may be obtained by averaging the depth data of the background region, or by taking the median of the depth data of the background region.
Optionally, different depth ranges may be preset to correspond to different blur levels, so that after the depth information of the background region of the target image is determined, the corresponding blur level may be determined according to the determined depth information and the preset correspondence, so as to blur the current preview image.
As an optional implementation, a Gaussian kernel function may be used to blur the current preview image. The Gaussian kernel can be regarded as a weight matrix: by using the weight matrix to compute Gaussian blur values for the pixels in the current preview image, the current preview image can be blurred. When computing the Gaussian blur value of a pixel, the pixel to be computed is taken as the center pixel, the weight matrix is used to weight the pixel values of the pixels surrounding the center pixel, and the Gaussian blur value of the pixel is finally obtained.
As an optional implementation, computing Gaussian blur values for the same pixels with different weight matrices yields different degrees of blurring. The weight matrix is related to the variance of the Gaussian kernel function: the larger the variance, the wider the radial reach of the Gaussian kernel function, the stronger the smoothing, and hence the higher the degree of blur. Therefore, a correspondence between the blur level and the variance of the Gaussian kernel function may be set in advance, so that after the blur level of the target image is determined, the variance of the Gaussian kernel function, and hence the weight matrix, may be determined according to the preset correspondence, and the current preview image may be blurred to the corresponding degree.
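The level-to-variance idea might look like the following sketch, which uses OpenCV's GaussianBlur as one possible realization of the weight-matrix computation (the library choice and the level-to-sigma table are assumptions of this sketch; the patent does not name a library or concrete values). The `background_mask` argument anticipates the segmentation steps described further below:

```python
import cv2
import numpy as np

# Illustrative mapping from blur level to the standard deviation (sigma)
# of the Gaussian kernel; a larger sigma means a wider weight matrix and
# a stronger blur. These values are assumptions, not from the patent.
LEVEL_TO_SIGMA = {"A": 1.5, "B": 3.0, "C": 6.0}

def blur_background(image: np.ndarray, background_mask: np.ndarray, level: str) -> np.ndarray:
    """Blur only the background region, keeping the subject sharp.

    `background_mask` is a boolean array that is True where a pixel
    belongs to the background region (see the segmentation steps below).
    """
    sigma = LEVEL_TO_SIGMA[level]
    # ksize=(0, 0) lets OpenCV derive the kernel size (the weight matrix
    # extent) from sigma.
    blurred = cv2.GaussianBlur(image, (0, 0), sigma)
    out = image.copy()
    out[background_mask] = blurred[background_mask]
    return out
```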
It should be noted that, when blurring the background region of the current preview image, since the background region may contain different people or objects, the gradient of the depth information of the background region may be large; for example, the depth data in one part of the background region may be very large while that in another part is very small. If the entire background region is blurred at the same blur level, the blurring effect may look unnatural. Therefore, in the embodiments of the present application, the background region may further be divided into different regions, and different levels of blurring may be applied to different regions.
Optionally, the background region may be divided into multiple regions according to its depth information, with the span of each region's depth range increasing with the depth position of the region, so that different regions are blurred to different degrees. This makes the blurring effect of the image more natural and closer to an optical focusing effect, and enhances the user's visual experience.
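One way to realize such a partition is sketched below, under the assumption that successive depth bins simply double in span with distance (the doubling rule and all parameter values are illustrative choices; the patent only requires that the span grow with depth):

```python
import numpy as np

def partition_background_by_depth(depth: np.ndarray,
                                  background_mask: np.ndarray,
                                  near: float = 1.0,
                                  first_span: float = 0.5,
                                  n_bins: int = 4) -> np.ndarray:
    """Assign each background pixel a depth-bin index 0..n_bins-1, where
    the span of each successive bin doubles, so that farther regions are
    grouped more coarsely and can be given a stronger blur level."""
    edges = [near]
    span = first_span
    for _ in range(n_bins):
        edges.append(edges[-1] + span)
        span *= 2.0                            # widening span with depth
    bins = np.digitize(depth, edges[1:-1])     # values in 0..n_bins-1
    bins = np.clip(bins, 0, n_bins - 1)
    bins[~background_mask] = -1                # subject pixels: not blurred
    return bins
```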
In one possible implementation, after target images are extracted from the captured sequence according to the current depth-of-field calculation frame rate, if the current preview image is not a target image, the current preview image may be blurred in either of the following ways.
Way 1
Blur the current preview image according to the depth information of the target image preceding the current preview image.
Optionally, when the current preview image is not a target image, the blur level may be determined according to the depth information of the target image preceding the current preview image, and the current preview image may be blurred accordingly.
For example, suppose the depth-of-field calculation frame rate is 2, so that according to this frame rate the first frame, the fourth frame, and so on are extracted as target images. When the current preview image is the first frame, since the first frame is a target image, the blur level may be determined from the depth information of the background region of the first frame, and the first frame may be blurred accordingly. When the current preview image is the second frame, since the second frame is not a target image, the second frame may be blurred according to the blur level determined from the preceding target image, i.e., from the depth information of the background region of the first frame.
Way 2
Determine a first blur level according to the current motion speed, and blur the current preview image according to the first blur level.
Optionally, a correspondence between motion speed and blur level may be preset, so that when the current preview image is not a target image, the first blur level may be determined according to the current motion speed and the preset correspondence, and the current preview image may be blurred according to the first blur level.
It should be noted that when setting the correspondence between the motion speed of the mobile device and the blur level, the principle may be that the faster the motion speed, the lower the degree of blur of the corresponding blur level, i.e., the degree of blur is set in inverse proportion to the speed of the mobile device.
For example, suppose it is preset that a motion speed below 0.5 meters per second (m/s) corresponds to blur level A and a motion speed of 0.5 m/s or more corresponds to blur level B, and that the depth-of-field calculation frame rate is 2, so the first frame, the fourth frame, and so on are extracted as target images. When the current preview image is the second frame and the current motion speed is 0.4 m/s, since the second frame is not a target image, blur level A may be determined according to the current motion speed and the preset correspondence, and the second frame may be blurred at level A.
It should be noted that, in the embodiments of the present application, when the current preview image is not a target image, it is also possible to determine a blur level according to the depth information of the target image preceding the current preview image, determine a first blur level according to the current motion speed, and then blur the current preview image according to the lower of the two blur levels.
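A minimal sketch of this fallback logic for non-target frames, assuming blur levels are encoded as integers whose magnitude grows with the degree of blur (the encoding and helper names are illustrative, not the patent's):

```python
# Illustrative encoding: blur levels as integers, larger = stronger blur.
LEVEL_A, LEVEL_B = 2, 1   # faster motion -> lower blur level (assumption)

def speed_to_level(speed_mps: float) -> int:
    """Way 2: preset speed-to-blur-level correspondence (illustrative)."""
    return LEVEL_A if speed_mps < 0.5 else LEVEL_B

def level_for_frame(is_target: bool,
                    depth_based_level: int,
                    last_target_level: int,
                    speed_mps: float) -> int:
    """Pick the blur level for the current preview frame.

    Target frames use the level derived from their own background depth.
    Non-target frames reuse the preceding target image's level (Way 1)
    combined with the speed-derived level (Way 2), taking the lower of
    the two, matching the combined variant just described.
    """
    if is_target:
        return depth_based_level
    return min(last_target_level, speed_to_level(speed_mps))
```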
It can be understood that, in the related art, blurring the captured images requires a depth-of-field calculation for every frame, which consumes considerable power. In the embodiments of the present application, the current depth-of-field calculation frame rate is determined according to the current motion speed of the mobile device, and target images are extracted from the captured multi-frame sequence according to that frame rate for depth-of-field calculation, which reduces the power consumption of the blurring process.
The image blurring processing method provided by the embodiments of the present application determines the current depth-of-field calculation frame rate according to the current motion speed of the mobile device and, when the current preview image is determined to be a target image according to that frame rate, blurs the current preview image according to the depth information of the background region of the target image, which improves the followability of the blurring effect and improves the user experience.
From the above analysis, the current depth-of-field calculation frame rate can be determined according to the current motion speed of the mobile device, so that when the current preview image is determined to be a target image according to that frame rate, the current preview image is blurred according to the depth information of the background region of the target image. In one possible implementation, the depth-of-field calculation processing speed of the mobile device may also be taken into account when determining the current depth-of-field calculation frame rate. The image blurring processing method provided by the embodiments of the present application is further described below with reference to FIG. 2.
FIG. 2 is a flowchart of an image blurring processing method according to another embodiment of the present application.
As shown in FIG. 2, the image blurring processing method includes:
Step 201: determine the current motion speed of the mobile device.
Optionally, the current motion speed of the mobile device may be determined by sensors provided in the mobile device, such as a gyroscope, an accelerometer, or a speed sensor.
Step 202: determine an initial depth-of-field calculation frame rate according to the depth-of-field calculation processing speed of the mobile device.
Optionally, different depth-of-field calculation processing speeds may be preset to correspond to different initial depth-of-field calculation frame rates, so that after the depth-of-field calculation processing speed of the mobile device is determined, the initial depth-of-field calculation frame rate may be determined according to the determined processing speed and the preset correspondence.
It should be noted that the depth-of-field calculation processing speed of the mobile device may be determined from the processor performance of the mobile device as shipped; or, since the processing speed of the mobile device's processor may differ depending on the software running on it, the depth-of-field calculation processing speed may also be determined from the usage state of the mobile device. This is not limited here.
Step 203: adjust the initial depth-of-field calculation frame rate according to the current motion speed to obtain the current depth-of-field calculation frame rate.
Step 204: determine, according to the current depth-of-field calculation frame rate, whether the current preview image is a target image.
Optionally, the initial depth-of-field calculation frame rate may be adjusted in the following manner; that is, step 203 may be replaced by the following steps:
Step 203a: determine whether the current motion speed of the mobile device is greater than a threshold; if so, perform step 203b; otherwise, perform step 203c.
Step 203b: increase the initial depth-of-field calculation frame rate.
Step 203c: use the initial depth-of-field calculation frame rate as the current depth-of-field calculation frame rate.
The threshold can be set as needed.
Optionally, if the current motion speed of the mobile device is greater than the threshold, the initial depth-of-field calculation frame rate may be increased; if the current motion speed is less than or equal to the threshold, the initial depth-of-field calculation frame rate is used as the current depth-of-field calculation frame rate.
As an optional implementation, if the current motion speed of the mobile device is greater than the threshold, the amount by which the initial depth-of-field calculation frame rate is increased may be determined from the difference between the current motion speed and the threshold: the larger the difference, the larger the increase; the smaller the difference, the smaller the increase.
By adjusting the initial depth-of-field calculation frame rate according to the current motion speed of the mobile device to obtain the current depth-of-field calculation frame rate, the faster the mobile device moves, the larger the current depth-of-field calculation frame rate becomes.
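Steps 203a to 203c might look like the following sketch, assuming a linear increase with the speed-threshold difference (the gain constant and the linear rule are illustrative assumptions; the patent only requires that a larger difference give a larger increase):

```python
def adjust_frame_rate(initial_rate: int,
                      speed_mps: float,
                      threshold_mps: float = 0.5,
                      gain: float = 4.0) -> int:
    """Steps 203a-203c: above the threshold, raise the depth-of-field
    calculation frame rate in proportion to how far the speed exceeds
    the threshold; at or below it, keep the initial rate. `gain` (extra
    frames of interval per m/s above the threshold) is illustrative."""
    if speed_mps > threshold_mps:                     # step 203a
        extra = int(round(gain * (speed_mps - threshold_mps)))
        return initial_rate + max(1, extra)           # step 203b
    return initial_rate                               # step 203c
```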
Step 205: if so, acquire depth information of the background region of the target image.
Step 206: blur the current preview image according to the depth information.
Optionally, the depth information of the background region of the target image may be determined in the following manner; that is, step 205 may include:
Step 205a: determine image depth information of the target image according to the target image and the corresponding depth image. The target image is an RGB color image, and the depth image contains the depth information of each person or object in the target image. Optionally, a depth camera may be used to acquire the depth image; depth cameras include those based on structured-light depth ranging and those based on time-of-flight (TOF) ranging.
Since the color information of the target image corresponds one-to-one with the depth information of the depth image, the image depth information of the target image can be acquired from the depth image.
Step 205b: determine the background region of the target image according to the image depth information.
Optionally, the foremost point of the target image may be obtained from the image depth information; the foremost point corresponds to the leading edge of the subject. Spreading outward from the foremost point, the regions adjoining it whose depth varies continuously are obtained, and these regions are merged with the foremost point into the region where the subject is located; the regions of the target image other than the subject form the background region.
Step 205c: determine the depth information of the background region according to the correspondence between the color information of the background region and the depth information of the depth image.
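Under the assumption that the depth image is already registered pixel-for-pixel with the RGB target image, steps 205a to 205c reduce to something like the following sketch; the fixed depth tolerance is an illustrative simplification of the "continuously varying depth" criterion:

```python
import numpy as np
from scipy import ndimage

def background_depth_info(depth: np.ndarray, tolerance: float = 0.3):
    """Steps 205a-205c: grow the subject region from the foremost
    (nearest) point over pixels within a depth tolerance of it, treat
    the rest as background, and summarize the background depth.
    `tolerance` (meters of allowed depth variation) is illustrative."""
    seed = np.unravel_index(np.argmin(depth), depth.shape)   # foremost point
    near = np.abs(depth - depth[seed]) < tolerance           # candidate subject pixels
    labels, _ = ndimage.label(near)                          # connected components
    subject_mask = labels == labels[seed]                    # component containing the seed
    background_mask = ~subject_mask
    bg = depth[background_mask]
    # The background may span several people/objects, so report a single
    # value (mean and median) as well as the full depth range.
    return (background_mask, float(bg.mean()), float(np.median(bg)),
            (float(bg.min()), float(bg.max())))
```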
In one possible implementation, the target image may contain a portrait. In that case, the following method may be used to determine the background region of the target image and then the depth information of the background region. That is, before the depth information of the background region of the target image is acquired in step 205, the method may further include:
performing face recognition on the target image to determine the face region contained in the target image;
acquiring depth information of the face region;
determining the portrait region according to the current posture of the mobile device and the depth information of the face region; and
segmenting the target image into regions according to the portrait region to determine the background region.
Optionally, a trained deep learning model may first be used to recognize the face region contained in the target image, and the depth information of the face region may then be determined according to the correspondence between the target image and the depth image. Since the face region includes features such as the nose, eyes, ears, and lips, the depth data corresponding to the individual features differ in the depth image; for example, in a depth image captured with the face directly facing the depth camera, different facial features lie at different depths. Accordingly, the depth information of the face region may be a single value or a range of values; when it is a single value, the value may be obtained by averaging the depth data of the face region, or by taking the median of the depth data of the face region.
Since the portrait region contains the face region, i.e., the portrait region and the face region lie within the same depth range, after the depth information of the face region is determined, the depth range of the portrait region can be set according to the depth information of the face region, and the region that falls within that depth range and is connected to the face region can then be extracted according to the depth range of the portrait region to obtain the portrait region.
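A sketch of this portrait extraction follows, assuming the face region comes from an external face detector and using an illustrative margin around the median face depth to set the portrait depth range:

```python
import numpy as np
from scipy import ndimage

def portrait_mask(depth: np.ndarray,
                  face_mask: np.ndarray,
                  margin: float = 0.4) -> np.ndarray:
    """Extract the portrait region: pixels whose depth falls within a
    range set around the face depth and that are connected to the face
    region. `face_mask` is assumed to come from a face detector; the
    `margin` (meters around the median face depth) is illustrative."""
    face_depth = float(np.median(depth[face_mask]))       # face depth info
    in_range = np.abs(depth - face_depth) < margin        # portrait depth range
    labels, _ = ndimage.label(in_range | face_mask)
    face_labels = np.unique(labels[face_mask])            # components touching the face
    return np.isin(labels, face_labels[face_labels > 0])
```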
It should be noted that in the camera assembly of the mobile device, the image sensor includes multiple photosensitive cells, each corresponding to one pixel, and the camera assembly is fixed relative to the mobile device; therefore, when the mobile device captures images in different postures, the same point on the photographed subject corresponds to different pixels on the image sensor.
For example, suppose the elliptical regions in FIG. 3 and FIG. 4 are the regions occupied by the subject when the mobile terminal captures an image in portrait orientation and in landscape orientation, respectively. As can be seen from FIG. 3 and FIG. 4, when the mobile device captures an image in portrait orientation, points a and b on the subject correspond to pixel 10 and pixel 11, respectively, whereas in landscape orientation, points a and b correspond to pixel 11 and pixel 8, respectively.
Then, assuming the region containing point a and the depth range N of the region containing point b are known, when the region containing point b that falls within depth range N needs to be extracted: if the mobile device is in portrait orientation, the extraction must proceed, given the positional relationship between points a and b, from pixel 10 toward pixel 11; if the mobile device is in landscape orientation, it must proceed from pixel 11 toward pixel 8. That is, once a region has been determined, when other regions falling within a given depth range need to be extracted, the extraction direction differs with the posture of the mobile device. Therefore, in the embodiments of the present application, after the depth range of the portrait region is set according to the depth information of the face region, when the region falling within that depth range and connected to the face region is extracted, the current posture of the mobile device can be used to determine in which direction to extract the region that is connected to the face and falls within the set depth range, so that the portrait region is determined more quickly.
Optionally, once the portrait region is determined, the target image can be segmented according to the portrait region, the regions other than the portrait region are determined to be the background region, and the depth information of the background region is then determined from the correspondence between the color information of the background region and the depth information of the depth image.
Once the depth information of the background region of the target image has been determined, the current preview image can be blurred according to the depth information. For the optional implementation process and principles, reference may be made to the detailed description of the above embodiments, which is not repeated here.
By determining the initial depth-of-field calculation frame rate according to the depth-of-field calculation processing speed of the mobile device and then adjusting it according to the current motion speed to determine the current depth-of-field calculation frame rate, the determined frame rate is more reasonable and the blurring effect follows better.
In the image blurring processing method provided by this embodiment of the present application, after the current motion speed of the mobile device is determined, an initial depth-of-field calculation frame rate is determined according to the depth-of-field calculation processing speed of the mobile device and is then adjusted according to the current motion speed to obtain the current depth-of-field calculation frame rate; whether the current preview image is a target image is then determined according to the current frame rate, and if so, the depth information of the background region of the target image is acquired so that the current preview image can be blurred according to the depth information. Thus, the current depth-of-field calculation frame rate is determined according to both the current motion speed of the mobile device and the depth-of-field calculation processing speed of the mobile device, and when the current preview image is determined to be a target image according to that frame rate, the current preview image is blurred according to the depth information of the background region of the target image, which improves the followability of the blurring effect and improves the user experience.
From the above analysis, the current depth-of-field calculation frame rate can be determined according to the current motion speed of the mobile device, so that when the current preview image is determined to be a target image according to that frame rate, the corresponding blur level is determined according to the depth information of the background region of the target image so as to blur the preview image. In one possible implementation, when the current preview image is a target image, the current motion speed of the mobile device may also be taken into account when determining the blur level used to blur the current preview image. The image blurring processing method provided by the embodiments of the present application is further described below with reference to FIG. 5.
FIG. 5 is a flowchart of an image blurring processing method according to another embodiment of the present application.
As shown in FIG. 5, the image blurring processing method includes:
Step 301: determine the current motion speed of the mobile device.
Step 302: determine a current depth-of-field calculation frame rate according to the current motion speed.
Step 303: determine, according to the depth-of-field calculation frame rate, whether the current preview image is a target image.
Step 304: if so, acquire depth information of the background region of the target image.
For the specific implementation process and principles of steps 301 to 304, reference may be made to the detailed description of the above embodiments, which is not repeated here.
Step 305: determine a first blur level according to the current motion speed.
Different blur levels correspond to different degrees of blur.
Optionally, a correspondence between the motion speed of the mobile device and the blur level may be preset, so that after the current motion speed of the mobile device is determined, the first blur level may be determined according to the preset correspondence.
It should be noted that when setting the correspondence between the motion speed of the mobile device and the blur level, the principle may be that the faster the motion speed, the lower the degree of blur of the corresponding blur level, i.e., the degree of blur is set in inverse proportion to the speed of the mobile device.
Step 306: determine a second blur level according to the depth information of the background region of the target image.
Optionally, different depth ranges may be preset to correspond to different blur levels, so that after the depth information of the background region of the target image is determined, the second blur level may be determined according to the determined depth information and the preset correspondence.
Step 307: blur the preview image according to whichever of the second blur level and the first blur level has the lower degree of blur.
Optionally, once the second blur level and the first blur level are determined, the preview image can be blurred according to whichever of the two has the lower degree of blur.
It should be noted that, in the embodiments of the present application, the second blur level may also first be determined according to the depth information of the background region of the target image and then adjusted according to the current motion speed of the mobile device: if the current motion speed of the mobile device is high, the degree of blur of the second blur level is reduced to obtain the final blur level, and the preview image is blurred according to the final blur level.
It can be understood that the image blurring processing method provided by this embodiment of the present application reduces the depth-of-field calculation time and the power consumption of the blurring process by extracting target images for depth-of-field calculation, improving the followability of the blurring effect and the user experience. Moreover, when the current preview image is a target image, the blur level is determined according to the current motion speed of the mobile device and the depth information of the background region of the target image, and the degree of blur is reduced as the motion speed of the mobile device increases. This reduces the gap in blur between the unblurred subject region and the blurred background region, thereby masking the poor followability of the blurring effect when the mobile device moves.
为了实现上述实施例,本申请还提出了一种图像虚化处理装置。
图6是根据本申请一个实施例的图像虚化处理装置的结构示意图。
如图6所示,该图像虚化处理装置应用于包括摄像组件的移动设备中,包括:
第一确定模块41,用于确定移动设备的当前运动速度;
第二确定模块42,用于根据当前运动速度,确定当前景深计算帧率;
判断模块43,用于根据景深计算帧率,判断当前预览图像是否为目标图像;
第一获取模块44,用于在当前预览图像为目标图像时,获取目标图像的背景区域的深度信息;
第一处理模块45,用于根据深度信息,对当前预览图像进行虚化处理。
可选的,本申请实施例提供的图像虚化处理装置,可以执行本申请实施例提供的图像虚化处理方法,该装置可以被配置在包括摄像组件的移动设备中,以对采集的图像进行虚化处理。其中,移动设备的类型很多,可以为手机、平板电脑、笔记本电脑等。图6以移动设备为手机进行示例。
In one embodiment of the present application, the apparatus further includes:
a second processing module, configured to blur the current preview image according to the depth information of a target image preceding the current preview image when the current preview image is not a target image;
or, a third processing module, configured to, when the current preview image is not a target image, determine a first blur level according to the current motion speed and blur the current preview image according to the first blur level.
In another embodiment of the present application, the apparatus further includes:
a third determining module, configured to determine an initial depth of field calculation frame rate according to the depth of field calculation processing speed of the mobile device;
the second determining module 42 being specifically configured to:
adjust the initial depth of field calculation frame rate according to the current motion speed to obtain the current depth of field calculation frame rate.
In another embodiment of the present application, the second determining module 42 is further configured to:
judge whether the current motion speed of the mobile device is greater than a threshold;
if so, increase the initial depth of field calculation frame rate;
otherwise, use the initial depth of field calculation frame rate as the current depth of field calculation frame rate.
In another embodiment of the present application, the target image may include a portrait, and accordingly the apparatus may further include:
a fourth determining module, configured to perform face recognition on the target image to determine the face region included in the target image;
a second acquiring module, configured to acquire the depth information of the face region;
a fifth determining module, configured to determine the portrait region according to the current attitude of the mobile device and the depth information of the face region;
a sixth determining module, configured to perform region segmentation on the target image according to the portrait region to determine the background region.
In another embodiment of the present application, the apparatus may further include:
a seventh determining module, configured to determine a first blur level according to the current motion speed;
an eighth determining module, configured to determine a second blur level according to the depth information of the background region of the target image;
a fourth processing module, configured to blur the preview image according to whichever of the second blur level and the first blur level has the lower degree of blurring.
It should be noted that the foregoing description of the method embodiments also applies to the apparatus of the embodiments of the present application; the implementation principles are similar and are not repeated here.
The division of the modules in the above image blurring processing apparatus is for illustration only; in other embodiments, the image blurring processing apparatus may be divided into different modules as needed to complete all or some of its functions.
In summary, the image blurring processing apparatus of the embodiments of the present application determines the current depth of field calculation frame rate according to the current motion speed of the mobile device and, when the current preview image is determined to be a target image according to that frame rate, blurs the current preview image according to the depth information of the background region of the target image, which improves how well the blurring effect tracks the scene and improves the user experience.
To implement the above embodiments, the present application further provides a mobile device, including a memory, a processor, and a computer program stored in the memory and executable on the processor. When the processor executes the program, the image blurring processing method described in the first aspect is implemented.
The above mobile device may further include an image processing circuit, which may be implemented using hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline.
Fig. 7 is a schematic diagram of an image processing circuit in one embodiment. As shown in Fig. 7, for ease of description, only the aspects of the image processing technology related to the embodiments of the present application are shown.
As shown in Fig. 7, the image processing circuit includes an ISP processor 540 and a control logic unit 550. Image data captured by the camera assembly 510 is first processed by the ISP processor 540, which analyzes the image data to capture image statistics that can be used to determine one or more control parameters of the ISP processor 540 and/or the camera assembly 510. The camera assembly 510 may include a camera with one or more lenses 512 and an image sensor 514. The image sensor 514 may include a color filter array (such as a Bayer filter); the image sensor 514 can obtain the light intensity and wavelength information captured by each of its imaging pixels and provide a set of raw image data that can be processed by the ISP processor 540. The sensor 520 may provide the raw image data to the ISP processor 540 based on the sensor 520 interface type. The sensor 520 interface may be an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of the above interfaces.
The ISP processor 540 processes the raw image data pixel by pixel in multiple formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits; the ISP processor 540 may perform one or more image processing operations on the raw image data and collect statistics about the image data. The image processing operations may be performed at the same or different bit-depth precision.
The ISP processor 540 may also receive pixel data from the image memory 530. For example, raw pixel data is sent from the sensor 520 interface to the image memory 530, and the raw pixel data in the image memory 530 is then provided to the ISP processor 540 for processing. The image memory 530 may be part of a memory device, a storage device, or a separate dedicated memory within an electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from the sensor 520 interface or from the image memory 530, the ISP processor 540 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 530 for additional processing before being displayed. The ISP processor 540 receives the processed data from the image memory 530 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to the display 570 for viewing by the user and/or for further processing by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the ISP processor 540 may also be sent to the image memory 530, and the display 570 may read image data from the image memory 530. In one embodiment, the image memory 530 may be configured to implement one or more frame buffers. Furthermore, the output of the ISP processor 540 may be sent to an encoder/decoder 560 to encode/decode the image data. The encoded image data may be saved and decompressed before being displayed on the display 570 device. The encoder/decoder 560 may be implemented by a CPU, a GPU, or a coprocessor.
The statistics determined by the ISP processor 540 may be sent to the control logic unit 550. For example, the statistics may include image sensor 514 statistics such as auto exposure, auto white balance, auto focus, flicker detection, black level compensation, and lens 512 shading correction. The control logic unit 550 may include a processor and/or microcontroller executing one or more routines (such as firmware); according to the received statistics, the one or more routines may determine control parameters of the camera assembly 510 as well as control parameters of the ISP processor 540. For example, the control parameters may include sensor 520 control parameters (such as gain and the integration time for exposure control), camera flash control parameters, lens 512 control parameters (such as the focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for auto white balance and color adjustment (for example, during RGB processing), as well as lens 512 shading correction parameters.
The following are the steps of implementing the image blurring processing method using the image processing technology of Fig. 7 (a consolidated sketch follows the list):
determining the current motion speed of the mobile device;
determining the current depth of field calculation frame rate according to the current motion speed;
judging, according to the depth of field calculation frame rate, whether the current preview image is a target image;
if so, acquiring the depth information of the background region of the target image;
blurring the current preview image according to the depth information.
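Putting the listed steps together, one pass of the preview loop might look as follows, reusing the illustrative helpers sketched earlier (current_dof_frame_rate, segment_background, blur_preview); the policy that maps the depth of field calculation frame rate to a frame stride is an assumption.

    def process_preview_frame(frame, depth_map, frame_index: int,
                              motion_speed: float, processing_speed_fps: float,
                              preview_fps: float = 30.0):
        """One iteration of the preview pipeline described in the embodiments."""
        # Steps 1-2: current motion speed -> current depth of field frame rate.
        dof_rate = current_dof_frame_rate(processing_speed_fps, motion_speed)
        # Step 3: sample target images at the depth of field rate (assumed policy).
        stride = max(1, round(preview_fps / dof_rate))
        if frame_index % stride == 0:              # this preview is a target image
            # Steps 4-5: depth of the background region, then blur accordingly.
            background, bg_depth = segment_background(depth_map)
            return blur_preview(frame, background, bg_depth)
        # Non-target frame: reuse the last target image's depth (not shown here).
        return frame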
To implement the above embodiments, the present application further provides a computer-readable storage medium. When instructions in the storage medium are executed by a processor, the image blurring processing method described in the above embodiments can be performed.
To implement the above embodiments, the present application further provides a computer program. When the computer program is executed by a processor, the image blurring processing method described in the above embodiments can be performed.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic references to the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, where no contradiction arises, those skilled in the art may combine the different embodiments or examples described in this specification and the features of those embodiments or examples.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "a plurality of" means at least two, for example two or three, unless specifically and explicitly defined otherwise.
Any process or method description in a flowchart or otherwise described herein may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes additional implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application belong.
The logic and/or steps represented in a flowchart or otherwise described herein may, for example, be considered an ordered list of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transmit a program for use by, or in connection with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic device) having one or more wires, a portable computer disk cartridge (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that the various parts of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented with software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or a combination of the following technologies known in the art may be used: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
A person of ordinary skill in the art can understand that all or some of the steps carried by the methods of the above embodiments may be completed by instructing the relevant hardware through a program; the program may be stored in a computer-readable storage medium and, when executed, includes one of, or a combination of, the steps of the method embodiments.
In addition, the functional units in the embodiments of the present application may be integrated in one processing module, or each unit may exist physically alone, or two or more units may be integrated in one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented as a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like. Although the embodiments of the present application have been shown and described above, it can be understood that the above embodiments are exemplary and shall not be construed as limiting the present application; a person of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present application.

Claims (15)

  1. An image blurring processing method, applied in a mobile device comprising a camera assembly, characterized in that the method comprises:
    determining a current motion speed of the mobile device;
    determining a current depth of field calculation frame rate according to the current motion speed;
    judging, according to the depth of field calculation frame rate, whether a current preview image is a target image;
    if so, acquiring depth information of a background region of the target image;
    blurring the current preview image according to the depth information.
  2. The method according to claim 1, characterized in that, after judging whether the current preview image is a target image, the method further comprises:
    if not, blurring the current preview image according to the depth information of a target image preceding the current preview image;
    or, if not, determining a first blur level according to the current motion speed, and blurring the current preview image according to the first blur level.
  3. The method according to claim 1, characterized in that, before determining the current depth of field calculation frame rate according to the current motion speed, the method further comprises:
    determining an initial depth of field calculation frame rate according to a depth of field calculation processing speed of the mobile device;
    wherein determining the current depth of field calculation frame rate comprises:
    adjusting the initial depth of field calculation frame rate according to the current motion speed to obtain the current depth of field calculation frame rate.
  4. The method according to claim 3, characterized in that adjusting the initial depth of field calculation frame rate comprises:
    judging whether the current motion speed of the mobile device is greater than a threshold;
    if so, increasing the initial depth of field calculation frame rate;
    otherwise, using the initial depth of field calculation frame rate as the current depth of field calculation frame rate.
  5. The method according to any one of claims 1-4, characterized in that the target image includes a portrait;
    and before acquiring the depth information of the background region of the target image, the method further comprises:
    performing face recognition on the target image to determine a face region included in the target image;
    acquiring depth information of the face region;
    determining a portrait region according to a current attitude of the mobile device and the depth information of the face region;
    performing region segmentation on the target image according to the portrait region to determine the background region.
  6. The method according to any one of claims 1-4, characterized in that, after acquiring the depth information of the background region of the target image, the method further comprises:
    determining a first blur level according to the current motion speed;
    determining a second blur level according to the depth information of the background region of the target image;
    blurring the preview image according to whichever of the second blur level and the first blur level has the lower degree of blurring.
  7. An image blurring processing apparatus, applied in a mobile device comprising a camera assembly, characterized in that the apparatus comprises:
    a first determining module, configured to determine a current motion speed of the mobile device;
    a second determining module, configured to determine a current depth of field calculation frame rate according to the current motion speed;
    a judging module, configured to judge, according to the depth of field calculation frame rate, whether a current preview image is a target image;
    a first acquiring module, configured to acquire depth information of a background region of the target image when the current preview image is a target image;
    a first processing module, configured to blur the current preview image according to the depth information.
  8. The apparatus according to claim 7, characterized in that it further comprises:
    a second processing module, configured to blur the current preview image according to the depth information of a target image preceding the current preview image when the current preview image is not a target image;
    or, a third processing module, configured to, when the current preview image is not a target image, determine a first blur level according to the current motion speed, and blur the current preview image according to the first blur level.
  9. The apparatus according to claim 7, characterized in that it further comprises:
    a third determining module, configured to determine an initial depth of field calculation frame rate according to a depth of field calculation processing speed of the mobile device;
    wherein the second determining module is specifically configured to:
    adjust the initial depth of field calculation frame rate according to the current motion speed to obtain the current depth of field calculation frame rate.
  10. The apparatus according to claim 9, characterized in that the second determining module is specifically configured to:
    judge whether the current motion speed of the mobile device is greater than a threshold;
    if so, increase the initial depth of field calculation frame rate;
    otherwise, use the initial depth of field calculation frame rate as the current depth of field calculation frame rate.
  11. The apparatus according to any one of claims 7-10, characterized in that the target image includes a portrait;
    and the apparatus further comprises:
    a fourth determining module, configured to perform face recognition on the target image to determine a face region included in the target image;
    a second acquiring module, configured to acquire depth information of the face region;
    a fifth determining module, configured to determine a portrait region according to a current attitude of the mobile device and the depth information of the face region;
    a sixth determining module, configured to perform region segmentation on the target image according to the portrait region to determine the background region.
  12. The apparatus according to any one of claims 7-10, characterized in that it further comprises:
    a seventh determining module, configured to determine a first blur level according to the current motion speed;
    an eighth determining module, configured to determine a second blur level according to the depth information of the background region of the target image;
    a fourth processing module, configured to blur the preview image according to whichever of the second blur level and the first blur level has the lower degree of blurring.
  13. A mobile device, characterized by comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein, when the processor executes the program, the image blurring processing method according to any one of claims 1-6 is implemented.
  14. A computer-readable storage medium having a computer program stored thereon, characterized in that, when the program is executed by a processor, the image blurring processing method according to any one of claims 1-6 is implemented.
  15. A computer program, characterized in that, when the computer program is executed by a processor, the image blurring processing method according to any one of claims 1-6 is implemented.
PCT/CN2018/117195 2017-11-30 2018-11-23 Image blurring processing method and apparatus, mobile device, and storage medium WO2019105297A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711240101.6A CN107948514B (zh) 2017-11-30 2017-11-30 Image blurring processing method and apparatus, mobile device, and computer storage medium
CN201711240101.6 2017-11-30

Publications (1)

Publication Number Publication Date
WO2019105297A1 true WO2019105297A1 (zh) 2019-06-06

Family

ID=61948032

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/117195 WO2019105297A1 (zh) 2017-11-30 2018-11-23 Image blurring processing method and apparatus, mobile device, and storage medium

Country Status (2)

Country Link
CN (1) CN107948514B (zh)
WO (1) WO2019105297A1 (zh)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107948514B (zh) * 2017-11-30 2019-07-19 Oppo广东移动通信有限公司 Image blurring processing method and apparatus, mobile device, and computer storage medium
TWI697861B (zh) * 2018-05-16 2020-07-01 緯創資通股份有限公司 Garment try-on method, display system therefor, and computer-readable recording medium
CN109740337B (zh) * 2019-01-25 2020-12-22 宜人恒业科技发展(北京)有限公司 Method and apparatus for recognizing slider CAPTCHAs
CN110062157B (zh) * 2019-04-04 2021-09-17 北京字节跳动网络技术有限公司 Method and apparatus for rendering images, electronic device, and computer-readable storage medium
CN110047126B (zh) * 2019-04-25 2023-11-24 北京字节跳动网络技术有限公司 Method and apparatus for rendering images, electronic device, and computer-readable storage medium
CN110248096B (zh) * 2019-06-28 2021-03-12 Oppo广东移动通信有限公司 Focusing method and apparatus, electronic device, and computer-readable storage medium
CN110266960B (zh) * 2019-07-19 2021-03-26 Oppo广东移动通信有限公司 Preview image processing method, processing apparatus, imaging apparatus, and readable storage medium
CN113784015A (zh) * 2020-06-10 2021-12-10 Oppo广东移动通信有限公司 Image processing circuit, electronic device, and image processing method
CN112016469A (zh) * 2020-08-28 2020-12-01 Oppo广东移动通信有限公司 Image processing method and apparatus, terminal, and readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102821243A (zh) * 2011-06-07 2012-12-12 索尼公司 Image processing apparatus, method of controlling the same, and program for causing a computer to execute the method
CN103081455A (zh) * 2010-11-29 2013-05-01 数字光学欧洲有限公司 Portrait image synthesis from multiple images captured by a handheld device
CN106454061A (zh) * 2015-08-04 2017-02-22 纬创资通股份有限公司 Electronic device and image processing method
CN106791456A (zh) * 2017-03-31 2017-05-31 联想(北京)有限公司 Photographing method and electronic device
KR20170079935A (ko) * 2015-12-31 2017-07-10 (주)이더블유비엠 Method and apparatus for fast depth-region expansion based on a motion model
CN107948514A (zh) * 2017-11-30 2018-04-20 广东欧珀移动通信有限公司 Image blurring processing method and apparatus, and mobile device


Also Published As

Publication number Publication date
CN107948514B (zh) 2019-07-19
CN107948514A (zh) 2018-04-20


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18884321; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 18884321; Country of ref document: EP; Kind code of ref document: A1)