CN112614057A - Image blurring processing method and electronic equipment

Image blurring processing method and electronic equipment

Info

Publication number
CN112614057A
Authority
CN
China
Prior art keywords
image
rgb
camera
foreground
background
Prior art date
Legal status
Pending
Application number
CN201910880232.3A
Other languages
Chinese (zh)
Inventor
罗钢
廖桂明
李自亮
戴兰平
朱聪超
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN201910880232.3A
Publication of CN112614057A

Classifications

    • G06T 7/50: Image analysis; Depth or shape recovery
    • G06T 5/94: Image enhancement or restoration; Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • G06T 7/194: Image analysis; Segmentation; Edge detection involving foreground-background segmentation
    • G06T 7/40: Image analysis; Analysis of texture
    • G06T 7/90: Image analysis; Determination of colour characteristics
    • H04N 23/80: Cameras or camera modules comprising electronic image sensors; Camera processing pipelines; Components thereof
    • H04N 23/45: Cameras or camera modules comprising electronic image sensors; Generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images

Abstract

The application provides an image blurring processing method and an electronic device, which are used to optimize the image blurring effect. The method is applied to an electronic device provided with a first camera and a second camera, where the first camera is a binocular camera or a multi-view camera and the second camera is a ToF camera or a structured light camera. The method includes: simultaneously starting the first camera and the second camera to perform a shooting operation on a first scene, where the first camera acquires an RGB (red, green and blue) image and a first depth image corresponding to the first scene, and the second camera acquires depth information of the first scene; performing densification processing on the first depth image by using the depth information of the first scene to obtain a second depth image, where the number of invalid pixel points in the second depth image is less than that in the first depth image; performing foreground and background segmentation on the RGB image by using the second depth image; and blurring the background image or the foreground image in the RGB image to obtain an RGB image with a background or foreground blurring effect.

Description

Image blurring processing method and electronic equipment
Technical Field
The present application relates to the field of terminal technologies, and in particular, to an image blurring processing method and an electronic device.
Background
An image shot with a background blurring effect highlights the key objects in the image while de-emphasizing other accompanying objects, and such images are commonly used for shooting portraits, artworks and the like.
At present, widely used electronic devices with a photographing function, such as mobile phones and tablet computers, can perform large-aperture simulation processing on a photographed image after capture to imitate a single lens reflex camera and achieve a certain blurring effect. However, the blurring effect obtained in the prior art is poor: the foreground may be mistakenly blurred, the foreground edge effect may be poor, or the background may flicker.
Disclosure of Invention
The embodiments of the application provide an image blurring processing method and an electronic device, which are used to solve the technical problem in the prior art that the blurring effect achieved by an electronic device is poor.
In a first aspect, an embodiment of the present application provides an image blurring processing method, which is applied to an electronic device, where the electronic device is provided with a first camera and a second camera, the first camera is a binocular camera or a multi-view camera, and the second camera is a time-of-flight (ToF) camera or a structured light camera. The method includes: in response to a first instruction, starting the first camera and the second camera to perform a shooting operation on a first scene, where the first camera acquires a red, green and blue (RGB) image and a first depth image corresponding to the first scene, and the second camera acquires depth information of the first scene; performing densification processing on the first depth image by using the depth information of the first scene so as to supplement invalid pixel points in the first depth image and obtain a second depth image, where the number of invalid pixel points in the second depth image is less than that in the first depth image; performing foreground and background segmentation on the RGB image by using the second depth image to obtain a foreground image and a background image; and performing blurring processing on the background image or the foreground image to obtain an RGB image with a background or foreground blurring effect.
In the embodiment of the application, when the first camera shoots the first scene, the second camera is simultaneously controlled to shoot the first scene: the RGB image and the first depth image of the first scene are obtained from the first camera, and high-precision depth information of the first scene is obtained from the second camera. The high-precision depth information obtained by the second camera is then used to perform densification processing on the first depth image shot by the first camera to obtain a second depth image, whose precision is higher than that of the first depth image (that is, there are fewer invalid pixel points in the second depth image than in the first depth image). The high-precision second depth image is then used to segment the foreground and the background, which improves the accuracy of recognizing the edge between the foreground and the background of the RGB image. Even in scenes where the textures, colors and the like of the foreground and the background are similar, a good edge segmentation effect can be obtained, thereby improving the blurring effect of the image.
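For illustration only, the following minimal sketch shows what the densification step could look like, assuming NumPy arrays, a ToF depth map already registered to the binocular depth map's pixel grid, and the value 0 marking invalid pixel points; the function name and the simple fill rule are assumptions, not the patent's actual algorithm.

```python
# Illustrative densification sketch (not the patent's implementation).
# Assumption: the ToF depth map is aligned to the binocular depth map's pixel grid
# and invalid pixel points carry the value 0.
import numpy as np

def densify_with_tof(sparse_depth: np.ndarray, tof_depth: np.ndarray) -> np.ndarray:
    """Fill invalid pixels of the binocular depth map with ToF measurements."""
    dense = sparse_depth.copy()
    invalid = (sparse_depth == 0)          # pixels with no binocular depth value
    usable = invalid & (tof_depth > 0)     # ToF provides a measurement there
    dense[usable] = tof_depth[usable]      # supplement the invalid pixel points
    return dense
```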
In a possible design, when segmenting the foreground and the background of the RGB image, the foreground and the background may be segmented by combining a trained convolutional neural network (CNN) model with the second depth image; the input of the trained CNN model is an RGB image in which the photographed subject is of a specific type (for example, a portrait), and the output of the trained CNN model is the image portion corresponding to that subject, i.e., the portrait.
In this embodiment, when the captured image contains a subject of a specific type, the subject is additionally identified by the CNN and the foreground and background are segmented intelligently, which further improves the accuracy of foreground and background segmentation and optimizes the edge segmentation effect so as to achieve different blurring effects.
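A possible way to combine the depth cue with a CNN portrait mask is sketched below; the CNN itself is not shown, `cnn_mask` is assumed to be a binary map produced by any trained segmentation model, and the simple intersection rule (a pixel is foreground only if both cues agree) is an illustrative assumption rather than the segmentation logic claimed in the application.

```python
# Illustrative combination of a depth-based split and a CNN portrait mask.
import numpy as np

def segment_foreground(dense_depth: np.ndarray,
                       cnn_mask: np.ndarray,
                       depth_threshold: float) -> np.ndarray:
    """Return a binary foreground mask (1 = foreground, 0 = background)."""
    depth_fg = (dense_depth > 0) & (dense_depth < depth_threshold)  # near pixels
    return (depth_fg & (cnn_mask > 0)).astype(np.uint8)
```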
In a possible design, the background image may be blurred according to the second depth image, so that the blurring degree corresponding to a pixel region with a larger depth value in the background image is higher, and then the foreground image and the blurred background image are fused to obtain an RGB image with a background blurring effect.
Therefore, the blurring strength of the parts of the RGB image background with larger scene depth can be higher, so that the blurring effect is closer to a real optical defocus effect and the visual experience of the user is improved.
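The depth-dependent background blurring described above could, for example, be approximated by blending two Gaussian-blurred copies of the image according to normalized depth, as in the following sketch; the kernel sizes and the two-level blend are illustrative assumptions, not the parameters of the embodiment.

```python
# Illustrative depth-weighted background blur: the larger the depth value of a
# background pixel, the stronger the blur applied to it; the foreground stays sharp.
import cv2
import numpy as np

def blur_background(rgb: np.ndarray, dense_depth: np.ndarray,
                    fg_mask: np.ndarray) -> np.ndarray:
    light = cv2.GaussianBlur(rgb, (9, 9), 0)     # mild blur for nearer background
    heavy = cv2.GaussianBlur(rgb, (31, 31), 0)   # strong blur for distant background
    d = dense_depth.astype(np.float32)
    w = np.clip((d - d.min()) / (np.ptp(d) + 1e-6), 0, 1)[..., None]
    blurred = (1 - w) * light + w * heavy        # deeper pixels get the heavier blur

    fg = fg_mask.astype(np.float32)[..., None]
    out = fg * rgb + (1 - fg) * blurred          # keep the foreground sharp
    return out.astype(np.uint8)
```

In practice more blur levels or a per-pixel kernel could be used; the two-level blend is only the simplest depth-weighted approximation.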
In another possible design, the foreground image may also be subjected to blurring processing according to the second depth image, so that the blurring degree corresponding to the pixel region with the smaller depth value in the foreground image is higher, and then the background image and the blurred foreground image are fused to obtain the RGB image with the foreground blurring effect.
Therefore, the blurring strength of the part with smaller depth of field in the RGB image foreground can be higher, so as to achieve different blurring effects and improve the visual experience of the user.
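The foreground-blurring variant is symmetric: the weight is inverted so that nearer pixels (smaller depth values) receive the stronger blur, and the background is kept sharp instead. A correspondingly adapted sketch, under the same assumptions as above:

```python
# Illustrative foreground blur: pixels closer to the device are blurred more strongly.
import cv2
import numpy as np

def blur_foreground(rgb: np.ndarray, dense_depth: np.ndarray,
                    fg_mask: np.ndarray) -> np.ndarray:
    light = cv2.GaussianBlur(rgb, (9, 9), 0)
    heavy = cv2.GaussianBlur(rgb, (31, 31), 0)
    d = dense_depth.astype(np.float32)
    w = 1.0 - np.clip((d - d.min()) / (np.ptp(d) + 1e-6), 0, 1)[..., None]  # near -> 1
    blurred = (1 - w) * light + w * heavy
    fg = fg_mask.astype(np.float32)[..., None]
    out = (1 - fg) * rgb + fg * blurred          # keep the background sharp
    return out.astype(np.uint8)
```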
In one possible design, after obtaining the RGB image of the background or foreground blurring effect, the RGB image of the background or foreground blurring effect may be weighted and fused with the original RGB image. Thus, the problems of image blurring, moire and the like can be effectively eliminated.
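The weighted fusion with the original image can be as simple as a fixed alpha blend, as in the sketch below; the weight value is an assumption chosen for illustration.

```python
# Illustrative weighted fusion of the blurred result with the original RGB image.
import numpy as np

def fuse_with_original(blurred_rgb: np.ndarray, original_rgb: np.ndarray,
                       alpha: float = 0.8) -> np.ndarray:
    out = alpha * blurred_rgb.astype(np.float32) + (1 - alpha) * original_rgb.astype(np.float32)
    return out.astype(np.uint8)
```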
In a possible design, the technical solution of the embodiment of the present application may be used in a scene in which a video image is subjected to blurring processing. Specifically, the first camera may capture a video corresponding to the first scene, where the video includes multiple frames of continuous RGB images, and may obtain a first depth image corresponding to each frame of RGB image in the multiple frames of continuous RGB images; the second camera may obtain depth information corresponding to each frame of RGB image in the multiple frames of continuous RGB images. Then, for each frame of RGB image in the multiple frames of continuous RGB images, densification processing is performed on the first depth image corresponding to that frame by using the depth information corresponding to that frame acquired by the second camera, so as to supplement invalid pixel points in the first depth image corresponding to that frame and obtain a second depth image corresponding to that frame. Then, foreground and background segmentation is performed on each frame of RGB image by using the second depth image corresponding to that frame, to obtain a foreground image and a background image corresponding to each frame of RGB image. Finally, the foreground image or the background image corresponding to each frame of RGB image in the multiple frames of continuous RGB images is blurred to obtain, for each frame of RGB image, an image with a background or foreground blurring effect.
According to the above scheme, on the basis of the first camera shooting the video, the depth information of the shot scene is obtained by the second camera, and the high-precision depth information obtained by the second camera is used to perform densification processing and foreground-background segmentation on the first depth image corresponding to each frame of RGB image shot by the first camera. This improves the accuracy of identifying the foreground edges in the video; even in scenes where the textures, colors and the like of the foreground and the background are similar, a good edge segmentation effect can be obtained, and the visual experience of video blurring can be improved.
In a possible design, before blurring a foreground image or a background image corresponding to each frame of RGB image in the multiple frames of continuous RGB images, N adjacent RGB images in the multiple frames of continuous RGB images may be weighted and fused, where N is a positive integer greater than 1, and a weighting coefficient corresponding to each frame of RGB image in the N adjacent RGB images is determined based on depth information corresponding to the frame of RGB image acquired by the second camera and/or foreground and background edge information of the frame of RGB image identified by CNN. Then, when blurring the background image or the foreground image, blurring the background image or the foreground image corresponding to the weighted and fused RGB image may be specifically performed.
In this embodiment, the depth information of adjacent video frames obtained by the second camera has good stability, and the edge information of adjacent RGB frames obtained by CNN detection also has good stability. Therefore, adjacent video frames are weighted and fused based on the depth information corresponding to the adjacent RGB frames acquired by the second camera and/or the foreground and background edge information of those frames identified by the CNN, so that the image transition between adjacent RGB frames after the weighted fusion is more natural. This solves the problem in the prior art that, when a binocular camera shoots a video with a blurring effect, the captured video frames exhibit background flicker due to the poor inter-frame stability of the binocular camera.
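A minimal sketch of such temporal weighted fusion is given below; how the per-frame weighting coefficients are derived from the ToF depth stability and the CNN edge stability is not specified here, so a plain normalized-weight blend is shown as an assumption.

```python
# Illustrative temporal fusion of N adjacent RGB frames before blurring.
import numpy as np

def temporal_fuse(frames: list, weights: list) -> np.ndarray:
    """frames: N adjacent RGB frames (H, W, 3); weights: one coefficient per frame."""
    w = np.asarray(weights, dtype=np.float32)
    w = w / w.sum()                                   # normalize so the weights sum to 1
    acc = np.zeros_like(frames[0], dtype=np.float32)
    for frame, wi in zip(frames, w):
        acc += wi * frame.astype(np.float32)
    return acc.astype(np.uint8)
```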
In a second aspect, an electronic device is provided that includes a first camera, a second camera, and at least one processor, where the first camera is a binocular camera or a multi-view camera, and the second camera is a time-of-flight (ToF) camera or a structured light camera; the processor is configured to, in response to a first instruction, start the first camera and the second camera to perform a shooting operation on a first scene; the first camera is configured to perform the shooting operation on the first scene to acquire a red, green and blue (RGB) image and a first depth image corresponding to the first scene; the second camera is configured to perform the shooting operation on the first scene to acquire depth information of the first scene; the processor is further configured to perform densification processing on the first depth image by using the depth information of the first scene to supplement invalid pixel points in the first depth image, so as to obtain a second depth image, where the number of invalid pixel points in the second depth image is less than that in the first depth image; perform foreground and background segmentation on the RGB image by using the second depth image to obtain a foreground image and a background image; and perform blurring processing on the background image or the foreground image to obtain an RGB image with a background or foreground blurring effect.
In a possible design, when the processor performs foreground and background segmentation on the RGB image by using the second depth image, the processor may specifically segment the foreground and the background in the RGB image by using the second depth image together with a trained convolutional neural network (CNN) model; the input of the trained CNN model is an RGB image in which the subject is of a specific type, and the output is the image portion corresponding to that subject. The specific type of subject may be a portrait.
In a possible design, when the processor performs blurring processing on the background image or the foreground image, the processing may specifically be: blurring the background image of the RGB image by using the second depth image, where the blurring degree corresponding to pixel regions with larger depth values in the background image is higher, and fusing the foreground image with the blurred background image to obtain an RGB image with a background blurring effect; or: blurring the foreground image, where the blurring degree corresponding to pixel regions with smaller depth values in the foreground image is higher, and fusing the background image with the blurred foreground image to obtain an RGB image with a foreground blurring effect.
In a possible design, after obtaining the RGB image with the background or foreground blurring effect, the processor may further perform weighted fusion of the RGB image with the background or foreground blurring effect and the original RGB image.
In one possible design, the electronic device may be used for blurring video images.
For example, the first camera is for: shooting a video corresponding to the first scene, wherein the video comprises a plurality of frames of continuous RGB images, and acquiring a first depth image corresponding to each frame of RGB image in the plurality of frames of continuous RGB images. The second camera is for: and acquiring depth information corresponding to each frame of RGB image in the plurality of frames of continuous RGB images. The processor is configured to: for each frame of RGB image in the multiple frames of continuous RGB images, performing densification processing on a first depth image corresponding to the frame of RGB image by using depth information corresponding to the frame of RGB image acquired by the second camera so as to supplement invalid pixel points in the first depth image corresponding to the frame of RGB image, and obtaining a second depth image corresponding to the frame of RGB image; performing foreground and background segmentation on each frame of RGB image by using a second depth image corresponding to each frame of RGB image in the plurality of frames of continuous RGB images to obtain a foreground image and a background image corresponding to each frame of RGB image; and blurring the foreground image or the background image corresponding to each frame of RGB image in the multiple frames of continuous RGB images to obtain a depth image with a background or foreground blurring effect corresponding to each frame of RGB image.
In a possible design, before blurring a foreground image or a background image corresponding to each frame of RGB image in the multiple frames of continuous RGB images, the processor may further perform weighted fusion on N adjacent RGB images in the multiple frames of continuous RGB images, where N is a positive integer greater than 1, and a weighting coefficient corresponding to each frame of RGB image in the N adjacent RGB images is determined based on depth information corresponding to the frame of RGB image acquired by the second camera and/or foreground and background edge information of the frame of RGB image identified by CNN. Then, when blurring the background image or the foreground image, the processor may specifically perform blurring on the background image or the foreground image corresponding to the weighted and fused RGB image.
In a third aspect, an electronic device is provided that includes a first camera, a second camera, at least one processor, and a memory, where the first camera is a binocular camera or a multi-view camera, and the second camera is a time-of-flight (ToF) camera or a structured light camera; the memory is configured to store one or more computer programs; and the one or more computer programs stored in the memory, when executed by the at least one processor, enable the electronic device to implement the technical solution of the first aspect or any of the possible designs of the first aspect of the embodiments of the present application.
In a fourth aspect, a computer-readable storage medium is provided, where the computer-readable storage medium includes a computer program, and when the computer program runs on an electronic device, the electronic device is caused to perform the technical solution of the first aspect or any one of the possible designs of the first aspect of the embodiments of the present application.
In a fifth aspect, there is provided a program product comprising instructions which, when executed on a computer, cause the computer to perform the features of the first aspect or any possible design of the first aspect of the embodiments of the present application.
In a sixth aspect, a circuit system is provided, where the circuit system is configured to generate a first control signal, where the first control signal is used to control a first camera to perform a shooting operation on a first scene, and acquire a first depth image corresponding to the first scene; the circuit system is further configured to generate a second control signal, where the second control signal is used to control the second camera to perform a shooting operation on the first scene, and obtain depth information of the first scene.
Drawings
Fig. 1 is a schematic structural diagram of an electronic device in an embodiment of the present application;
FIG. 2 is a schematic flowchart illustrating an image blurring processing method according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of an image capture scene in an embodiment of the present application;
fig. 4A and 4B are schematic diagrams of a portion of the first depth image before and after pixel densification, respectively;
FIGS. 5A and 5B are schematic diagrams of an RGB image and a foreground and background segmentation edge of the RGB image, respectively;
FIG. 6 is a diagram illustrating a template corresponding to a pixel point;
FIG. 7 is a schematic diagram illustrating the blurring effect of a background image;
FIG. 8 is a schematic flowchart of another image blurring processing method according to an embodiment of the present application;
FIG. 9 is a schematic diagram illustrating the blurring effect of a foreground image;
FIG. 10 is a schematic flowchart of another image blurring processing method according to an embodiment of the present application;
fig. 11A is a schematic view of a portrait shooting scene in an embodiment of the present application;
FIG. 11B is a schematic view of an image taken of the scene shown in FIG. 11A;
FIG. 11C is a diagram illustrating the effect of background blurring of the image shown in FIG. 11B;
fig. 12 is a flowchart illustrating a method for blurring a video image.
Detailed Description
The technical solution in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
At present, when a single lens reflex camera shoots an image with a blurred background, the optical imaging is directly changed by the single lens reflex telephoto lens, so that the image is sharp only within a certain object distance range and is blurred to different degrees outside that range. The blurring effect is adjusted directly through the aperture, the focusing distance (object distance) and the variable focal length of the lens: the blurring effect is stronger when the aperture is larger, the focusing distance is closer or the focal length is longer.
At present, electronic devices such as mobile phones and tablet computers which are widely used and have a photographing function are limited by size, cost, use environment and the like, and the matched lenses basically belong to lenses with small apertures, so that pictures with large aperture photographing effects like a single lens reflex camera are difficult to photograph. In order to realize that images shot by electronic equipment such as a mobile phone and a tablet personal computer can also have a background blurring effect, the images can be subjected to large-aperture simulation processing in the later shooting stage to achieve a certain blurring effect. In the prior art, the following two methods are mainly used for performing background blurring on an image in the later period:
In the first method, two images of the same scene from different viewing angles are simultaneously acquired through the two cameras of a binocular camera, the depth information of the scene is calculated using the binocular stereo vision principle, the foreground and the background of the image are then separated using this depth information, and the background is blurred.
However, the binocular camera is sensitive to the ambient light intensity and depends on the texture characteristics of the image, so that it is difficult to accurately distinguish the foreground from the background when the shot scene is far away, the scene is not illuminated sufficiently or lacks texture (or the texture difference between the foreground and the background is small), and the like, so that the background blurring image shot by the method has poor edge effect of the foreground; in addition, when a picture including consecutive frames such as a moving picture or a video is taken, the stability between adjacent image frames acquired based on the principle of binocular stereopsis is poor, and thus a problem of flickering of the background of the image also often occurs.
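For reference, in the standard binocular stereo model (a textbook relation, not quoted from the patent text) the depth of a point follows from its disparity:

```latex
Z = \frac{f \cdot B}{d}
```

where f is the focal length, B is the baseline between the two cameras and d is the disparity; in textureless or poorly lit regions the disparity d cannot be matched reliably, which is exactly why the resulting depth map is sparse and the foreground edge degrades.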
In the second method, a single camera is used to capture an image, a convolutional neural network (CNN) is then used to detect the edges of the image elements of the subject to be captured (such as a portrait), the foreground and the background are separated based on the detected edges, and finally the background is blurred.
However, in the blurred image processed by the method, the blurring degree of each pixel position in the image background is hardly different, so that the layering sense is weak and the visual effect is poor; further, if the color of the subject is similar to that of the background, it is difficult to accurately distinguish the foreground from the background, which causes a problem that the subject is also blurred or the background is not blurred.
In order to solve the above problems in the prior art, the embodiments of the present application provide an image blurring processing method and an electronic device, so as to improve the visual effect of image blurring. A binocular camera and a time-of-flight (ToF) camera are arranged on the electronic device. When the electronic device shoots an image of a first scene, the binocular camera and the ToF camera are used to shoot the first scene simultaneously: a red-green-blue (RGB) image corresponding to the first scene is captured with the binocular camera, a first depth image corresponding to the first scene is obtained from the binocular parallax of the binocular camera (where the value of each pixel point in the first depth image is a depth value, representing the distance between the corresponding scene point and the electronic device), and at the same time depth information of the first scene is acquired by the ToF camera. Then, the first depth image is densified by combining the depth information of the first scene acquired by the ToF camera, so as to supplement invalid pixel points (i.e., pixel points without depth values) in the first depth image and obtain a second depth image with higher precision (i.e., the second depth image has more valid pixel points than the first depth image). Then, the foreground and background of the RGB image are segmented using the second depth image to obtain a foreground image and a background image. Then, the background image is blurred, where the blurring strength can be higher in places with larger scene depth (i.e., places with larger depth values, which are farther away from the electronic device). Finally, the foreground image and the blurred background image are fused to obtain an RGB image with a background blurring effect. In the embodiment of the application, the ToF camera is combined with the binocular camera: the depth information of the first scene obtained by the ToF camera is used to densify the first depth image shot by the binocular camera, so as to obtain a second depth image with higher precision (that is, higher-precision depth information of the first scene), and the second depth image is used for foreground and background segmentation. In this way the accuracy of foreground and background segmentation can be optimized and the edge effect of background blurring can be improved. In addition, during blurring, the background of the RGB image is blurred using the second depth image, so that the blurring strength is higher in places with larger scene depth, and the blurring effect is closer to a real optical defocus effect. The specific embodiments will be described later.
In the embodiment of the application, if the image shot by the electronic device is a video or a moving picture, the image blurring processing method can be simultaneously executed on each frame of image in the video or the moving picture, so that the moving picture or the video with the background blurring effect is realized. In addition, before blurring the background of each frame of image, time domain smoothing processing may be performed on each frame of image by using high inter-frame stability of the ToF camera (i.e., the difference between adjacent image frames acquired by the ToF camera is small, and inter-frame stability is high), so as to improve the problem that the background flicker exists in the blurring picture shot by the binocular camera in the prior art. The specific embodiments will be described later.
In the embodiment of the application, if a portrait is shot in the first scene, the edges of the shot subject can be further identified and the background segmented by additionally using the CNN, so that the foreground edge effect of the blurred image is further optimized. Meanwhile, when temporal smoothing is performed on continuous multi-frame images, the high inter-frame stability of the CNN can be further exploited to better alleviate the background flicker problem, so that the transition between adjacent frames is more natural. The specific embodiments will be described later.
Of course, the technical concept of the present application can also be applied to a scene in which the image foreground is blurred. Different from shooting an image with a background blurring effect, after the electronic device segments a foreground and a background in a first depth image by using depth information of the first scene acquired by a ToF camera, blurring the foreground image. And the blurring strength can be higher at the place with smaller depth of field, and finally the background image and the blurred foreground image are fused to obtain the depth image with the foreground blurring effect.
The technical scheme of the embodiment of the application can be applied to any electronic equipment with an image shooting function and used for executing the image blurring processing method. The electronic device may be, for example, a mobile phone, a mobile computer, a tablet computer, a camcorder, a Personal Digital Assistant (PDA), a media player, a smart television, a smart wearable device (such as a smart watch, smart glasses, and a smart bracelet), an electronic reader, a handheld game machine, a point of sale (POS), a vehicle-mounted electronic device (a vehicle-mounted computer), and the like. In an embodiment of the present application, a plurality of applications, for example, a camera application, an aesthetic application, a video player application, a music player application, a system setup application, a desktop application, a drawing application, a presentation application, a game application, a telephone application, an email application, an instant messaging application, a photo management application, a browser application, a calendar application, a clock application, a payment application, a health management application, and the like, may be installed in the electronic device. In addition, the image blurring processing method provided by the embodiment of the application is applied to any image shooting scene, such as a scene for shooting a photo, a motion picture or a video.
The following description will be given of a schematic structural diagram of an electronic device applied in the embodiments of the present application, taking the electronic device as a mobile phone as an example.
As shown in fig. 1, the mobile phone 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identity Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors. The execution of the image blurring processing method in the embodiment of the present application may be controlled by the processor 110 or may be completed by calling another component, for example, calling a processing program of the embodiment of the present application stored in the internal memory 121, or calling a processing program of the embodiment of the present application stored in a third-party device through the external memory interface 120, so as to implement the post-blurring processing on the captured image.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the cell phone 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the mobile phone 100 can be realized by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the handset 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including wireless communication of 2G/3G/4G/5G, etc. applied to the handset 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication applied to the mobile phone 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), Bluetooth (BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
The mobile phone 100 can implement the display function through the GPU, the display screen 194, the application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 can be used to display information input by the user (e.g., display text information, voice information, etc.) or information provided to the user (e.g., display captured images, video, etc.) as well as various menus of the cellular phone 100, and can additionally accept user input such as a touch operation by the user.
The display screen 194 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like.
The display screen 194 may further include a touch panel, which is also referred to as a touch screen, a touch-sensitive screen, etc., and may collect contact or non-contact operations (such as operations performed by a user on or near the touch panel using any suitable object or accessory, such as a finger, a stylus, etc., and may also include body-sensing operations; including single-point control operations, multi-point control operations, etc.) on or near the touch panel, and drive the corresponding connection device according to a preset program.
Alternatively, the touch panel may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch direction and gesture of a user, detects signals brought by input operation and transmits the signals to the touch controller; the touch controller receives touch information from the touch detection device, converts the touch information into information that can be processed by the processor, sends the information to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel may be implemented by various types such as a resistive type, a capacitive type, an infrared ray, a surface acoustic wave, and the like, and may also be implemented by any technology developed in the future. Further, the touch panel may cover the display panel, a user may operate on or near the touch panel covered on the display panel according to the content displayed on the display panel (the display content includes, but is not limited to, a soft keyboard, a virtual mouse, virtual keys, icons, etc.), the touch panel detects the operation on or near the touch panel and transmits the operation to the processor 110 to determine the user input, and then the processor 110 provides the corresponding visual output on the display panel according to the user input.
For example, in the embodiment of the present application, after the touch detection device in the touch panel detects a touch operation input by the user, it sends a signal corresponding to the detected touch operation in real time; the touch controller converts the signal into touch point coordinates and sends them to the processor 110; the processor 110 determines, according to the received touch point coordinates, that the touch operation is specifically a click operation on a "shooting" button on the shooting interface of the camera application; the processor 110, in response to the click operation input by the user, controls the binocular camera to shoot a left-view image and a right-view image of the current scene (where the left-view image and the right-view image are both RGB images) and controls the ToF camera to acquire depth information of the current scene; the processor 110 calculates a sparse depth image, namely a low-precision first depth image, from the left-view image and the right-view image based on the parallax of the two cameras of the binocular camera; then, the processor 110 densifies the first depth image by combining the depth information of the first scene acquired by the ToF camera to obtain a dense depth image (having more effective pixel points than the sparse depth image), that is, a second depth image with higher precision; then, the processor 110 performs processing such as segmentation, background or foreground blurring, and foreground-background fusion on the RGB image captured by the binocular camera (the RGB image may be the left-view image, the right-view image, or an RGB image obtained by fusing the left-view image and the right-view image, which is not limited in this embodiment) to obtain an RGB image with a background or foreground blurring effect; finally, the processor 110 may further control the touch panel to visually output the RGB image with the background or foreground blurring effect. Optionally, when the visual output is performed, the RGB image may be displayed fused with the second depth image, for example as a 3D image.
In some embodiments, the cell phone 100 may include 1 or N display screens 194, with N being a positive integer greater than 1.
The mobile phone 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 may be used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats.
In the embodiment of the present application, the mobile phone 100 may include a plurality of cameras 193, for example, a first camera and a second camera to implement a binocular camera, and a third camera to implement a ToF camera. Of course, other cameras may also be included, such as a fourth camera for implementing a structured light camera; the number and types of the cameras are not specifically limited in the present application.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the storage capability of the mobile phone 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications of the cellular phone 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. Wherein the storage program area may store an operating system, software codes of application programs (such as a camera application, an album application, a WeChat application) required for at least one function. The data storage area may store data created during use of the mobile phone 100 (e.g., image data, audio data, a phonebook, etc.), and the like. The internal memory 121 may be used to store computer-executable program code of the image blurring processing method proposed by the embodiment of the present application, and the executable program code includes instructions. The processor 110 may call the computer executable program code of the image blurring processing method stored in the internal memory 121, so as to enable the mobile phone 100 to complete the image blurring processing scheme proposed in the embodiment of the present application.
The internal memory 121 may further store an image obtained after the blurring, and for example, the image after the blurring and the original image (i.e., the image before the blurring) may be stored correspondingly. For example, after detecting the instruction for opening the original image, the mobile phone 100 displays the original image, and a mark may be displayed on the original image, and when the mark is triggered, the mobile phone 100 opens the blurred image (the image after blurring the original image); alternatively, the mobile phone 100 displays the blurred image after detecting the instruction to open the blurred image, displays a mark on the blurred image, and opens the original image (the original image corresponding to the blurred image) when the mark is triggered.
The mobile phone 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playing, recording, etc.
The keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys. Or may be touch keys. The cellular phone 100 may receive a key input, and generate a key signal input related to user setting and function control of the cellular phone 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration cues, as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also respond to different vibration feedback effects for touch operations applied to different areas of the display screen 194. Different application scenes (such as time reminding, receiving information, alarm clock, game and the like) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a SIM card. The SIM card can be attached to and detached from the cellular phone 100 by being inserted into the SIM card interface 195 or being pulled out from the SIM card interface 195. The handset 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a SIM card, etc. The same SIM card interface 195 can be inserted with multiple cards at the same time. The types of the plurality of cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards. The SIM card interface 195 may also be compatible with external memory cards. The mobile phone 100 interacts with the network through the SIM card to implement functions such as communication and data communication. In some embodiments, the handset 100 employs esims, namely: an embedded SIM card. The eSIM card can be embedded in the mobile phone 100 and cannot be separated from the mobile phone 100.
It should be understood that the illustrated structure of the embodiment of the present application does not specifically limit the mobile phone 100. In other embodiments of the present application, the handset 100 may include more or fewer components than shown, or some components may be combined, some components may be separated, or a different arrangement of components may be used. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Referring to fig. 2, a flow diagram of an image blurring processing method provided in the embodiment of the present application is shown, and the method may be applied to the mobile phone 100 shown in fig. 1 or other devices. Taking the method applied to the mobile phone 100 shown in fig. 1 as an example, software codes of the method may be stored in the internal memory 121, and the processor 110 of the mobile phone 100 runs the software codes to implement the image capturing process shown in fig. 2. As shown in fig. 2, the flow of the method includes:
s201: the mobile phone uses a binocular camera and a ToF camera to shoot an image of a first scene, obtains an RGB image and a first depth image corresponding to the first scene based on the binocular camera, and obtains depth information of the first scene based on the ToF camera.
Specifically, the processor 110 in the mobile phone starts the binocular camera and the ToF camera to perform a photographing operation in response to the first instruction. The first instruction is used to instruct the processor 110 to perform an operation of capturing an image, where the first instruction may be an instruction generated by triggering a function in an application (for example, a capturing function in WeChat is started, and a camera is started), or may be an instruction generated based on an input operation performed by a user, where the input operation may be a contact operation (for example, a click operation or a long-press operation) input by the user on the display screen 194, or a somatosensory operation or a non-contact operation input near a mobile phone, or an operation of inputting a voice instruction (for example, "capturing a picture", "123", and the like), and the embodiment of the present invention is not particularly limited.
For example, referring to fig. 3, the user clicks the shooting button below the preview picture on the shooting interface of the camera application; after the touch operation input by the user is detected on the display screen 194, the touch detection device sends a signal corresponding to the detected touch operation to the touch controller in real time, and the touch controller converts the signal into touch point coordinates and sends them to the processor 110; the processor 110 determines, according to the received touch point coordinates, that the touch operation is specifically a click on the "shooting" button on the shooting interface of the camera application, and then, in response to the click operation input by the user, starts the binocular camera and the ToF camera and performs the shooting operation on the first scene.
In this embodiment of the application, the binocular camera includes a first camera and a second camera, and when the binocular camera is started under the control of the processor 110, the first camera and the second camera respectively capture a first image of a first scene at a first viewing angle and a second image of the first scene at a second viewing angle, where the first viewing angle and the second viewing angle are different (for example, a left viewing angle and a right viewing angle, respectively); then, the first camera and the second camera transmit the shot first image and second image to the processor 110, and the processor 110 calculates the first image and the second image based on the binocular disparity principle, so as to obtain a sparse depth map.
In a specific implementation, the processor 110 may process the first image and the second image through a hardware module. For example, the software code that implements functions such as calibrating the parameters of the binocular camera (including intrinsic and extrinsic parameters), correcting the binocular images (including distortion correction and stereo rectification), matching binocular feature points, calculating a disparity map or a depth map, and synthesizing virtual viewpoints from the disparity map or depth map may be hard-coded into a dedicated chip (for example, the ISP). The ISP processes the first image and the second image by running this code to generate the first depth image, which improves the computing speed of the system.
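For illustration only, the following Python sketch shows one possible implementation of the binocular-disparity calculation described above for an already rectified image pair. The matcher parameters, the focal length and baseline values, and the function name are assumptions of this sketch and are not taken from the present application.

import cv2
import numpy as np

# Hypothetical calibration values for this sketch (not from the application):
FOCAL_PX = 1400.0    # focal length of the first camera, in pixels
BASELINE_M = 0.012   # baseline between the first and second camera, in metres

def sparse_depth_from_stereo(left_gray, right_gray):
    # Semi-global block matching on an already rectified image pair.
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=96,      # must be a multiple of 16
        blockSize=7,
        uniquenessRatio=10,
        speckleWindowSize=80,
        speckleRange=2,
    )
    # compute() returns fixed-point disparity scaled by 16.
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    # Z = f * b / d; unmatched pixels keep depth 0 and become the
    # "invalid pixel points" of the first depth image.
    depth = np.zeros_like(disparity)
    valid = disparity > 0
    depth[valid] = FOCAL_PX * BASELINE_M / disparity[valid]
    return depth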
In other embodiments, the binocular camera may be replaced by a multi-view camera, and the multi-view camera may include more cameras than the binocular camera, such as a first camera, a second camera, and a third camera, where the first camera, the second camera, and the third camera are respectively used for capturing images of the first scene at three different viewing angles, such as the first image, the second image, and the third image. After obtaining the first image, the second image, and the third image, the processor 110 performs feature point matching and depth calculation on the first image, the second image, and the third image based on the principle of multi-view parallax to obtain a first depth image.
In the embodiment of the present application, when the ToF camera photographs the first scene, the ToF camera continuously sends light pulses to the target (for example, each object in the first scene), a sensor receives the light returned from the objects, and the processor 110 obtains the distance to each target object from the flight (round-trip) time of the light. Specifically, the ToF camera may include a transmitting module and a receiving module, which, under the control of the processor 110, respectively emit light and receive the light reflected from the surfaces of the objects to be photographed. The processor 110 obtains the depth information of the first scene by calculating, for each point in the first scene, the distance between that point and the mobile phone from the time the light takes to travel from the ToF camera to the point and back. The transmitting module may be a visible light source, a laser light source (such as an infrared light source), or the like, which is not specifically limited in this embodiment of the present application; the receiving module may be a camera module, such as a third camera, or may reuse the first camera or the second camera of the binocular camera. In some embodiments, the depth information of the first scene obtained by the ToF camera may also be represented as a depth map, such as a third depth map.
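As a purely illustrative note on the round-trip relation just described, the per-point distance follows d = c * t / 2; a minimal Python sketch (the function name is an assumption of this sketch):

# Speed of light in vacuum, m/s.
SPEED_OF_LIGHT = 299_792_458.0

def tof_depth(round_trip_time_s):
    # The emitted pulse travels to the object and back, so the one-way
    # distance is half of c * t.
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# Example: a round-trip time of 20 ns corresponds to roughly 3 m.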
In other embodiments, if the object to be photographed is close to the mobile phone, such as in a portrait self-photographing scene, the ToF camera may be replaced by a structured light camera. That is, the mobile phone may simultaneously use a binocular camera and the structured light camera to photograph an image of the first scene, obtain a first depth image of a first resolution based on the binocular disparity of the binocular camera, obtain depth information of the first scene based on the structured light camera, and then process the first depth image using the depth information obtained by the structured light camera. A specific implementation in which the structured light camera acquires the depth information of the first scene is as follows: after the structured light camera is started by the processor 110, a pre-designed pattern is projected as a reference image (coded light source), structured light is projected onto the surfaces of the objects in the first scene, and a camera receives the structured light pattern reflected by the object surfaces; the reflected pattern obtained by the camera is compared with the reference image, the depth value of each point in the first scene is calculated according to the position and degree of deformation of the reflected pattern, and the depth information (also called a third depth map) of the first scene is thereby obtained.
S202: The mobile phone densifies the first depth image by using the depth information of the first scene acquired by the ToF camera, so as to supplement invalid pixel points in the first depth image and obtain a second depth image.
Specifically, the processor 110 in the mobile phone expands the effective pixels into the invalid depth areas around each pixel in the first depth image (that is, the areas where invalid pixels, which have no depth value or no accurate depth value, are located) according to the high-precision depth information obtained by the ToF camera.
For example, please refer to fig. 4A and 4B. Fig. 4A is a partial image of the first depth image before densification; the positions m2, m3, and m4 around position m1 in fig. 4A are invalid pixel points without depth information. Based on the depth information of the first scene obtained by the ToF camera, the depth values at m2, m3, and m4 are matched, and the corresponding depth values are filled in at positions m2, m3, and m4. Other invalid pixel regions can be expanded in a similar way, which is not illustrated here. Fig. 4B is a partial image of the second depth image after densification; compared with fig. 4A, the number of effective pixels in fig. 4B is increased, that is, the depth map is more refined.
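A minimal Python sketch of the densification step, assuming the ToF depth has already been registered (re-projected) to the same viewpoint and resolution as the first depth image; the function name and the use of 0 as the invalid value are assumptions of this sketch:

import numpy as np

def densify(first_depth, tof_depth, invalid_value=0.0):
    # first_depth: H x W depth map from binocular disparity, with
    #              invalid_value at pixels that have no reliable depth.
    # tof_depth:   H x W ToF depth map registered to the same view.
    second_depth = first_depth.copy()
    invalid = (first_depth == invalid_value)
    usable = invalid & (tof_depth > 0)       # fill only where ToF has a value
    second_depth[usable] = tof_depth[usable]
    return second_depth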
S203: the mobile phone performs foreground and background segmentation on the RGB image acquired by the binocular camera by using the second depth image to acquire a first foreground image and a first background image.
The RGB image subjected to foreground and background segmentation may be a left view image or a right view image captured by a binocular camera, or a new RGB image obtained by performing fusion calculation on the left view image and the right view image, which is not limited herein.
The processor 110 in the mobile phone maps each pixel point in the second depth image into the RGB image, so that a depth value corresponding to each pixel point in the RGB image can be obtained. The background region may then be determined from the regions of the RGB image having depth values greater than or equal to a segmentation threshold, and the foreground region may be determined from the regions of the RGB image having depth values less than the segmentation threshold; the foreground edge is then identified, and the foreground and background of the RGB image are segmented based on that edge. In some embodiments, the segmentation threshold may be preset by a designer and stored in the internal memory 121 of the mobile phone. In other embodiments, the segmentation threshold may be obtained by the processor 110 from the depth distribution of the RGB image: foreground points and background points usually differ significantly, the foreground points are generally distributed more densely and their depth values are generally much smaller than those of the background points, so the positions with the largest gradient of depth values are generally the edge positions of the foreground. For example, please refer to fig. 5A and 5B, where fig. 5A is the RGB image corresponding to the captured scene shown in fig. 3, and the edge of the foreground of the image is shown in fig. 5B.
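For illustration only, a possible Python sketch of the depth-threshold-based foreground and background segmentation described above; the percentile heuristic used to derive the segmentation threshold is an assumption of this sketch, not a rule of the present application:

import numpy as np

def split_foreground_background(rgb, second_depth, seg_threshold=None):
    if seg_threshold is None:
        # Hypothetical heuristic: foreground depths cluster at small values,
        # so take a low percentile of the valid depth distribution.
        seg_threshold = np.percentile(second_depth[second_depth > 0], 40)
    foreground_mask = (second_depth > 0) & (second_depth < seg_threshold)
    background_mask = ~foreground_mask
    first_foreground = rgb * foreground_mask[..., None]
    first_background = rgb * background_mask[..., None]
    return first_foreground, first_background, foreground_mask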
S204: the mobile phone performs blurring processing on the first background image, and fuses the first foreground image and the blurred first background image to obtain an RGB image with a background blurring effect.
Specifically, the processor 110 in the mobile phone may design the parameters of the (spatial-domain) filter based on the second depth image, and perform blurring processing on the first background image by using the filter.
In this embodiment, the filter used for the image blurring process may be of various types, such as an averaging filter, a Gaussian filter (Gaussian blur), a median filter, or a bilateral filter, which is not specifically limited in this embodiment of the present application. The filter parameters that need to be designed may include the size of the filter, the weight coefficient corresponding to each pixel point within the filter, and the shape of the filter. The size of the filter and the weight coefficients within the filter determine the blurring degree of backgrounds at different depths and may be designed based on the depth information, while the shape of the filter influences the shape of the blurred light spot and may be preset by a technician.
Illustratively, take the averaging filter as an example. First, a template is given to each pixel point on the background image, where the template contains the neighboring pixel points around the pixel point (i.e., the 8 surrounding pixel points centered on the pixel point form a 3 x 3 filtering template, as shown in fig. 6); then, the average value of all pixels in the template (i.e., the 9 pixels shown in fig. 6) is used to replace the original value of the pixel. In a specific implementation, the size of the template may be adjusted, for example to 5 x 5, 7 x 7, and so on, depending on the required blurring degree, which is not limited in this embodiment of the present application. One possible design is that the higher the required blurring strength, the larger the template.
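A possible Python sketch of depth-dependent averaging-filter blurring, in which farther background regions are blurred with a larger template; the four depth levels and the template sizes are assumptions of this sketch:

import cv2
import numpy as np

def blur_background(background, second_depth, background_mask):
    # Quantise the background depths into four levels and use a larger
    # averaging template (3x3, 7x7, 11x11, 15x15) for deeper levels.
    edges = np.percentile(second_depth[background_mask], [25, 50, 75])
    levels = np.digitize(second_depth, edges)
    out = background.copy()
    for level, k in enumerate((3, 7, 11, 15)):
        blurred = cv2.blur(background, (k, k))   # mean filter, k x k template
        sel = background_mask & (levels == level)
        out[sel] = blurred[sel]
    return out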
In the embodiment of the present application, the blurring degree corresponding to an area with a larger depth value may be higher. Continuing with the example of fig. 5A and 5B, the first background image includes image elements corresponding to three trees from left to right. Since the distance between the tree at the left rear of the portrait and the mobile phone is greater than the distance between the tree at the right rear of the portrait and the mobile phone, when blurring the background image, the blurring degree applied to the image elements corresponding to the tree at the left rear of the portrait may be greater than that applied to the image elements corresponding to the tree at the right rear of the portrait, as shown in fig. 7.
According to the above scheme, the ToF camera is controlled to photograph the first scene while the binocular camera photographs the first scene; the RGB image and the first depth image of the first scene are obtained based on the binocular camera, and high-precision depth information of the first scene is obtained based on the ToF camera. The high-precision depth information obtained by the ToF camera is then used to densify the first depth image captured by the binocular camera to obtain the second depth image, and the second depth image is used to segment the foreground and background of the RGB image. This improves the accuracy of foreground edge identification, so that a good edge segmentation effect can be obtained even in scenes where the foreground and background are similar in texture, color, and the like. In addition, during blurring, the blurring strength can be higher at positions with larger depth values, so that the blurred background has a good sense of layering and a natural transition, and the blurring effect is closer to a real optical defocus effect.
Further, since the blurring process may be performed at a reduced size, the image obtained after final sampling may suffer from problems such as foreground blur and moire. In view of this, after obtaining the RGB image with the background blurring effect, the mobile phone in the embodiment of the present application may further fuse the RGB image with the background blurring effect and the original RGB image (the RGB image before blurring), so as to effectively eliminate the problems of foreground blur and moire.
A specific implementation of the fusion may be a weighted addition of the pixel points in the RGB image with the background blurring effect and the corresponding pixel points in the original RGB image. The weighting coefficients corresponding to pixels at different positions in the scene may differ: for example, the larger the depth value at a position, the larger the weight coefficient of the corresponding pixel in the RGB image with the background blurring effect and the smaller the weight coefficient of the corresponding pixel in the original RGB image. Of course, other fusion modes are possible in a specific implementation, and the embodiment of the present application is not limited thereto.
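A minimal Python sketch of the depth-weighted fusion described above; the linear depth normalisation used for the weights is an assumption of this sketch:

import numpy as np

def fuse_with_original(blurred_rgb, original_rgb, second_depth):
    d = second_depth.astype(np.float32)
    # Normalised depth as the weight of the blurred image: the deeper the
    # pixel, the more the blurred result dominates; near the foreground the
    # original RGB image dominates, which suppresses foreground blur/moire.
    w = (d - d.min()) / max(float(d.max() - d.min()), 1e-6)
    w = w[..., None]
    fused = w * blurred_rgb.astype(np.float32) + (1.0 - w) * original_rgb.astype(np.float32)
    return np.clip(fused, 0, 255).astype(np.uint8)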
In other embodiments of the present application, in step S204, the first foreground image may instead be blurred, so as to obtain an RGB image with a foreground blurring effect. For example, referring to fig. 8, a method flow for blurring the foreground may include:
S301: The mobile phone shoots an image of a first scene by using a binocular camera and a ToF camera, obtains an RGB image based on the shooting of the binocular camera, obtains a first depth image based on the binocular disparity of the binocular camera, and obtains depth information of the first scene based on the ToF camera.
S302: The mobile phone densifies the first depth image by using the depth information of the first scene acquired by the ToF camera, to obtain a second depth image.
S303: the mobile phone uses the second depth image to segment the foreground and background of the RGB image, and a first foreground image and a first background image are obtained.
The specific implementation manners of steps S301 to S303 may refer to the specific implementation manners of steps S201 to S203, which are not described herein again.
S304: the mobile phone performs blurring processing on the first foreground image, and fuses the blurred first foreground image and the first background image to obtain an RGB image with a foreground blurring effect.
When the processor 110 in the mobile phone blurs the foreground, the blurring strength may be higher at positions with smaller depth values, so that the blurring has a sense of layering. The RGB image with the foreground blurring effect is obtained by fusing the blurred foreground with the background. For a specific implementation of blurring the foreground, reference may be made to the above implementation of blurring the background, and details are not described here again.
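For illustration only, a possible Python sketch of foreground blurring in which pixels with smaller depth values receive the stronger averaging template, followed by compositing back over the unblurred background; the two template sizes and the median split are assumptions of this sketch:

import cv2
import numpy as np

def blur_foreground(rgb, second_depth, foreground_mask):
    # Nearer foreground pixels (smaller depth) get the stronger template.
    near = second_depth <= np.percentile(second_depth[foreground_mask], 50)
    strong = cv2.blur(rgb, (15, 15))
    weak = cv2.blur(rgb, (7, 7))
    out = rgb.copy()
    out[foreground_mask & near] = strong[foreground_mask & near]
    out[foreground_mask & ~near] = weak[foreground_mask & ~near]
    return out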
For example, continuing with the example of fig. 5A and 5B, fig. 9 shows the effect of blurring the foreground image: the portrait in the foreground in fig. 9 is blurred, while the trees in the background retain their original sharpness.
According to the above scheme, on the basis of the depth image captured by the binocular camera for the first scene, the ToF camera is used to obtain high-precision depth information of the first scene, and this high-precision depth information is used for densifying the first depth image captured by the binocular camera and for foreground and background segmentation, which improves the accuracy of foreground edge identification and optimizes the blurred edge effect. In addition, during blurring, the foreground is blurred by using the high-precision second depth image, so that the blurring strength is higher at positions with smaller depth values, which improves the sense of layering of the foreground blurring.
Further, in the embodiment of the present application, if the subject photographed in the shooting scene is a subject of a specific type, a CNN may be used to identify the edge of the photographed subject and segment the foreground and background, so as to further optimize the foreground edge effect of the blurred image.
Taking background blurring as an example, referring to fig. 10, when a subject of a specified type is photographed, a flow of the image blurring processing method in the embodiment of the present application may include:
S401: The mobile phone shoots an image of a second scene by using a binocular camera and a ToF camera, obtains an RGB image based on the shooting of the binocular camera, obtains a first depth image based on the binocular disparity of the binocular camera, and obtains depth information of the second scene based on the ToF camera.
For a specific implementation of step S401, reference may be made to the above specific implementation of step 201, and details are not described here.
S402: The mobile phone extracts features of the RGB image by using a trained CNN model, and extracts the feature image corresponding to the target object.
Specifically, the processor 110 uses a plurality of image instances of other subjects to be photographed of the same type as the target object to train the CNN in advance, so as to obtain a trained CNN model, and stores the trained CNN model in the internal memory 121, wherein the model is input as an image and output as a feature map corresponding to the subject to be photographed in the image. After obtaining the RGB image, the processor 110 reads the CNN model from the internal memory 121, runs the CNN model, inputs the RGB image into the CNN model, and calculates the RGB image by the CNN model to output a feature image corresponding to the target object in the RGB image. The target object may be a specific type of subject, such as a human face, and may also be other subjects, such as animals, flowers, and the like, which is not limited in this embodiment of the present application.
Take a portrait as an example. Referring to fig. 11A, user A holds the mobile phone to take a selfie, and the corresponding second scene is the scene where user A is located. As can be seen from fig. 11A, the second scene contains another pedestrian B (located right behind user A) in addition to user A. After capturing the RGB image corresponding to the second scene based on the binocular camera, the processor 110 may perform feature extraction on the RGB image using the CNN model and extract the feature image corresponding to the target object; as shown in fig. 11B, the portrait feature image of user A (the portion indicated by the dotted line in fig. 11B) is extracted.
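The present application does not specify a particular CNN architecture. Purely as a stand-in, the following Python sketch uses a publicly available pretrained segmentation network to extract a person (portrait) mask from an RGB image; the choice of network and the class index are assumptions of this sketch:

import torch
import torchvision
from torchvision import transforms

# DeepLabV3 pretrained on a Pascal-VOC-style label set (class 15 = person);
# it is used here only as a stand-in for the trained CNN model.
model = torchvision.models.segmentation.deeplabv3_resnet50(weights="DEFAULT").eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def portrait_mask(rgb_pil_image):
    # Returns a boolean H x W mask of the pixels labelled as "person".
    x = preprocess(rgb_pil_image).unsqueeze(0)
    with torch.no_grad():
        logits = model(x)["out"][0]        # shape (21, H, W)
    return (logits.argmax(0) == 15).numpy()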
S403: The mobile phone densifies the first depth image by using the depth information of the second scene acquired by the ToF camera, to obtain a second depth image.
For a specific implementation of step S403, reference may be made to the above specific implementation of step 202, and details are not described here.
As an optional implementation, when the processor 110 densifies the first depth image by using the depth information of the second scene acquired by the ToF camera, it may further use the feature map extracted by the CNN model to distinguish the foreground from the background and densify the foreground and the background separately, so as to improve the accuracy of the densification.
S404: The mobile phone identifies the feature image according to the CNN model, and segments the foreground and background in the RGB image to obtain a second foreground image and a second background image.
Specifically, the processor 110 in the mobile phone takes the feature image obtained in step S402 as a foreground image, then identifies an edge of the foreground image, and performs foreground and background segmentation on the RGB image based on the edge to obtain a second foreground image and a second background image.
As an alternative embodiment, in order to further improve the accuracy of foreground and background segmentation, the foreground and background may be segmented by combining the depth information obtained by the ToF camera (or the structured light camera) or the densified second depth image described above. For example, the processor 110 initially takes the feature image obtained in step S402 as the foreground image, identifies the edge of the foreground image, and determines a boundary region between the foreground and background images based on the identified edge (for example, a region within a preset range of the edge). The processor 110 then corrects the pixel points in the boundary region based on the depth information or the second depth image obtained by the ToF camera (or the structured light camera), for example, classifying boundary pixels whose depth value is greater than or equal to a preset value into the background and boundary pixels whose depth value is less than the preset value into the foreground. Finally, the processor 110 re-identifies the edge of the foreground image and performs foreground and background segmentation on the RGB image based on the re-identified edge to obtain the second foreground image and the second background image.
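A possible Python sketch of the depth-based correction of the boundary region described above; the band width and the preset depth threshold are hypothetical values of this sketch:

import cv2
import numpy as np

def refine_foreground_mask(cnn_mask, second_depth, band_px=10, depth_threshold=1.5):
    # Build a band around the CNN foreground edge and re-classify the band
    # pixels by depth (threshold in metres is a hypothetical preset value).
    mask_u8 = cnn_mask.astype(np.uint8)
    kernel = np.ones((band_px, band_px), np.uint8)
    band = (cv2.dilate(mask_u8, kernel) - cv2.erode(mask_u8, kernel)).astype(bool)
    refined = cnn_mask.copy()
    refined[band & (second_depth >= depth_threshold)] = False
    refined[band & (second_depth > 0) & (second_depth < depth_threshold)] = True
    return refined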
S405: The mobile phone blurs the second background image, and fuses the blurred second background image with the second foreground image to obtain an RGB image with a background blurring effect.
The specific implementation manner of step S405 may refer to the specific implementation manner of step S204, and is not described in detail in this embodiment of the application. Fig. 11C is an effect diagram of blurring the background of the image shown in fig. 11B, and it can be seen from fig. 11C that only the face of the user a is clear in the blurred image, and the body of the user a and the pedestrian B in the background are both blurred, so that the visual effect of highlighting the face of the user a can be achieved.
According to the above scheme, on the basis of the binocular camera and the ToF camera, the CNN is further used to identify a photographed subject of a specific type and to intelligently segment the foreground and background, thereby achieving different blurring effects.
For example, referring to the example shown in fig. 11A to 11C, if the blurring process were performed only based on the binocular camera and the ToF camera without the CNN, the depth values of both the face and the body of user A would be small, so the body of user A would be segmented into the foreground and would not be blurred. After the CNN recognition step is added, the face feature map can be recognized, so that the parts other than the recognized portrait can be blurred. Of course, in a specific implementation, the portrait here may also refer to the whole human body, that is, both the face and the body are recognized as the foreground, which is not limited in this embodiment of the present application.
The technical solution of the present application is described above by taking the blurring processing of a single frame image as an example, and in a specific implementation process, the technical solution of the present application is also applicable to a scene in which a continuous multi-frame image (such as a video or a motion picture) is blurred.
If the images shot by the mobile phone are continuous multi-frame images, the mobile phone can execute steps S201 to S204 on each frame of image, so that the blurring effect of each frame of image can be optimized. Taking a video with a background blurring effect as an example, referring to fig. 12, a method for blurring a video according to an embodiment of the present application includes:
S501: The mobile phone starts the binocular camera and the ToF camera to shoot a video, obtains multiple frames of continuous RGB images based on the shooting of the binocular camera, obtains a first depth image corresponding to each frame of RGB image in the multiple frames of continuous RGB images based on the binocular disparity of the binocular camera, and obtains depth information corresponding to each frame of RGB image based on the ToF camera.
S502: For each frame of RGB image in the multiple frames of continuous RGB images, the mobile phone densifies the first depth image corresponding to that frame by using the depth information corresponding to that frame acquired by the ToF camera, so as to supplement the invalid pixel points in the first depth image corresponding to that frame and obtain a second depth image corresponding to that frame.
S503: and the mobile phone performs foreground and background segmentation on each frame of RGB image by using the second depth image corresponding to each frame of RGB image in the plurality of frames of continuous RGB images to obtain a foreground image and a background image corresponding to each frame of RGB image.
S504: and the mobile phone performs blurring processing on the foreground image or the background image corresponding to each frame of RGB image in the multiple frames of continuous RGB images to obtain a background blurring effect or a RGB image with a foreground blurring effect corresponding to each frame of RGB image, wherein the RGB images with the background blurring effect or the foreground blurring effect form a video with the background blurring effect or the foreground blurring effect.
The specific processing method for each frame of RGB image in steps S501 to S504 may refer to the specific processing method for RGB image in steps S201 to S204, and is not described in detail in this application.
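For illustration only, a possible per-frame loop that applies the earlier single-frame sketches to a video, corresponding to steps S502 to S504; the helper function names refer to the illustrative sketches above and are not part of the present application:

def blur_video(frames, first_depths, tof_depths, mode="background"):
    # Apply the single-frame pipeline to every frame of the video; the helper
    # functions are the illustrative sketches given earlier in this text.
    results = []
    for rgb, d1, d_tof in zip(frames, first_depths, tof_depths):
        d2 = densify(d1, d_tof)                                   # S502
        fg, bg, fg_mask = split_foreground_background(rgb, d2)    # S503
        if mode == "background":                                  # S504
            blurred_bg = blur_background(bg, d2, ~fg_mask)
            out = rgb * fg_mask[..., None] + blurred_bg * (~fg_mask)[..., None]
        else:
            out = blur_foreground(rgb, d2, fg_mask)
        results.append(out)
    return results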
Similarly, during blurring, the processor 110 in the mobile phone may blur the background of each frame of RGB image by using the second depth image corresponding to that frame, or the high-precision depth information acquired by the ToF camera, so that the blurring strength is higher at positions with larger depth values and the blurred background has a good sense of layering. Of course, the processor 110 may also blur the foreground of the video; the specific implementation of foreground blurring for each frame of RGB image may refer to the implementation of foreground blurring for the RGB image in steps S301 to S304, and is not described in detail in this embodiment of the present application.
On the basis of the video images captured by the binocular camera, the ToF camera is used to obtain the depth information of the photographed scene, and the high-precision depth information obtained by the ToF camera is used to densify the first depth image corresponding to each frame of RGB image captured by the binocular camera and to segment the foreground and background. This improves the accuracy of video foreground edge identification, and a good edge segmentation effect can be obtained even in scenes where the foreground and background are similar in texture, color, and the like. In addition, in the process of blurring each frame of RGB image, the background or foreground of the RGB image is blurred by using the second depth image (or the high-precision depth information acquired by the ToF camera), so that the blurring has a good sense of layering and a natural transition, which improves the visual experience.
As an optional implementation, after the densification processing and before the blurring processing of each frame image, the mobile phone may further perform time-domain smoothing on each frame image by exploiting the high inter-frame stability of the ToF camera and the CNN.
Specifically, the processor 110 in the mobile phone may perform weighted fusion on N adjacent RGB images in the multiple frames of continuous RGB images, where N is a positive integer greater than 1. The weighting coefficient of each frame of RGB image in the N adjacent RGB images may be determined based on the depth information corresponding to that RGB image acquired by the ToF camera, or according to the foreground and background edge information of that RGB image identified by the CNN, or by considering both the depth information acquired by the ToF camera and the edge information identified by the CNN; this is not limited in this embodiment of the present application. Further, in a specific implementation, all RGB images in the video may be divided into a plurality of groups of RGB images in time order, where each group contains N adjacent RGB images, and the weighted fusion operation is performed on each group, thereby achieving time-domain smoothing of all RGB images. Correspondingly, when blurring the video, the processor 110 performs blurring on the RGB images obtained by the weighted fusion.
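A minimal Python sketch of the time-domain smoothing by weighted fusion of N adjacent frames; the uniform default weights are a placeholder assumption, since the present application derives the weights from ToF depth stability and/or CNN edge stability:

import numpy as np

def temporal_smooth(rgb_frames, weights=None):
    # Weighted fusion of N adjacent RGB frames. In practice the weights would
    # be derived from the ToF depth stability and/or CNN edge stability of
    # each frame; a uniform default is used here as a placeholder.
    n = len(rgb_frames)
    if weights is None:
        weights = np.full(n, 1.0 / n, dtype=np.float32)
    weights = np.asarray(weights, dtype=np.float32).reshape(n, 1, 1, 1)
    stack = np.stack([f.astype(np.float32) for f in rgb_frames])
    fused = (stack * weights).sum(axis=0)
    return np.clip(fused, 0, 255).astype(np.uint8)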
Because the depth information between adjacent RGB images obtained by the ToF camera is relatively stable, and the edge information between adjacent RGB images detected by the CNN is also relatively stable, weighting and fusing adjacent RGB images based on this depth information and/or edge information makes the transition between adjacent RGB images after the weighted fusion more natural. This alleviates the background flicker that appears in the captured RGB images when, in the prior art, a binocular camera alone shoots a video with a blurring effect, owing to its poor inter-frame stability.
In the embodiments provided in the present application, the method provided in the embodiments of the present application is described from the perspective of the electronic device (the mobile phone 100) as an execution subject. In order to implement the functions in the method provided by the embodiment of the present application, the terminal may include a hardware structure and/or a software module, and implement the functions in the form of a hardware structure, a software module, or a hardware structure and a software module. Whether any of the above-described functions is implemented as a hardware structure, a software module, or a hardware structure plus a software module depends upon the particular application and design constraints imposed on the technical solution.
Based on the same technical concept, embodiments of the present application further provide a computer-readable storage medium, where the computer-readable storage medium includes a computer program, and when the computer program is executed on an electronic device, the electronic device is caused to perform all or part of the steps described in the method embodiments shown in fig. 2, fig. 8, fig. 10, and fig. 12.
Based on the same technical concept, the embodiment of the present application further provides a program product, which includes instructions that, when executed on a computer, cause the computer to perform all or part of the steps described in the method embodiments shown in fig. 2, fig. 8, fig. 10, and fig. 12.
Based on the same technical concept, the embodiments of the present application also provide a circuit system, which may be one or more chips, such as a system on a chip. In some embodiments, the circuitry may be the cell phone 100 shown in fig. 1 or a component in the cell phone 100. The circuit system is used for generating a first control signal, and the first control signal is used for controlling a first camera to execute shooting operation on a first scene and acquiring a first depth image corresponding to the first scene; the circuit system is further configured to generate a second control signal, where the second control signal is used to control the second camera to perform a shooting operation on the first scene, and obtain depth information of the first scene.
The various embodiments of the present application can be combined arbitrarily to achieve different technical effects.
The above embodiments are only used to describe the technical solutions of the present application in detail, but the above embodiments are only used to help understanding the method of the embodiments of the present application, and should not be construed as limiting the embodiments of the present application. Modifications and substitutions that may be readily apparent to those skilled in the art are intended to be included within the scope of the embodiments of the present application.
As used in the above embodiments, the term "when …" may be interpreted to mean "if …" or "after …" or "in response to a determination of …" or "in response to a detection of …", depending on the context. Similarly, depending on the context, the phrase "at the time of determination …" or "if (a stated condition or event) is detected" may be interpreted to mean "if the determination …" or "in response to the determination …" or "upon detection (a stated condition or event)" or "in response to detection (a stated condition or event)".
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the application to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, fiber optic, digital subscriber line) or wirelessly (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk), among others.
The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the exemplary discussions above are not intended to be exhaustive or to limit the application to the precise forms disclosed. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the application and its practical applications, to thereby enable others skilled in the art to best utilize the application and various embodiments with various modifications as are suited to the particular use contemplated.

Claims (18)

1. An image blurring processing method is applied to an electronic device, wherein a first camera and a second camera are arranged on the electronic device, the first camera is a binocular camera or a multi-view camera, and the second camera is a time-of-flight (ToF) camera or a structured light camera, and the method comprises the following steps:
responding to a first instruction, starting the first camera and the second camera to execute a shooting operation on a first scene, wherein the first camera acquires a red, green, blue (RGB) image and a first depth image corresponding to the first scene, and the second camera acquires depth information of the first scene;
carrying out densification processing on the first depth image by using the depth information of the first scene so as to supplement invalid pixel points in the first depth image to obtain a second depth image, wherein the number of the invalid pixel points in the second depth image is less than that of the invalid pixel points in the first depth image;
performing foreground and background segmentation on the RGB image by using the second depth image to obtain a foreground image and a background image;
and carrying out blurring processing on the background image or the foreground image to obtain an RGB image with a background or foreground blurring effect.
2. The method of claim 1, wherein using the second depth image for foreground and background segmentation of an RGB image comprises:
segmenting the foreground and the background in the RGB image by utilizing the second depth image and a trained convolutional neural network (CNN) model; the trained CNN model takes as input an RGB image whose photographed subject is a subject of the specific type and outputs the RGB image part corresponding to the subject in the image.
3. The method of claim 2, wherein the particular type of subject is a portrait.
4. A method according to any one of claims 1 to 3, wherein blurring the background image or the foreground image to obtain a RGB image of background or foreground blurring effect comprises:
blurring the background image of the RGB image by utilizing the second depth image, wherein the blurring degree corresponding to a pixel area with a larger depth value in the background image is higher; fusing the foreground image and the blurred background image to obtain an RGB image with a background blurring effect; or
Blurring the foreground image, wherein the blurring degree corresponding to a pixel region with a smaller depth value in the foreground image is higher; and fusing the background image and the foreground image after blurring to obtain an RGB image with a foreground blurring effect.
5. A method as claimed in any one of claims 1 to 3, wherein after obtaining the RGB image of the background or foreground blurring effect, further comprising:
and carrying out weighted fusion on the RGB image with the background or foreground blurring effect and the RGB image.
6. The method of any one of claims 1-5, wherein the first camera acquiring the RGB image and the first depth image corresponding to the first scene comprises:
the first camera shoots a video corresponding to the first scene, wherein the video comprises a plurality of frames of continuous RGB images and acquires a first depth image corresponding to each frame of RGB image in the plurality of frames of continuous RGB images;
the second camera obtains depth information of the first scene, including:
the second camera acquires depth information corresponding to each frame of RGB image in the multiple frames of continuous RGB images;
performing densification processing on the first depth image by using the depth information of the first scene to obtain a second depth image, including:
for each frame of RGB image in the multiple frames of continuous RGB images, performing densification processing on a first depth image corresponding to the frame of RGB image by using depth information corresponding to the frame of RGB image acquired by the second camera so as to supplement invalid pixel points in the first depth image corresponding to the frame of RGB image, and obtaining a second depth image corresponding to the frame of RGB image;
performing foreground and background segmentation on the RGB image by using the second depth image to obtain a foreground image and a background image, wherein the foreground image and the background image comprise:
performing foreground and background segmentation on each frame of RGB image by using a second depth image corresponding to each frame of RGB image in the plurality of frames of continuous RGB images to obtain a foreground image and a background image corresponding to each frame of RGB image;
blurring the background image or the foreground image to obtain an RGB image with a background or foreground blurring effect, wherein the blurring process comprises the following steps:
and blurring the foreground image or the background image corresponding to each frame of RGB image in the multiple frames of continuous RGB images to obtain a depth image with a background or foreground blurring effect corresponding to each frame of RGB image.
7. The method as claimed in claim 6, wherein before blurring the foreground image or the background image corresponding to each frame of RGB images in the plurality of frames of consecutive RGB images, the method further comprises:
performing weighted fusion on N adjacent RGB images in the multiple frames of continuous RGB images, wherein N is a positive integer greater than 1, and a weighting coefficient corresponding to each frame of RGB image in the N adjacent RGB images is determined based on depth information corresponding to the frame of RGB image acquired by the second camera and/or foreground and background edge information of the frame of RGB image identified by CNN;
blurring the background image or the foreground image, including:
and blurring the background image or the foreground image corresponding to the RGB image after weighting and fusion.
8. An electronic device comprising a first camera, a second camera, and at least one processor, wherein the first camera is a binocular camera or a multi-view camera, and the second camera is a time-of-flight (ToF) camera or a structured light camera;
the processor is used for responding to a first instruction, and starting the first camera and the second camera to carry out shooting operation on a first scene;
the first camera is used for executing shooting operation on the first scene to acquire a red, green and blue (RGB) image and a first depth image corresponding to the first scene;
the second camera is used for executing shooting operation on the first scene and acquiring depth information of the first scene;
the processor is further configured to perform densification processing on the first depth image by using the depth information of the first scene to supplement invalid pixel points in the first depth image, so as to obtain a second depth image, where the number of the invalid pixel points in the second depth image is less than that of the invalid pixel points in the first depth image;
performing foreground and background segmentation on the RGB image by using the second depth image to obtain a foreground image and a background image;
and carrying out blurring processing on the background image or the foreground image to obtain an RGB image with a background or foreground blurring effect.
9. The electronic device of claim 8, wherein the processor, when performing foreground-background segmentation on the RGB image using the second depth image, is specifically configured to:
segmenting the foreground and the background in the RGB image by utilizing the second depth image and a trained convolutional neural network (CNN) model; the trained CNN model takes as input an RGB image whose photographed subject is a subject of the specific type and outputs the RGB image part corresponding to the subject in the image.
10. The electronic device of claim 8, wherein the particular type of subject is a portrait.
11. The electronic device according to any of claims 8-10, wherein the processor, when blurring the background image or the foreground image, is specifically configured to:
blurring the background image of the RGB image by utilizing the second depth image, wherein the blurring degree corresponding to a pixel area with a larger depth value in the background image is higher; fusing the foreground image and the blurred background image to obtain an RGB image with a background blurring effect; or
Blurring the foreground image, wherein the blurring degree corresponding to a pixel region with a smaller depth value in the foreground image is higher; and fusing the background image and the foreground image after blurring to obtain an RGB image with a foreground blurring effect.
12. The electronic device of any of claims 8-10, wherein the processor is further to:
and after obtaining the RGB image with the background or foreground blurring effect, carrying out weighted fusion on the RGB image with the background or foreground blurring effect and the RGB image.
13. The electronic device of any of claims 8-12, wherein the first camera is to: shooting a video corresponding to the first scene, wherein the video comprises a plurality of frames of continuous RGB images, and acquiring a first depth image corresponding to each frame of RGB image in the plurality of frames of continuous RGB images;
the second camera is for: acquiring depth information corresponding to each frame of RGB image in the plurality of frames of continuous RGB images;
the processor is configured to: for each frame of RGB image in the multiple frames of continuous RGB images, performing densification processing on a first depth image corresponding to the frame of RGB image by using depth information corresponding to the frame of RGB image acquired by the second camera so as to supplement invalid pixel points in the first depth image corresponding to the frame of RGB image, and obtaining a second depth image corresponding to the frame of RGB image; performing foreground and background segmentation on each frame of RGB image by using a second depth image corresponding to each frame of RGB image in the plurality of frames of continuous RGB images to obtain a foreground image and a background image corresponding to each frame of RGB image; and blurring the foreground image or the background image corresponding to each frame of RGB image in the multiple frames of continuous RGB images to obtain a depth image with a background or foreground blurring effect corresponding to each frame of RGB image.
14. The electronic device of claim 13, wherein the processor is further configured to:
before blurring a foreground image or a background image corresponding to each frame of RGB image in the multiple frames of continuous RGB images, performing weighted fusion on N adjacent RGB images in the multiple frames of continuous RGB images, wherein N is a positive integer greater than 1, and a weighting coefficient corresponding to each frame of RGB image in the N adjacent RGB images is determined based on depth information corresponding to the frame of RGB image acquired by the second camera and/or foreground and background edge information of the frame of RGB image identified by CNN;
when the processor performs blurring processing on the background image or the foreground image, the processor is specifically configured to: and blurring the background image or the foreground image corresponding to the RGB image after weighting and fusion.
15. An electronic device comprising a first camera, a second camera, at least one processor, and a memory, wherein the first camera is a binocular camera or a multi-view camera, and the second camera is a time-of-flight (ToF) camera or a structured light camera;
the memory for storing one or more computer programs; the one or more computer programs stored in the memory, when executed by the at least one processor, enable the electronic device to implement the method of any of claims 1-7.
16. A computer-readable storage medium, comprising a computer program which, when run on an electronic device, causes the electronic device to perform the method of any of claims 1 to 7.
17. A program product comprising instructions which, when run on a computer, cause the computer to perform the method of any one of claims 1-7.
18. A circuit system, characterized in that:
the circuit system is used for generating a first control signal, and the first control signal is used for controlling a first camera to execute shooting operation on a first scene and acquiring a first depth image corresponding to the first scene;
the circuit system is further configured to generate a second control signal, where the second control signal is used to control the second camera to perform a shooting operation on the first scene, and obtain depth information of the first scene.
CN201910880232.3A 2019-09-18 2019-09-18 Image blurring processing method and electronic equipment Pending CN112614057A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910880232.3A CN112614057A (en) 2019-09-18 2019-09-18 Image blurring processing method and electronic equipment

Publications (1)

Publication Number Publication Date
CN112614057A true CN112614057A (en) 2021-04-06

Family

ID=75224225

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910880232.3A Pending CN112614057A (en) 2019-09-18 2019-09-18 Image blurring processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN112614057A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012049022A1 (en) * 2010-10-14 2012-04-19 Thomson Licensing Method and device for digital image processing
US20150022649A1 (en) * 2013-07-16 2015-01-22 Texas Instruments Incorporated Controlling Image Focus in Real-Time Using Gestures and Depth Sensor Data
CN106993112A (en) * 2017-03-09 2017-07-28 广东欧珀移动通信有限公司 Background-blurring method and device and electronic installation based on the depth of field
CN107085825A (en) * 2017-05-27 2017-08-22 成都通甲优博科技有限责任公司 Image weakening method, device and electronic equipment
CN109146767A (en) * 2017-09-04 2019-01-04 成都通甲优博科技有限责任公司 Image weakening method and device based on depth map
CN108154465A (en) * 2017-12-19 2018-06-12 北京小米移动软件有限公司 Image processing method and device
CN108322639A (en) * 2017-12-29 2018-07-24 维沃移动通信有限公司 A kind of method, apparatus and mobile terminal of image procossing
CN109068119A (en) * 2018-09-18 2018-12-21 信利光电股份有限公司 A kind of camera module structure
CN110097590A (en) * 2019-04-24 2019-08-06 成都理工大学 Color depth image repair method based on depth adaptive filtering

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113301320A (en) * 2021-04-07 2021-08-24 维沃移动通信(杭州)有限公司 Image information processing method and device and electronic equipment
CN113610884A (en) * 2021-07-08 2021-11-05 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
US11863729B2 (en) * 2021-09-21 2024-01-02 Qualcomm Incorporated Systems and methods for generating synthetic depth of field effects
US20230091313A1 (en) * 2021-09-21 2023-03-23 Qualcomm Incorporated Systems and methods for generating synthetic depth of field effects
WO2023049651A1 (en) * 2021-09-21 2023-03-30 Qualcomm Incorporated Systems and methods for generating synthetic depth of field effects
CN114125296A (en) * 2021-11-24 2022-03-01 广东维沃软件技术有限公司 Image processing method, image processing device, electronic equipment and readable storage medium
CN114245011A (en) * 2021-12-10 2022-03-25 荣耀终端有限公司 Image processing method, user interface and electronic equipment
CN114245011B (en) * 2021-12-10 2022-11-08 荣耀终端有限公司 Image processing method, user interface and electronic equipment
CN114363489A (en) * 2021-12-29 2022-04-15 珠海惠中智能技术有限公司 Augmented reality system with camera and eye display device direct coupling
CN114363489B (en) * 2021-12-29 2022-11-15 珠海惠中智能技术有限公司 Augmented reality system with camera and eye display device direct coupling
CN115375827A (en) * 2022-07-21 2022-11-22 荣耀终端有限公司 Illumination estimation method and electronic equipment
CN115375827B (en) * 2022-07-21 2023-09-15 荣耀终端有限公司 Illumination estimation method and electronic equipment
CN116703995A (en) * 2022-10-31 2023-09-05 荣耀终端有限公司 Video blurring processing method and device
CN116703995B (en) * 2022-10-31 2024-05-14 荣耀终端有限公司 Video blurring processing method and device
CN115760986B (en) * 2022-11-30 2023-07-25 北京中环高科环境治理有限公司 Image processing method and device based on neural network model
CN115760986A (en) * 2022-11-30 2023-03-07 北京中环高科环境治理有限公司 Image processing method and device based on neural network model

Similar Documents

Publication Publication Date Title
CN112614057A (en) Image blurring processing method and electronic equipment
CN112333380B (en) Shooting method and equipment
AU2020250124B2 (en) Image processing method and head mounted display device
CN107580209B (en) Photographing imaging method and device of mobile terminal
KR20150077646A (en) Image processing apparatus and method
US20220086360A1 (en) Big aperture blurring method based on dual cameras and tof
CN114092364A (en) Image processing method and related device
CN109302632B (en) Method, device, terminal and storage medium for acquiring live video picture
CN111541907A (en) Article display method, apparatus, device and storage medium
CN112085647B (en) Face correction method and electronic equipment
US20240153209A1 (en) Object Reconstruction Method and Related Device
KR20200117695A (en) Electronic device and method for controlling camera using external electronic device
CN111447389A (en) Video generation method, device, terminal and storage medium
CN110266957A (en) Image shooting method and mobile terminal
CN107977636B (en) Face detection method and device, terminal and storage medium
CN113741681A (en) Image correction method and electronic equipment
CN113711123B (en) Focusing method and device and electronic equipment
CN110807769B (en) Image display control method and device
CN113364970A (en) Imaging method of non-line-of-sight object and electronic equipment
CN115150542B (en) Video anti-shake method and related equipment
CN111127541A (en) Vehicle size determination method and device and storage medium
CN113850709A (en) Image transformation method and device
CN113364969A (en) Imaging method of non-line-of-sight object and electronic equipment
CN114257737B (en) Shooting mode switching method and related equipment
CN114302063B (en) Shooting method and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination