WO2021136078A1 - Image processing method, image processing system, computer readable medium, and electronic apparatus - Google Patents

Image processing method, image processing system, computer readable medium, and electronic apparatus Download PDF

Info

Publication number
WO2021136078A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
blurred
camera component
camera
transition area
Prior art date
Application number
PCT/CN2020/139236
Other languages
French (fr)
Chinese (zh)
Inventor
王照顺
Original Assignee
RealMe重庆移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by RealMe重庆移动通信有限公司 filed Critical RealMe重庆移动通信有限公司
Publication of WO2021136078A1 publication Critical patent/WO2021136078A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules

Definitions

  • the embodiments of the present application relate to the field of image processing technology, and more specifically, to an image processing method, an image processing device, a computer-readable medium, and an electronic device.
  • the background area can be blurred, for example, the "portrait mode" can be used to take a photo with a blurred background.
  • smart mobile terminal devices such as mobile phones and tablet computers generally rely on image processing algorithms, for example binocular stereo vision matching algorithms, to blur the background. Such algorithms place very high requirements on the quality of the input images: when the input images are inaccurate, feature points are easily mismatched and the depth information is computed incorrectly, which leads to problems such as falsely blurred or missed regions in the blurred image.
  • the embodiments of the present application provide an image processing method, an image processing device, a computer-readable medium, and an electronic device, which are beneficial to improving the image quality of a blurred image.
  • an image processing method comprising: in response to a first trigger operation, activating a first camera component and a second camera component, and synchronizing the first camera component and the second camera component; and obtaining a first exposure parameter of the first camera component for the current scene, and querying a preset parameter table based on the first exposure parameter to obtain a second exposure parameter corresponding to the second camera component;
  • the first camera component executes the first exposure parameter to obtain a first image;
  • the second camera component executes the second exposure parameter to obtain a second image, so as to construct a blurred image corresponding to the current scene based on the first image and the second image.
  • in a second aspect, an image processing device includes: a synchronization execution module, configured to activate a first camera component and a second camera component in response to a first trigger operation and to synchronize the two camera components; a parameter query module, configured to obtain a first exposure parameter of the first camera component for the current scene and to query a preset parameter table based on the first exposure parameter to obtain a second exposure parameter corresponding to the second camera component; and a parameter execution module, used for the first camera component to execute the first exposure parameter to obtain a first image and for the second camera component to execute the second exposure parameter to obtain a second image, so as to construct a blurred image corresponding to the current scene based on the first image and the second image.
  • a wireless communication terminal including: one or more processors; and a storage device configured to store one or more programs which, when executed by the one or more processors, cause the one or more processors to perform the method of the foregoing first aspect.
  • a computer-readable medium storing computer software instructions for performing the method of the foregoing first aspect, including a program designed to execute that method.
  • FIG. 1 shows a schematic diagram of an image processing method according to an embodiment of the present application.
  • FIG. 2 shows a schematic diagram of a method for constructing a blurred image according to an embodiment of the present application.
  • FIG. 3 shows a schematic diagram of a feature point matching method according to an embodiment of the present application.
  • FIG. 4 shows a schematic diagram of a depth information calculation method according to an embodiment of the present application.
  • FIG. 5 shows a schematic diagram of a first image collected by a first camera component of an embodiment of the present application.
  • FIG. 6 shows a schematic diagram of a depth image corresponding to the first image in an embodiment of the present application.
  • FIG. 7 shows a schematic diagram of a blurred image after fusion processing in an embodiment of the present application.
  • FIG. 8 shows a schematic diagram of a blurred transition area in an embodiment of the present application.
  • FIG. 9 shows a schematic diagram of the effect after optimization processing of the blurred transition area in an embodiment of the present application.
  • FIG. 10 shows a schematic diagram of the composition of an image processing device according to an embodiment of the present application.
  • FIG. 11 shows a schematic block diagram of a computer system of an electronic device according to an embodiment of the present application.
  • compared with an existing camera such as a single-lens reflex (SLR) camera, electronic devices such as mobile phones and tablet computers are limited by the thickness of the device and cannot accommodate a professional camera lens.
  • moreover, the cameras of such electronic devices mostly use a fixed aperture, so the hardware cannot perform optical zoom or directly capture background blur comparable to that of an SLR.
  • a picture with a blurring effect therefore requires image processing algorithms to blur the background, for example a binocular stereo vision matching algorithm.
  • such an algorithm places very high requirements on the quality of the dual-camera input images.
  • when the quality of the input images is low, accurate depth information cannot be obtained, leading to problems such as falsely blurred or missed regions during blurring.
  • for example, if the images captured by the two cameras do not match, the brightness of the two images is inconsistent, or the outline definition of the input images is poor, the background blur may be directly rendered inaccurate.
  • this exemplary embodiment provides an image processing method, which can be applied to smart terminal devices equipped with camera functions such as mobile phones and tablet computers.
  • FIG. 1 shows a schematic diagram of an image processing method according to an embodiment of the present application. As shown in FIG. 1, the method includes some or all of the following:
  • Step S11: In response to a first trigger operation, activate a first camera component and a second camera component, and synchronize the two camera components;
  • Step S12: Obtain a first exposure parameter of the first camera component for the current scene, and query a preset parameter table based on the first exposure parameter to obtain a second exposure parameter corresponding to the second camera component;
  • Step S13: The first camera component executes the first exposure parameter to obtain a first image, and the second camera component executes the second exposure parameter to obtain a second image, so as to construct a blurred image corresponding to the current scene based on the first image and the second image.
  • the above-mentioned image processing method can be applied to an electronic device configured with at least two camera components.
  • the electronic device can be a mobile smart terminal device such as a mobile phone and a tablet computer.
  • the electronic device can be equipped with two camera components, such as a higher-resolution main camera and a wide-angle camera. Or, it may be equipped with three, four, or five camera components, for example a higher-resolution main camera, a depth camera, a macro camera, a wide-angle camera, and a black-and-white camera.
  • the following embodiments take a main camera and a wide-angle camera as examples to describe the image processing method.
  • the above-mentioned first camera component may be a main camera
  • the second camera component may be a wide-angle camera.
  • the above-mentioned first trigger operation may be an operation of the user opening and entering the camera application on the terminal, or a touch operation of entering the background blur shooting mode.
  • the first camera component and the second camera component can be started.
  • the first camera component can be started first, and the image of the current scene collected by the first camera component can be recognized to obtain the current scene mode, and another corresponding camera component can then be started as the second camera component based on the scene mode.
  • the two camera components can be synchronized.
  • the electronic device may send a synchronization control signal to the second camera component according to the current state information of the first camera component.
  • after the second camera component receives the synchronization control signal, it can achieve hardware frame synchronization with the first camera component according to the synchronization control signal.
  • the synchronization control signal may include a clock signal or a timing task signal.
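  • The frame synchronization described above (a synchronization control signal derived from the first camera component's state, e.g. a clock or timing-task signal) can be sketched as follows. This is a minimal illustrative model, not the patent's implementation; all class and function names are hypothetical:

```python
class CameraComponent:
    """Hypothetical camera that records the clock tick at which each frame starts."""

    def __init__(self, name):
        self.name = name
        self.frame_starts = []

    def on_sync_signal(self, tick):
        # Begin exposing a new frame on this clock tick.
        self.frame_starts.append(tick)


def run_synchronized(primary, secondary, n_frames):
    """The device derives a clock from the primary camera's state and forwards
    it to the secondary camera, so both start every frame on the same tick."""
    for tick in range(n_frames):
        primary.on_sync_signal(tick)    # primary exposes frame
        secondary.on_sync_signal(tick)  # synchronization control signal relayed
    # Frame synchronization holds when both cameras started every frame together.
    return primary.frame_starts == secondary.frame_starts
```

In a real device this signal would be a hardware strobe or shared sensor clock rather than a software loop; the sketch only shows the invariant that synchronization is meant to guarantee.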
  • the parameter table between the two camera components can be configured in advance according to the hardware characteristics and imaging characteristics of the two camera components.
  • the parameter table may record the correspondence between the shooting parameters of the first camera component and the second camera component.
  • the parameter table may include parameters such as shutter speed, exposure, brightness, sensitivity, white balance, and color temperature.
  • the electronic device can read the first exposure parameter of the first camera component for the current scene, and use the first exposure parameter to query the preset parameter table, so as to obtain the corresponding second exposure parameter for the current scene.
  • the exposure parameters may be AE (Auto Exposure) parameters determined by the camera components for the current scene.
  • after the first exposure parameter of the first camera component and the corresponding second exposure parameter of the second camera component are determined for the current moment and the current scene, the first exposure parameter can be executed on the first camera component while the second exposure parameter is executed on the second camera component, so that the two camera components simultaneously collect images of the current scene and obtain the first image and the second image.
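  • The preset-parameter-table lookup can be sketched as a simple mapping from the first camera's exposure reading to the second camera's setting. The table values and the nearest-entry fallback below are purely illustrative assumptions, not calibration data from this application:

```python
# Hypothetical preset parameter table: maps an exposure reading of the first
# (main) camera to the matching setting for the second (wide-angle) camera,
# pre-calibrated from the two modules' hardware and imaging characteristics.
PARAM_TABLE = {
    # first_exposure_ms: second_exposure_ms (illustrative values only)
    10: 12,
    20: 24,
    33: 40,
}


def lookup_second_exposure(first_exposure_ms):
    """Return the second camera component's exposure; fall back to the
    nearest calibrated entry when the exact value is not in the table."""
    if first_exposure_ms in PARAM_TABLE:
        return PARAM_TABLE[first_exposure_ms]
    nearest = min(PARAM_TABLE, key=lambda k: abs(k - first_exposure_ms))
    return PARAM_TABLE[nearest]
```

A production table would also carry shutter speed, sensitivity, white balance, and color temperature columns, as the text above notes.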
  • the construction of the blurred image corresponding to the current scene based on the first image and the second image in the above step S13 may include:
  • Step S21: Perform feature point matching on the first image and the second image to align the first image and the second image.
  • Step S22: Calculate the parallax value of the first camera component and the second camera component according to the aligned first image and second image, so as to calculate the depth data of each feature point based on the parallax value.
  • Step S23: Construct a depth image based on the depth data.
  • Step S24: Perform image segmentation on the first image according to the depth image to obtain a foreground image and a background image.
  • Step S25: Perform blurring processing on the background image to obtain a blurred background image, and perform image fusion on the blurred background image and the foreground image to obtain a blurred image corresponding to the current scene.
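  • Steps S24 and S25 above can be sketched end to end on a tiny grayscale "image" represented as nested lists. The 3x3 box blur is only a stand-in for whatever blurring algorithm an implementation actually uses (e.g. Gaussian blur), and the depth threshold plays the role of the subject depth value:

```python
def box_blur(img):
    """3x3 mean filter with edge clamping; a stand-in for the real blur."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Gather the clamped 3x3 neighborhood and average it.
            vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = sum(vals) / 9.0
    return out


def bokeh(image, depth, threshold):
    """Steps S24-S25 sketch: segment by depth, blur the background, fuse.
    Pixels nearer than `threshold` are foreground and kept sharp."""
    blurred = box_blur(image)
    return [[image[y][x] if depth[y][x] < threshold else blurred[y][x]
             for x in range(len(image[0]))] for y in range(len(image))]
```

A real pipeline would of course operate on full-resolution images and use a smoother foreground/background boundary (see the blurred transition area discussed later).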
  • depth information calculation may be performed based on a binocular ranging method.
  • a spatial coordinate system can be established, in which an epipolar plane is constructed according to the first camera component, the second camera component, and the target point; the epipolar lines where the epipolar plane intersects the first image and the second image can then be obtained; and the feature points in the second image corresponding to the feature points in the first image can be determined based on the epipolar lines, so as to align the first image and the second image.
  • C1 and C2 are two cameras, and P is a point in space, such as any point in the current scene being photographed.
  • the point P and the two camera center points C1 and C2 form a plane PC1C2 in the three-dimensional space, which serves as an epipolar plane.
  • the epipolar plane intersects the two images m1 and m2 collected by the two cameras in two straight lines respectively, and these two straight lines are called epipolar lines.
  • the imaging point of point P in camera C1 is P1
  • the imaging point in camera C2 is P2. According to the definition of the epipolar constraint, P2 must lie on the epipolar line in the figure; therefore, the point P2 corresponding to P1 can be searched for along the epipolar line.
  • each feature point or pixel point in the first image and the second image can be matched, so as to align the two images.
  • the search range of feature points can be appropriately widened to increase matching robustness and improve the accuracy of feature point matching.
  • the position where a point in space is respectively imaged on the two images can be matched.
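  • Searching for the corresponding point along the epipolar line can be sketched for the rectified case, where epipolar lines coincide with image scanlines. The sum-of-absolute-differences cost and the window/disparity limits below are illustrative choices, not the patent's matcher:

```python
def match_along_epipolar(left_row, right_row, x, window=1, max_disp=4):
    """Find the pixel on the right image's epipolar line (here: the same
    scanline of rectified images) that best matches left_row[x], by
    minimizing the sum of absolute differences over a small window."""
    def cost(xr):
        # Compare a (2*window+1)-pixel neighborhood around x and xr,
        # clamping indices at the image border.
        return sum(abs(left_row[min(max(x + k, 0), len(left_row) - 1)]
                       - right_row[min(max(xr + k, 0), len(right_row) - 1)])
                   for k in range(-window, window + 1))

    # Candidate positions: shift leftward by at most max_disp pixels.
    candidates = range(max(0, x - max_disp), x + 1)
    best = min(candidates, key=cost)
    return x - best  # disparity: horizontal shift between the two views
```

Widening `max_disp` corresponds to the "appropriately widened search range" mentioned above: it costs more comparisons but tolerates larger baselines and calibration error.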
  • the disparity is calculated using the aligned first image and second image.
  • P is a certain point on the object to be measured, such as a person, animal or any point on a certain object in the currently collected image
  • f is the focal length of the camera
  • OR and OT are two cameras respectively
  • the imaging points of point P on the photoreceptors of the two cameras are P and P′, respectively
  • Z is the depth information.
  • the distance between the two imaging points P and P′ is dis; by similar triangles, the formula can include: dis / B = (Z - f) / Z
  • the depth information Z can be obtained through formula transformation, and the formula can include: Z = f · B / (B - dis), where (B - dis) is the disparity between the two imaging points
  • f is the focal length of the camera
  • B is the baseline length of the dual camera
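  • The binocular ranging relation can be checked numerically with the standard formula Z = f · B / d, where d is the disparity between the two imaging points (B - dis in the notation above). Units are whatever the caller uses consistently; the values below are made up for illustration:

```python
def depth_from_disparity(f_px, baseline, disparity_px):
    """Binocular ranging: similar triangles give Z = f * B / d, so depth is
    inversely proportional to the disparity between the two imaging points.
    `f_px` and `disparity_px` are in pixels; Z comes out in baseline units."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return f_px * baseline / disparity_px
```

Note the practical consequence: as disparity shrinks (distant objects), a one-pixel matching error causes a large depth error, which is why accurate feature point matching matters so much for the blur.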
  • a depth image with a certain resolution is output according to the depth information Z of each feature point obtained in the foregoing steps.
  • the depth image can be an 8-bit grayscale image.
  • the obtained depth information Z of each feature point can be normalized (mean-removed) and then mapped to the range [0, 255]; the mapped value is the pixel value of the grayscale image. In this way, the depth image corresponding to the first image and the second image can be obtained.
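  • The mapping of raw depth values to an 8-bit grayscale depth image can be sketched as follows. For simplicity this uses a min/max normalization rather than the mean-removal step described above, so it is an approximation of the procedure, not a faithful copy:

```python
def depth_to_gray(depths):
    """Map raw depth values to [0, 255] so they can be stored as pixel
    values of an 8-bit grayscale depth image (larger value = farther here)."""
    lo, hi = min(depths), max(depths)
    if hi == lo:
        # Degenerate scene: every point at the same depth.
        return [0 for _ in depths]
    return [round((z - lo) * 255 / (hi - lo)) for z in depths]
```

The same per-value mapping applied over a 2-D grid of per-pixel depths yields the depth image of FIG. 6.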
  • FIG. 6 is the depth image of FIG. 5.
  • according to the depth image, the subject depth value X is determined.
  • the subject and background are segmented in the first image collected by the first camera component. For example, an area with depth information less than X is used as the foreground area, and an area with depth information greater than X is used as the background area. In this way, accurate segmentation of the foreground area and the background area in the first image is realized.
  • the background area may be blurred to generate a blurred background image. The foreground image and the blurred background image are then fused, and the fused image is output to obtain the blurred image. The blurred image can be displayed on the preview interface, or output as the captured image in response to the user's operation.
  • the image described with reference to FIG. 7 is a blurred image after fusion processing.
  • the blurring processing of the background image and the fusion processing of the blurring background image and the foreground image can be realized by using existing algorithms.
  • a method of gradually increasing the degree of blurring from near to far can be applied to the background image, for example using a Gaussian blur algorithm for the blurring processing.
  • Gaussian pyramid, Laplacian pyramid, or weighted average-based fusion algorithm can be used for image fusion processing.
  • the specific algorithm process can be implemented by using a conventional method, which will not be repeated in this disclosure.
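  • A weighted-average fusion of the foreground image and the blurred background image, one of the conventional fusion methods mentioned above, can be sketched as a per-pixel blend. Pyramid-based fusion (Gaussian or Laplacian) would blend per frequency band instead; this is the simplest member of that family:

```python
def feather_fuse(foreground, blurred, alpha):
    """Weighted-average fusion: `alpha` is a per-pixel weight in [0, 1]
    (1.0 = pure foreground, 0.0 = pure blurred background). A soft alpha
    ramp at the boundary produces the feathered transition."""
    return [[alpha[y][x] * foreground[y][x]
             + (1 - alpha[y][x]) * blurred[y][x]
             for x in range(len(foreground[0]))]
            for y in range(len(foreground))]
```

With a hard 0/1 alpha this reduces to the plain cut-and-paste segmentation; intermediate alpha values along the foreground edge are what make the blurred transition area look natural.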
  • the method further includes:
  • Step S31: Perform area division on the blurred image based on the area boundary of the foreground image and the area boundary of the blurred background image, to obtain a blurred transition area between the foreground image and the blurred background image;
  • Step S32: Evaluate the image quality of the blurred transition area according to preset rules, and when the evaluation result is lower than the preset requirement, perform optimization processing on the blurred transition area.
  • the blurred image can also be divided into blurred transition areas, and the display effect of the blurred transition areas can be evaluated to determine whether the blurred transition area needs to be optimized.
  • the blurred image can be divided into the blurred transition area according to the boundary of the foreground area. For example, taking the boundary of the foreground area as the path, extending to the blurred background area, and dividing the blurred transition area along the path according to a certain size window.
  • the blurred transition area can also extend into the foreground image area; that is, the blurred transition area contains a certain proportion of the foreground image. For example, referring to the blurred image shown in FIG. 8:
  • the dotted line is the edge of the foreground image;
  • the solid-line area is the divided blurred transition area, which contains a certain proportion of the foreground image area.
  • the blurred transition area can be divided into sub-regions to obtain multiple consecutive sub-transition areas, and each sub-transition area can then be identified and evaluated to determine whether it contains abnormal texture, burrs, or an otherwise unnatural display effect.
  • if a sub-transition area contains one or more of the above conditions, it is judged to fall below the preset requirement, and that sub-transition area can be optimized, for example by smoothing, so that the foreground image, the sub-transition-area image, and the blurred background image transition smoothly into one another and the blurring effect is natural. The optimization result is shown in FIG. 9.
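  • The per-sub-transition-area evaluation and smoothing can be sketched as follows. The variance test is only a stand-in for whatever "preset rule" detects burrs or abnormal texture, and the threshold value is arbitrary:

```python
def smooth_if_noisy(sub_area, variance_limit=100.0):
    """Evaluate one sub-transition area (a small window of pixel values).
    If its variance suggests burrs/abnormal texture (evaluation below the
    preset requirement), apply a 3-tap moving average as the optimization."""
    n = len(sub_area)
    mean = sum(sub_area) / n
    variance = sum((v - mean) ** 2 for v in sub_area) / n
    if variance <= variance_limit:
        return sub_area  # quality acceptable, leave untouched
    # Smoothing pass: average each sample with its immediate neighbors.
    return [sum(sub_area[max(0, i - 1):i + 2]) / len(sub_area[max(0, i - 1):i + 2])
            for i in range(n)]
```

Applying this window by window along the foreground boundary yields the smoothed transition band of FIG. 9 while leaving already-clean windows unchanged.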
  • in some embodiments, the first image may first be reduced in size.
  • the size-reduced first image is then used for depth image acquisition, foreground/background division, and image fusion processing. After the fused blurred image is obtained, the blurred image is resized back to the original size of the first image, thereby ensuring the display effect of the output image.
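  • The shrink-process-restore size transformation can be sketched with nearest-neighbor resampling; a real implementation would use proper interpolation (bilinear or better), so treat this as a structural sketch only:

```python
def downscale2x(img):
    """Nearest-neighbor 2x downscale: run the blur pipeline on the smaller
    image to cut the cost of matching, depth, and fusion."""
    return [row[::2] for row in img[::2]]


def upscale2x(img):
    """Nearest-neighbor 2x upscale back to (roughly) the original size,
    applied to the fused blurred image before output."""
    out = []
    for row in img:
        wide = [v for v in row for _ in (0, 1)]  # duplicate each column
        out.append(wide)
        out.append(list(wide))                   # duplicate each row
    return out
```

The design trade-off: depth estimation tolerates reduced resolution well, while the final upscale (ideally interpolated, or fused with the full-resolution foreground) preserves the display quality of the output image.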
  • the two camera components are synchronized after they are started, so that they achieve frame synchronization at the hardware level, ensuring that the two cameras capture the same scene at the same moment.
  • by pre-configuring a parameter table of the correspondence between the exposure parameters of the two camera components, once the current first exposure parameter of the first camera component is determined, the parameter table can be consulted to determine the second exposure parameter of the second camera component at the current moment.
  • the AE synchronization of the two camera components ensures that the exposure time, exposure position, and frame rate of the two camera components are the same, so that the EV brightness values of the two output pictures are ultimately the same.
  • the quality of the output images of the two camera components is thereby effectively improved and controlled.
  • this largely guarantees the consistency of image acquisition between the first camera and the second camera, and provides a reliable input source for feature point matching in the later blurring stage.
  • the consistency of the input images can greatly reduce the probability of false blurring and missed blurring.
  • the range of the epipolar search is enlarged, which greatly increases the robustness of matching and makes the depth map calculation more accurate; the improvement in the blurring effect is especially significant in dim light.
  • a blurred transition zone is added at the edge of the foreground image to make the blurring effect more natural and the transition smoother.
  • the blur effect is improved significantly.
  • the method provided by the present disclosure improves and optimizes the early, middle, and late stages of the blurring process, greatly improves the background blurring effect, expands the scenes and range in which blurring is usable, and significantly enhances the image quality of the blurred image. For users, this means a better experience, fewer scene restrictions, and better, more natural blurred images.
  • the size of the sequence numbers of the above-mentioned processes does not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of this application.
  • FIG. 10 shows a schematic block diagram of an image processing apparatus 100 according to an embodiment of the present application. As shown in FIG. 10, the device 100 includes:
  • the synchronization execution module 101 is configured to activate the first camera component and the second camera component in response to a first trigger operation, and synchronize the first camera component and the second camera component;
  • the parameter query module 102 is configured to obtain the first exposure parameter of the first camera component to the current scene, and query a preset parameter table based on the first exposure parameter to obtain the second exposure corresponding to the second camera component parameter;
  • the parameter execution module 103 is used for the first camera component to execute the first exposure parameter to obtain a first image, and for the second camera component to execute the second exposure parameter to obtain a second image, so as to construct a blurred image corresponding to the current scene based on the first image and the second image.
  • the synchronization execution module 101 may include:
  • the synchronization information processing unit is configured to send a synchronization control signal to the second camera component based on the state information of the first camera component, so that the second camera component synchronizes with the first camera component according to the synchronization control signal.
  • the device 100 further includes:
  • a feature point matching module configured to perform feature point matching on the first image and the second image to align the first image and the second image
  • the depth data calculation module is configured to calculate the disparity value of the first camera component and the second camera component according to the aligned first image and second image, so as to calculate the depth data of each feature point based on the disparity value;
  • a depth image construction module configured to construct a depth image based on the depth data
  • An image segmentation module configured to perform image segmentation on the first image according to the depth image to obtain a foreground image and a background image;
  • the image fusion module is used to perform blurring processing on the background image to obtain a blurred background image, and to perform image fusion on the blurred background image and the foreground image to obtain a blurred image corresponding to the current scene.
  • the feature point matching module includes:
  • the epipolar plane construction unit is used to construct an epipolar plane according to the first camera component, the second camera component, and the target point in the space coordinate system.
  • the epipolar line construction unit is used to obtain the epipolar lines where the epipolar plane intersects the first image and the second image respectively.
  • the feature point matching unit is configured to determine the feature point in the second image corresponding to the feature point in the first image based on the epipolar line, so as to align the first image and the second image.
  • the device 100 further includes:
  • the blurred transition area dividing module is used to divide the blurred image based on the region boundary of the foreground image and the region boundary of the blurred background image, so as to obtain the blurred transition area between the foreground image and the blurred background image.
  • the optimization module is used to evaluate the image quality of the blurred transition area according to preset rules, and perform optimization processing on the blurred transition area when the evaluation result is lower than the preset requirement.
  • the optimization module includes:
  • the sub-transition area dividing unit is configured to divide the blurred transition area to obtain a plurality of sub-transition areas, and to perform image quality evaluation on each of the sub-transition areas respectively.
  • the sub-transition area optimization unit is configured to perform smoothing processing on the sub-transition area to optimize the sub-transition area when the evaluation result of the sub-transition area is lower than a preset requirement.
  • the device 100 further includes:
  • the first image transformation module is configured to perform size transformation on the first image to transform the first image of the original size into the first image of the target size.
  • the second image transformation module is configured to, after acquiring the blurred image, perform size transformation on the blurred image to obtain the blurred image of the original size.
  • the blurred transition area dividing module is further configured to extend toward the blurred background area taking the boundary of the foreground area as a path, and to divide the blurred transition area along the path according to a window of a preset size, wherein the blurred transition area includes a preset proportion of the foreground image.
  • the synchronization execution module is further configured to activate the first camera component in response to the first trigger operation, collect the current scene image using the first camera component, perform image recognition on the current scene image to determine the current scene mode, and select the second camera component according to the current scene mode.
  • although modules or units of the device for action execution are mentioned in the above detailed description, this division is not mandatory.
  • the features and functions of two or more modules or units described above may be embodied in one module or unit.
  • the features and functions of a module or unit described above can be further divided into multiple modules or units to be embodied.
  • FIG. 11 shows a schematic structural diagram of a computer system suitable for implementing an electronic device according to an embodiment of the present invention.
  • the computer system 110 includes a central processing unit (CPU) 1101, which can execute various appropriate actions and processing according to a program stored in a read-only memory (Read-Only Memory, ROM) 1102 or a program loaded from a storage part 1108 into a random access memory (Random Access Memory, RAM) 1103. The RAM 1103 also stores various programs and data required for system operation.
  • the CPU 1101, the ROM 1102, and the RAM 1103 are connected to each other through a bus 1104.
  • An input/output (Input/Output, I/O) interface 1105 is also connected to the bus 1104.
  • the following components are connected to the I/O interface 1105: an input part 1106 including a keyboard, a mouse, and the like; an output part 1107 including a cathode ray tube (Cathode Ray Tube, CRT), a liquid crystal display (LCD), a speaker, and the like; a storage part 1108 including a hard disk and the like; and a communication part 1109 including a network interface card such as a LAN (Local Area Network) card, a modem, and the like.
  • the communication section 1109 performs communication processing via a network such as the Internet.
  • the driver 1110 is also connected to the I/O interface 1105 as needed.
  • a removable medium 1111 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, etc., is installed on the drive 1110 as needed, so that the computer program read therefrom is installed into the storage portion 1108 as needed.
  • an embodiment of the present invention includes a computer program product, which includes a computer program carried on a computer-readable medium, and the computer program contains program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from the network through the communication part 1109, and/or installed from the removable medium 1111.
  • when the computer program is executed by the central processing unit (CPU) 1101, the functions defined in the method of the present application are performed.
  • the computer-readable medium shown in the embodiment of the present invention may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or device, or a combination of any of the above.
  • computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (Erasable Programmable Read-Only Memory, EPROM), a flash memory, an optical fiber, a portable compact disc read-only memory (Compact Disc Read-Only Memory, CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • the computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, and a computer-readable program code is carried therein.
  • This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • the computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium, and the computer-readable medium may send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device.
  • the program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: wireless, wired, etc., or any suitable combination of the above.
  • each block in the flowchart or block diagram may represent a module, program segment, or part of the code, and the above-mentioned module, program segment, or part of the code contains one or more executable instructions for realizing the specified logical function.
  • the functions marked in the blocks may also occur in a different order from the order marked in the drawings. For example, two blocks shown one after another can actually be executed substantially in parallel, and they can sometimes be executed in the reverse order, depending on the functions involved.
  • each block in the block diagram or flowchart, and combinations of blocks in the block diagram or flowchart, can be implemented by a dedicated hardware-based system that performs the specified function or operation, or can be realized by a combination of dedicated hardware and computer instructions.
  • the units described in the embodiments of the present invention may be implemented in software or hardware, and the described units may also be provided in a processor. In some cases, the names of these units do not constitute a limitation on the units themselves.
  • this application also provides a computer-readable medium.
  • the computer-readable medium may be included in the electronic device described in the above-mentioned embodiment; or it may exist alone without being assembled into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs, and when the above-mentioned one or more programs are executed by an electronic device, the electronic device realizes the method described in the following embodiments. For example, the electronic device can implement the steps shown in FIG. 1.
  • the disclosed system, device, and method can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the unit described as a separate component may or may not be physically separated, and the component displayed as a unit may or may not be a physical unit, that is, it may be located in one place, or may also be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • if the functions are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part that contributes to the existing technology, or a part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods of the various embodiments of the present application.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, read-only memory (ROM), random access memory (RAM), a magnetic disk, an optical disk, or other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

An image processing method, an image processing system, a computer readable medium, and an electronic apparatus. The method comprises: activating, in response to a first trigger operation, a first camera component and a second camera component, and performing synchronization of the first camera component and the second camera component (S11); acquiring a first exposure parameter of the first camera component for a current scenario, and querying a preset parameter table on the basis of the first exposure parameter so as to acquire a second exposure parameter corresponding to the second camera component (S12); and executing the first exposure parameter via the first camera component to obtain a first image, and executing the second exposure parameter via the second camera component to obtain a second image, so as to construct a blurred image corresponding to the current scenario on the basis of the first image and the second image (S13). The method ensures that the image capturing effect of the images output by the two camera components remains consistent, thereby ensuring high image quality of the final output background-blurred images.

Description

Image processing method, image processing system, computer-readable medium, and electronic device
This application claims priority to the Chinese patent application with application number 201911406556.X, entitled "Image Processing Method and Device, Computer-Readable Medium, and Electronic Device", filed with the Chinese Patent Office on December 31, 2019, the entire contents of which are incorporated into this application by reference.
Technical Field
The embodiments of the present application relate to the field of image processing technology, and more specifically, to an image processing method, an image processing device, a computer-readable medium, and an electronic device.
Background
When taking a photo, the background area can be blurred in order to highlight the subject; for example, a "portrait mode" can be used to capture a photo with a blurred background. Due to hardware limitations, smart mobile terminal devices such as mobile phones and tablet computers generally implement background blurring through image processing algorithms, for example, algorithms based on binocular stereo vision matching. Binocular stereo vision matching algorithms place very high requirements on the quality of the input images: when the input images are inaccurate, feature points are easily mismatched, depth information is computed inaccurately, and the resulting blurred image suffers from problems such as wrongly blurred or insufficiently blurred regions.
Summary
In view of this, the embodiments of the present application provide an image processing method, an image processing device, a computer-readable medium, and an electronic device, which help improve the image quality of blurred images.
In a first aspect, an image processing method is provided. The method includes: in response to a first trigger operation, starting a first camera component and a second camera component, and synchronizing the first camera component and the second camera component; obtaining a first exposure parameter of the first camera component for a current scene, and querying a preset parameter table based on the first exposure parameter to obtain a second exposure parameter corresponding to the second camera component; and executing, by the first camera component, the first exposure parameter to obtain a first image and executing, by the second camera component, the second exposure parameter to obtain a second image, so as to construct a blurred image corresponding to the current scene based on the first image and the second image.
In a second aspect, an image processing device is provided. The device includes: a synchronization execution module, configured to start a first camera component and a second camera component in response to a first trigger operation and to synchronize the first camera component and the second camera component; a parameter query module, configured to obtain a first exposure parameter of the first camera component for a current scene and to query a preset parameter table based on the first exposure parameter to obtain a second exposure parameter corresponding to the second camera component; and a parameter execution module, configured to cause the first camera component to execute the first exposure parameter to obtain a first image and the second camera component to execute the second exposure parameter to obtain a second image, so as to construct a blurred image corresponding to the current scene based on the first image and the second image.
In a third aspect, a wireless communication terminal is provided, including: one or more processors; and a storage device configured to store one or more programs that, when executed by the one or more processors, cause the one or more processors to perform the method in the first aspect.
In a fourth aspect, a computer-readable medium is provided for storing the computer software instructions used to execute the method in the first aspect, including the programs designed to execute the above aspects.
In this application, the names of the electronic device, the system, and the like do not limit the devices themselves; in actual implementations, these devices may appear under other names. As long as the functions of the devices are similar to those in this application, they fall within the scope of the claims of this application and their equivalent technologies.
These and other aspects of this application will be more clearly understood in the description of the following embodiments.
Brief Description of the Drawings
Fig. 1 shows a schematic diagram of an image processing method according to an embodiment of the present application.
Fig. 2 shows a schematic diagram of a method for constructing a blurred image according to an embodiment of the present application.
Fig. 3 shows a schematic diagram of a feature point matching method according to an embodiment of the present application.
Fig. 4 shows a schematic diagram of a depth information calculation method according to an embodiment of the present application.
Fig. 5 shows a schematic diagram of a first image collected by a first camera component according to an embodiment of the present application.
Fig. 6 shows a schematic diagram of a depth image corresponding to the first image according to an embodiment of the present application.
Fig. 7 shows a schematic diagram of a blurred image after fusion processing according to an embodiment of the present application.
Fig. 8 shows a schematic diagram of a blurred transition area according to an embodiment of the present application.
Fig. 9 shows a schematic diagram of the effect of the blurred transition area after optimization processing according to an embodiment of the present application.
Fig. 10 shows a schematic diagram of the composition of an image processing device according to an embodiment of the present application.
Fig. 11 shows a schematic block diagram of a computer system of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application.
It should be understood that the technical solutions of the embodiments of the present application can be applied to image processing.
In the prior art, an existing camera such as a single-lens reflex (SLR) camera can capture an image with a background blur effect through a large aperture. Electronic devices such as mobile phones and tablet computers, however, are limited by device thickness and cannot accommodate a professional camera lens; their cameras mostly use a fixed aperture and cannot rely on the hardware to perform optical zoom and directly capture an SLR-like background-blurred picture, so image processing algorithms are needed to blur the background, for example, algorithms based on binocular stereo vision matching. Such algorithms place very high requirements on the quality of the input images from the two cameras. When the input image quality is low, accurate depth information cannot be obtained, and the blurring process suffers from problems such as wrongly blurred or insufficiently blurred regions. For example, a mismatch between the images captured by the two cameras, inconsistent brightness between the two pictures, or poor contour sharpness in the input images may all directly cause inaccurate background blurring.
In view of the above shortcomings and deficiencies of the prior art, this example embodiment provides an image processing method that can be applied to smart terminal devices equipped with a camera function, such as mobile phones and tablet computers.
Fig. 1 shows a schematic diagram of an image processing method according to an embodiment of the present application. As shown in Fig. 1, the method includes some or all of the following:
S11: In response to a first trigger operation, start a first camera component and a second camera component, and synchronize the first camera component and the second camera component.
S12: Obtain a first exposure parameter of the first camera component for the current scene, and query a preset parameter table based on the first exposure parameter to obtain a second exposure parameter corresponding to the second camera component.
S13: The first camera component executes the first exposure parameter to obtain a first image, and the second camera component executes the second exposure parameter to obtain a second image, so that a blurred image corresponding to the current scene is constructed based on the first image and the second image.
Specifically, the above image processing method can be applied to an electronic device configured with at least two camera components. The electronic device may be a mobile smart terminal device such as a mobile phone or tablet computer. The electronic device may be equipped with two camera components, for example, a main camera with a higher pixel count and a wide-angle camera. Alternatively, it may be equipped with three, four, or five camera components, for example, a combination of a high-pixel main camera, a depth camera, a macro camera, a wide-angle camera, and a black-and-white camera.
Optionally, in the embodiments of the present application, the following embodiments take a main camera and a wide-angle camera as examples to describe the image processing method, where the above first camera component may be the main camera and the second camera component may be the wide-angle camera.
The above first trigger operation may be an operation in which the user opens and enters the camera application on the terminal, or a touch operation for entering the background-blur shooting mode. After the user's trigger operation is recognized, the first camera component and the second camera component can be started. Alternatively, when the terminal has more than three camera components, after the user's trigger operation is recognized, the first camera component may be started first, the image of the current scene collected by the first camera component may be recognized to obtain the current scene mode, and the corresponding other camera component may then be started as the second camera component based on that scene mode.
After the two camera components are started, they can be synchronized. Specifically, the electronic device may send a synchronization control signal to the second camera component according to the current state information of the first camera component. After the second camera component receives the synchronization control signal, frame synchronization with the first camera component can be realized in hardware according to the signal, ensuring that the two camera components collect images at the same time and capture the same picture at a given moment. For example, the synchronization control signal may include a clock signal or a timed-task signal. When shooting of a background-blurred image starts, the first camera component sends a synchronization signal to the second camera component; after the second camera component receives it, the two camera components collect and output images simultaneously.
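As a rough, purely illustrative sketch of this frame-synchronization handshake (all names here are hypothetical; in practice the synchronization is done at the hardware or driver level, not in application code), two capture loops can be held in lockstep with a shared barrier so that each frame index is exposed by both components at the same time:

```python
import threading

def capture_loop(name, barrier, out, n_frames=3):
    """Stand-in capture loop: waits at the barrier so both camera
    components begin exposing frame i at the same moment."""
    for i in range(n_frames):
        barrier.wait()          # the per-frame synchronization signal
        out.append((name, i))   # stand-in for exposing one frame

def synchronized_capture(n_frames=3):
    barrier = threading.Barrier(2)   # two camera components
    frames_main, frames_wide = [], []
    threads = [
        threading.Thread(target=capture_loop, args=("main", barrier, frames_main, n_frames)),
        threading.Thread(target=capture_loop, args=("wide", barrier, frames_wide, n_frames)),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return frames_main, frames_wide
```

Both components end up with the same sequence of frame indices, mirroring the "same picture at the same moment" requirement above.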
Optionally, in the embodiments of the present application, because the first camera component and the second camera component differ in hardware such as photosensitivity and sensor characteristics, the collected images may differ when the same scene is shot; for example, the brightness of the two images may differ. Even with identical shooting parameters, the same scene may yield differently exposed output images. To ensure that the images collected by the two camera components have the same shooting quality and effect, a parameter table between the two camera components can be configured in advance according to their hardware and imaging characteristics. The parameter table may record the correspondence between the shooting parameters of the first camera component and the second camera component, and may include parameters such as shutter speed, exposure, brightness, sensitivity, white balance, and color temperature.
Specifically, in the background-blur shooting mode, the electronic device can read the first exposure parameter of the first camera component for the current scene and use it to look up the preset parameter table, thereby obtaining the corresponding second exposure parameter for the current scene. In this way, AE (Auto Exposure) synchronization of the two camera components is realized, ensuring that parameters such as exposure time and frame rate are consistent and, ultimately, that the EV brightness values of the two output pictures are the same.
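The preset parameter table lookup described above can be sketched as follows. The table entries and function name are hypothetical; a real table would be calibrated per device so that both cameras reach the same EV brightness, and would cover more parameters than exposure time and ISO:

```python
# Hypothetical preset parameter table:
# (exposure_time_ms, iso) of the main camera -> (exposure_time_ms, iso)
# of the wide-angle camera that yields the same EV brightness.
PRESET_TABLE = {
    (10, 100): (12, 100),
    (20, 200): (24, 180),
    (33, 400): (40, 360),
}

def query_second_exposure(first_params):
    """Return the second camera's exposure parameters for the given first
    camera parameters; fall back to the nearest calibrated entry if the
    exact key is missing."""
    if first_params in PRESET_TABLE:
        return PRESET_TABLE[first_params]
    nearest = min(
        PRESET_TABLE,
        key=lambda k: abs(k[0] - first_params[0]) + abs(k[1] - first_params[1]),
    )
    return PRESET_TABLE[nearest]
```

The nearest-entry fallback is one simple design choice; a calibrated table could instead interpolate between entries.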
Optionally, in the embodiments of the present application, after the first exposure parameter of the first camera component and the corresponding second exposure parameter of the second camera component are determined for the current moment and scene, the first camera component can execute the first exposure parameter while the second camera component executes the second exposure parameter, so that the two camera components collect images of the current scene simultaneously to obtain the first image and the second image.
Optionally, in the embodiments of the present application, referring to Fig. 2, constructing the blurred image corresponding to the current scene based on the first image and the second image in step S13 may include:
Step S21: Perform feature point matching on the first image and the second image to align the first image and the second image.
Step S22: Calculate the disparity values of the first camera component and the second camera component according to the aligned first image and second image, so as to calculate the depth data of each feature point based on the disparity values.
Step S23: Construct a depth image based on the depth data.
Step S24: Perform image segmentation on the first image according to the depth image to obtain a foreground image and a background image.
Step S25: Perform blurring processing on the background image to obtain a blurred background image, and fuse the blurred background image with the foreground image to obtain the blurred image corresponding to the current scene.
Specifically, in this example embodiment, depth information can be calculated based on binocular ranging. First, a spatial coordinate system can be established, in which an epipolar plane is constructed from the first camera component, the second camera component, and a target point; the epipolar lines where the first image and the second image intersect the epipolar plane are then obtained; and the feature point in the second image corresponding to a feature point in the first image is determined based on the epipolar lines, so as to align the first image and the second image.
For example, referring to Fig. 3, C1 and C2 are two cameras, and P is a point in space, for example any point in the current scene being photographed. The point P and the two camera centers C1 and C2 form a plane PC1C2 in three-dimensional space, called the epipolar plane. The epipolar plane intersects the two images m1 and m2 collected by the two cameras in two straight lines, called epipolar lines. The imaging point of P in camera C1 is P1, and its imaging point in camera C2 is P2. By the definition of the epipolar constraint, P2 must lie on the epipolar line, so the point P2 corresponding to P1 can be searched for along the epipolar line.
Using the above method, the feature points or pixels in the first image and the second image can be matched, thereby aligning the two images. In addition, because a single feature point is easily affected by noise, viewing angle, and other factors during matching, errors may occur; therefore, on the basis of the epipolar-constrained search, the search range of feature points can be appropriately widened to increase matching robustness and improve the accuracy of feature point matching.
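For rectified image pairs, the epipolar line of a point in the first image is simply the same row in the second image, so the search described above reduces to a one-dimensional scan along that row. A minimal block-matching sketch (the function name is hypothetical, and a small window is used as suggested above for robustness, with a sum-of-absolute-differences cost):

```python
import numpy as np

def match_along_epipolar_line(left, right, row, col, half_win=2, max_disp=20):
    """Search the same row of the rectified right image for the column that
    best matches the (row, col) patch of the left image."""
    patch = left[row - half_win:row + half_win + 1,
                 col - half_win:col + half_win + 1].astype(np.float64)
    best_col, best_cost = col, np.inf
    for d in range(max_disp + 1):
        c = col - d                      # disparity shifts the point left
        if c - half_win < 0:
            break
        cand = right[row - half_win:row + half_win + 1,
                     c - half_win:c + half_win + 1].astype(np.float64)
        cost = np.abs(patch - cand).sum()   # sum of absolute differences
        if cost < best_cost:
            best_cost, best_col = cost, c
    return best_col
```

With a synthetic right image shifted by a known disparity, the matcher recovers the shifted column; real matchers add sub-pixel refinement and consistency checks.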
Optionally, in the embodiments of the present application, after the first image and the second image are aligned, the positions at which a spatial point is imaged in the two images can be matched, and the disparity is calculated from the aligned first image and second image.
For example, referring to Fig. 4, P is a point on the object to be measured, for example any point on a person, animal, or other object in the currently collected image; f is the camera focal length; O_R and O_T are the optical centers of the two cameras; the imaging points of P on the two camera sensors are P and P', respectively; and Z is the depth. Let dis be the distance from point P to point P'; then:
dis = B - (X_R - X_T)
According to the similar-triangle theorem, the following formula can be obtained:
(B - (X_R - X_T)) / B = (Z - f) / Z
The depth information Z can be obtained by rearranging the formula:
Z = f * B / (X_R - X_T)
Here, f is the camera focal length and B is the baseline length between the two cameras, both of which are known. Therefore, only the disparity X_R - X_T of the two cameras needs to be calculated to obtain the depth information.
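The final formula can be sketched directly (the helper name is hypothetical; f must be expressed in the same units as the disparity, typically pixels, in which case Z comes out in the units of the baseline B):

```python
def depth_from_disparity(f, baseline, x_r, x_t):
    """Z = f * B / (X_R - X_T): triangulated depth of a matched point pair.
    f and the disparity share units (e.g. pixels); Z has the units of B."""
    disparity = x_r - x_t
    if disparity <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return f * baseline / disparity
```

For example, with f = 1000 px, a 0.1 m baseline, and a 20 px disparity, the point lies 5 m away.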
Optionally, in the embodiments of the present application, a depth image of a certain resolution is output according to the depth information Z of each feature point obtained in the above steps. For example, the depth image may be an 8-bit grayscale image. Specifically, the obtained depth information Z of each feature point can first be normalized by removing the mean and then mapped to the range [0, 255]; the mapped values are the pixel values of the grayscale image, from which the depth image corresponding to the first image and the second image can be obtained. For example, referring to Figs. 5 and 6, Fig. 6 is the depth image of Fig. 5.
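The mean removal and [0, 255] mapping described above can be sketched as follows (the helper name is hypothetical):

```python
import numpy as np

def depth_to_gray(depth):
    """Mean-removed normalization of the depth values, then mapping to
    [0, 255] to form an 8-bit grayscale depth image."""
    d = np.asarray(depth, dtype=np.float64)
    d = d - d.mean()                      # remove the mean
    span = d.max() - d.min()
    if span == 0:
        return np.zeros(d.shape, dtype=np.uint8)
    gray = (d - d.min()) / span * 255.0   # map to [0, 255]
    return gray.round().astype(np.uint8)
```

The nearest depth maps to 0 and the farthest to 255, giving the grayscale depth image of the kind shown in Fig. 6.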
Optionally, in the embodiments of the present application, after the depth image is obtained, the subject depth value X of the depth image is determined in combination with the focus plane information, and the first image collected by the first camera component is segmented into subject and background accordingly. For example, the area whose depth information is less than X is taken as the foreground area, and the area whose depth information is greater than X is taken as the background, thereby achieving accurate segmentation of the foreground and background areas of the first image.
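The depth-threshold segmentation can be sketched as follows (the helper name is hypothetical; X is the subject depth value determined from the focus plane):

```python
import numpy as np

def split_by_depth(image, depth, subject_depth):
    """Split an H x W x C image into foreground/background using the subject
    depth value X: pixels nearer than X are foreground, the rest background."""
    depth = np.asarray(depth)
    fg_mask = depth <= subject_depth
    foreground = np.where(fg_mask[..., None], image, 0)
    background = np.where(fg_mask[..., None], 0, image)
    return foreground, background, fg_mask
```

The mask is also returned so that the later fusion step can paste the sharp foreground back over the blurred background.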
Optionally, in the embodiments of the present application, after the foreground area and the background area of the first image are segmented, the background area can be blurred to generate a blurred background image. The foreground image and the blurred background image are then fused, and the fused image is output as the blurred image. The blurred image is displayed on the preview interface, or the current blurred image is output as the captured image in response to a user operation. For example, the image shown in Fig. 7 is the blurred image after fusion processing.
For example, the blurring of the background image and the fusion of the blurred background image with the foreground image can be realized with existing algorithms. For example, the degree of blurring of the background image can be increased gradually from near to far, for instance with a Gaussian blur algorithm. In addition, image fusion can be performed with a Gaussian pyramid, a Laplacian pyramid, or a weighted-average fusion algorithm. The specific algorithms can be implemented with conventional methods and are not described further in this disclosure.
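As a toy stand-in for the blurring and fusion step (a separable box blur replaces the Gaussian blur mentioned above, a hard mask replaces pyramid fusion, and all names are hypothetical):

```python
import numpy as np

def box_blur(img, radius):
    """Separable box blur on a 2-D image, standing in for a Gaussian blur."""
    k = 2 * radius + 1
    kernel = np.ones(k) / k
    out = img.astype(np.float64)
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, out)
    out = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)
    return out

def fuse(image, fg_mask, radius=2):
    """Blur the whole image, then paste the sharp foreground back in."""
    blurred = box_blur(image, radius)
    return np.where(fg_mask, image.astype(np.float64), blurred)
```

A production pipeline would blur only the background region, scale the blur radius with depth ("from near to far"), and blend with a soft mask instead of a hard one.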
可选地,在本申请实施例中,获取所述当前场景的虚化图像后,所述方法还包括:Optionally, in the embodiment of the present application, after obtaining the blurred image of the current scene, the method further includes:
步骤S31,基于所述前景图像的区域边界和所述虚化背景图像的区域边界对所述虚化图像进行区域划分,以获取所述前景图像与虚化背景图像之间的虚化过渡区域;Step S31: Perform area division on the blurred image based on the area boundary of the foreground image and the area boundary of the blurred background image to obtain a blurred transition area between the foreground image and the blurred background image;
步骤S32,按预设规则对所述虚化过渡区域进行图像质量评估,并在评估结果低于预设要求时,对所述虚化过渡区域进行优化处理。In step S32, the image quality of the blurred transition area is evaluated according to preset rules, and when the evaluation result is lower than the preset requirement, the blurred transition area is optimized.
具体来说，在融合处理生成虚化图像后，还可以对虚化图像划分虚化过渡区域，并对虚化过渡区域的显示效果进行评估，判断是否需要对虚化过渡区域进行优化。举例来说，可以根据前景区域的边界对虚化图像进行虚化过渡区域的划分。例如，以前景区域的边界为路径，向虚化背景区域延伸，按一定大小的窗口沿路径划分虚化过渡区域。其中，虚化过渡区域也可以向前景图像的区域延伸，即虚化过渡区域包含一定比例的前景图像。例如，参考图8所示的虚化图像，前景图像和虚化背景之间具有明显的不自然区域。如图8中所示，虚线为前景图像的边缘。实线区域为划分的虚化过渡区域，其中包含一定比例的前景图像区域。Specifically, after the fusion process generates the blurred image, a blurred transition area can additionally be delimited on the blurred image, and the display effect of that transition area can be evaluated to determine whether it needs to be optimized. For example, the blurred transition area can be delimited according to the boundary of the foreground area: taking the boundary of the foreground area as a path, extending toward the blurred background area, and dividing the transition area along the path using a window of a certain size. The blurred transition area may also extend into the foreground image area, i.e., it may contain a certain proportion of the foreground image. For example, in the blurred image shown in FIG. 8, there is an obviously unnatural region between the foreground image and the blurred background. As shown in FIG. 8, the dotted line is the edge of the foreground image, and the solid-line region is the delimited blurred transition area, which contains a certain proportion of the foreground image area.
此外，还可以对虚化过渡区域进行子区域划分，得到连续的多个子过渡区域，再对各子过渡区域进行识别和评估，判断各子过渡区域是否包含非正常的纹理、毛刺以及不自然的显示效果。如图8所示，对虚化过渡区域划分为多个连续的子区域。若子过渡区域中包含上述的一种或多种情况，则判断为低于预设要求，便可以对该子过渡区域进行优化，例如进行平滑处理，使得前景图像、子过渡区域图像、虚化背景图像之间平滑过渡，使得虚化效果自然、平滑过渡。如图9所示的优化结果，对虚化过渡区域优化处理后，使得前景图像区域和虚化背景图像区域之间平滑过渡，提升图像质量。通过对主体边缘进行平滑处理，增加虚化过渡带，使得虚化效果自然、平滑过渡。In addition, the blurred transition area can be divided into sub-areas to obtain multiple consecutive sub-transition areas, and each sub-transition area is then identified and evaluated to determine whether it contains abnormal textures, burrs, or unnatural display effects. As shown in FIG. 8, the blurred transition area is divided into multiple consecutive sub-areas. If a sub-transition area contains one or more of the above conditions, it is judged to be below the preset requirement, and that sub-transition area can be optimized, for example by smoothing, so that the foreground image, the sub-transition-area image, and the blurred background image transition smoothly into one another, making the blurring effect natural and smooth. As shown in the optimization result of FIG. 9, after the blurred transition area is optimized, the transition between the foreground image area and the blurred background image area is smooth, and the image quality is improved. By smoothing the edge of the subject and adding a blur transition zone, the blurring effect becomes a natural, smooth transition.
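The per-sub-area check above can be sketched as follows, under simplifying assumptions: the transition band is modeled as a 1D strip of pixel values split into fixed-size sub-windows, "quality" is scored by local roughness (mean absolute neighbor difference) as a stand-in for texture/burr detection, and a rough window is smoothed with a small moving average. Window size and threshold are invented for the example:

```python
# Hedged sketch: divide the transition band into sub-windows, evaluate each,
# and smooth only the windows that fall below the (assumed) quality bar.

def roughness(window):
    return sum(abs(a - b) for a, b in zip(window, window[1:])) / max(1, len(window) - 1)

def smooth(window):
    out = []
    for i in range(len(window)):
        lo, hi = max(0, i - 1), min(len(window), i + 2)
        out.append(sum(window[lo:hi]) / (hi - lo))
    return out

def optimize_band(band, win=4, threshold=20.0):
    result = []
    for s in range(0, len(band), win):
        w = band[s:s + win]
        result.extend(smooth(w) if roughness(w) > threshold else w)
    return result

band = [10, 80, 10, 80, 30, 31, 32, 33]  # jagged window, then a clean one
result = optimize_band(band)
print(result)  # first window smoothed, second left untouched
```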
可选地，在本申请实施例中，在第一摄像组件获取第一图像后，为了降低处理器运算压力，还可以对第一图像进行尺寸变换，缩小第一图像的尺寸。利用该尺寸缩小后的第一图像进行深度图像的获取、前景图像和背景图像的划分以及图像融合的处理。在获取融合后的虚化图像后，再对虚化图像进行尺寸变换，使虚化图像恢复为第一图像对应的原始尺寸，从而保证输出图像的显示效果。Optionally, in the embodiment of the present application, after the first camera component acquires the first image, in order to reduce the computational load on the processor, the first image may also be size-transformed to reduce its size. The reduced-size first image is then used for depth image acquisition, foreground/background division, and image fusion. After the fused blurred image is obtained, the blurred image is size-transformed again so that it is restored to the original size corresponding to the first image, thereby ensuring the display effect of the output image.
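The downscale-process-upscale flow can be sketched as below. Nearest-neighbor resampling stands in for whatever scaler the device actually uses (an assumption), and a trivial per-pixel operation stands in for the heavy depth/segmentation/fusion steps:

```python
# Hedged sketch of resizing down, processing at reduced size, then restoring
# the original size for output. Nearest-neighbor resize is illustrative only.

def resize(img, new_h, new_w):
    h, w = len(img), len(img[0])
    return [[img[i * h // new_h][j * w // new_w] for j in range(new_w)]
            for i in range(new_h)]

original = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
small = resize(original, 2, 2)                         # work at reduced size
processed = [[v + 100 for v in row] for row in small]  # stand-in for the pipeline
restored = resize(processed, 4, 4)                     # back to original size
print(restored)
```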
基于上述内容，在本申请实施例所提供的图像处理方法中，通过在启动两摄像组件后首先使两摄像组件进行同步，使得两摄像组件实现硬件的帧同步，保证同一时刻两个摄像头同时采集相同的图像并出图。通过预先配置两摄像组件之间曝光参数的对应关系的参数表，在确定第一摄像组件当前的第一曝光参数后，可以查询参数表来确定当前时刻第二摄像组件的第二曝光参数，实现两摄像组件的AE同步，确保两摄像组件曝光时间、曝光位置和帧率一致，最终保证两张图片输出的EV亮度值一致。从而有效地实现对两摄像组件输出图像质量的提升，有效控制了两摄像组件输出图的质量。相较于传统方法，极大保证第一摄像头和第二摄像头图像采集的一致性，为虚化后期做特征点匹配提供了可靠的输入源，输入图的一致性可以极大降低虚化误虚、漏虚的概率。其次在虚化处理中期特征点匹配时，扩大了极线搜索的范围，大大增加匹配的鲁棒性，使得深度图计算更加准确，尤其是暗光下虚化效果提升较为明显。最后在虚化处理后期时，对前景图像边缘增加虚化过渡带，使得虚化效果更加自然、平滑过渡，对背景纹理比较复杂的场景，虚化效果提升明显。本公开提供的方法从虚化流程前期、中期、后期进行改善和优化，极大提升了背景虚化的效果，提高了虚化可用场景和范围，显著增强了虚化的图像质量。对用户而言，可以有更好的用户体验、更少的场景限制，获得更好更自然的虚化图像。Based on the above, in the image processing method provided by the embodiments of the present application, the two camera components are first synchronized after they are started, so that they achieve hardware frame synchronization, ensuring that the two cameras capture and output images of the same scene at the same moment. By pre-configuring a parameter table of the correspondence between the exposure parameters of the two camera components, once the current first exposure parameter of the first camera component is determined, the table can be consulted to determine the second exposure parameter of the second camera component at the current moment, realizing AE synchronization of the two camera components, ensuring that their exposure time, exposure position, and frame rate are consistent, and finally ensuring that the EV brightness values of the two output pictures are consistent. The quality of the images output by the two camera components is thus effectively improved and controlled. Compared with traditional methods, this greatly guarantees the consistency of image acquisition between the first camera and the second camera and provides a reliable input source for feature point matching in the later blurring stage; the consistency of the input images greatly reduces the probability of false blurring and missed blurring. Secondly, during feature point matching in the middle stage of the blurring process, the range of the epipolar search is enlarged, which greatly increases the robustness of matching and makes the depth map calculation more accurate; the blurring effect improves especially noticeably in dim light. Finally, in the late stage of the blurring process, a blur transition zone is added at the edge of the foreground image, making the blurring effect more natural with a smooth transition; for scenes with complex background textures, the improvement is significant. The method provided by the present disclosure improves and optimizes the early, middle, and late stages of the blurring pipeline, greatly enhances the background blurring effect, broadens the scenes and range in which blurring is usable, and significantly improves the blurred image quality. Users thereby get a better experience, fewer scene restrictions, and better, more natural blurred images.
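The exposure-parameter lookup summarized above amounts to a table query. A hedged sketch follows; the table keys and values (exposure time in microseconds, ISO) are invented for illustration — the application only specifies that a pre-configured correspondence table is consulted:

```python
# Hypothetical preset parameter table mapping the first camera component's
# exposure parameters to the second component's. Values are not from the patent.

EXPOSURE_TABLE = {
    # (first_exposure_us, first_iso) -> (second_exposure_us, second_iso)
    (10000, 100): (10000, 120),
    (20000, 200): (20000, 250),
    (33000, 400): (33000, 500),
}

def second_exposure(first_params):
    """Look up the second camera component's exposure for the first's."""
    return EXPOSURE_TABLE[first_params]

print(second_exposure((20000, 200)))  # -> (20000, 250)
```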
应理解，本文中术语“系统”和“网络”在本文中常被可互换使用。本文中术语“和/或”，仅仅是一种描述关联对象的关联关系，表示可以存在三种关系，例如，A和/或B，可以表示：单独存在A，同时存在A和B，单独存在B这三种情况。另外，本文中字符“/”，一般表示前后关联对象是一种“或”的关系。It should be understood that the terms "system" and "network" are often used interchangeably herein. The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects before and after it.
还应理解，在本申请的各种实施例中，上述各过程的序号的大小并不意味着执行顺序的先后，各过程的执行顺序应以其功能和内在逻辑确定，而不应对本申请实施例的实施过程构成任何限定。It should also be understood that, in the various embodiments of the present application, the magnitude of the sequence numbers of the above processes does not imply their order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
上文中详细描述了根据本申请实施例的图像处理方法,下面将结合附图,描述根据本申请实施例的图像处理装置,方法实施例所描述的技术特征适用于以下装置实施例。The image processing method according to the embodiment of the present application is described in detail above. The image processing apparatus according to the embodiment of the present application will be described below with reference to the accompanying drawings. The technical features described in the method embodiment are applicable to the following device embodiments.
图10示出了本申请实施例的图像处理装置100的示意性框图。如图10所示,该装置100包括:FIG. 10 shows a schematic block diagram of an image processing apparatus 100 according to an embodiment of the present application. As shown in FIG. 10, the device 100 includes:
同步执行模块101,用于响应于第一触发操作,启动第一摄像组件和第二摄像组件,并对所述第一摄像组件和第二摄像组件进行同步;The synchronization execution module 101 is configured to activate the first camera component and the second camera component in response to a first trigger operation, and synchronize the first camera component and the second camera component;
参数查询模块102，用于获取所述第一摄像组件对当前场景的第一曝光参数，并基于所述第一曝光参数查询预设参数表，以获取所述第二摄像组件对应的第二曝光参数；The parameter query module 102 is configured to obtain the first exposure parameter of the first camera component for the current scene, and query a preset parameter table based on the first exposure parameter to obtain the second exposure parameter corresponding to the second camera component;
参数执行模块103，用于所述第一摄像组件执行第一曝光参数以获取第一图像、所述第二摄像组件执行第二曝光参数以获取第二图像，以基于所述第一图像和第二图像构建所述当前场景对应的虚化图像。The parameter execution module 103 is used for the first camera component to execute the first exposure parameter to obtain a first image, and for the second camera component to execute the second exposure parameter to obtain a second image, so as to construct a blurred image corresponding to the current scene based on the first image and the second image.
可选地,在本申请实施例中,所述同步执行模块101可以包括:Optionally, in this embodiment of the present application, the synchronization execution module 101 may include:
同步信息处理单元，用于基于所述第一摄像组件的状态信息向所述第二摄像组件发送同步控制信号，以使所述第二摄像组件根据所述同步控制信号与所述第一摄像组件同步。The synchronization information processing unit is configured to send a synchronization control signal to the second camera component based on the state information of the first camera component, so that the second camera component synchronizes with the first camera component according to the synchronization control signal.
可选地,在本申请实施例中,所述装置100还包括:Optionally, in this embodiment of the present application, the device 100 further includes:
特征点匹配模块,用于对所述第一图像和第二图像进行特征点匹配以对齐所述第一图像和第二图像;A feature point matching module, configured to perform feature point matching on the first image and the second image to align the first image and the second image;
深度数据计算模块，用于根据对齐后的所述第一图像和第二图像计算所述第一摄像组件和第二摄像组件的视差值，以基于所述视差值计算各特征点的深度数据；The depth data calculation module is configured to calculate the disparity value of the first camera component and the second camera component according to the aligned first image and second image, so as to calculate the depth data of each feature point based on the disparity value;
深度图像构建模块,用于基于所述深度数据构建深度图像;A depth image construction module, configured to construct a depth image based on the depth data;
图像分割模块,用于根据所述深度图像对所述第一图像进行图像分割以获取前景图像和背景图像;An image segmentation module, configured to perform image segmentation on the first image according to the depth image to obtain a foreground image and a background image;
图像融合模块，用于对所述背景图像进行虚化处理以获取虚化背景图像，并将所述虚化背景图像和所述前景图像进行图像融合，以获取所述当前场景对应的虚化图像。The image fusion module is used to blur the background image to obtain a blurred background image, and to fuse the blurred background image with the foreground image to obtain the blurred image corresponding to the current scene.
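The disparity-to-depth conversion performed by the depth data calculation module can be sketched with the standard rectified-stereo relation Z = f·B/d (depth = focal length × baseline / disparity). The focal length and baseline below are illustrative assumptions, not values from this application:

```python
# Hedged sketch of depth from disparity under the rectified-stereo assumption.
# FOCAL_PX and BASELINE_M are invented for the example.

FOCAL_PX = 1000.0   # focal length in pixels (assumed)
BASELINE_M = 0.02   # 2 cm baseline between the two camera components (assumed)

def depth_from_disparity(disparity_px):
    """Z = f * B / d; larger disparity means a closer feature point."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return FOCAL_PX * BASELINE_M / disparity_px

print(depth_from_disparity(40.0))  # depth in metres for a 40 px disparity
```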
可选地,在本申请实施例中,所述特征点匹配模块包括:Optionally, in the embodiment of the present application, the feature point matching module includes:
极平面构建单元,用于在空间坐标系中根据所述第一摄像组件、第二摄像组件和目标点构建极平面。The polar plane construction unit is used to construct a polar plane according to the first camera component, the second camera component and the target point in the space coordinate system.
极线构建单元,用于分别获取所述第一图像、第二图像与所述极平面相交的极线。The epipolar line constructing unit is used to obtain the epipolar lines that intersect the polar plane with the first image and the second image respectively.
特征点匹配单元,用于基于所述极线确定所述第一图像中特征点对应的所述第二图像中的特征点,以对齐所述第一图像和第二图像。The feature point matching unit is configured to determine the feature point in the second image corresponding to the feature point in the first image based on the epipolar line, so as to align the first image and the second image.
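The epipolar constraint used by the matching unit can be sketched as follows: given a fundamental matrix F, a point x in the first image maps to an epipolar line l' = F·x in the second image, and the matching feature point is searched along that line. The matrix below is a toy example for a purely horizontal stereo pair, not calibration data from this application:

```python
# Hedged sketch of computing the epipolar line in the second image for a
# feature point in the first image. F is illustrative, not real calibration.

def mat_vec(m, v):
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

F = [  # toy fundamental matrix for an ideal horizontal stereo pair
    [0.0, 0.0, 0.0],
    [0.0, 0.0, -1.0],
    [0.0, 1.0, 0.0],
]

x = [120.0, 80.0, 1.0]   # homogeneous feature point in the first image
line = mat_vec(F, x)     # epipolar line [a, b, c]: a*u + b*v + c = 0
print(line)              # -> [0.0, -1.0, 80.0], i.e. the row v = 80
```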
可选地,在本申请实施例中,所述装置100还包括:Optionally, in this embodiment of the present application, the device 100 further includes:
虚化过渡区域划分模块，用于基于所述前景图像的区域边界和所述虚化背景图像的区域边界对所述虚化图像进行区域划分，以获取所述前景图像与虚化背景图像之间的虚化过渡区域。The blurred transition area dividing module is used to divide the blurred image into areas based on the area boundary of the foreground image and the area boundary of the blurred background image, so as to obtain the blurred transition area between the foreground image and the blurred background image.
优化模块,用于按预设规则对所述虚化过渡区域进行图像质量评估,并在评估结果低于预设要求时,对所述虚化过渡区域进行优化处理。The optimization module is used to evaluate the image quality of the blurred transition area according to preset rules, and perform optimization processing on the blurred transition area when the evaluation result is lower than the preset requirement.
可选地,在本申请实施例中,所述优化模块包括:Optionally, in the embodiment of the present application, the optimization module includes:
子过渡区域划分单元,用于对所述虚化过渡区域进行区域划分以获取多个子过渡区域,并分别对各所述子过渡区进行图像质量评估。The sub-transition area dividing unit is configured to divide the virtual transition area to obtain a plurality of sub-transition areas, and respectively perform image quality evaluation on each of the sub-transition areas.
子过渡区域优化单元,用于在所述子过渡区域的评估结果低于预设要求时,对所述子过渡区域执行平滑处理以优化所述子过渡区域。The sub-transition area optimization unit is configured to perform smoothing processing on the sub-transition area to optimize the sub-transition area when the evaluation result of the sub-transition area is lower than a preset requirement.
可选地,在本申请实施例中,所述装置100还包括:Optionally, in this embodiment of the present application, the device 100 further includes:
第一图像变换模块,用于对所述第一图像进行尺寸变换以将原始尺寸的第一图像变换为目标尺寸的第一图像。The first image transformation module is configured to perform size transformation on the first image to transform the first image of the original size into the first image of the target size.
第二图像变换模块，用于在获取所述虚化图像后，将所述虚化图像进行尺寸变换以获取原始尺寸的所述虚化图像。The second image transformation module is configured to, after acquiring the blurred image, perform size transformation on the blurred image to obtain the blurred image of the original size.
可选地，在本申请实施例中，虚化过渡区域划分模块还用于以前景区域的边界为路径，向虚化背景区域延伸，按预设大小的窗口沿路径划分虚化过渡区域；其中，所述虚化过渡区域包含预设比例的前景图像。Optionally, in the embodiment of the present application, the blurred transition area dividing module is further configured to take the boundary of the foreground area as a path, extend toward the blurred background area, and divide the blurred transition area along the path using a window of a preset size; wherein the blurred transition area includes a preset proportion of the foreground image.
可选地，在本申请实施例中，同步执行模块还用于响应于第一触发操作启动第一摄像组件，并利用所述第一摄像组件采集当前场景图像；对所述当前场景图像进行图像识别以确定当前场景模式，并根据所述当前场景模式选择第二摄像组件。Optionally, in the embodiment of the present application, the synchronization execution module is further configured to activate the first camera component in response to the first trigger operation, and use the first camera component to collect the current scene image; perform image recognition on the current scene image to determine the current scene mode, and select the second camera component according to the current scene mode.
应理解,根据本申请实施例的图像处理装置100中的各个单元、模块和其它操作和/或功能分别为了实现图像处理方法中的相应流程,为了简洁,在此不再赘述。It should be understood that the various units, modules, and other operations and/or functions in the image processing apparatus 100 according to the embodiment of the present application are used to implement corresponding procedures in the image processing method, and are not repeated here for brevity.
应当注意,尽管在上文详细描述中提及了用于动作执行的设备的若干模块或者单元,但是这种划分并非强制性的。实际上,根据本公开的实施方式,上文描述的两个或更多模块或者单元的特征和功能可以在一个模块或者单元中具体化。反之,上文描述的一个模块或者单元的特征和功能可以进一步划分为由多个模块或者单元来具体化。It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, this division is not mandatory. In fact, according to the embodiments of the present disclosure, the features and functions of two or more modules or units described above may be embodied in one module or unit. Conversely, the features and functions of a module or unit described above can be further divided into multiple modules or units to be embodied.
图11示出了适于用来实现本发明实施例的电子设备的计算机系统的结构示意图。FIG. 11 shows a schematic structural diagram of a computer system suitable for implementing an electronic device according to an embodiment of the present invention.
需要说明的是，图11示出的电子设备的计算机系统110仅是一个示例，不应对本发明实施例的功能和使用范围带来任何限制。It should be noted that the computer system 110 of the electronic device shown in FIG. 11 is only an example, and should not impose any limitation on the function and scope of use of the embodiments of the present invention.
如图11所示，计算机系统110包括中央处理单元（Central Processing Unit，CPU）1101，其可以根据存储在只读存储器（Read-Only Memory，ROM）1102中的程序或者从存储部分1108加载到随机访问存储器（Random Access Memory，RAM）1103中的程序而执行各种适当的动作和处理。在RAM 1103中，还存储有系统操作所需的各种程序和数据。CPU 1101、ROM 1102以及RAM 1103通过总线1104彼此相连。输入/输出（Input/Output，I/O）接口1105也连接至总线1104。As shown in FIG. 11, the computer system 110 includes a central processing unit (CPU) 1101, which can execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1102 or a program loaded from a storage part 1108 into a random access memory (RAM) 1103. The RAM 1103 also stores various programs and data required for system operation. The CPU 1101, the ROM 1102, and the RAM 1103 are connected to one another through a bus 1104. An input/output (I/O) interface 1105 is also connected to the bus 1104.
以下部件连接至I/O接口1105：包括键盘、鼠标等的输入部分1106；包括诸如阴极射线管（Cathode Ray Tube，CRT）、液晶显示器（Liquid Crystal Display，LCD）等以及扬声器等的输出部分1107；包括硬盘等的存储部分1108；以及包括诸如LAN（Local Area Network，局域网）卡、调制解调器等的网络接口卡的通信部分1109。通信部分1109经由诸如因特网的网络执行通信处理。驱动器1110也根据需要连接至I/O接口1105。可拆卸介质1111，诸如磁盘、光盘、磁光盘、半导体存储器等等，根据需要安装在驱动器1110上，以便于从其上读出的计算机程序根据需要被安装入存储部分1108。The following components are connected to the I/O interface 1105: an input part 1106 including a keyboard, a mouse, etc.; an output part 1107 including a cathode ray tube (CRT), a liquid crystal display (LCD), etc., and a speaker; a storage part 1108 including a hard disk, etc.; and a communication part 1109 including a network interface card such as a LAN (Local Area Network) card, a modem, and the like. The communication part 1109 performs communication processing via a network such as the Internet. A drive 1110 is also connected to the I/O interface 1105 as needed. A removable medium 1111, such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like, is installed on the drive 1110 as needed, so that a computer program read therefrom is installed into the storage part 1108 as needed.
特别地,根据本发明的实施例,下文参考流程图描述的过程可以被实现为计算机软件程序。例如,本发明的实施例包括一种计算机程序产品,其包括承载在计算机可读介质上的计算机程序,该计算机程序包含用于执行流程图所示的方法的程序代码。在这样的实施例中,该计算机程序可以通过通信部分1109从网络上被下载和安装,和/或从可拆卸介质1111被安装。在该计算机程序被中央处理单元(CPU)1101执行时,执行本申请的***中限定的各种功能。In particular, according to an embodiment of the present invention, the process described below with reference to the flowchart can be implemented as a computer software program. For example, an embodiment of the present invention includes a computer program product, which includes a computer program carried on a computer-readable medium, and the computer program contains program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from the network through the communication part 1109, and/or installed from the removable medium 1111. When the computer program is executed by the central processing unit (CPU) 1101, various functions defined in the system of the present application are executed.
需要说明的是，本发明实施例所示的计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质或者是上述两者的任意组合。计算机可读存储介质例如可以是——但不限于——电、磁、光、电磁、红外线、或半导体的系统、装置或器件，或者任意以上的组合。计算机可读存储介质的更具体的例子可以包括但不限于：具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机访问存储器（RAM）、只读存储器（ROM）、可擦式可编程只读存储器（Erasable Programmable Read Only Memory，EPROM）、闪存、光纤、便携式紧凑磁盘只读存储器（Compact Disc Read-Only Memory，CD-ROM）、光存储器件、磁存储器件、或者上述的任意合适的组合。在本发明中，计算机可读存储介质可以是任何包含或存储程序的有形介质，该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。而在本发明中，计算机可读的信号介质可以包括在基带中或者作为载波一部分传播的数据信号，其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式，包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读的信号介质还可以是计算机可读存储介质以外的任何计算机可读介质，该计算机可读介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。计算机可读介质上包含的程序代码可以用任何适当的介质传输，包括但不限于：无线、有线等等，或者上述的任意合适的组合。It should be noted that the computer-readable medium shown in the embodiments of the present invention may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a flash memory, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present invention, the computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device. In the present invention, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium, which can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the above.
附图中的流程图和框图，图示了按照本发明各种实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上，流程图或框图中的每个方框可以代表一个模块、程序段、或代码的一部分，上述模块、程序段、或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意，在有些作为替换的实现中，方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如，两个接连地表示的方框实际上可以基本并行地执行，它们有时也可以按相反的顺序执行，这依所涉及的功能而定。也要注意的是，框图或流程图中的每个方框、以及框图或流程图中的方框的组合，可以用执行规定的功能或操作的专用的基于硬件的系统来实现，或者可以用专用硬件与计算机指令的组合来实现。The flowcharts and block diagrams in the drawings illustrate possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a part of code, which contains one or more executable instructions for realizing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in a block diagram or flowchart, and combinations of blocks therein, can be implemented by a dedicated hardware-based system that performs the specified function or operation, or by a combination of dedicated hardware and computer instructions.
描述于本发明实施例中所涉及到的单元可以通过软件的方式实现,也可以通过硬件的方式来实现,所描述的单元也可以设置在处理器中。其中,这些单元的名称在某种情况下并不构成对该单元本身的限定。The units described in the embodiments of the present invention may be implemented in software or hardware, and the described units may also be provided in a processor. Among them, the names of these units do not constitute a limitation on the unit itself under certain circumstances.
作为另一方面，本申请还提供了一种计算机可读介质，该计算机可读介质可以是上述实施例中描述的电子设备中所包含的；也可以是单独存在，而未装配入该电子设备中。上述计算机可读介质承载有一个或者多个程序，当上述一个或者多个程序被一个该电子设备执行时，使得该电子设备实现如下述实施例中所述的方法。例如，所述的电子设备可以实现如图1所示的各个步骤。As another aspect, the present application also provides a computer-readable medium. The computer-readable medium may be included in the electronic device described in the above embodiments, or it may exist alone without being assembled into the electronic device. The computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to implement the methods described in the following embodiments. For example, the electronic device can implement the steps shown in FIG. 1.
此外,上述附图仅是根据本发明示例性实施例的方法所包括的处理的示意性说明,而不是限制目的。易于理解,上述附图所示的处理并不表明或限制这些处理的时间顺序。另外,也易于理解,这些处理可以是例如在多个模块中同步或异步执行的。In addition, the above-mentioned drawings are merely schematic illustrations of the processing included in the method according to the exemplary embodiment of the present invention, and are not intended for limitation. It is easy to understand that the processing shown in the above drawings does not indicate or limit the time sequence of these processings. In addition, it is easy to understand that these processes can be executed synchronously or asynchronously in multiple modules, for example.
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。A person of ordinary skill in the art may realize that the units and algorithm steps of the examples described in combination with the embodiments disclosed herein can be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether these functions are executed by hardware or software depends on the specific application and design constraint conditions of the technical solution. Professionals and technicians can use different methods for each specific application to implement the described functions, but such implementation should not be considered beyond the scope of this application.
所属领域的技术人员可以清楚地了解到，为描述的方便和简洁，上述描述的系统、装置和单元的具体工作过程，可以参考前述方法实施例中的对应过程，在此不再赘述。Those skilled in the art can clearly understand that, for convenience and conciseness of description, for the specific working processes of the systems, devices, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which will not be repeated here.
在本申请所提供的几个实施例中，应该理解到，所揭露的系统、装置和方法，可以通过其它的方式实现。例如，以上所描述的装置实施例仅仅是示意性的，例如，该单元的划分，仅仅为一种逻辑功能划分，实际实现时可以有另外的划分方式，例如多个单元或组件可以结合或者可以集成到另一个系统，或一些特征可以忽略，或不执行。另一点，所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口，装置或单元的间接耦合或通信连接，可以是电性，机械或其它的形式。In the several embodiments provided in this application, it should be understood that the disclosed system, device, and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative; the division of units is only a logical functional division, and there may be other divisions in actual implementation — for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
该作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。The unit described as a separate component may or may not be physically separated, and the component displayed as a unit may or may not be a physical unit, that is, it may be located in one place, or may also be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。In addition, the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
该功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时，可以存储在一个计算机可读取存储介质中。基于这样的理解，本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来，该计算机软件产品存储在一个存储介质中，包括若干指令用以使得一台计算机设备（可以是个人计算机，服务器，或者网络设备等）执行本申请各个实施例的全部或部分步骤。而前述的存储介质包括：U盘、移动硬盘、只读存储器（Read-Only Memory，ROM）、随机存取存储器（Random Access Memory，RAM）、磁碟或者光盘等各种可以存储程序代码的介质。If the functions are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions to cause a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the various embodiments of the present application. The aforementioned storage media include: a U disk, a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and other media that can store program code.
以上所述，仅为本申请的具体实施方式，但本申请的保护范围并不局限于此，任何熟悉本技术领域的技术人员在本申请揭露的技术范围内，可轻易想到的变化或替换，都应涵盖在本申请的保护范围之内。因此，本申请的保护范围应该以权利要求的保护范围为准。The above are only specific implementations of this application, but the protection scope of this application is not limited thereto. Any changes or substitutions that can easily be conceived by a person skilled in the art within the technical scope disclosed in this application shall be covered by the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (20)

  1. An image processing method, characterized by comprising:
    in response to a first trigger operation, starting a first camera component and a second camera component, and synchronizing the first camera component and the second camera component; and
    acquiring a first exposure parameter of the first camera component for a current scene, and querying a preset parameter table based on the first exposure parameter to obtain a second exposure parameter corresponding to the second camera component;
    the first camera component applying the first exposure parameter to capture a first image, and the second camera component applying the second exposure parameter to capture a second image, so as to construct a blurred image corresponding to the current scene based on the first image and the second image.
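The parameter-table step of claim 1 can be illustrated with a minimal sketch. The table contents, the exposure-value (EV) keying, and the nearest-entry lookup are hypothetical assumptions for illustration; the claim only requires that the second exposure parameter be obtained from a preset table based on the first.

```python
# Hypothetical preset parameter table (not from the patent): the primary
# camera's exposure value (EV) maps to the secondary camera's (ISO, shutter_s).
PRESET_TABLE = {
    6.0: (800, 1 / 30),
    9.0: (400, 1 / 60),
    12.0: (100, 1 / 125),
}

def lookup_second_exposure(first_ev):
    """Return the secondary camera's exposure for the nearest tabled EV."""
    nearest = min(PRESET_TABLE, key=lambda ev: abs(ev - first_ev))
    return PRESET_TABLE[nearest]
```

A real implementation would key the table on the full exposure triple (ISO, shutter, aperture) and likely interpolate between entries rather than snap to the nearest one.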
  2. The method according to claim 1, wherein the synchronizing the first camera component and the second camera component comprises:
    sending a synchronization control signal to the second camera component based on state information of the first camera component, so that the second camera component synchronizes with the first camera component according to the synchronization control signal.
  3. The method according to claim 1, wherein the constructing a blurred image corresponding to the current scene based on the first image and the second image comprises:
    performing feature point matching on the first image and the second image to align the first image and the second image;
    calculating a disparity value of the first camera component and the second camera component from the aligned first image and second image, so as to calculate depth data of each feature point based on the disparity value;
    constructing a depth image based on the depth data;
    performing image segmentation on the first image according to the depth image to obtain a foreground image and a background image;
    performing blurring processing on the background image to obtain a blurred background image, and fusing the blurred background image with the foreground image to obtain the blurred image corresponding to the current scene.
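The depth step of claim 3 follows the standard stereo relation Z = f·B/d (focal length times baseline over disparity), and the final step composites foreground over blurred background. A minimal sketch, assuming rectified cameras and a binary foreground mask already derived from a depth threshold:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Stereo depth Z = f * B / d; zero disparity maps to infinite depth."""
    d = np.asarray(disparity_px, dtype=float)
    depth = np.full(d.shape, np.inf)
    np.divide(focal_px * baseline_m, d, out=depth, where=d > 0)
    return depth

def fuse(foreground, blurred_background, fg_mask):
    """Claim 3's final step: foreground pixels where the mask is set,
    blurred background pixels elsewhere."""
    return np.where(fg_mask, foreground, blurred_background)
```

With a 1000 px focal length and a 2 cm baseline, a 2 px disparity corresponds to a depth of 10 m; nearer points have larger disparities and would fall on the foreground side of the threshold.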
  4. The method according to claim 3, wherein the performing feature point matching on the first image and the second image to align the first image and the second image comprises:
    constructing an epipolar plane from the first camera component, the second camera component, and a target point in a spatial coordinate system;
    obtaining the epipolar lines at which the first image and the second image respectively intersect the epipolar plane;
    determining, based on the epipolar lines, the feature point in the second image that corresponds to each feature point in the first image, so as to align the first image and the second image.
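Claim 4's epipolar constraint can be sketched with a fundamental matrix F (assumed known from calibration, which the claim does not recite): the match for a point in the first image is searched only along its epipolar line l2 = F·x1 in the second image, rather than over the whole frame.

```python
import numpy as np

def epipolar_line(F, pt1):
    """Epipolar line l2 = F @ x1 in the second image, as homogeneous
    coefficients (a, b, c) of the line a*x + b*y + c = 0."""
    x1 = np.array([pt1[0], pt1[1], 1.0])
    return F @ x1

def point_line_distance(line, pt2):
    a, b, c = line
    x, y = pt2
    return abs(a * x + b * y + c) / np.hypot(a, b)

def match_along_epipolar(F, pt1, candidates, tol=1.0):
    """Keep only second-image candidates lying near the epipolar line of pt1."""
    line = epipolar_line(F, pt1)
    return [p for p in candidates if point_line_distance(line, p) < tol]
```

For a rectified side-by-side pair, F reduces to [[0,0,0],[0,0,-1],[0,1,0]] and every epipolar line is the horizontal scanline of the source point, which is what makes disparity a purely horizontal offset.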
  5. The method according to claim 3, wherein after the obtaining the blurred image of the current scene, the method further comprises:
    dividing the blurred image into regions based on a region boundary of the foreground image and a region boundary of the blurred background image, to obtain a blurred transition area between the foreground image and the blurred background image;
    performing image quality evaluation on the blurred transition area according to a preset rule, and optimizing the blurred transition area when the evaluation result is lower than a preset requirement.
  6. The method according to claim 5, wherein the performing image quality evaluation on the blurred transition area according to a preset rule, and optimizing the blurred transition area when the evaluation result is lower than a preset requirement, comprises:
    dividing the blurred transition area into a plurality of sub-transition areas, and performing image quality evaluation on each sub-transition area;
    when the evaluation result of a sub-transition area is lower than the preset requirement, performing smoothing on that sub-transition area to optimize it.
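Claim 6 leaves both the quality metric and the smoothing operation open. A sketch under two assumptions of ours: the score is the largest jump between horizontally adjacent pixels (a harsh cut between sharp foreground and blurred background scores high), and the smoothing is a 3×3 mean filter.

```python
import numpy as np

def roughness(region):
    """Hypothetical quality score: the largest jump between horizontally
    adjacent pixels (higher = harsher foreground/background seam)."""
    r = np.asarray(region, dtype=float)
    return float(np.abs(np.diff(r, axis=1)).max())

def mean_filter3(region):
    """3x3 mean filter with edge padding, one possible smoothing step."""
    r = np.asarray(region, dtype=float)
    p = np.pad(r, 1, mode="edge")
    h, w = r.shape
    return sum(p[dy:dy + h, dx:dx + w]
               for dy in range(3) for dx in range(3)) / 9.0

def optimize_subregions(subregions, threshold):
    """Smooth only the sub-transition areas whose score exceeds the threshold;
    acceptable sub-areas pass through untouched."""
    return [mean_filter3(s) if roughness(s) > threshold else s
            for s in subregions]
```

Evaluating per sub-area, as the claim recites, means a clean stretch of the transition ring is left alone while only the offending windows pay the cost of extra filtering.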
  7. The method according to claim 5, wherein the dividing the blurred image into regions based on a region boundary of the foreground image and a region boundary of the blurred background image, to obtain a blurred transition area between the foreground image and the blurred background image, comprises:
    taking the boundary of the foreground region as a path, extending toward the blurred background region, and dividing out the blurred transition area along the path with a window of a preset size, wherein the blurred transition area contains a preset proportion of the foreground image.
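Claim 7's window-based division can be sketched on a binary foreground mask: slide a preset-size window over the image and keep the windows that straddle the boundary, i.e. contain at least a preset proportion of foreground without being entirely foreground. The window size and ratio below are arbitrary illustration values, and a real implementation would walk the boundary path rather than scan the full frame.

```python
import numpy as np

def transition_area(fg_mask, win=3, fg_ratio=0.2):
    """Mark pixels whose win x win neighbourhood straddles the
    foreground/background boundary and holds >= fg_ratio foreground."""
    m = np.asarray(fg_mask, dtype=float)
    h, w = m.shape
    out = np.zeros((h, w), dtype=bool)
    r = win // 2
    for y in range(r, h - r):
        for x in range(r, w - r):
            frac = m[y - r:y + r + 1, x - r:x + r + 1].mean()
            if fg_ratio <= frac < 1.0:  # window straddles the boundary
                out[y, x] = True
    return out
```

On a mask whose left half is foreground, this marks a vertical band around the dividing line and leaves the pure-foreground and pure-background interiors untouched.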
  8. The method according to claim 3, wherein before the aligning the first image and the second image, the method further comprises:
    performing size transformation on the first image to transform the first image from an original size to a target size; and
    after the blurred image is obtained, performing size transformation on the blurred image to obtain the blurred image at the original size.
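Claim 8's size transformations (downscale before the alignment and depth computation, then restore the blurred result to the original size) can be sketched with nearest-neighbour resizing; the interpolation method is our assumption, as the claim does not specify one.

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize by integer index mapping."""
    a = np.asarray(img)
    h, w = a.shape[:2]
    ys = np.arange(out_h) * h // out_h
    xs = np.arange(out_w) * w // out_w
    return a[ys][:, xs]

# Round trip: shrink the capture for cheaper depth/blur processing,
# then bring the blurred result back to the original resolution.
original = np.arange(64).reshape(8, 8)
small = resize_nearest(original, 4, 4)    # processed at the target size
restored = resize_nearest(small, 8, 8)    # blurred result at the original size
```

Production pipelines would typically use bilinear or bicubic interpolation on the upscale so the restored blurred background does not show blocking.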
  9. The method according to claim 1, wherein the starting a first camera component and a second camera component in response to a first trigger operation comprises:
    starting the first camera component in response to the first trigger operation, and collecting a current scene image with the first camera component;
    performing image recognition on the current scene image to determine a current scene mode, and selecting the second camera component according to the current scene mode.
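The selection in claim 9 amounts to a scene-mode to camera mapping once recognition has produced a mode label. The mode names and camera choices below are hypothetical, since the patent does not enumerate them.

```python
# Hypothetical scene-mode -> secondary-camera mapping (illustrative only;
# the patent does not enumerate modes or camera types).
SCENE_TO_SECOND_CAMERA = {
    "portrait": "telephoto",
    "landscape": "wide_angle",
    "night": "high_sensitivity",
}

def select_second_camera(scene_mode, default="wide_angle"):
    """Pick the secondary camera component from the recognized scene mode."""
    return SCENE_TO_SECOND_CAMERA.get(scene_mode, default)
```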
  10. An image processing device, characterized by comprising:
    a synchronization execution module, configured to start a first camera component and a second camera component in response to a first trigger operation, and to synchronize the first camera component and the second camera component; and
    a parameter query module, configured to acquire a first exposure parameter of the first camera component for a current scene, and to query a preset parameter table based on the first exposure parameter to obtain a second exposure parameter corresponding to the second camera component;
    a parameter execution module, configured for the first camera component to apply the first exposure parameter to capture a first image and for the second camera component to apply the second exposure parameter to capture a second image, so as to construct a blurred image corresponding to the current scene based on the first image and the second image.
  11. The device according to claim 10, wherein the synchronization execution module comprises:
    a synchronization information processing unit, configured to send a synchronization control signal to the second camera component based on state information of the first camera component, so that the second camera component synchronizes with the first camera component according to the synchronization control signal.
  12. The device according to claim 10, wherein the device further comprises:
    a feature point matching module, configured to perform feature point matching on the first image and the second image to align the first image and the second image;
    a depth data calculation module, configured to calculate a disparity value of the first camera component and the second camera component from the aligned first image and second image, so as to calculate depth data of each feature point based on the disparity value;
    a depth image construction module, configured to construct a depth image based on the depth data;
    an image segmentation module, configured to perform image segmentation on the first image according to the depth image to obtain a foreground image and a background image;
    an image fusion module, configured to perform blurring processing on the background image to obtain a blurred background image, and to fuse the blurred background image with the foreground image to obtain the blurred image corresponding to the current scene.
  13. The device according to claim 12, wherein the feature point matching module further comprises:
    an epipolar plane construction unit, configured to construct an epipolar plane from the first camera component, the second camera component, and a target point in a spatial coordinate system;
    an epipolar line construction unit, configured to obtain the epipolar lines at which the first image and the second image respectively intersect the epipolar plane;
    a feature point matching unit, configured to determine, based on the epipolar lines, the feature point in the second image that corresponds to each feature point in the first image, so as to align the first image and the second image.
  14. The device according to claim 10, wherein the device further comprises:
    a blurred transition area division module, configured to divide the blurred image into regions based on a region boundary of the foreground image and a region boundary of the blurred background image, to obtain a blurred transition area between the foreground image and the blurred background image;
    an optimization module, configured to perform image quality evaluation on the blurred transition area according to a preset rule, and to optimize the blurred transition area when the evaluation result is lower than a preset requirement.
  15. The device according to claim 14, wherein the optimization module comprises:
    a sub-transition area division unit, configured to divide the blurred transition area into a plurality of sub-transition areas, and to perform image quality evaluation on each sub-transition area;
    a sub-transition area optimization unit, configured to perform smoothing on a sub-transition area to optimize it when the evaluation result of that sub-transition area is lower than the preset requirement.
  16. The device according to claim 14, wherein the blurred transition area division module is further configured to take the boundary of the foreground region as a path, extend toward the blurred background region, and divide out the blurred transition area along the path with a window of a preset size, wherein the blurred transition area contains a preset proportion of the foreground image.
  17. The device according to claim 12, wherein the device further comprises:
    a first image transformation module, configured to perform size transformation on the first image to transform the first image from an original size to a target size;
    a second image transformation module, configured to perform size transformation on the blurred image after the blurred image is obtained, to obtain the blurred image at the original size.
  18. The device according to claim 10, wherein the synchronization execution module is further configured to start the first camera component in response to the first trigger operation and collect a current scene image with the first camera component, to perform image recognition on the current scene image to determine a current scene mode, and to select the second camera component according to the current scene mode.
  19. A computer-readable medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the image processing method according to any one of claims 1 to 9.
  20. An electronic device, characterized by comprising:
    one or more processors; and
    a storage device configured to store one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the image processing method according to any one of claims 1 to 9.
PCT/CN2020/139236 2019-12-31 2020-12-25 Image processing method, image processing system, computer readable medium, and electronic apparatus WO2021136078A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911406556.X 2019-12-31
CN201911406556.XA CN113129241B (en) 2019-12-31 2019-12-31 Image processing method and device, computer readable medium and electronic equipment

Publications (1)

Publication Number Publication Date
WO2021136078A1 true WO2021136078A1 (en) 2021-07-08

Family

ID=76686483

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/139236 WO2021136078A1 (en) 2019-12-31 2020-12-25 Image processing method, image processing system, computer readable medium, and electronic apparatus

Country Status (2)

Country Link
CN (1) CN113129241B (en)
WO (1) WO2021136078A1 (en)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113935930A (en) * 2021-09-09 2022-01-14 深圳市优***科技股份有限公司 Image fusion method and system
CN114125296A (en) * 2021-11-24 2022-03-01 广东维沃软件技术有限公司 Image processing method, image processing device, electronic equipment and readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106851122A (en) * 2017-02-27 2017-06-13 上海兴芯微电子科技有限公司 The scaling method and device of the auto exposure parameter based on dual camera system
US20170249742A1 (en) * 2016-02-25 2017-08-31 Nigella LAWSON Depth of field processing
CN109377460A (en) * 2018-10-15 2019-02-22 Oppo广东移动通信有限公司 A kind of image processing method, image processing apparatus and terminal device
CN109862269A (en) * 2019-02-18 2019-06-07 Oppo广东移动通信有限公司 Image-pickup method, device, electronic equipment and computer readable storage medium
CN110493506A (en) * 2018-12-12 2019-11-22 杭州海康威视数字技术股份有限公司 A kind of image processing method and system

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9460118B2 (en) * 2014-09-30 2016-10-04 Duelight Llc System, method, and computer program product for exchanging images
CN104751405B (en) * 2015-03-11 2018-11-13 百度在线网络技术(北京)有限公司 A kind of method and apparatus for being blurred to image
JP2017192086A (en) * 2016-04-15 2017-10-19 キヤノン株式会社 Image generating apparatus, image observing apparatus, imaging apparatus and image processing program
CN108270960A (en) * 2016-12-30 2018-07-10 聚晶半导体股份有限公司 Image capturing device and its control method
CN108230252B (en) * 2017-01-24 2022-02-01 深圳市商汤科技有限公司 Image processing method and device and electronic equipment
CN107040726B (en) * 2017-04-19 2020-04-07 宇龙计算机通信科技(深圳)有限公司 Double-camera synchronous exposure method and system
CN107610046A (en) * 2017-10-24 2018-01-19 上海闻泰电子科技有限公司 Background-blurring method, apparatus and system
CN108024058B (en) * 2017-11-30 2019-08-02 Oppo广东移动通信有限公司 Image blurs processing method, device, mobile terminal and storage medium
CN108322646B (en) * 2018-01-31 2020-04-10 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN108335323B (en) * 2018-03-20 2020-12-29 厦门美图之家科技有限公司 Blurring method of image background and mobile terminal
CN109862262A (en) * 2019-01-02 2019-06-07 上海闻泰电子科技有限公司 Image weakening method, device, terminal and storage medium


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115802153A (en) * 2021-09-09 2023-03-14 哲库科技(上海)有限公司 Image shooting method and device, computer equipment and storage medium
CN114025107A (en) * 2021-12-01 2022-02-08 北京七维视觉科技有限公司 Image ghost shooting method and device, storage medium and fusion processor
CN114025107B (en) * 2021-12-01 2023-12-01 北京七维视觉科技有限公司 Image ghost shooting method, device, storage medium and fusion processor
CN115002345A (en) * 2022-05-13 2022-09-02 北京字节跳动网络技术有限公司 Image correction method and device, electronic equipment and storage medium
CN115002345B (en) * 2022-05-13 2024-02-13 北京字节跳动网络技术有限公司 Image correction method, device, electronic equipment and storage medium
CN116095517A (en) * 2022-08-31 2023-05-09 荣耀终端有限公司 Blurring method and blurring device
CN116095517B (en) * 2022-08-31 2024-04-09 荣耀终端有限公司 Blurring method, terminal device and readable storage medium
CN117152398A (en) * 2023-10-30 2023-12-01 深圳优立全息科技有限公司 Three-dimensional image blurring method, device, equipment and storage medium
CN117152398B (en) * 2023-10-30 2024-02-13 深圳优立全息科技有限公司 Three-dimensional image blurring method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN113129241B (en) 2023-02-07
CN113129241A (en) 2021-07-16

Similar Documents

Publication Publication Date Title
WO2021136078A1 (en) Image processing method, image processing system, computer readable medium, and electronic apparatus
WO2019105214A1 (en) Image blurring method and apparatus, mobile terminal and storage medium
KR102278776B1 (en) Image processing method, apparatus, and apparatus
US11756223B2 (en) Depth-aware photo editing
US10540806B2 (en) Systems and methods for depth-assisted perspective distortion correction
WO2018201809A1 (en) Double cameras-based image processing device and method
WO2019105262A1 (en) Background blur processing method, apparatus, and device
WO2018228467A1 (en) Image exposure method and device, photographing device, and storage medium
KR102480245B1 (en) Automated generation of panning shots
CN106899781B (en) Image processing method and electronic equipment
EP2328125A1 (en) Image splicing method and device
EP3067746A1 (en) Photographing method for dual-camera device and dual-camera device
CN110336942B (en) Blurred image acquisition method, terminal and computer-readable storage medium
CN109040596B (en) Method for adjusting camera, mobile terminal and storage medium
WO2019119986A1 (en) Image processing method and device, computer readable storage medium, and electronic apparatus
WO2019037038A1 (en) Image processing method and device, and server
WO2022160857A1 (en) Image processing method and apparatus, and computer-readable storage medium and electronic device
CN110213491B (en) Focusing method, device and storage medium
WO2020042000A1 (en) Camera device and focusing method
WO2023142352A1 (en) Depth image acquisition method and device, terminal, imaging system and medium
CN114363522A (en) Photographing method and related device
CN108289170A (en) The camera arrangement and method of metering region can be detected
JP2001208522A (en) Distance image generator, distance image generation method and program supply medium
CN116347056A (en) Image focusing method, device, computer equipment and storage medium
CN116051736A (en) Three-dimensional reconstruction method, device, edge equipment and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20909281; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 20909281; Country of ref document: EP; Kind code of ref document: A1)