CN113409209B - Image deblurring method, device, electronic equipment and storage medium


Info

Publication number: CN113409209B
Application number: CN202110672843.6A
Authority: CN (China)
Prior art keywords: image, deblurring, original image, original, time
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN113409209A
Inventor: 邹子杰
Current assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Original assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202110672843.6A
Publication of CN113409209A
Application granted
Publication of CN113409209B


Classifications

    • G06T5/73: Deblurring; Sharpening (under G06T5/00, Image enhancement or restoration)
    • G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T7/33: Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods (under G06T7/00, Image analysis)
    • G06T2207/10004: Still image; Photographic image
    • G06T2207/10016: Video; Image sequence
    • G06T2207/20016: Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G06T2207/20048: Transform domain processing
    • G06T2207/20081: Training; Learning
    • G06T2207/20084: Artificial neural networks [ANN]
    • G06T2207/20221: Image fusion; Image merging (under G06T2207/20212, Image combination)
    (All under G: Physics; G06: Computing, calculating or counting; G06T: Image data processing or generation, in general.)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The disclosure provides an image deblurring method, an image deblurring device, a storage medium and an electronic device, and relates to the technical field of image and video processing. The image deblurring method comprises the following steps: acquiring a frame of first original image; performing single-frame deblurring on the first original image to obtain a first deblurred image; acquiring one or more frames of second original images acquired within a neighborhood time of the shooting time of the first original image; and deblurring the first deblurred image using the second original images to obtain a second deblurred image. The method combines single-frame deblurring and multi-frame deblurring, helps improve the image deblurring effect, and is especially suitable for deblurring face images.

Description

Image deblurring method, device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of image and video processing technologies, and in particular, to an image deblurring method, an image deblurring device, a computer readable storage medium, and an electronic apparatus.
Background
In the image capturing process, blurring of an image due to shake, defocus or other causes is common. Image deblurring is a task concerning image quality; because the causes of blur are complex, it is difficult for the related art to achieve a high-quality deblurring effect.
Disclosure of Invention
The present disclosure provides an image deblurring method, an image deblurring apparatus, a computer-readable storage medium, and an electronic device, thereby improving an image deblurring effect at least to some extent.
According to a first aspect of the present disclosure, there is provided an image deblurring method comprising: acquiring a frame of first original image; performing single-frame deblurring on the first original image to obtain a first deblurred image; acquiring one or more frames of second original images acquired within a neighborhood time of the shooting time of the first original image; and deblurring the first deblurred image using the second original images to obtain a second deblurred image.
According to a second aspect of the present disclosure, there is provided an image deblurring apparatus comprising: a first acquisition module configured to acquire a frame of first original image; a first deblurring module configured to perform single-frame deblurring on the first original image to obtain a first deblurred image; a second acquisition module configured to acquire one or more frames of second original images acquired within a neighborhood time of the shooting time of the first original image; and a second deblurring module configured to deblur the first deblurred image using the second original images to obtain a second deblurred image.
According to a third aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the image deblurring method of the first aspect described above and possible implementations thereof.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the image deblurring method of the first aspect described above and possible implementations thereof via execution of the executable instructions.
The technical scheme of the present disclosure has the following beneficial effects:
The method combines single-frame deblurring and multi-frame deblurring and utilizes the spatial-domain and temporal-domain information of the images, which helps improve the deblurring effect. It is especially suitable for deblurring face images: it can remove blur caused by rigid motion of the face as well as blur related to image quality, and restore facial texture and detail, so that the deblurred image is sharper and more realistic.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
Fig. 1 shows a schematic diagram of a system architecture in the present exemplary embodiment;
Fig. 2 shows a schematic structural diagram of an electronic device in the present exemplary embodiment;
Fig. 3 shows a flowchart of an image deblurring method in the present exemplary embodiment;
Fig. 4 shows a schematic diagram of determining a first original image in the present exemplary embodiment;
FIG. 5 illustrates a flowchart for determining image blur level in the present exemplary embodiment;
fig. 6 is a schematic diagram showing the structure of a single frame deblurring network in the present exemplary embodiment;
FIG. 7 shows a schematic diagram of a training single frame deblurring network in the present exemplary embodiment;
fig. 8 shows a schematic diagram of determining a second original image in the present exemplary embodiment;
fig. 9 shows a flowchart of acquiring a second original image in the present exemplary embodiment;
fig. 10 shows a schematic diagram of image fusion in the present exemplary embodiment;
Fig. 11 shows a flowchart of image registration in the present exemplary embodiment;
fig. 12 shows a schematic diagram of matching of an image pyramid with feature points in the present exemplary embodiment;
fig. 13 shows a flowchart of another image deblurring method in the present exemplary embodiment;
Fig. 14 shows a schematic configuration of an image deblurring device in the present exemplary embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present disclosure. One skilled in the relevant art will recognize, however, that the aspects of the disclosure may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only and not necessarily all steps are included. For example, some steps may be decomposed, and some steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations.
Image blur can be classified by type into lens hardware blur, motion blur, defocus blur, and other types. Motion blur is the type that occurs most frequently when shooting with a terminal such as a smartphone, and its causes are very complex. Face images are a particular difficulty: they are usually shot in natural environments with complex backgrounds, their blurred textures and colors bear extremely high similarity to the background to varying degrees, and the blur is discontinuous and hard to identify, so deblurring such images is harder.
In one scheme of the related art, a blur kernel of the image is estimated by a data model, and deblurring is performed using the blur kernel. However, when the causes of blur are complex and hard to analyze, the blur kernel cannot be estimated accurately, which affects the deblurring quality.
In view of the above, exemplary embodiments of the present disclosure provide an image deblurring method. The system architecture of the image deblurring method operating environment is described first.
Fig. 1 shows a schematic diagram of a system architecture, which system architecture 100 may include a terminal 110 and a server 120. The terminal 110 may be a terminal device such as a desktop computer, a notebook computer, a smart phone, a tablet computer, etc., and the server 120 may be a server providing services related to image processing, or a cluster formed by a plurality of servers. The terminal 110 and the server 120 may form a connection through a wired or wireless communication link for data interaction. The terminal 110 may capture or acquire the first original image or the second original image from other devices. In one embodiment, the terminal 110 may send the first original image and the second original image to the server 120, and the server 120 outputs a deblurred image (such as the second deblurred image or the third deblurred image) and returns to the terminal 110 by performing the image deblurring method in the present exemplary embodiment. In one embodiment, the image deblurring method in the present exemplary embodiment may be performed by the terminal 110 to obtain a deblurred image.
Application scenarios of the image deblurring method include, but are not limited to: when a user opens a photographing function on the terminal 110 and clicks a photographing key (shutter key) in a photographing interface, the terminal 110 is triggered to photograph a first original image through a built-in camera and photograph a second original image in a subsequent neighborhood time; the terminal 110 performs the above image deblurring method, or sends the first original image and the second original image to the server 120, and the server 120 performs the above image deblurring method, so as to finally obtain a deblurred image, and displays and stores the deblurred image on a shooting interface of the terminal 110, so as to implement image deblurring processing synchronous with shooting.
As described above, the main execution body of the image deblurring method may be the terminal 110 or the server 120. Exemplary embodiments of the present disclosure also provide an electronic device for performing the image deblurring method, which may be the terminal 110 or the server 120. The configuration of the above-described electronic device will be exemplarily described below taking the mobile terminal 200 in fig. 2 as an example. It will be appreciated by those skilled in the art that the configuration of fig. 2 can also be applied to stationary type devices in addition to components specifically for mobile purposes.
As shown in fig. 2, the mobile terminal 200 may specifically include: processor 210, internal memory 221, external memory interface 222, USB (Universal Serial Bus) interface 230, charge management module 240, power management module 241, battery 242, antenna 1, antenna 2, mobile communication module 250, wireless communication module 260, audio module 270, speaker 271, receiver 272, microphone 273, headset interface 274, sensor module 280, display screen 290, camera module 291, indicator 292, motor 293, keys 294, and SIM (Subscriber Identity Module) card interface 295, and the like.
Processor 210 may include one or more processing units. For example, processor 210 may include an AP (Application Processor), a modem processor, a GPU (Graphics Processing Unit), an ISP (Image Signal Processor), a controller, an encoder, a decoder, a DSP (Digital Signal Processor), a baseband processor and/or an NPU (Neural-network Processing Unit), and the like.
The encoder may encode (i.e., compress) an image or video, for example the current image, to obtain bitstream data; the decoder may decode (i.e., decompress) the bitstream data of an image or video to restore the image or video data. The mobile terminal 200 may support one or more encoders and decoders, and can thus process images or videos in various encoding formats, such as: image formats such as JPEG (Joint Photographic Experts Group), PNG (Portable Network Graphics) and BMP (Bitmap), and video formats such as MPEG (Moving Picture Experts Group) 1, MPEG2, H.263, H.264 and HEVC (High Efficiency Video Coding).
In one embodiment, processor 210 may include one or more interfaces through which connections are made with other components of mobile terminal 200.
Internal memory 221 may be used to store computer executable program code that includes instructions. The internal memory 221 may include a volatile memory and a nonvolatile memory. The processor 210 performs various functional applications of the mobile terminal 200 and data processing by executing instructions stored in the internal memory 221.
The external memory interface 222 may be used to connect an external memory, such as a Micro SD card, to enable expansion of the memory capabilities of the mobile terminal 200. The external memory communicates with the processor 210 through the external memory interface 222 to implement data storage functions, such as storing files of images, videos, and the like.
The USB interface 230 is an interface conforming to the USB standard specification, and may be used to connect a charger to charge the mobile terminal 200, or may be connected to a headset or other electronic device.
The charge management module 240 is configured to receive a charge input from a charger. The charging management module 240 may also supply power to the device through the power management module 241 while charging the battery 242; the power management module 241 may also monitor the status of the battery.
The wireless communication function of the mobile terminal 200 may be implemented by the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, a modem processor, a baseband processor, and the like. The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. The mobile communication module 250 may provide 2G, 3G, 4G, 5G and other mobile communication solutions applied on the mobile terminal 200. The wireless communication module 260 may provide wireless communication solutions applied on the mobile terminal 200 such as WLAN (Wireless Local Area Network, e.g., a Wi-Fi (Wireless Fidelity) network), BT (Bluetooth), GNSS (Global Navigation Satellite System), FM (Frequency Modulation), NFC (Near Field Communication), and IR (Infrared).
The mobile terminal 200 may implement a display function through a GPU, a display screen 290, an AP, and the like, and display a user interface. For example, when a user performs camera detection, the mobile terminal 200 may display an interface of a camera detection App (Application) in the display screen 290.
The mobile terminal 200 may implement a photographing function through an ISP, a camera module 291, an encoder, a decoder, a GPU, a display 290, an AP, and the like. For example, the user may start an image or video capturing function in the hidden camera detection App, and at this time, an image of the space to be detected may be acquired through the image capturing module 291.
The mobile terminal 200 may implement audio functions through an audio module 270, a speaker 271, a receiver 272, a microphone 273, a headphone interface 274, an AP, and the like.
The sensor module 280 may include a depth sensor 2801, a pressure sensor 2802, a gyro sensor 2803, a barometric sensor 2804, etc. to implement a corresponding sensing function.
The indicator 292 may be an indicator light, which may be used to indicate a state of charge, a change in power, a message indicating a missed call, a notification, etc. The motor 293 may generate vibration cues, may also be used for touch vibration feedback, or the like. The keys 294 include a power on key, a volume key, etc.
The mobile terminal 200 may support one or more SIM card interfaces 295 for interfacing with a SIM card to enable telephony and mobile communications, among other functions.
Fig. 3 illustrates an exemplary flow of the image deblurring method described above, which may include:
Step S310, a frame of first original image is obtained;
Step S320, performing single-frame deblurring processing on the first original image to obtain a first deblurred image;
Step S330, acquiring one or more frames of second original images acquired in a neighborhood time of the shooting time of the first original image;
In step S340, the first deblurred image is deblurred using the second original image to obtain a second deblurred image.
The first original image and the second original image are images acquired by the terminal that have not undergone deblurring. The first original image is adjacent to the second original image in shooting time, and the two may be shot of the same subject or scene. In one embodiment, the first original image and the second original image may be images whose main subject is a face. The first deblurred image is an intermediate image obtained by single-frame deblurring of the first original image, not the final output; the second deblurred image may be the final output image after the complete deblurring process.
In one embodiment, the second deblurred image may be further subjected to single-frame deblurring to obtain a third deblurred image, which is finally output.
Based on this method, single-frame deblurring and multi-frame deblurring are combined and the spatial-domain and temporal-domain information of the images is utilized, which helps improve the deblurring effect. The method is especially suitable for deblurring face images: it can remove blur caused by rigid motion of the face as well as blur related to image quality, and restore facial texture and detail, so that the deblurred image is sharper and more realistic.
Each step in fig. 3 is specifically described below.
Referring to fig. 3, in step S310, a frame of a first original image is acquired.
The first original image is an original image acquired during the regular shooting time. The regular shooting time refers to the time when the user operates the terminal to shoot, or when the terminal shoots according to an automatic shooting setting. The regular shooting time may be the time of shooting a single image, for example the shooting trigger time, i.e., the moment the user clicks the shooting key, at which the terminal collects one original image as the first original image. The regular shooting time may also be a time period covering the shooting trigger time, during which the terminal collects a plurality of original images and selects one of them as the first original image.
In one embodiment, step S310 may include:
acquiring a frame of reference original image and one or more frames of adjacent original images collected at the shooting trigger time, and taking the frame with the lowest blur degree among the reference original image and the adjacent original images as the first original image.
Referring to fig. 4 for example, when the camera of the terminal is activated, it captures original images at every moment, typically as preview images; for example, at a camera preview frame rate of 30 fps, 30 frames of preview images can be captured per second and stored in a buffer. When the user presses the shooting key at time t2, the terminal is triggered to acquire a frame of original image F2 at time t2, recorded as the reference original image; in fact, the terminal is also acquiring original images at times t0 and t1 before t2 and at times t3 and t4 after t2. The terminal may read one or more frames of original images adjacent to F2, for example the previous frame F1 and the next frame F3, thereby obtaining three frames F1, F2, F3 and taking the frame with the lowest blur degree (i.e., the highest sharpness) as the first original image; alternatively, t0~t4 may be taken as the regular shooting time and all 5 frames within it read.
Note that, in the present exemplary embodiment, the number of the obtained adjacent original images is not limited, and for example, a preset number of adjacent original images may be obtained, or all the adjacent original images in the buffer may be obtained.
After the reference original image and the adjacent original images are acquired, the frame with the lowest blur degree (i.e., the highest sharpness) is taken as the first original image, so that the first original image is the sharpest available image before deblurring is performed, which improves the sharpness of the subsequently deblurred image.
The present exemplary embodiment does not limit the specific manner of calculating the blur degree (or sharpness), which is described below by way of example:
Referring to fig. 5, after the reference original image and the adjacent original images are acquired, the blur degree of each frame can be determined through steps S510 and S520:
Step S510, converting the reference original image or the adjacent original image into a frequency domain image;
Step S520, taking the difference between the high-frequency component statistic and the low-frequency component statistic in the frequency domain image as the blur degree of the reference original image or the adjacent original image.
Taking the calculation of the blur degree of image F2 as an example: F2 itself is a spatial-domain image and may be converted into a frequency-domain image by Fourier transform, wavelet transform, discrete cosine transform, or the like. The frequency-domain image includes a high-frequency part and a low-frequency part; the low-frequency part mostly carries information of flat areas, color blocks, and the like, while the high-frequency part mostly carries information of texture, edges, noise, and the like. The difference between the high-frequency component statistic and the low-frequency component statistic is then calculated; in general, the more blurred the image, the more blur noise it contains and the larger this difference, so the difference can serve as the blur degree of the image. The high-frequency component statistic may be the average signal amplitude of the high-frequency part, and the low-frequency component statistic the average signal amplitude of the low-frequency part, so the blur degree may be expressed as:

Diff_freq = Avg(Freq_power_high) - Avg(Freq_power_low)    (1)

where Avg(Freq_power_high) denotes averaging the signal amplitude over the high-frequency part of the frequency-domain image, and Avg(Freq_power_low) denotes averaging the signal amplitude over the low-frequency part. Computing the blur degree as the difference of these two statistics keeps the calculation simple and reduces the computational load of the overall scheme.
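As an illustration, the following is a minimal Python (NumPy) sketch of formula (1). The centered 2-D FFT and the radial cutoff separating the low- and high-frequency parts are assumptions of this sketch; the exemplary embodiment does not specify how the spectrum is split.

    import numpy as np

    def blur_score(gray: np.ndarray, cutoff_ratio: float = 0.25) -> float:
        """Blur degree per formula (1): mean high-frequency magnitude minus
        mean low-frequency magnitude of the centered 2-D FFT spectrum.
        cutoff_ratio (the radius separating low from high) is an assumption."""
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray.astype(np.float64))))
        h, w = gray.shape
        yy, xx = np.ogrid[:h, :w]
        radius = np.hypot(yy - h // 2, xx - w // 2)
        low = radius <= cutoff_ratio * min(h, w) / 2  # low-frequency disk
        return float(spectrum[~low].mean() - spectrum[low].mean())

Frames can then be ranked by this score, a lower score indicating a sharper image under the convention of formula (1).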
In addition, gradient statistics, a deep learning algorithm, etc. may be used to calculate the image blur (or sharpness).
In one embodiment, a blur-degree screening condition may also be set for the first original image. For example, a first blur threshold is set, and the blur degree of the first original image is required to be below it. Illustratively, after the reference original image is acquired, if its blur degree is below the first blur threshold, the reference original image is taken as the first original image; otherwise, adjacent original images are selected from near to far among the neighboring frames of the reference original image and checked against the first blur threshold, and the first one below it is taken as the first original image. For example, in fig. 4, the images may be selected and checked frame by frame in the order F2, F1, F3, F0, F4, and the first image whose blur degree is below the first blur threshold is taken as the first original image.
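A sketch of this screening logic follows, using the blur_score function above. The near-to-far order matches the F2, F1, F3, F0, F4 example, and the fallback to the frame with the lowest blur degree follows the earlier description; the function name and signature are illustrative assumptions.

    def select_first_original(frames, ref_idx, blur_threshold):
        """Screen frames near-to-far from the reference frame; return the
        first one whose blur degree is below blur_threshold, otherwise
        the frame with the lowest blur degree overall."""
        order = [ref_idx]
        for step in range(1, len(frames)):
            if ref_idx - step >= 0:
                order.append(ref_idx - step)
            if ref_idx + step < len(frames):
                order.append(ref_idx + step)
        scores = {i: blur_score(frames[i]) for i in order}
        for i in order:
            if scores[i] < blur_threshold:
                return frames[i]
        return frames[min(scores, key=scores.get)]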
With continued reference to fig. 3, in step S320, a single frame deblurring process is performed on the first original image, resulting in a first deblurred image.
The single frame deblurring processing refers to deblurring processing without using information of frames other than the first original image. The present disclosure is not limited to a specific manner of single-frame deblurring, and for example, deblurring may be performed on the first original image using a blur kernel.
In one embodiment, step S320 may include the steps of:
processing the first original image with the single-frame deblurring network to obtain the first deblurred image.
The single-frame deblurring network is a pre-trained neural network for single-frame image deblurring, and an end-to-end network structure can be adopted. Fig. 6 shows an exemplary architecture of a single frame deblurring network that employs a U-Net like architecture. After the first original image is input into the network, the processing procedure is as follows:
First, a pixel rearrangement operation is performed by the pixel rearrangement layer at the input end. The first original image is generally large, and it can be rearranged into more channels through a space_to_depth function so that the image size of each channel is reduced. For example, when the block_size of space_to_depth is set to 2, the pixels of each 2×2 grid in each channel of the first original image are split into 4 different channels; this is equivalent to splitting a one-channel image into 4 new channel images whose width and height are half those of the original, yielding a feature image of size H/2 × W/2 × 12. This facilitates subsequent operations such as convolution on the smaller feature image.
Second, a downsampling part consisting of 2D convolution layers, residual blocks and downsampling layers downsamples the feature image and applies multi-scale convolution during downsampling, so as to extract image features at multiple scales. The downsampling layers may be implemented with pooling operations.
Third, an upsampling part consisting of residual blocks, upsampling layers and 2D convolution layers upsamples the downsampled feature image and applies multi-scale convolution during upsampling, restoring image detail at multiple scales. The upsampling layers may be implemented with transposed convolution, interpolation (e.g., bilinear interpolation), and the like. The upsampling part may be structurally symmetric with the downsampling part, its operations being the inverse of the downsampling part's.
Finally, the pixel rearrangement layer at the output end performs a pixel rearrangement operation, which may be the inverse of the one at the input end and restores the size of the first original image; for example, the H/2 × W/2 × 12 feature image may be rearranged into a three-channel image of H × W × 3, so that the output first deblurred image matches the size of the first original image.
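The following PyTorch sketch illustrates this structure under stated assumptions: a single downsampling and upsampling stage stands in for the full multi-scale stack of fig. 6, and the layer widths are arbitrary; nn.PixelUnshuffle and nn.PixelShuffle play the roles of space_to_depth and its inverse.

    import torch
    import torch.nn as nn

    class ResBlock(nn.Module):
        def __init__(self, ch):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(ch, ch, 3, padding=1))

        def forward(self, x):
            return x + self.body(x)

    class SingleFrameDeblurNet(nn.Module):
        """U-Net-like sketch: pixel rearrangement (block_size=2) at the
        input, one down/up stage with residual blocks, and the inverse
        rearrangement at the output so the result matches the input size."""
        def __init__(self, base=32):
            super().__init__()
            self.unshuffle = nn.PixelUnshuffle(2)  # H x W x 3 -> H/2 x W/2 x 12
            self.head = nn.Conv2d(12, base, 3, padding=1)
            self.down = nn.Sequential(
                ResBlock(base), nn.Conv2d(base, base * 2, 3, stride=2, padding=1))
            self.up = nn.Sequential(
                ResBlock(base * 2),
                nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1))
            self.tail = nn.Conv2d(base, 12, 3, padding=1)
            self.shuffle = nn.PixelShuffle(2)      # back to H x W x 3

        def forward(self, x):
            f = self.head(self.unshuffle(x))
            u = self.up(self.down(f)) + f          # skip connection
            return self.shuffle(self.tail(u))

For an input of shape (1, 3, 256, 256), SingleFrameDeblurNet()(torch.rand(1, 3, 256, 256)) returns a tensor of the same shape; height and width must be divisible by 4 in this sketch.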
Fig. 7 shows a schematic diagram of training the single-frame deblurring network. A sample input image and a standard image (ground truth) are acquired, the standard image being the sharp image corresponding to the sample input image; for example, a large number of sharp images may be collected as standard images and blurred to obtain the corresponding sample input images. The sample input image is fed into the single-frame deblurring network to be trained, which outputs a corresponding sample output image. A loss function is computed from the difference between the sample output image and the standard image, and may take the form of L1, L2, etc.; the parameters of the network are then updated using the loss function, for example by computing the gradient of each parameter through back-propagation and performing a gradient-descent update. The single-frame deblurring network is thus trained iteratively until its accuracy meets a predetermined requirement.
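A minimal training-step sketch consistent with this description, assuming the L1 form of the loss function; the choice of optimizer (e.g., torch.optim.Adam over net.parameters()) and learning rate are assumptions.

    import torch

    def train_step(net, optimizer, blurred, sharp):
        """One iteration: forward the sample input image, compute an L1
        loss against the standard (ground-truth) image, back-propagate,
        and update the network parameters by gradient descent."""
        optimizer.zero_grad()
        output = net(blurred)
        loss = torch.nn.functional.l1_loss(output, sharp)
        loss.backward()
        optimizer.step()
        return loss.item()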
It should be appreciated that the single-frame deblurring network may adopt a network structure different from that shown in fig. 6 or 7; for example, a GAN (Generative Adversarial Network) structure may be adopted.
Single-frame deblurring realizes spatial-domain deblurring of a single image; compared with the first original image, the sharpness of the resulting first deblurred image is markedly improved.
With continued reference to fig. 3, in step S330, one or more frames of second original images acquired within a neighborhood time of the photographing time of the first original image are acquired.
The neighborhood time refers to a period of time adjacent to the shooting time of the first original image. The neighborhood time may lie within the regular shooting time; for example, in fig. 4, assuming F4 is selected as the first original image and its shooting time is t4, then t0~t3 may be the neighborhood time and F0~F3 the second original images. The neighborhood time may also lie outside the regular shooting time, for example in an additional shooting time after it: as shown in fig. 8, assuming the regular shooting time ends after image F4 is captured and the terminal continues to capture images in a later period (the additional shooting time), the neighborhood time is t5~t8 and the captured images F5~F8 are the second original images.
The present disclosure does not limit the number of second original images acquired, and in general, the greater the number of second original images, the more advantageous the subsequent deblurring process. Thus, in one embodiment, the number of second original images required may be determined based on the blur degree of the first deblurred image, the higher the blur degree of the first deblurred image, the greater the number of second original images required.
In addition to the amount, the quality of the second original image will also affect the effect of the subsequent deblurring process. In one embodiment, referring to fig. 9, step S330 may include:
Step S910, when the blur degree of the first deblurred image is determined to meet the preset condition, acquiring one or more frames of second original images acquired within the neighborhood time with a first exposure duration;
Step S920, when the blur degree of the first deblurred image is determined not to meet the preset condition, acquiring one or more frames of second original images acquired within the neighborhood time with a second exposure duration.
Here the second exposure duration is longer than the first exposure duration. The exposure duration affects the quality of the second original image: when the exposure time is short, the degree of object motion captured in the second original image is low, so it is difficult to use the second original image to compensate for the blur of the first deblurred image, and a short exposure also tends to introduce more noise into the second original image, affecting the deblurring effect.
The preset condition may be that the blur degree is below a second blur threshold, which may be determined from experience or actual demand. When the blur degree of the first deblurred image is low, the quality requirement on the second original image is also low, and second original images shot with the shorter first exposure duration suffice; when the blur degree of the first deblurred image is high, the quality requirement on the second original image is higher, and second original images shot with the longer second exposure duration are needed.
Thus, by using second original images acquired under different exposure durations for first deblurred images of different blur degrees, the scheme balances implementation cost against deblurring effect.
In one embodiment, acquiring the one or more frames of second original images acquired within the neighborhood time with the first exposure duration may include the following steps:
determining the neighborhood time within the regular shooting time, wherein the regular shooting time includes the shooting time of the first original image;
acquiring one or more frames of second original images acquired within the neighborhood time with the first exposure duration.
The regular shooting time may be the time during which the terminal captures the reference original image and several preview frames before and after the user triggers the shooting function (e.g., clicks the shooting key), for example the period t0~t4 in fig. 4. The part of the regular shooting time other than the shooting time of the first original image may be taken as the neighborhood time; for example, removing the shooting time t4 of the first original image F4 from the regular shooting time t0~t4 yields the neighborhood time t0~t3.
Within the neighborhood time, the terminal may collect multiple frames of original images with a fixed exposure duration, covering the first original image and the second original images; that is, the first original image may also be collected under the first exposure duration.
Based on this scheme for acquiring the second original images with the first exposure duration, the second original images can be taken from the images shot during the regular shooting time, no additional images need to be shot, and the implementation cost of the scheme is reduced.
In one embodiment, acquiring the one or more frames of second original images acquired within the neighborhood time with the second exposure duration may include the following steps:
determining the neighborhood time within an additional shooting time, wherein the additional shooting time is later than the shooting time of the first original image;
acquiring one or more frames of second original images acquired within the neighborhood time with the second exposure duration.
The additional shooting time may be a period during which several more frames are captured after the regular shooting time described above; it is later than the shooting time of the first original image. One or more frames of second original images are shot within the additional shooting time with the longer second exposure duration.
In one embodiment, the correspondence between the blur degree of the first deblurred image and the required exposure duration may be determined in advance from experience and experimental tuning; in actual operation, the second exposure duration is determined based on this correspondence and the blur degree of the first deblurred image. After the user triggers the shooting function and the terminal completes the regular shooting, an additional shooting time is appended, in which second original images are shot with the second exposure duration for subsequent deblurring. During the additional shooting, a corresponding prompt may be presented, such as the message "shooting, do not move the lens" displayed in the shooting interface, to remind the user to keep the lens aimed at the target during the additional shooting time.
With this scheme for acquiring the second original images with the second exposure duration, long-exposure second original images can be additionally shot when the blur degree of the first deblurred image is high, ensuring the effect of subsequent deblurring.
With continued reference to fig. 3, in step S340, the first deblurred image is deblurred using the second original images to obtain a second deblurred image.
Compared with the single-frame deblurring of step S320, step S340 adopts multi-frame deblurring, realizing deblurring across multiple frames in the time domain.
In one embodiment, step S340 may include the steps of:
registering and fusing the first deblurred image with the second original images to obtain the second deblurred image.
When the number of the second original images is greater than one frame, the following two modes can be adopted for registration and fusion:
Mode one: the number of the first deblurred images is one frame, and if the number of the second original images is m frames, the m+1 frame images can be registered and fused pairwise. Taking fig. 8 as an example, assume that the first original image is F 4, and a single frame deblurring process is performed to obtain a first deblurred image F 4', where the second original image includes F 5、F6、F7、F8. In the multi-frame deblurring process, referring to fig. 10, F 4 ' and F 5 may be registered and fused to obtain an image F 5 ', and then F 5 ' and F 6 are registered and fused to obtain an image F 6 ', … …, until the registration and fusion of the image F 8 of the last frame are completed to obtain a second deblurred image F 8 '.
In the merging process, as shown in fig. 10, the images may be superimposed in a weighted average manner, and the weight may be determined according to the number of images actually merged. Assume that the m+1 frame images are fused in the order of 1 st frame and 2 nd frame, and refer to the following formula:
Wherein i is any positive integer of [2, m+1 ]. When the i-th frame and the fused image (namely F i-1') of the previous i-1 frame are fused, the weights are respectively 1 and i-1. As a result, in fig. 10, the weight M 4 is 1, and the weight M 5 is 2.
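A NumPy sketch of this frame-by-frame fusion per formula (2); it assumes each incoming frame has already been registered to the running result (registration is described below).

    import numpy as np

    def sequential_fuse(first_deblurred, registered_frames):
        """Weighted-average fusion per formula (2): when the i-th frame is
        fused with the running result F'_(i-1), their weights are 1 and
        i-1, i.e. F'_i = ((i-1) * F'_(i-1) + F_i) / i."""
        fused = first_deblurred.astype(np.float64)
        for i, frame in enumerate(registered_frames, start=2):
            fused = ((i - 1) * fused + frame.astype(np.float64)) / i
        return fused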
Mode two: the first deblurred image is one frame; assuming the second original images number m frames, all m+1 frames can be registered to one of them serving as the reference image, and the registered m+1 frames then fused. The reference image may be any one of the m+1 frames, for example the first deblurred image; the m second original images are then registered to the first deblurred image, and all m+1 frames fused, for example by averaging.
It can be seen that the two modes share the same principle and differ only in the order of registration and fusion.
Since the shooting time of the second original image differs from that of the first deblurred image (i.e., the shooting time of the first original image), and the shooting times of the second original images also differ from frame to frame, the subject may move slightly or the photographer's hand may shake, causing viewing-angle differences between the second original images and the first deblurred image and among the second original images themselves. To improve the accuracy of image fusion, registration is therefore performed before fusion. The present disclosure does not limit the specific manner of registration, which is described below by way of one example:
referring to fig. 11, the above-mentioned registering of the first deblurred image with the second original image may include the following steps S1110 to S1140:
Step S1110, determining the current fused image, obtained by fusing the first deblurred image with the already-fused second original images, as one image to be registered, and determining the other image to be registered from among the unfused images in the first deblurred image and the second original images; or determining the reference image among the first deblurred image and the second original images as one image to be registered, and determining the other image to be registered from among the non-reference images.
When image fusion is performed in mode one, referring to formula (2), the current fused image is the image fused so far, namely F_(i-1)', and the other image to be registered, typically the i-th frame F_i, is determined from the unfused images among the first deblurred image and the second original images. In step S1110, F_(i-1)' and F_i may thus be taken as the two images to be registered.
When image fusion is performed in mode two, one frame among the first deblurred image and the second original images is determined as the reference image serving as the registration standard, and the rest are non-reference images. The reference image and one non-reference frame are taken as the two images to be registered each time; for example, with the first deblurred image as the reference image, the first deblurred image and the 1st-frame second original image are first taken as the two images to be registered, then the first deblurred image and the 2nd-frame second original image, ..., and finally the first deblurred image and the m-th-frame second original image.
For ease of illustration, the two images to be registered are denoted P, Q below.
Step S1120, pyramid operation is performed on the two images to be registered, respectively, to obtain sampling images of each image to be registered under multiple resolutions.
The pyramid operation refers to a series of downsampling operations on an image, yielding a set of sampled images of progressively lower resolution. The sampled images at the multiple resolutions may include the original image at its original resolution as a special sampled image.
In general, a stop condition such as a specific resolution or downsampling factor may be set for the pyramid operation; when the condition is reached, further downsampling stops. The present disclosure does not specifically limit the stop condition or the per-layer downsampling factor of the pyramid. As illustrated in fig. 12, for the images to be registered P and Q, downsampling is performed layer by layer with factors 1/2, 1/4 and 1/8, obtaining the sampled images P (i.e., the original image), P(1/2) (denoting the sampled image obtained by 1/2 downsampling), P(1/4) and P(1/8), and the sampled images Q, Q(1/2), Q(1/4) and Q(1/8), respectively.
Step S1130, performing feature point matching on the two sampled images at each resolution to obtain matching feature point pairs of the two images to be registered.
The present disclosure does not limit the type of feature points or the feature-point detection algorithm; for example, Harris corners, SIFT (Scale-Invariant Feature Transform) feature points and their detection algorithms may be used. Feature-point matching is performed on the two sampled images at each resolution. Specifically, after a feature point is detected in the image to be registered P, the corresponding feature point is detected in each of its sampled images, yielding a feature-point set of P, for example (p1, p2, p3, p4) shown in fig. 12; if (p1, p2, p3, p4) is successfully matched with a feature-point set (q1, q2, q3, q4) of the image to be registered Q, then p1 and q1 are determined as a matching feature point pair of P and Q. Feature-point matching at different scales is thus realized; since images at different scales carry different semantics, this ensures the semantic stability of the feature points across scales and improves the accuracy of the matching feature point pairs.
Step S1140, registering the two images to be registered according to the matching feature point pairs.
After the matching point pairs are obtained, a transformation matrix between two images to be registered can be calculated through an optical flow algorithm and the like, and then any one of the images to be registered is transformed, so that the registration of the two images to be registered is realized.
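The following OpenCV sketch follows the flow of fig. 11 and 12 under stated assumptions: cv2.goodFeaturesToTrack with the Harris option stands in for corner detection, pyramidal Lucas-Kanade optical flow performs the multi-scale matching and flow computation, and a RANSAC homography serves as the transformation matrix. The exemplary embodiment names Harris corners, SIFT and optical flow only as options, so these concrete calls are illustrative choices.

    import cv2

    def register(moving, reference, levels=3):
        """Warp `moving` onto `reference`: detect corners on the reference,
        track them into the moving image across `levels` pyramid levels,
        then estimate and apply a homography."""
        ref_gray = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)
        mov_gray = cv2.cvtColor(moving, cv2.COLOR_BGR2GRAY)
        pts = cv2.goodFeaturesToTrack(ref_gray, maxCorners=500,
                                      qualityLevel=0.01, minDistance=7,
                                      useHarrisDetector=True)
        # Pyramidal LK tracks each corner across resolutions, mirroring
        # the multi-scale feature-point matching of fig. 12.
        matched, status, _ = cv2.calcOpticalFlowPyrLK(
            ref_gray, mov_gray, pts, None, maxLevel=levels)
        good = status.ravel() == 1
        H, _ = cv2.findHomography(matched[good], pts[good], cv2.RANSAC, 3.0)
        h, w = reference.shape[:2]
        return cv2.warpPerspective(moving, H, (w, h))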
Through the above registration and fusion, the second deblurred image is finally obtained. It may be output as the final image processing result, or be further processed; for example, sharpening may be applied to improve the overall sharpness of the image.
In one embodiment, the image deblurring method may further include the steps of:
performing single-frame deblurring on the second deblurred image to obtain a third deblurred image.
The implementation of this single-frame deblurring may refer to step S320; for example, the second deblurred image may be input into the single-frame deblurring network again, which outputs the third deblurred image. This second pass of single-frame deblurring removes side effects such as local blur possibly introduced by image fusion and further improves image sharpness. The third deblurred image may be output as the final image processing result.
Fig. 13 shows another exemplary flow of the image deblurring method in the present exemplary embodiment, including:
Step S1310, the user clicks the shooting key in the shooting interface of the terminal, triggering the terminal to acquire a plurality of original images with the first exposure duration through the camera during the regular shooting time;
Step S1320, calculating the blur degree of these original images and selecting the frame with the lowest blur degree (i.e., the highest sharpness) as the first original image;
Step S1330, inputting the first original image into a pre-trained single-frame deblurring network to perform single-frame deblurring processing and outputting a first deblurred image;
Step S1340, judging whether the blur degree of the first deblurred image meets a preset condition, which may be that the blur degree is below the second blur threshold; if yes, go to step S1350; if not, execute step S1360;
Step S1350, obtaining one or more frames of second original images from the original images acquired during the regular shooting time;
Step S1360, appending an additional shooting time after the regular shooting time, and collecting one or more frames of second original images within it with the second exposure duration, which is longer than the first;
Step S1370, registering the first deblurred image with the second original image, which can refer to the method of fig. 11, obtaining a matching feature point pair through image pyramid and Harris corner detection and matching, and performing optical flow registration according to the matching feature point pair;
Step S1380, fusing the registered first deblurred image with the second original images to obtain a second deblurred image; in steps S1370 and S1380, a frame-by-frame registration and fusion approach may also be adopted (refer to mode one described above);
Step S1390, performing image sharpening on the second deblurred image, inputting it into the single-frame deblurring network again for another round of single-frame deblurring, and outputting a third deblurred image; the third deblurred image can be displayed in the shooting interface of the terminal as the final image processing result and stored in the terminal's memory.
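Putting steps S1310 to S1390 together, a high-level sketch of the pipeline using the functions sketched above; capture_extra_frames, run_network and sharpen are hypothetical helpers standing in for the camera control path, network inference and the sharpening step.

    def deblur_pipeline(frames, ref_idx, first_thresh, second_thresh, net):
        """End-to-end sketch of fig. 13; channel handling is elided."""
        first = select_first_original(frames, ref_idx, first_thresh)
        deblurred = run_network(net, first)                  # single-frame pass
        if blur_score(deblurred) < second_thresh:
            extras = [f for f in frames if f is not first]   # regular shooting time
        else:
            extras = capture_extra_frames()                  # additional shooting time
        registered = [register(f, deblurred) for f in extras]
        fused = sequential_fuse(deblurred, registered)
        return run_network(net, sharpen(fused))              # second single-frame pass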
Exemplary embodiments of the present disclosure also provide an image deblurring apparatus. Referring to fig. 14, the image deblurring apparatus 1400 may include:
a first acquisition module 1410 configured to acquire a frame of a first original image;
a first deblurring module 1420 configured to perform a single frame deblurring process on the first original image to obtain a first deblurred image;
A second acquisition module 1430 configured to acquire one or more frames of second original images acquired within a neighborhood time of a photographing time of the first original image;
A second deblurring module 1440 configured to deblur the first deblurred image with the second original image to obtain a second deblurred image.
In one embodiment, the first acquisition module 1410 is configured to:
acquire a frame of reference original image and one or more frames of original images adjacent to the reference original image collected at the shooting trigger time, and take the frame with the lowest blur degree among the reference original image and the adjacent original images as the first original image.
In one embodiment, the first acquisition module 1410 is configured to determine the blur degree of each frame after obtaining the reference original image and the adjacent original images by:
converting the reference original image or the adjacent original image into a frequency domain image;
taking the difference between the high-frequency component statistic and the low-frequency component statistic in the frequency domain image as the blur degree of the reference original image or the adjacent original image.
In one embodiment, the first deblurring module 1420 is configured to:
process the first original image with the single-frame deblurring network to obtain the first deblurred image.
In one embodiment, the second acquisition module 1430 is configured to:
when the blur degree of the first deblurred image is determined to meet a preset condition, acquiring one or more frames of second original images captured within the neighborhood time with a first exposure duration;
when the blur degree of the first deblurred image is determined not to meet the preset condition, acquiring one or more frames of second original images captured within the neighborhood time with a second exposure duration;
wherein the second exposure duration is longer than the first exposure duration.
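The branch above reduces to a small selection rule. In the Python sketch below, the threshold, the direction of the preset condition (blur degree at or below a threshold), and the two concrete exposure values are illustrative assumptions; the embodiment only requires that the second exposure duration exceed the first.

def choose_exposure(blur_of_first_deblurred, blur_threshold,
                    first_exposure_s=1 / 100, second_exposure_s=1 / 25):
    # Preset condition met (image already acceptably sharp): keep the short exposure.
    if blur_of_first_deblurred <= blur_threshold:
        return first_exposure_s
    # Otherwise fall back to the longer second exposure duration.
    return second_exposure_s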
In one embodiment, the second acquisition module 1430 is configured to:
determining the neighborhood time within a regular shooting period, wherein the regular shooting period includes the shooting time of the first original image;
acquiring one or more frames of second original images captured within the neighborhood time with the first exposure duration.
In one embodiment, the second acquisition module 1430 is configured to:
determining the neighborhood time within an additional shooting period, wherein the additional shooting period is later than the shooting time of the first original image;
acquiring one or more frames of second original images captured within the neighborhood time with the second exposure duration.
In one embodiment, the second deblurring module 1440 is configured to:
registering and fusing the first deblurred image with the second original images to obtain the second deblurred image.
In one embodiment, the second deblurring module 1440 is configured to perform image registration by:
determining the current fused image, obtained by fusing the first deblurred image with the already-fused frames of the second original images, as one image to be registered, and selecting the other image to be registered from the not-yet-fused frames of the first deblurred image and the second original images; or determining the reference image among the first deblurred image and the second original images as one image to be registered, and selecting the other image to be registered from the non-reference images;
performing a pyramid operation on each of the two images to be registered to obtain sampled images of each image at multiple resolutions;
matching feature points between the two sampled images at each resolution to obtain matched feature point pairs of the two images to be registered;
registering the two images to be registered according to the matched feature point pairs (a sketch of this registration path follows).
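In the compact OpenCV sketch below, the pyramid depth, the Harris parameters, the use of a homography as the global motion model, and the function names are assumptions layered on the steps above (pyramid sampling, Harris corner detection and matching, optical-flow-style tracking).

import cv2
import numpy as np

def register_pair(moving, fixed, levels=3):
    # Warp `moving` onto `fixed` using pyramid Harris corners + LK tracking.
    mov_gray = cv2.cvtColor(moving, cv2.COLOR_BGR2GRAY)
    fix_gray = cv2.cvtColor(fixed, cv2.COLOR_BGR2GRAY)
    src_pts, dst_pts = [], []
    for lvl in range(levels):
        scale = 2 ** lvl
        mov = cv2.resize(mov_gray, None, fx=1 / scale, fy=1 / scale)
        fix = cv2.resize(fix_gray, None, fx=1 / scale, fy=1 / scale)
        # Harris corners on the moving image at this pyramid level.
        corners = cv2.goodFeaturesToTrack(mov, maxCorners=300, qualityLevel=0.01,
                                          minDistance=7, useHarrisDetector=True, k=0.04)
        if corners is None:
            continue
        # Track each corner into the fixed image (sparse Lucas-Kanade flow).
        tracked, status, _ = cv2.calcOpticalFlowPyrLK(mov, fix, corners, None)
        good = status.ravel() == 1
        src_pts.append(corners[good].reshape(-1, 2) * scale)  # back to full-res coords
        dst_pts.append(tracked[good].reshape(-1, 2) * scale)
    src = np.vstack(src_pts).astype(np.float32)
    dst = np.vstack(dst_pts).astype(np.float32)
    # Robustly fit a global motion model from the matched feature point pairs.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)
    return cv2.warpPerspective(moving, H, (fixed.shape[1], fixed.shape[0]))

A registered frame can then be fused with the running result, for example by a simple average such as cv2.addWeighted(fused, 0.5, register_pair(frame, fused), 0.5, 0); the actual fusion weights are not fixed by this embodiment.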
In one embodiment, the first deblurring module 1420 is further configured to:
performing single-frame deblurring on the second deblurred image to obtain a third deblurred image.
Details of each part of the above apparatus have been described in the method embodiments and are therefore not repeated here.
Exemplary embodiments of the present disclosure also provide a computer-readable storage medium, which may be implemented as a program product comprising program code. When the program product runs on an electronic device, the program code causes the electronic device to perform the steps described in the "exemplary method" section above according to the various exemplary embodiments of the disclosure. In one embodiment, the program product may be implemented as a portable compact disc read-only memory (CD-ROM) including program code, and may be run on an electronic device such as a personal computer. However, the program product of the present disclosure is not limited thereto; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium, other than a readable storage medium, that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, and conventional procedural programming languages such as C. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. Where a remote computing device is involved, it may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it may be connected to an external computing device (for example, via the Internet using an Internet service provider).
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with exemplary embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Those skilled in the art will appreciate that the various aspects of the present disclosure may be implemented as a system, method, or program product. Accordingly, aspects of the disclosure may be embodied in the following forms: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein as a "circuit," "module," or "system." Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure following its general principles and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (11)

1. An image deblurring method, comprising:
acquiring a frame of a first original image;
performing single-frame deblurring processing on the first original image to obtain a first deblurred image;
acquiring one or more frames of second original images captured within a neighborhood time of a shooting time of the first original image; the method further comprising: determining the number of second original images required according to the blur degree of the first deblurred image;
deblurring the first deblurred image using the second original images to obtain a second deblurred image;
wherein the acquiring one or more frames of second original images captured within the neighborhood time of the shooting time of the first original image comprises:
when the blur degree of the first deblurred image is determined to meet a preset condition, acquiring one or more frames of the second original images captured within the neighborhood time with a first exposure duration;
when the blur degree of the first deblurred image is determined not to meet the preset condition, acquiring one or more frames of the second original images captured within the neighborhood time with a second exposure duration;
wherein the second exposure duration is longer than the first exposure duration;
and wherein the deblurring the first deblurred image using the second original images to obtain the second deblurred image comprises:
registering and fusing the first deblurred image with the second original images to obtain the second deblurred image.
2. The method of claim 1, wherein the acquiring a frame of the first original image comprises:
acquiring one frame of a reference original image and one or more frames of adjacent original images captured at a shooting trigger time, and taking the frame with the lowest blur degree among the reference original image and the adjacent original images as the first original image.
3. The method of claim 2, wherein, after the reference original image and the adjacent original images are acquired, the blur degree of each frame image is determined by:
converting the reference original image or an adjacent original image into a frequency-domain image;
taking the difference between the high-frequency component statistic and the low-frequency component statistic of the frequency-domain image as the blur degree of the reference original image or the adjacent original image.
4. The method of claim 1, wherein the performing single-frame deblurring processing on the first original image to obtain a first deblurred image comprises:
processing the first original image with a single-frame deblurring network to obtain the first deblurred image.
5. The method of claim 1, wherein the acquiring one or more frames of the second original images captured within the neighborhood time with the first exposure duration comprises:
determining the neighborhood time within a regular shooting period, wherein the regular shooting period includes the shooting time of the first original image;
acquiring one or more frames of the second original images captured within the neighborhood time with the first exposure duration.
6. The method of claim 1, wherein the acquiring one or more frames of the second original images captured within the neighborhood time with the second exposure duration comprises:
determining the neighborhood time within an additional shooting period, wherein the additional shooting period is later than the shooting time of the first original image;
acquiring one or more frames of the second original images captured within the neighborhood time with the second exposure duration.
7. The method of claim 1, wherein the registering and fusing the first deblurred image with the second original images comprises:
determining the current fused image, obtained by fusing the first deblurred image with the already-fused frames of the second original images, as one image to be registered, and selecting the other image to be registered from the not-yet-fused frames of the first deblurred image and the second original images; or determining the reference image among the first deblurred image and the second original images as one image to be registered, and selecting the other image to be registered from the non-reference images;
performing a pyramid operation on each of the two images to be registered to obtain sampled images of each image at multiple resolutions;
matching feature points between the two sampled images at each resolution to obtain matched feature point pairs of the two images to be registered;
and registering the two images to be registered according to the matched feature point pairs.
8. The method of claim 1, further comprising:
performing single-frame deblurring on the second deblurred image to obtain a third deblurred image.
9. An image deblurring apparatus, comprising:
a first acquisition module configured to acquire a frame of a first original image;
a first deblurring module configured to perform single-frame deblurring processing on the first original image to obtain a first deblurred image; the apparatus being further configured to determine the number of second original images required according to the blur degree of the first deblurred image;
a second acquisition module configured to acquire one or more frames of second original images captured within a neighborhood time of a shooting time of the first original image;
a second deblurring module configured to deblur the first deblurred image using the second original images to obtain a second deblurred image;
wherein the second acquisition module is configured to:
when the blur degree of the first deblurred image is determined to meet a preset condition, acquire one or more frames of the second original images captured within the neighborhood time with a first exposure duration;
when the blur degree of the first deblurred image is determined not to meet the preset condition, acquire one or more frames of the second original images captured within the neighborhood time with a second exposure duration;
wherein the second exposure duration is longer than the first exposure duration;
and wherein the second deblurring module is configured to:
register and fuse the first deblurred image with the second original images to obtain the second deblurred image.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method of any one of claims 1 to 8.
11. An electronic device, comprising:
A processor; and
A memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any one of claims 1 to 8 via execution of the executable instructions.
CN202110672843.6A 2021-06-17 2021-06-17 Image deblurring method, device, electronic equipment and storage medium Active CN113409209B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110672843.6A CN113409209B (en) 2021-06-17 2021-06-17 Image deblurring method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110672843.6A CN113409209B (en) 2021-06-17 2021-06-17 Image deblurring method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113409209A CN113409209A (en) 2021-09-17
CN113409209B true CN113409209B (en) 2024-06-21

Family

ID=77684815

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110672843.6A Active CN113409209B (en) 2021-06-17 2021-06-17 Image deblurring method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113409209B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114708166A (en) * 2022-04-08 2022-07-05 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and terminal

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110189285A (en) * 2019-05-28 2019-08-30 北京迈格威科技有限公司 A kind of frames fusion method and device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101341871B1 (en) * 2012-09-12 2014-01-07 포항공과대학교 산학협력단 Method for deblurring video and apparatus thereof
US9224194B2 (en) * 2014-01-21 2015-12-29 Adobe Systems Incorporated Joint video deblurring and stabilization
CN107240092B (en) * 2017-05-05 2020-02-14 浙江大华技术股份有限公司 Image ambiguity detection method and device
CN111275626B (en) * 2018-12-05 2023-06-23 深圳市炜博科技有限公司 Video deblurring method, device and equipment based on ambiguity
CN113992847A (en) * 2019-04-22 2022-01-28 深圳市商汤科技有限公司 Video image processing method and device
CN111932480A (en) * 2020-08-25 2020-11-13 Oppo(重庆)智能科技有限公司 Deblurred video recovery method and device, terminal equipment and storage medium

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110189285A (en) * 2019-05-28 2019-08-30 北京迈格威科技有限公司 A kind of frames fusion method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Multi-Frame Image Restoration Algorithms; Li Jing; China Excellent Master's Theses Full-text Database, Information Science and Technology Series; 2017-03-15; I138-4630 *

Also Published As

Publication number Publication date
CN113409209A (en) 2021-09-17

Similar Documents

Publication Publication Date Title
CN111598776B (en) Image processing method, image processing device, storage medium and electronic apparatus
CN111641835B (en) Video processing method, video processing device and electronic equipment
CN111580765A (en) Screen projection method, screen projection device, storage medium, screen projection equipment and screen projection equipment
CN111445392B (en) Image processing method and device, computer readable storage medium and electronic equipment
CN112767295A (en) Image processing method, image processing apparatus, storage medium, and electronic device
CN111161176B (en) Image processing method and device, storage medium and electronic equipment
CN111696039B (en) Image processing method and device, storage medium and electronic equipment
CN111741303B (en) Deep video processing method and device, storage medium and electronic equipment
CN113409203A (en) Image blurring degree determining method, data set constructing method and deblurring method
CN104918027A (en) Method, electronic device, and server for generating digitally processed pictures
CN111768351A (en) Image denoising method, image denoising device, storage medium and electronic device
CN111835973A (en) Shooting method, shooting device, storage medium and mobile terminal
CN111784734A (en) Image processing method and device, storage medium and electronic equipment
CN112927271A (en) Image processing method, image processing apparatus, storage medium, and electronic device
CN111652933B (en) Repositioning method and device based on monocular camera, storage medium and electronic equipment
CN113810596A (en) Time-delay shooting method and device
CN113409209B (en) Image deblurring method, device, electronic equipment and storage medium
CN113658073B (en) Image denoising processing method and device, storage medium and electronic equipment
CN113658128A (en) Image blurring degree determining method, data set constructing method and deblurring method
CN113781336B (en) Image processing method, device, electronic equipment and storage medium
CN113038010A (en) Video processing method, video processing device, storage medium and electronic equipment
CN114390219B (en) Shooting method, shooting device, electronic equipment and storage medium
CN111034187A (en) Dynamic image generation method and device, movable platform and storage medium
CN113379624A (en) Image generation method, training method, device and equipment of image generation model
CN113364964A (en) Image processing method, image processing apparatus, storage medium, and terminal device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant