CN113409209A - Image deblurring method and device, electronic equipment and storage medium - Google Patents

Image deblurring method and device, electronic equipment and storage medium

Info

Publication number
CN113409209A
Authority
CN
China
Prior art keywords
image
original image
deblurred
original
deblurring
Prior art date
Legal status
Granted
Application number
CN202110672843.6A
Other languages
Chinese (zh)
Other versions
CN113409209B (en)
Inventor
邹子杰
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202110672843.6A
Publication of CN113409209A
Application granted
Publication of CN113409209B
Current legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/73 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20048 Transform domain processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The disclosure provides an image deblurring method, an image deblurring device, a storage medium and an electronic device, and relates to the technical field of image and video processing. The image deblurring method includes the following steps: acquiring a frame of first original image; performing single-frame deblurring processing on the first original image to obtain a first deblurred image; acquiring one or more frames of second original images collected in the neighborhood time of the shooting time of the first original image; and performing deblurring processing on the first deblurred image by using the second original images to obtain a second deblurred image. The method combines single-frame deblurring and multi-frame deblurring, which helps improve the image deblurring effect, and is particularly suitable for deblurring face images.

Description

Image deblurring method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of image and video processing technologies, and in particular, to an image deblurring method, an image deblurring apparatus, a computer-readable storage medium, and an electronic device.
Background
In the image capturing process, it is common for the image to be blurred due to shake, defocus, or other causes. Image deblurring is a task related to image quality; because the causes of blur are complex, it is difficult for the related art to achieve a high-quality image deblurring effect.
Disclosure of Invention
The present disclosure provides an image deblurring method, an image deblurring apparatus, a computer-readable storage medium, and an electronic device, thereby improving an image deblurring effect at least to some extent.
According to a first aspect of the present disclosure, there is provided an image deblurring method, comprising: acquiring a frame of first original image; performing single-frame deblurring processing on the first original image to obtain a first deblurred image; acquiring one or more frames of second original images acquired in the neighborhood time of the shooting time of the first original image; and carrying out deblurring processing on the first deblurred image by using the second original image to obtain a second deblurred image.
According to a second aspect of the present disclosure, there is provided an image deblurring apparatus comprising: a first acquisition module configured to acquire a frame of a first original image; the first deblurring module is configured to perform single-frame deblurring processing on the first original image to obtain a first deblurred image; a second acquisition module configured to acquire one or more frames of second original images acquired in a time neighborhood of a shooting time of the first original image; and the second deblurring module is configured to perform deblurring processing on the first deblurred image by using the second original image to obtain a second deblurred image.
According to a third aspect of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored, which, when being executed by a processor, implements the image deblurring method of the first aspect described above and possible implementations thereof.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the image deblurring method of the first aspect described above and possible embodiments thereof via execution of the executable instructions.
The technical scheme of the disclosure has the following beneficial effects:
the method combines the two modes of single-frame deblurring and multi-frame deblurring and makes use of both the spatial-domain and temporal-domain information of the images, which helps improve the image deblurring effect. It is particularly suitable for deblurring face images: blur caused by rigid motion of the face and blur related to image quality can be removed, and facial texture and detail further recovered, so that the deblurred image is clearer and more realistic.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
FIG. 1 shows a schematic diagram of a system architecture in the present exemplary embodiment;
FIG. 2 shows a schematic structural diagram of an electronic device in the present exemplary embodiment;
FIG. 3 shows a flowchart of an image deblurring method in the present exemplary embodiment;
FIG. 4 shows a schematic diagram of determining a first original image in the present exemplary embodiment;
FIG. 5 shows a flowchart of determining image blur in the present exemplary embodiment;
FIG. 6 shows a schematic structural diagram of a single-frame deblurring network in the present exemplary embodiment;
FIG. 7 shows a schematic diagram of training the single-frame deblurring network in the present exemplary embodiment;
FIG. 8 shows a schematic diagram of determining a second original image in the present exemplary embodiment;
FIG. 9 shows a flowchart of acquiring a second original image in the present exemplary embodiment;
FIG. 10 shows a schematic diagram of image fusion in the present exemplary embodiment;
FIG. 11 shows a flowchart of image registration in the present exemplary embodiment;
FIG. 12 shows a schematic diagram of image pyramids and feature point matching in the present exemplary embodiment;
FIG. 13 shows a flowchart of another image deblurring method in the present exemplary embodiment;
FIG. 14 shows a schematic structural diagram of an image deblurring apparatus in the present exemplary embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and the like. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the steps. For example, some steps may be decomposed, and some steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
By type, image blur can be classified into lens hardware blur, motion blur, defocus blur, and other kinds of blur. Motion blur is the most common case when terminals such as smartphones take photos, and its causes are very complex; this is especially true of motion blur in face images. The scene is usually a natural environment with a complex background, blurred textures and colors of varying degree are highly similar to that background, and the blur is discontinuous and hard to identify, so deblurring such images is difficult.
In one scheme of the related art, the blur kernel of an image is estimated with a data model, and the image is deblurred using that blur kernel. However, when the cause of the blur is complex and difficult to analyze, the blur kernel cannot be estimated accurately, which affects the quality of the image deblurring.
In view of the above, exemplary embodiments of the present disclosure provide an image deblurring method. The system architecture of the operating environment of the image deblurring method is described first.
Fig. 1 shows a schematic diagram of a system architecture, and the system architecture 100 may include a terminal 110 and a server 120. The terminal 110 may be a desktop computer, a notebook computer, a smart phone, a tablet computer, or other terminal devices, and the server 120 may be a server providing image processing related services, or a cluster formed by multiple servers. The terminal 110 and the server 120 may form a connection through a wired or wireless communication link for data interaction. The terminal 110 may capture or acquire the first original image or the second original image from another device. In one embodiment, the terminal 110 may send the first original image and the second original image to the server 120, and the server 120 outputs a deblurred image (such as the second deblurred image or the third deblurred image) by performing the image deblurring method in the present exemplary embodiment, and returns to the terminal 110. In one embodiment, the image deblurring method of the present exemplary embodiment may be performed by the terminal 110, resulting in a deblurred image.
Application scenarios of the image deblurring method include, but are not limited to: a user opens the photographing function on the terminal 110 and clicks the shooting key (shutter key) in the shooting interface, triggering the terminal 110 to capture a first original image through its built-in camera and to capture second original images in the subsequent neighborhood time; the terminal 110 executes the image deblurring method itself, or sends the first original image and the second original images to the server 120, which executes the method; the finally obtained deblurred image is displayed in the shooting interface of the terminal 110 and stored, realizing image deblurring processing synchronized with shooting.
As can be seen from the above, the image deblurring method may be executed by the terminal 110 or the server 120. Exemplary embodiments of the present disclosure also provide an electronic device for executing the image deblurring method, which may be the terminal 110 or the server 120. The structure of the electronic device is exemplarily described below taking the mobile terminal 200 in fig. 2 as an example. It will be appreciated by those skilled in the art that, apart from components specifically intended for mobile purposes, the configuration of fig. 2 can also be applied to fixed-type devices.
As shown in fig. 2, the mobile terminal 200 may specifically include: a processor 210, an internal memory 221, an external memory interface 222, a USB (Universal Serial Bus) interface 230, a charging management module 240, a power management module 241, a battery 242, an antenna 1, an antenna 2, a mobile communication module 250, a wireless communication module 260, an audio module 270, a speaker 271, a receiver 272, a microphone 273, an earphone interface 274, a sensor module 280, a display 290, a camera module 291, an indicator 292, a motor 293, keys 294, and a SIM (Subscriber Identity Module) card interface 295.
Processor 210 may include one or more processing units, such as: the Processor 210 may include an AP (Application Processor), a modem Processor, a GPU (Graphics Processing Unit), an ISP (Image Signal Processor), a controller, an encoder, a decoder, a DSP (Digital Signal Processor), a baseband Processor, and/or an NPU (Neural-Network Processing Unit), etc.
The encoder may encode (i.e., compress) an image or video, for example encode the current image to obtain code stream data; the decoder may decode (i.e., decompress) the code stream data of an image or video to restore the image or video data. The mobile terminal 200 may support one or more encoders and decoders, and can thus process images or videos in a variety of encoding formats, such as image formats like JPEG (Joint Photographic Experts Group), PNG (Portable Network Graphics), and BMP (Bitmap), and video formats like MPEG-1 (Moving Picture Experts Group), MPEG-2, H.263, H.264, and HEVC (High Efficiency Video Coding).
In one embodiment, processor 210 may include one or more interfaces through which connections are made to other components of mobile terminal 200.
Internal memory 221 may be used to store computer-executable program code, including instructions. The internal memory 221 may include volatile memory and nonvolatile memory. The processor 210 executes various functional applications of the mobile terminal 200 and data processing by executing instructions stored in the internal memory 221.
The external memory interface 222 may be used to connect an external memory, such as a Micro SD card, for expanding the storage capability of the mobile terminal 200. The external memory communicates with the processor 210 through the external memory interface 222 to implement data storage functions, such as storing images, videos, and other files.
The USB interface 230 is an interface conforming to the USB standard specification, and may be used to connect a charger to charge the mobile terminal 200, or connect an earphone or other electronic devices.
The charge management module 240 is configured to receive a charging input from a charger. While the charging management module 240 charges the battery 242, the power management module 241 may also supply power to the device; the power management module 241 may also monitor the status of the battery.
The wireless communication function of the mobile terminal 200 may be implemented by the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, the modem processor, the baseband processor, and the like. The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. The mobile communication module 250 may provide 2G, 3G, 4G, 5G, and other mobile communication solutions applied to the mobile terminal 200. The wireless communication module 260 may provide wireless communication solutions applied to the mobile terminal 200 such as WLAN (Wireless Local Area Network, e.g., Wi-Fi (Wireless Fidelity)), BT (Bluetooth), GNSS (Global Navigation Satellite System), FM (Frequency Modulation), NFC (Near Field Communication), and IR (Infrared).
The mobile terminal 200 may implement a display function through the GPU, the display screen 290, the AP, and the like, and display a user interface. For example, when the user opens the camera, the mobile terminal 200 may display the interface of a camera App (Application) on the display screen 290.
The mobile terminal 200 may implement a photographing function through the ISP, the camera module 291, the encoder, the decoder, the GPU, the display screen 290, the AP, and the like. For example, the user can start the image or video shooting function in the camera App, and images can then be acquired through the camera module 291.
The mobile terminal 200 may implement an audio function through the audio module 270, the speaker 271, the receiver 272, the microphone 273, the earphone interface 274, the AP, and the like.
The sensor module 280 may include a depth sensor 2801, a pressure sensor 2802, a gyroscope sensor 2803, a barometric pressure sensor 2804, etc. to implement a corresponding inductive detection function.
Indicator 292 may be an indicator light that may be used to indicate a state of charge, a change in charge, or may be used to indicate a message, missed call, notification, etc. The motor 293 may generate a vibration cue, may also be used for touch vibration feedback, and the like. The keys 294 include a power-on key, a volume key, and the like.
The mobile terminal 200 may support one or more SIM card interfaces 295 for connecting SIM cards to implement functions such as call and mobile communication.
Fig. 3 shows an exemplary flow of the image deblurring method, which may include:
step S310, acquiring a frame of first original image;
step S320, performing single-frame deblurring processing on the first original image to obtain a first deblurred image;
step S330, acquiring one or more frames of second original images collected in the neighborhood time of the shooting time of the first original image;
step S340, performing deblurring processing on the first deblurred image by using the second original image to obtain a second deblurred image.
The first original image and the second original images are both images captured by the terminal that have not yet undergone deblurring processing. The first original image and the second original image are adjacent in capture time, and the two may be captured of the same object or scene. In one embodiment, the first original image and the second original image may be images whose main photographic subject is a human face. The first deblurred image is an intermediate image obtained by single-frame deblurring of the first original image, not the finally output image; the second deblurred image may be the image finally output after the entire deblurring process is completed.
In an embodiment, the second deblurred image may be further subjected to single-frame deblurring processing to obtain a third deblurred image, and the third deblurred image is finally output.
Based on this method, the two modes of single-frame deblurring and multi-frame deblurring are combined, and both the spatial-domain and temporal-domain information of the images is used, which helps improve the image deblurring effect. The method is particularly suitable for deblurring face images: blur caused by rigid motion of the face and blur related to image quality can be removed, and facial texture and detail further recovered, so that the deblurred image is clearer and more realistic.
Each step in fig. 3 will be described in detail below.
Referring to fig. 3, in step S310, a frame of a first original image is acquired.
The first original image is an original image acquired during the regular photographing time. The regular photographing time is the time when the user operates the terminal to shoot, or when the terminal shoots according to an automatic-shooting setting. It may be a single moment at which one image is captured, for example the shooting trigger moment, i.e., the moment the user clicks the shooting key, at which the terminal acquires an original image as the first original image. It may also be a period in which multiple images are captured, for example a time period containing the shooting trigger moment; the terminal acquires multiple original images in that period and selects one of them as the first original image.
In one embodiment, step S310 may include:
acquiring a frame of reference original image collected at the shooting trigger moment and one or more frames of original images adjacent to the reference original image, and taking the frame with the lowest blur degree among the reference original image and the adjacent original images as the first original image.
As illustrated with reference to fig. 4, when the camera of the terminal is started, it acquires an original image at every moment and usually presents it to the user as a preview image; for example, if the camera preview frame rate is 30 fps, 30 preview images can be acquired per second, and these images can be stored in the cache. When the user presses the shooting key at time t2, the terminal is triggered to acquire a frame of original image F2 at t2, which is recorded as the reference original image. In fact, the terminal also acquires original images at times t0 and t1 before t2 and at times t3 and t4 after t2. The terminal can read one or more original images adjacent to F2, for example the previous frame F1 and the next frame F3, thereby obtaining three original images F1, F2 and F3, and take the frame with the lowest blur degree (i.e., the highest sharpness) among them as the first original image; alternatively, t0~t4 may be taken as the regular shooting time and all 5 frames of original images within that time read.
It should be noted that, in the present exemplary embodiment, the number of the obtained adjacent original images is not limited, for example, a preset number of adjacent original images may be obtained, or all adjacent original images in the cache may be obtained.
After the reference original image and the adjacent original images are obtained, the frame with the lowest blur degree (i.e., the highest sharpness) is taken as the first original image. This ensures that the first original image is the sharpest image available before the deblurring processing begins, which improves the sharpness of the image after the subsequent deblurring processing.
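For ease of understanding, the selection step can be written as the following minimal Python sketch; this is illustrative rather than the patent's implementation, blur_degree() is the frequency-domain measure sketched after formula (1) below, and the frame names follow fig. 4.

```python
def select_first_original(frames):
    """Pick the sharpest cached frame, e.g. frames = [F0, F1, F2, F3, F4]."""
    return min(frames, key=blur_degree)  # lowest blur degree = highest sharpness
```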
The present exemplary embodiment does not limit the specific manner of calculating the blur degree (or sharpness), which is exemplarily described as follows:
referring to fig. 5, after the reference original image and the adjacent original image are acquired, the degree of blur of each frame image can be determined through steps S510 and S520:
step S510, converting the reference original image or the adjacent original image into a frequency domain image;
in step S520, the difference between the high frequency component statistics and the low frequency component statistics in the frequency domain image is used as the blur degree of the reference original image or the adjacent original image.
Taking the blur degree of image F2 as an example: F2 itself is a spatial-domain image, and it can be converted into a frequency-domain image by Fourier transform, wavelet transform, discrete cosine transform, or the like. The frequency-domain image contains a high-frequency part and a low-frequency part; the low-frequency part mostly carries information such as flat areas and color blocks, while the high-frequency part mostly carries information such as texture, edges, and noise. The difference between the high-frequency component statistic and the low-frequency component statistic of the frequency-domain image is then computed; in general, the more blurred the image and the more severe the blur noise, the larger this difference, so the difference can be taken as the blur degree of the image. The high-frequency component statistic may be the average signal amplitude of the high-frequency part, and the low-frequency component statistic the average signal amplitude of the low-frequency part, in which case the blur degree can be expressed as:
Diff_freq = Avg(Freq_power_high) - Avg(Freq_power_low)    (1)
where Avg(Freq_power_high) is the overall average of the signal amplitude in the high-frequency part of the frequency-domain image, and Avg(Freq_power_low) is the overall average of the signal amplitude in the low-frequency part. Computing the blur degree as the difference of these two statistics is simple, which helps reduce the computation load of the overall scheme.
In addition, the blur degree (or sharpness) of the image can also be calculated using gradient statistics, deep learning algorithms, and the like.
In one embodiment, a blur degree screening condition may also be set for the first original image. For example, a first blur threshold is set, and the blur degree of the first original image is required to be smaller than this first blur threshold. Illustratively, after the reference original image is acquired, if its blur degree is smaller than the first blur threshold, the reference original image is taken as the first original image; if its blur degree is not smaller than the first blur threshold, adjacent original images are selected frame by frame, from near to far, among the frames adjacent to the reference original image, and it is judged whether each adjacent original image's blur degree is smaller than the first blur threshold; the first one that satisfies this is taken as the first original image. For example, in fig. 4, the images may be selected frame by frame in the order F2, F1, F3, F0, F4, checking whether each image's blur degree is smaller than the first blur threshold; when it is, the currently selected image is taken as the first original image.
With continued reference to fig. 3, in step S320, a single-frame deblurring process is performed on the first original image to obtain a first deblurred image.
The single-frame deblurring processing refers to deblurring processing without the help of information of other frames except the first original image. The present disclosure does not limit the specific manner of the single-frame deblurring process, and for example, the blur kernel may be used to deblur the first original image.
In one embodiment, step S320 may include the steps of:
and processing the first original image by using a single-frame deblurring network to obtain a first deblurred image.
The single-frame deblurring network is a pre-trained neural network for deblurring a single-frame image, and can adopt an end-to-end network structure. FIG. 6 shows an exemplary structure of a single frame deblurring network, which employs a U-Net like structure. After the first original image is input into the network, the processing procedure is as follows:
first, a pixel rearrangement operation is performed by the pixel rearrangement layer at the input end, and generally, the size of the first original image is large, and the first original image can be rearranged to a plurality of channels by the space _ to _ depth function, so that the image size of each channel is reduced. For example, the first original image is a three-channel image of H × W × 3, and when the block _ size of space _ to _ depth is set to 2, it is equivalent to splitting pixels of 2 × 2 grids in each channel in the first original image into 4 different channels, so that the image of one channel is split into new images of 4 channels, and the width and height of the new images are both reduced to half of the original image, thereby obtaining a feature image of H/2W/2 × 12. This facilitates subsequent operations such as convolution of small-sized feature images.
And secondly, a down-sampling part consisting of the 2d convolution layer, the residual block and the down-sampling layer performs down-sampling on the characteristic image and multi-scale convolution operation in the down-sampling process so as to extract the image characteristics of a multi-scale layer. The downsampling layer may be implemented using a pooling operation.
And thirdly, an upsampling part consisting of a residual block, an upsampling layer and a 2d convolution layer performs upsampling on the downsampled characteristic image and multi-scale convolution operation in the upsampling process so as to recover image detail information on a multi-scale layer. The upsampling layer may be implemented by operations such as transpose convolution, interpolation (e.g., bilinear interpolation), and the like. The structure of the upsampling part may be symmetrical to that of the downsampling part, and the operation of the upsampling part may be the inverse operation of the downsampling part.
Finally, a pixel rearrangement operation is performed by the pixel rearrangement layer at the output end, which may be the inverse operation of the pixel rearrangement operation at the input end, for restoring the size of the first original image, for example, the feature image of H/2 × W/2 × 12 may be rearranged into a three-channel image of H × W × 3, so that the output first deblurred image is consistent with the size of the first original image.
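As a concrete illustration of the above structure, here is a minimal PyTorch sketch of such a U-Net-like single-frame deblurring network. The channel counts, number of scales, and residual-block design are illustrative assumptions rather than the patent's exact configuration; nn.PixelUnshuffle and nn.PixelShuffle implement the input and output pixel rearrangement (space_to_depth and its inverse) described above.

```python
import torch
from torch import nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)  # residual connection

class SingleFrameDeblurNet(nn.Module):
    """U-Net-like single-frame deblurring network (illustrative sketch)."""
    def __init__(self, base=32):
        super().__init__()
        self.unshuffle = nn.PixelUnshuffle(2)          # input: HxWx3 -> H/2xW/2x12
        self.head = nn.Conv2d(12, base, 3, padding=1)  # 2d convolution layer
        self.down1 = nn.Sequential(ResBlock(base), nn.AvgPool2d(2),
                                   nn.Conv2d(base, base * 2, 3, padding=1))
        self.down2 = nn.Sequential(ResBlock(base * 2), nn.AvgPool2d(2),
                                   nn.Conv2d(base * 2, base * 4, 3, padding=1))
        self.up2 = nn.Sequential(ResBlock(base * 4),
                                 nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2))
        self.up1 = nn.Sequential(ResBlock(base * 2),
                                 nn.ConvTranspose2d(base * 2, base, 2, stride=2))
        self.tail = nn.Conv2d(base, 12, 3, padding=1)
        self.shuffle = nn.PixelShuffle(2)              # output restores HxWx3

    def forward(self, x):                              # x: N x 3 x H x W
        f0 = self.head(self.unshuffle(x))
        f1 = self.down1(f0)                            # downsampling part
        f2 = self.down2(f1)
        g1 = self.up2(f2) + f1                         # upsampling part with skips
        g0 = self.up1(g1) + f0
        return self.shuffle(self.tail(g0))             # first deblurred image

net = SingleFrameDeblurNet()
x = torch.randn(1, 3, 256, 256)    # one H x W x 3 first original image
assert net(x).shape == x.shape     # output size matches the input, as stated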
FIG. 7 shows a schematic diagram of training the single-frame deblurring network. A sample input image and a standard image (Ground Truth) are obtained, where the standard image is the sharp image corresponding to the sample input image; for example, a large number of sharp images can be collected as standard images, and each is blurred to obtain the corresponding sample input image. The sample input image is fed into the single-frame deblurring network to be trained, which outputs a corresponding sample output image. A loss function, such as L1 or L2, is computed based on the difference between the sample output image and the standard image, and the parameters of the network are updated using the loss function, for example by computing the gradient of each parameter with a back-propagation algorithm and applying gradient-descent updates. In this way, the single-frame deblurring network is trained iteratively until its accuracy reaches a preset requirement.
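Continuing the sketch above, the training procedure just described can be written as the following loop; the Adam optimizer, learning rate, and the loader yielding (sample input image, standard image) pairs are assumptions for illustration.

```python
net = SingleFrameDeblurNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)  # assumed optimizer settings
loss_fn = nn.L1Loss()                              # L1 loss, as mentioned above

for blurred, sharp in loader:      # hypothetical DataLoader of image pairs
    opt.zero_grad()
    out = net(blurred)             # sample output image
    loss = loss_fn(out, sharp)     # difference from the standard image
    loss.backward()                # back propagation computes the gradients
    opt.step()                     # gradient descent update of the parameters
```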
It should be understood that the single-frame deblurring network may adopt a network structure different from that shown in fig. 6 or fig. 7; for example, a GAN (Generative Adversarial Network) structure may be adopted.
Through the single-frame deblurring processing, deblurring at the spatial-domain level of a single frame is achieved; compared with the first original image, the sharpness of the resulting first deblurred image can be significantly improved.
With continued reference to fig. 3, in step S330, one or more frames of second original images acquired in a time adjacent to the shooting time of the first original image are acquired.
The neighborhood time refers to a time period adjacent to the capture time of the first original image. The neighborhood time may lie within the regular shooting time: in fig. 4, if F4 is chosen as the first original image, with shooting time t4, then t0~t3 may be the neighborhood time and F0~F3 may be the second original images. The neighborhood time may also lie outside the regular shooting time, for example within an additional shooting time after it. Referring to fig. 8, suppose the regular shooting time ends after image F4 is shot; the terminal continues to shoot images in the following period, which is the additional shooting time. The neighborhood time is then t5~t8, and the images F5~F8 captured in it are the second original images.
The number of acquired second original images is not limited by the present disclosure; generally, the more second original images, the more advantageous the subsequent deblurring processing. Thus, in an embodiment, the number of required second original images may be determined according to the blur degree of the first deblurred image: the higher the blur degree, the more second original images are required.
In addition to the amount, the quality of the second original image will also affect the effect of the subsequent deblurring process. In one embodiment, referring to fig. 9, step S330 may include:
step S910, when determining that the blurring degree of the first deblurred image meets a preset condition, acquiring one or more frames of second original images acquired within neighborhood time in a first exposure time length;
step S920, when it is determined that the blur degree of the first deblurred image does not satisfy the preset condition, acquiring one or more frames of second original images acquired within the neighborhood time with the second exposure duration.
The second exposure duration is greater than the first exposure duration. The exposure duration affects the quality of the second original image. Specifically, if the exposure duration is short, the degree of motion of objects captured in the second original image is low, making it difficult to use the second original image to compensate the blur of the first deblurred image; at the same time, a short exposure easily introduces more noise into the second original image, which affects the deblurring effect.
The preset condition may be that the blur degree is smaller than a second blur threshold, which may be determined based on experience or practical requirements. When the blur degree of the first deblurred image is low, the quality requirement on the second original images is also low, and second original images shot with the shorter first exposure duration suffice; when the blur degree of the first deblurred image is high, the quality requirement on the second original images is also high, and second original images shot with the longer second exposure duration are needed.
Therefore, for first deblurred images with different blur degrees, second original images collected under different exposure durations are adopted, balancing the implementation cost of the scheme against the deblurring effect.
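This branching can be summarized in a short sketch; the threshold value, exposure durations, frame count, and the capture_fn camera call are all hypothetical placeholders.

```python
SECOND_BLUR_THRESHOLD = 0.0                       # placeholder, tuned from experience
SHORT_EXPOSURE, LONG_EXPOSURE = 1 / 100, 1 / 30   # assumed durations in seconds

def gather_second_originals(first_deblurred, cached_frames, capture_fn):
    """cached_frames: frames already shot in the regular shooting time with
    the first (short) exposure; capture_fn(exposure) is a hypothetical camera
    call used during the additional shooting time."""
    if blur_degree(first_deblurred) < SECOND_BLUR_THRESHOLD:
        return cached_frames                              # preset condition met
    return [capture_fn(LONG_EXPOSURE) for _ in range(4)]  # additional shooting
```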
In one embodiment, the acquiring one or more second original images acquired in a neighborhood time with a first exposure duration may include:
determining neighborhood time within regular shooting time, wherein the regular shooting time is the shooting time of the first original image;
and acquiring one or more frames of second original images acquired in the neighborhood time in the first exposure time.
The regular shooting time may be the time during which, after the user triggers the shooting function (e.g., clicks the shooting key), the terminal shoots the reference original image together with several preview frames before and after it; it may be, for example, the period t0~t4 in fig. 4. The portion of the regular shooting time excluding the capture time of the first original image may be taken as the neighborhood time; for example, removing the shooting time t4 of the first original image F4 from the regular shooting time t0~t4 yields the neighborhood time t0~t3.
In the neighborhood time, the terminal may acquire multiple frames of original images with a fixed exposure time, including a first original image and a second original image, where the first original image may also be acquired with the first exposure time.
Based on the scheme of acquiring the second original image in the first exposure time, the second original image can be acquired from the image shot in the conventional shooting time, no additional image needs to be shot, and the scheme implementation cost is reduced.
In one embodiment, the acquiring one or more second original images acquired in the neighborhood time with the second exposure duration may include:
determining neighborhood time within the additional shooting time, wherein the additional shooting time is later than the shooting time of the first original image;
and acquiring one or more frames of second original images acquired in the neighborhood time in a second exposure time.
The additional shooting time may be a period, after the above regular shooting time, in which several additional frames are shot; it is later than the shooting time of the first original image. One or more frames of second original images are shot in the additional shooting time with the longer second exposure duration.
In one embodiment, the correspondence between the blur degree of the first deblurred image and the required exposure duration may be determined in advance based on experience and experimental tuning. In actual operation, the second exposure duration is determined from this correspondence and the blur degree of the first deblurred image. After the user triggers the shooting function and the terminal finishes the regular shooting, a further period of shooting time, namely the additional shooting time, is appended, and second original images are shot with the second exposure duration for the subsequent deblurring processing. During the additional shooting, a corresponding prompt may be presented, such as displaying the message "shooting in progress, do not move the lens" in the shooting interface, to prompt the user to keep the lens aimed at the target during the additional shooting time.
By the scheme of acquiring the second original image according to the second exposure time length, the second original image with long exposure can be additionally shot under the condition that the blurring degree of the first deblurred image is high, and the deblurring effect of a subsequent image is ensured.
With continued reference to fig. 3, in step S340, the first deblurred image is deblurred by using the second original image, resulting in a second deblurred image.
In contrast to the single-frame deblurring processing of step S320, the multi-frame deblurring processing of step S340 realizes deblurring at the temporal-domain level across multiple frames.
In one embodiment, step S340 may include the steps of:
and registering and fusing the first deblurred image and the second original image to obtain a second deblurred image.
When the number of the second original images is larger than one frame, the following two ways can be adopted for registration and fusion:
the first method is as follows: the number of the first deblurred images is one frame, and if the number of the second original images is m frames, the m +1 frames of images can be registered and fused pairwise. Taking FIG. 8 as an example, assume that the first original image is F4Obtaining a first deblurred image F after single-frame deblurring processing4', the second original image includes F5、F6、F7、F8. In the case of performing the multi-frame deblurring process, as shown in fig. 10, F may be first performed4' and F5Registering and fusing to obtain an image F5', then F5' and F6Carrying out registration and fusion to obtain F6', … … until the last frame image F8Completing registration and fusion to obtain a second deblurred image F8'。
In fusion, as shown in fig. 10, the images may be superimposed by weighted average, and the weight may be determined according to the number of images actually fused. Assuming that the m +1 frame images are fused in the order of the 1 st frame and the 2 nd frame, refer to the following formula:
Figure BDA0003119415710000131
wherein i is [2, m +1 ]]Any positive integer of (a). Fused image of the ith frame with the previous i-1 frame (i.e., F)i-1') are 1 and i-1, respectively. This can be seen from the weight M in FIG. 104Is 1, weight M5Is 2.
The second way is as follows: the first deblurred image is one frame; assuming the second original images number m frames, all m+1 frames can first be registered to one reference image, and the registered m+1 frames then fused. The reference image may be any one of the m+1 frames, for example the first deblurred image: the m second original images are all registered to the first deblurred image, and then all m+1 frames are fused, for example by averaging.
It can be seen that the principle of the above two approaches is the same, except that the order of registration and fusion is different.
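A NumPy sketch of the frame-by-frame fusion of the first way, implementing formula (2), follows; register(moving, fixed) is a hypothetical helper returning moving warped onto fixed, such as the registration sketch given after the registration steps below.

```python
import numpy as np

def fuse_frames(first_deblurred, second_originals, register):
    """Pairwise registration and fusion per formula (2)."""
    fused = first_deblurred.astype(np.float32)
    for i, frame in enumerate(second_originals, start=2):
        aligned = register(frame, fused.astype(np.uint8)).astype(np.float32)
        fused = ((i - 1) * fused + aligned) / i  # weights i-1 and 1, divided by i
    return fused.astype(np.uint8)
```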
Since the shooting time of each second original image differs from that of the first deblurred image (i.e., the shooting time of the first original image) and also differs from frame to frame, the photographed object may move slightly or the photographer's hand may shake, so that the second original images differ in viewing angle from the first deblurred image and from one another. To improve the accuracy of image fusion, registration is therefore performed before fusion. The present disclosure does not limit the specific manner of registration, which is illustrated by the following example:
referring to fig. 11, the above-mentioned registration of the first deblurred image with the second original image may include the following steps S1110 to S1140:
step S1110, determining a current fused image obtained by fusing the first deblurred image and the fused image in the second original image as an image to be registered, and determining another image to be registered in the un-fused image in the first deblurred image and the second original image; or determining the reference image in the first deblurred image and the second original image as an image to be registered, and determining another image to be registered in the non-reference image in the first deblurred image and the second original image.
When image fusion is performed in the first way, referring to formula (2), the current fused image is the image obtained by the fusion so far, i.e., F_{i-1}'; the other image to be registered is determined among the not-yet-fused images of the first deblurred image and the second original images, typically the i-th frame image F_i. In step S1110, F_{i-1}' and F_i may thus be taken as the two images to be registered.
When image fusion is performed in the second way, a reference image serving as the registration benchmark is determined among the first deblurred image and the second original images, and the remaining images are non-reference images. For example, when the first deblurred image is taken as the reference image, the first deblurred image and the 1st-frame second original image may be taken as the two images to be registered, then the first deblurred image and the 2nd-frame second original image, ..., and finally the first deblurred image and the m-th-frame second original image.
For convenience of explanation, the two images to be registered are indicated below as P, Q.
Step S1120, performing pyramid operation on the two images to be registered respectively to obtain a sampling image of each image to be registered at multiple resolutions.
The pyramid operation is to perform a series of downsampling with different low resolutions on the image to obtain a sampling image set with gradually reduced resolution. The sampled images at the plurality of resolutions may include the original image at the original resolution as a particular sampled image.
In general, when performing the pyramid operation, a stop condition may be set, such as reaching a certain resolution or downsampling multiple, at which the downsampling stops. The stop condition and the downsampling multiple of each pyramid layer are not specifically limited by the present disclosure. As illustrated with reference to fig. 12, the images to be registered P and Q are each downsampled layer by layer at multiples of 1/2, 1/4 and 1/8, yielding for P the sampled images P (the original image itself), P(1/2) (the image downsampled by a multiple of 1/2), P(1/4) and P(1/8), and for Q the sampled images Q, Q(1/2), Q(1/4) and Q(1/8).
In step S1130, feature point matching is performed on the two sampled images at each resolution, so as to obtain matching feature point pairs of the two images to be registered.
The present disclosure does not limit the type of feature points or the feature point detection algorithm; for example, Harris corners or SIFT (Scale-Invariant Feature Transform) feature points and their detection algorithms may be used. Feature point matching is performed on the two sampled images at each resolution. Specifically, after a feature point is detected in the image to be registered P, the corresponding feature point is detected in each of its sampled images, yielding a feature point group of P, for example (p1, p2, p3, p4) as shown in fig. 12. If (p1, p2, p3, p4) matches a feature point group (q1, q2, q3, q4) of the image to be registered Q, then p1 and q1 are determined to be a matching feature point pair of P and Q. This realizes feature point matching at different scales; since an image carries different semantics at different scales, the semantic stability of the feature points across scales is ensured, improving the accuracy of the matching feature point pairs.
And step S1140, registering the two images to be registered according to the matching characteristic point pairs.
After the matching point pairs are obtained, a transformation matrix between the two images to be registered can be calculated through an optical flow algorithm and the like, and then any one of the images to be registered is transformed, so that the registration of the two images to be registered is realized.
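The following OpenCV sketch illustrates the pyramid-plus-feature-matching registration described in steps S1110 to S1140; ORB features and RANSAC homography estimation are stand-ins for the Harris/SIFT detectors and the optical-flow solver named above, chosen only because they are readily available.

```python
import cv2
import numpy as np

def register(moving, fixed, levels=3):
    """Warp `moving` onto `fixed`; feature matches are pooled across several
    pyramid scales (cf. fig. 12) and a homography is fitted with RANSAC."""
    to_gray = lambda im: (cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
                          if im.ndim == 3 else im)
    mg, fg = to_gray(moving), to_gray(fixed)
    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    src, dst = [], []
    for lvl in range(levels):                        # pyramid operation
        s = 1 / 2 ** lvl
        m = cv2.resize(mg, None, fx=s, fy=s)
        f = cv2.resize(fg, None, fx=s, fy=s)
        kp1, d1 = orb.detectAndCompute(m, None)
        kp2, d2 = orb.detectAndCompute(f, None)
        if d1 is None or d2 is None:
            continue
        for mt in matcher.match(d1, d2):             # feature point matching
            src.append(np.array(kp1[mt.queryIdx].pt) / s)   # back to full size
            dst.append(np.array(kp2[mt.trainIdx].pt) / s)
    H, _ = cv2.findHomography(np.float32(src), np.float32(dst), cv2.RANSAC, 3.0)
    h, w = fg.shape
    return cv2.warpPerspective(moving, H, (w, h))    # transform one image
```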
Through the registration and fusion, a second deblurred image is finally obtained, the second deblurred image can be output as a final image processing result, and further processing can also be performed on the second deblurred image, for example, the second deblurred image can be sharpened, so that the overall sharpness of the image is improved.
In one embodiment, the image deblurring method may further include the steps of:
and carrying out single-frame deblurring processing on the second deblurred image to obtain a third deblurred image.
The implementation manner of the single-frame deblurring process may refer to step S320 described above, for example, the second deblurred image may be input into the single-frame deblurring network again, and the third deblurred image may be output through the network. By performing single-frame deblurring processing again, the side effects of local blurring and the like possibly brought by the image fusion can be removed, and the definition of the image is further improved. The third deblurred image may be output as a final image processing result.
Fig. 13 shows another exemplary flow of the image deblurring method in the present exemplary embodiment, including:
step 1310, clicking a shooting key in a shooting interface of the terminal by a user, and triggering the terminal to acquire a plurality of original images in a first exposure duration within a conventional shooting time through a camera;
step S1320, calculating the blur degree of these original images, and selecting the frame with the lowest blur degree (i.e., the highest sharpness) as the first original image;
step S1330, inputting the first original image into a pre-trained single frame deblurring network to perform single frame deblurring processing, and outputting a first deblurred image;
step S1340, judging whether the blur degree of the first deblurred image satisfies a preset condition, where the preset condition may be that the blur degree of the first deblurred image is smaller than the second blur threshold; if yes, executing step S1350; if not, executing step S1360;
step S1350, acquiring one or more frames of second original images from the original images acquired within the conventional shooting time;
step S1360, adding a period of additional shooting time after the conventional shooting time, and collecting one or more frames of second original images within the additional shooting time by using a second exposure time length, wherein the second exposure time length is greater than the first exposure time length;
step S1370, registering the first deblurred image and the second original image, obtaining a matching feature point pair through image pyramid and Harris corner detection and matching with reference to the method of fig. 11, and performing optical flow registration according to the matching feature point pair;
step S1380, fusing the registered first deblurred image and second original images to obtain a second deblurred image; in steps S1370 and S1380, a frame-by-frame registration and fusion manner may be adopted (refer to the first way above);
step S1390, carrying out image sharpening on the second deblurred image, inputting the second deblurred image into a single-frame deblurring network, carrying out single-frame deblurring processing again, and outputting a third deblurred image; the third deblurred image may be presented within the terminal's camera interface as a final image processing result and stored in the terminal's memory.
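The flow of fig. 13 can be tied together in one orchestration sketch using the helpers sketched earlier; run_single_frame is a hypothetical wrapper around the single-frame deblurring network, and the sharpening kernel is an assumed example.

```python
def run_single_frame(net, image):
    """Hypothetical wrapper: H x W x 3 uint8 image through the network."""
    x = torch.from_numpy(image).permute(2, 0, 1).float()[None] / 255.0
    with torch.no_grad():
        y = net(x).clamp(0, 1)
    return (y[0].permute(1, 2, 0).numpy() * 255).astype(np.uint8)

def deblur_pipeline(cached_frames, capture_fn, net):
    """End-to-end sketch of the fig. 13 flow."""
    first = select_first_original(cached_frames)                  # S1310-S1320
    first_deblurred = run_single_frame(net, first)                # S1330
    seconds = gather_second_originals(first_deblurred,
                                      cached_frames, capture_fn)  # S1340-S1360
    fused = fuse_frames(first_deblurred, seconds, register)       # S1370-S1380
    kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], np.float32)
    sharpened = np.clip(cv2.filter2D(fused.astype(np.float32), -1, kernel),
                        0, 255).astype(np.uint8)                  # sharpening
    return run_single_frame(net, sharpened)                       # S1390
```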
Exemplary embodiments of the present disclosure also provide an image deblurring apparatus. Referring to fig. 14, the image deblurring apparatus 1400 may include:
a first obtaining module 1410 configured to obtain a frame of a first original image;
a first deblurring module 1420 configured to perform single-frame deblurring processing on the first original image to obtain a first deblurred image;
a second obtaining module 1430 configured to obtain one or more frames of second original images acquired in a time adjacent to a shooting time of the first original image;
and a second deblurring module 1440 configured to deblur the first deblurred image using the second original image to obtain a second deblurred image.
In one embodiment, the first obtaining module 1410 is configured to:
acquiring a frame of reference original image collected at the shooting trigger moment and one or more frames of original images adjacent to the reference original image, and taking the frame with the lowest blur degree among the reference original image and the adjacent original images as the first original image.
In one embodiment, the first obtaining module 1410 is configured to determine the blur degree of each frame image, after obtaining the reference original image and the adjacent original images, by:
converting the reference original image or the adjacent original image into a frequency domain image;
and taking the difference between the high-frequency component statistic and the low-frequency component statistic of the frequency domain image as the blur degree of the reference original image or the adjacent original image.
In one embodiment, the first deblurring module 1420 is configured to:
and processing the first original image by using a single-frame deblurring network to obtain a first deblurred image.
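The disclosure does not fix the structure of the single-frame deblurring network; purely as an assumed example, a minimal residual convolutional network in PyTorch might be sketched as follows, where the three-layer body and the global skip connection are illustrative choices, not the architecture of this disclosure.

```python
import torch
from torch import nn

class SingleFrameDeblurNet(nn.Module):
    """Minimal stand-in for a single-frame deblurring network (assumed design)."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, kernel_size=3, padding=1),
        )

    def forward(self, blurry: torch.Tensor) -> torch.Tensor:
        # Predict a residual and add it back to the blurry input.
        return blurry + self.body(blurry)
```

After training on pairs of blurred and sharp images, an instance would be applied as net(blurry) to an (N, 3, H, W) float tensor.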
In one embodiment, the second obtaining module 1430 is configured to:
when it is determined that the blur degree of the first deblurred image meets a preset condition, acquiring one or more frames of second original images acquired within the neighborhood time with a first exposure duration;
when it is determined that the blur degree of the first deblurred image does not meet the preset condition, acquiring one or more frames of second original images acquired within the neighborhood time with a second exposure duration;
the second exposure duration is greater than the first exposure duration.
In one embodiment, the second obtaining module 1430 is configured to:
determining the neighborhood time within a regular shooting time, the regular shooting time including the shooting time of the first original image;
and acquiring one or more frames of second original images acquired within the neighborhood time with the first exposure duration.
In one embodiment, the second obtaining module 1430 is configured to:
determining the neighborhood time within an additional shooting time, wherein the additional shooting time is later than the shooting time of the first original image;
and acquiring one or more frames of second original images acquired within the neighborhood time with the second exposure duration.
In one embodiment, the second deblurring module 1440 is configured to:
and registering and fusing the first deblurred image and the second original image to obtain a second deblurred image.
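The fusion rule itself is left open here; as one assumed possibility, a registered frame could be blended into the running fusion result with a simple weighted average (the weight value below is a hypothetical choice).

```python
import numpy as np

def fuse(current: np.ndarray, registered: np.ndarray,
         weight: float = 0.5) -> np.ndarray:
    """Blend a registered frame into the current fusion result.

    A hypothetical weighted average; the disclosure requires only that
    the registered images be fused, not how.
    """
    blended = ((1.0 - weight) * current.astype(np.float32)
               + weight * registered.astype(np.float32))
    return np.clip(blended, 0, 255).astype(np.uint8)
```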
In one embodiment, the second deblurring module 1440 is configured to perform image registration by:
determining a current fused image, obtained by fusing the first deblurred image with the already-fused frames of the second original image, as one image to be registered, and determining the other image to be registered from the unfused frames among the first deblurred image and the second original image; or determining a reference image among the first deblurred image and the second original image as one image to be registered, and determining the other image to be registered from the non-reference images among the first deblurred image and the second original image;
performing a pyramid operation on each of the two images to be registered to obtain sampled images of each image to be registered at multiple resolutions;
performing feature point matching on the two sampled images at each resolution to obtain matched feature point pairs of the two images to be registered;
and registering the two images to be registered according to the matched feature point pairs.
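As a concrete illustration of this registration scheme, the OpenCV sketch below detects Harris corners on one image, matches them into the other with pyramidal Lucas-Kanade optical flow (the multi-resolution step), and warps with a fitted homography; the detector parameters and the homography warp model are assumptions for illustration, not requirements of this disclosure.

```python
import cv2
import numpy as np

def register(reference: np.ndarray, moving: np.ndarray) -> np.ndarray:
    """Warp `moving` onto `reference` (assumed parameters throughout)."""
    ref_gray = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)
    mov_gray = cv2.cvtColor(moving, cv2.COLOR_BGR2GRAY)
    # Harris corner detection on the reference image.
    corners = cv2.goodFeaturesToTrack(ref_gray, maxCorners=500,
                                      qualityLevel=0.01, minDistance=7,
                                      blockSize=7, useHarrisDetector=True,
                                      k=0.04)
    if corners is None:
        return moving  # no corners found; return the frame unregistered
    # Pyramidal LK optical flow: maxLevel sets the number of pyramid levels,
    # i.e. the multi-resolution matching of the sampled images.
    matched, status, _ = cv2.calcOpticalFlowPyrLK(ref_gray, mov_gray,
                                                  corners, None,
                                                  winSize=(21, 21),
                                                  maxLevel=3)
    ok = status.ravel() == 1
    # Fit a homography on the matched feature point pairs, then warp.
    homography, _ = cv2.findHomography(matched[ok], corners[ok],
                                       cv2.RANSAC, 3.0)
    h, w = ref_gray.shape
    return cv2.warpPerspective(moving, homography, (w, h))
```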
In one embodiment, the first deblurring module 1420 is further configured to:
and carrying out single-frame deblurring processing on the second deblurred image to obtain a third deblurred image.
The details of each of the above modules have been described in detail in the embodiments of the method part, and thus are not repeated here.
Exemplary embodiments of the present disclosure also provide a computer-readable storage medium, which may be implemented in the form of a program product including program code; when the program product runs on an electronic device, the program code causes the electronic device to perform the steps according to the various exemplary embodiments of the present disclosure described in the above-mentioned "exemplary method" section of this specification. In one embodiment, the program product may be embodied as a portable compact disc read-only memory (CD-ROM) including program code, and may be run on an electronic device, such as a personal computer. However, the program product of the present disclosure is not limited thereto; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, such a division is not mandatory. Indeed, according to exemplary embodiments of the present disclosure, the features and functions of two or more modules or units described above may be embodied in one module or unit. Conversely, the features and functions of one module or unit described above may be further divided into and embodied by a plurality of modules or units.
As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method or program product. Accordingly, various aspects of the present disclosure may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," "module," or "system." Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the following claims.

Claims (13)

1. An image deblurring method, comprising:
acquiring a frame of first original image;
performing single-frame deblurring processing on the first original image to obtain a first deblurred image;
acquiring one or more frames of second original images acquired in the neighborhood time of the shooting time of the first original image;
and carrying out deblurring processing on the first deblurred image by using the second original image to obtain a second deblurred image.
2. The method of claim 1, wherein the acquiring a frame of first original image comprises:
acquiring a frame of reference original image acquired at the shooting trigger moment and one or more frames of original images adjacent to the reference original image, and taking the frame with the lowest blur degree among the reference original image and the adjacent original images as the first original image.
3. The method according to claim 2, wherein after the reference original image and the adjacent original images are obtained, the blur degree of each frame of image is determined by:
converting the reference original image or the adjacent original image into a frequency domain image;
and taking the difference value between the high-frequency component statistic and the low-frequency component statistic in the frequency domain image as the blur degree of the reference original image or the adjacent original image.
4. The method of claim 1, wherein the performing single-frame deblurring processing on the first original image to obtain a first deblurred image comprises:
and processing the first original image by using a single-frame deblurring network to obtain the first deblurred image.
5. The method of claim 1, wherein the acquiring one or more frames of second original images acquired in the neighborhood time of the shooting time of the first original image comprises:
when it is determined that the blur degree of the first deblurred image meets a preset condition, acquiring one or more frames of the second original image acquired within the neighborhood time with a first exposure duration;
when it is determined that the blur degree of the first deblurred image does not meet the preset condition, acquiring one or more frames of the second original image acquired within the neighborhood time with a second exposure duration;
the second exposure duration is greater than the first exposure duration.
6. The method of claim 5, wherein the acquiring one or more frames of the second original image acquired within the neighborhood time with the first exposure duration comprises:
determining the neighborhood time within a regular shooting time, the regular shooting time comprising the shooting time of the first original image;
and acquiring one or more frames of the second original image acquired within the neighborhood time with the first exposure duration.
7. The method of claim 5, wherein the acquiring one or more frames of the second original image acquired within the neighborhood time with the second exposure duration comprises:
determining the neighborhood time within an additional shooting time, wherein the additional shooting time is later than the shooting time of the first original image;
and acquiring one or more frames of the second original image acquired within the neighborhood time with the second exposure duration.
8. The method of claim 1, wherein deblurring the first deblurred image using the second original image to obtain a second deblurred image comprises:
and registering and fusing the first deblurred image and the second original image to obtain the second deblurred image.
9. The method of claim 8, wherein the registering and fusing the first deblurred image and the second original image comprises:
determining a current fused image, obtained by fusing the first deblurred image with the already-fused frames of the second original image, as one image to be registered, and determining the other image to be registered from the unfused frames among the first deblurred image and the second original image; or determining a reference image among the first deblurred image and the second original image as one image to be registered, and determining the other image to be registered from the non-reference images among the first deblurred image and the second original image;
performing a pyramid operation on each of the two images to be registered to obtain sampled images of each image to be registered at multiple resolutions;
performing feature point matching on the two sampled images at each resolution to obtain matched feature point pairs of the two images to be registered;
and registering the two images to be registered according to the matched feature point pairs.
10. The method of claim 1, further comprising:
and carrying out single-frame deblurring processing on the second deblurred image to obtain a third deblurred image.
11. An image deblurring apparatus, comprising:
a first acquisition module configured to acquire a frame of a first original image;
the first deblurring module is configured to perform single-frame deblurring processing on the first original image to obtain a first deblurred image;
a second acquisition module configured to acquire one or more frames of second original images acquired within a neighborhood time of the shooting time of the first original image;
and the second deblurring module is configured to perform deblurring processing on the first deblurred image by using the second original image to obtain a second deblurred image.
12. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method of any one of claims 1 to 10.
13. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any of claims 1 to 10 via execution of the executable instructions.
CN202110672843.6A 2021-06-17 2021-06-17 Image deblurring method, device, electronic equipment and storage medium Active CN113409209B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110672843.6A CN113409209B (en) 2021-06-17 2021-06-17 Image deblurring method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110672843.6A CN113409209B (en) 2021-06-17 2021-06-17 Image deblurring method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113409209A true CN113409209A (en) 2021-09-17
CN113409209B CN113409209B (en) 2024-06-21

Family

ID=77684815

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110672843.6A Active CN113409209B (en) 2021-06-17 2021-06-17 Image deblurring method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113409209B (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101341871B1 (en) * 2012-09-12 2014-01-07 포항공과대학교 산학협력단 Method for deblurring video and apparatus thereof
US20150206289A1 (en) * 2014-01-21 2015-07-23 Adobe Systems Incorporated Joint Video Deblurring and Stabilization
CN107240092A (en) * 2017-05-05 2017-10-10 浙江大华技术股份有限公司 A kind of image blur detection method and device
CN111275626A (en) * 2018-12-05 2020-06-12 深圳市炜博科技有限公司 Video deblurring method, device and equipment based on ambiguity
CN110062164A (en) * 2019-04-22 2019-07-26 深圳市商汤科技有限公司 Method of video image processing and device
CN110189285A (en) * 2019-05-28 2019-08-30 北京迈格威科技有限公司 A kind of frames fusion method and device
CN111932480A (en) * 2020-08-25 2020-11-13 Oppo(重庆)智能科技有限公司 Deblurred video recovery method and device, terminal equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI Jing: "Research on Multi-frame Image Restoration Algorithms", China Master's Theses Full-text Database, Information Science and Technology, 15 March 2017 (2017-03-15), pages 138-4630 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114708166A (en) * 2022-04-08 2022-07-05 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and terminal

Also Published As

Publication number Publication date
CN113409209B (en) 2024-06-21

Similar Documents

Publication Publication Date Title
CN111598776B (en) Image processing method, image processing device, storage medium and electronic apparatus
CN111580765B (en) Screen projection method, screen projection device, storage medium, screen projection equipment and screen projection equipment
CN111445392B (en) Image processing method and device, computer readable storage medium and electronic equipment
CN112330574A (en) Portrait restoration method and device, electronic equipment and computer storage medium
CN111641835A (en) Video processing method, video processing device and electronic equipment
CN112767295A (en) Image processing method, image processing apparatus, storage medium, and electronic device
WO2022206202A1 (en) Image beautification processing method and apparatus, storage medium, and electronic device
CN111696039B (en) Image processing method and device, storage medium and electronic equipment
CN113409203A (en) Image blurring degree determining method, data set constructing method and deblurring method
CN111768351A (en) Image denoising method, image denoising device, storage medium and electronic device
CN111784734A (en) Image processing method and device, storage medium and electronic equipment
CN112927271A (en) Image processing method, image processing apparatus, storage medium, and electronic device
CN104918027A (en) Method, electronic device, and server for generating digitally processed pictures
CN111161176A (en) Image processing method and device, storage medium and electronic equipment
CN111835973A (en) Shooting method, shooting device, storage medium and mobile terminal
CN113313776A (en) Image processing method, image processing apparatus, storage medium, and electronic device
CN113658073B (en) Image denoising processing method and device, storage medium and electronic equipment
CN113409209B (en) Image deblurring method, device, electronic equipment and storage medium
CN113658128A (en) Image blurring degree determining method, data set constructing method and deblurring method
CN113781336B (en) Image processing method, device, electronic equipment and storage medium
CN111416937B (en) Image processing method, image processing device, storage medium and mobile equipment
CN114390219B (en) Shooting method, shooting device, electronic equipment and storage medium
CN113379624A (en) Image generation method, training method, device and equipment of image generation model
CN113364964A (en) Image processing method, image processing apparatus, storage medium, and terminal device
CN115423732A (en) Target image generation method and device, computer readable medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant