CN108765325B - Small unmanned aerial vehicle blurred image restoration method - Google Patents
Small unmanned aerial vehicle blurred image restoration method
- Publication number
- CN108765325B (application CN201810471510.5A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS; G06—COMPUTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00 — Image enhancement or restoration
- G06T2207/10004 — Still image; photographic image (G06T2207/10 — Image acquisition modality)
- G06T2207/20061 — Hough transform (G06T2207/20 — Special algorithmic details; G06T2207/20048 — Transform domain processing)
- G06T2207/20084 — Artificial neural networks [ANN] (G06T2207/20 — Special algorithmic details)
Abstract
The invention discloses a method for restoring blurred images of a small unmanned aerial vehicle, and relates to the field of computer vision. The method is chiefly intended to improve the practicality and robustness of blurred-image restoration for small unmanned aerial vehicles. The invention comprises the following steps: (1) classify the small-unmanned-aerial-vehicle image with a blur-type identification algorithm based on a convolutional neural network to obtain its blur type; (2) restore images whose blur type is motion blur with a blind motion-blur restoration algorithm under mixed-characteristic regularization constraints to obtain a clear restored image; (3) restore images whose blur type is atmospheric blur with an image defogging algorithm based on a mixed prior and guided filtering to obtain a clear restored image; (4) restore images whose blur type is defocus blur with a blind defocus-blur restoration algorithm based on spectrum preprocessing and an improved Hough transform to obtain a clear restored image.
Description
Technical Field
The invention relates to the field of computer vision, and in particular to a method for restoring blurred images captured by a small unmanned aerial vehicle.
Background
In recent years, the small unmanned aerial vehicle has become an important means of acquiring near-ground remote sensing information, offering high maneuverability, high cost-effectiveness and low operating difficulty, with broad application prospects in fields such as low-altitude photography, near-ground survey, intelligent transportation, fire fighting and counter-terrorism, and military reconnaissance. During imaging, a small unmanned aerial vehicle is easily affected by severe weather, gimbal shake, relative motion, imaging-system faults and similar factors, which degrade the image with blur; atmospheric blur, motion blur and defocus blur are the three common blur types. Image blur directly interferes with the timely grasp of information and with accurate decision-making, so deblurring has become key to improving the quality of information acquired by small unmanned aerial vehicles. Blurred image restoration establishes a physical model of image degradation from known prior conditions according to the degradation mechanism, and restores a clear image in a targeted manner; it must solve the problems of blur-type identification and the restoration of each type of blurred image. Compared with traditional deblurring based on image enhancement, blurred image restoration is more targeted, deblurs better, and preserves information more completely. In conclusion, a method for restoring blurred images of small unmanned aerial vehicles has important practical significance.
Scholars have done much work on blurred image restoration. Zhu et al. proposed a total-variation restoration method for mixed motion-imaging blur: the blur type is first identified qualitatively from the spectral characteristics of the blurred image, the point spread function of the blur model is then estimated quantitatively by cepstrum analysis, and finally a coupled gradient-fidelity term is added to an improved total-variation restoration algorithm to restore the image. The method handles blind restoration of motion-mixed images well, but the blur types it treats are too limited and its range of application is small. Xu Zongchen proposed a practical blind image restoration method: a cepstrum method first divides blurred images into motion, defocus and other blur; other blur is restored with an improved smoothness-constrained double-regularization blind restoration algorithm, motion and defocus blur with a parametric method; finally a ringing-suppression post-processing step optimizes the result into a restored clear image. The method applies widely, but its blur-type identification accuracy is poor. Han Xiaofang et al. proposed a restoration method for motion- and defocus-blurred images: the blurred image is first preprocessed into a binary log-spectrum image and a Hough transform is applied to it, the blur type being judged by comparing the number of bright spots in the transform matrix; for motion blur, the blur direction is estimated with two directional differentials; the edge function of the blurred image is then obtained with an improved Prewitt operator and a Fermi function; finally a Wiener filtering algorithm combined with the modulation transfer function restores a clear image. Its blur-type identification accuracy is high, but the restoration effect is poor, ringing is obvious, and practicality is low. Another study proposed a method for improving and evaluating optical remote sensing image quality: the imaging chain of the optical remote sensing image is analyzed systematically, a restoration method is given for the various degradation factors in the chain, and the quality of the restored image is then evaluated. The method is practical, but not suited to unmanned-aerial-vehicle image processing. A further study proposed a restoration method for unmanned-aerial-vehicle remote sensing images: after summarizing the common blur types of such images, it restores them with, respectively, a blind restoration algorithm based on an L0 sparse prior, a blind restoration algorithm for saturated blurred images that eliminates camera outliers, and a restoration algorithm for atmospherically degraded remote sensing images based on multiple-scattering APSF estimation, handling motion blur, outlier interference and atmospheric blur in a targeted way. The approach is highly targeted, but its restoration effect needs further improvement; blur-type identification is not considered, and judging the degradation type is difficult.
Disclosure of Invention
In view of this, the present invention provides a method for restoring blurred images of a small unmanned aerial vehicle, which simplifies the blurred-image restoration process and improves the quality of image restoration.
Based on the above purpose, the technical scheme provided by the invention is as follows:
A method for restoring a blurred image of a small unmanned aerial vehicle, applied to small-unmanned-aerial-vehicle images, comprises the following steps:
Step one: identify the small-unmanned-aerial-vehicle blurred image with a blur-type identification algorithm based on a convolutional neural network to obtain its blur type. The blur type is one of motion blur, atmospheric blur and defocus blur; motion-blurred images proceed to step two, atmospheric-blurred images to step three, and defocus-blurred images to step four;
Step two: restore images whose blur type is motion blur with a blind motion-blur restoration algorithm under mixed-characteristic regularization constraints to obtain a clear restored image;
Step three: restore images whose blur type is atmospheric blur with an image defogging algorithm based on a mixed prior and guided filtering to obtain a clear restored image;
Step four: restore images whose blur type is defocus blur with a blind defocus-blur restoration algorithm based on spectrum preprocessing and an improved Hough transform to obtain a clear restored image;
The restoration of the small-unmanned-aerial-vehicle blurred image is then complete.
Step one specifically comprises the following steps:
(101) apply a fast Fourier transform to the small-unmanned-aerial-vehicle blurred image to obtain its spectrogram;
(102) extract features from the spectrogram with a trained convolutional neural network model to obtain a feature map;
(103) input the feature map into a classifier to obtain the blur type.
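As an illustration of steps (101)-(103), the spectrogram that feeds the classifier can be computed with a plain FFT. The sketch below is our own, not the patent's implementation (the network architecture and classifier weights are unspecified); it stops at the normalized log-spectrum a trained CNN would consume.

```python
import numpy as np

def blur_spectrogram(img):
    """Centered, normalized log-magnitude spectrum of a grayscale image.

    Motion blur leaves parallel dark stripes in this spectrum, defocus
    blur leaves concentric dark rings, and atmospheric blur a diffuse
    low-frequency pattern -- the cues a CNN can learn to separate.
    """
    F = np.fft.fftshift(np.fft.fft2(img))
    mag = np.log1p(np.abs(F))                        # compress dynamic range
    mag = (mag - mag.min()) / (mag.max() - mag.min() + 1e-12)
    return mag.astype(np.float32)

# A trained CNN (not specified in the patent) would take this map as
# input and output one of {motion, atmospheric, defocus}.
```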
Step two specifically comprises the following steps:
(201) extract and sharpen the edge information of the motion-blurred image with an edge detection algorithm based on conformal monogenic phase consistency and a shock filter, obtaining sharpened edge details;
(202) the blur kernel model is constructed as:
min_k ‖∇S ⊗ k − ∇B‖₂² + λ_k1‖k‖₁ + λ_k2‖∇k‖₂² + λ_k3‖∇²k‖₂²
where L is the sharp image, B is the motion-blurred image, ∇B is the first-order partial derivative of the motion-blurred image, k is the blur kernel, ∇S denotes the sharpened edge details, ‖·‖₁ and ‖·‖₂ denote the L1 and L2 norms, ∇k and ∇²k are the first- and second-order partial derivatives of the blur kernel, λ_k1‖k‖₁ is the sparse regularization term, λ_k2‖∇k‖₂² + λ_k3‖∇²k‖₂² is the smoothing regularization term, and λ_k1, λ_k2, λ_k3 are the parameters of the respective regularization terms;
(203) the blur kernel is corrected according to the non-negativity of its pixels and the energy-conservation property, giving the final blur kernel model:
k(i,j) = 0 if k(i,j) < μ·max(k(:)), and k(i,j) otherwise;  ∫k dxdy = 1
where (i,j) is a blur-kernel coordinate, max(k(:)) is the maximum pixel value of the blur kernel, μ is a threshold coefficient, and ∫k dxdy = 1 denotes normalization of the blur kernel;
(204) the restored image model is constructed as:
min_L ‖k ⊗ L − B‖₂² + λ_L1‖∇L‖_α^α + λ_L2‖∇L − ∇SF(L*)‖₂²
where λ_L1‖∇L‖_α^α is the hyper-Laplacian prior term, λ_L2‖∇L − ∇SF(L*)‖₂² is the edge-preserving regularization term, and SF(L*) is the sharp image processed by the shock filter;
(205) the restored image is corrected for the non-negativity of its pixels, giving the final restored image model: L(x,y) = 0 if L(x,y) < 0, and L(x,y) otherwise;
(206) construct a linear multi-scale pyramid for the motion-blurred unmanned-aerial-vehicle image. At each resolution layer, solve the final blur kernel model and the restored image model by optimization with a half-quadratic variable-splitting strategy until the iteration count is reached, giving the blur kernel and restored image of that layer; from these per-layer solutions, obtain the clear restored image of the motion-blurred image.
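The kernel correction of step (203) can be sketched directly: a hard threshold against μ·max(k(:)) followed by sum-to-one normalization, the discrete analogue of ∫k dxdy = 1 (the value of μ below is illustrative, not the patent's):

```python
import numpy as np

def correct_kernel(k, mu=0.05):
    """Enforce non-negativity and energy conservation on a blur kernel.

    Entries below mu * max(k) are treated as estimation noise and
    zeroed; the kernel is then rescaled so its entries sum to 1.
    """
    k = np.maximum(np.asarray(k, dtype=float), 0.0)  # PSF pixels are non-negative
    k[k < mu * k.max()] = 0.0                        # suppress weak noisy entries
    s = k.sum()
    return k / s if s > 0 else k
```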
The third step specifically comprises the following steps:
(301) obtain the dark channel map I_dark(x) and the depth map d_r(x) of the atmospheric blurred image:
I_dark(x) = min_{y∈Ω(x)} min_{c∈{r,g,b}} I^c(y)
d_r(x) = θ₀ + θ₁·I_v(x) + θ₂·I_s(x) + ε
where I^c is a color channel of the atmospheric blurred image I in RGB color space, Ω(x) is a local region centered at x, I_v and I_s are the brightness and saturation channels in HSV color space, ε is the random error of the depth map represented by a random variable, and θ₀, θ₁, θ₂ are linear coefficients;
(302) redefine the region containing each pixel with a double-constraint region segmentation method, dividing the image into four pixel regions:
I₁: I_dark(x) ≥ α and I_v(x) ≥ β (highlight dense-fog region)
I₂: I_dark(x) ≥ α and I_v(x) < β (non-highlight dense-fog region)
I₃: I_dark(x) < α and I_v(x) ≥ β (highlight non-dense-fog region)
I₄: I_dark(x) < α and I_v(x) < β (non-highlight non-dense-fog region)
where α and β are the region division thresholds;
(303) extract the pixel coordinates of the top 0.1% of pixels by brightness in the dark channel map and in the depth map; compare the coordinates extracted from the two maps, keeping a coordinate only if it appears in both and removing it otherwise; finally, take the highest-brightness value at the retained coordinates in the atmospheric blurred image as the atmospheric light value A;
(304) construct a coarse atmospheric transmittance map with the mixed prior model:
t(x) = m·t_dark(x) + n·t_color(x),  n = 1 − m
where t_dark is the coarse transmittance from the dark channel prior, t_color is the coarse transmittance from the color attenuation prior, and m and n are the mixed prior coefficients:
t_dark(x) = 1 − ω·min_{y∈Ω(x)} min_c ( I^c(y) / A )
t_color(x) = e^{−η·d(x)}
where ω is the fidelity coefficient and η is the atmospheric scattering coefficient;
(305) refine the coarse atmospheric transmittance map with a guided filtering algorithm to obtain the fine atmospheric transmittance map t′(x);
(306) substitute the atmospheric light value and the fine atmospheric transmittance map into the atmospheric scattering model to obtain the clear restored image of the atmospheric blurred image:
J(x) = ( I(x) − A ) / min( max( t′(x), t₀ ), t₁ ) + A
where t₀ and t₁ are introduced parameters that restrict t′(x).
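Step (306) can be sketched as follows. The clamp min(max(t′, t₀), t₁) is our reading of how the introduced parameters restrict the transmittance (the patent only states that they do), and the default values are illustrative:

```python
import numpy as np

def recover_scene(I, t, A, t0=0.1, t1=0.95):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t).

    t is the refined transmittance map t'(x); clamping it into
    [t0, t1] avoids division blow-up where the fog is dense (t -> 0)
    and distortion where t is near 1.
    """
    t_c = np.clip(t, t0, t1)
    J = (I - A) / t_c + A
    return np.clip(J, 0.0, 1.0)                     # keep radiance in range
```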
The fourth step specifically comprises the following steps:
(401) fast two-dimensional Fourier transform is carried out on the defocused blurred image to obtain a spectrogram of the defocused blurred image;
(402) estimate the pixel transition region from the gray-value curve of the spectrogram, and compute the binarization threshold from the mean pixel values of the transition region along the four directions 45°, 135°, −45° and −135°;
(403) carrying out binarization processing and morphological filtering on the frequency spectrum image to obtain a frequency spectrum binary image;
(404) extracting edge information of the spectrum binary image by using a Canny edge detection algorithm to obtain edge details of the spectrum binary image;
(405) compute the distances from all edge points to the center point and store them in the set D;
(406) draw a circle centered at the center point with a radius chosen from D, giving a candidate circle;
(407) if the number of edge points lying on the candidate circle exceeds a threshold, accept the candidate as a zero-point circle and keep its radius;
(408) repeat steps (406)-(407) until D has been fully traversed or the number of zero-point circles reaches a preset maximum;
(409) estimate the blur kernel of the defocused blurred image from two detected adjacent zero-point circles with the adjacent-zero-point-ratio method;
(410) iteratively restore the defocused blurred image with the restored image model of step (204) to obtain its clear restored image.
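The patent does not spell out the adjacent-zero-point computation of step (409). One standard formulation, assumed here, uses the fact that the dark rings of a disk PSF of radius R sit at the zeros j_{1,i} of the Bessel function J₁ (2πR·d_i/N = j_{1,i}), and that adjacent J₁ zeros are asymptotically π apart, so the ring spacing alone pins down R:

```python
def defocus_radius_from_rings(d1, d2, N):
    """Estimate the defocus disk radius R (pixels) from the radii
    d1 < d2 of two adjacent dark rings in an N x N spectrum.

    Adjacent zeros of J1 differ by ~pi, so from 2*pi*R*d_i/N = j_{1,i}
    the ring spacing satisfies d2 - d1 ~ N / (2*R).
    """
    if d2 <= d1:
        raise ValueError("expected d1 < d2")
    return N / (2.0 * (d2 - d1))
```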
Compared with the background art, the invention has the following advantages:
The method chiefly improves the practicality and robustness of blurred-image restoration for small unmanned aerial vehicles. Experimental results show that the blur-type identification algorithm based on a convolutional neural network identifies the blur type more accurately than traditional identification methods; that the blind motion-blur restoration algorithm under mixed-characteristic regularization constraints restores small-unmanned-aerial-vehicle motion-blurred images better; that the defogging algorithm based on a mixed prior and guided filtering restores atmospheric-blurred images better; and that the blind defocus-blur restoration algorithm based on spectrum preprocessing and an improved Hough transform restores defocus-blurred images better. The restoration method is robust and practical, and constitutes an important improvement on the prior art.
Drawings
To describe the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow chart of a method according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings in conjunction with specific embodiments.
This embodiment explains the principle of image restoration for the small unmanned aerial vehicle. The computation follows the sequence of blur-type identification, motion-blur restoration, atmospheric-blur restoration and defocus-blur restoration, with optimization and improvement focused on the restoration process for small-unmanned-aerial-vehicle images. The specific steps are as follows:
Step one: identify the small-unmanned-aerial-vehicle blurred image with a blur-type identification algorithm based on a convolutional neural network to obtain its blur type. The blur type is one of motion blur, atmospheric blur and defocus blur; motion-blurred images proceed to step two, atmospheric-blurred images to step three, and defocus-blurred images to step four;
(101) carrying out fast Fourier transform on the small unmanned aerial vehicle blurred image to obtain a spectrogram of the blurred image;
(102) carrying out feature extraction on the spectrogram of the blurred image by using a trained convolutional neural network model to obtain a feature map;
(103) and inputting the feature map into a Softmax classifier to obtain the type of the blurred image.
Step two: restore images whose blur type is motion blur with a blind motion-blur restoration algorithm under mixed-characteristic regularization constraints to obtain a clear restored image;
(201) extract and sharpen the edge information of the small-unmanned-aerial-vehicle motion-blurred image with an edge detection algorithm based on conformal monogenic phase consistency and a shock filter, obtaining sharpened edge details;
(202) The blur kernel model is constructed as:
min_k ‖∇S ⊗ k − ∇B‖₂² + λ_k1‖k‖₁ + λ_k2‖∇k‖₂² + λ_k3‖∇²k‖₂²
where L is the sharp image, B is the motion-blurred image, ∇B is the first-order partial derivative of the motion-blurred image, k is the blur kernel, ∇S denotes the sharpened edge details, ‖·‖₁ and ‖·‖₂ denote the L1 and L2 norms, ∇k and ∇²k are the first- and second-order partial derivatives of the blur kernel, λ_k1‖k‖₁ is the sparse regularization term, λ_k2‖∇k‖₂² + λ_k3‖∇²k‖₂² is the smoothing regularization term, and λ_k1, λ_k2, λ_k3 are the parameters of the respective regularization terms;
(203) the blur kernel is corrected according to the non-negativity of its pixels and the energy-conservation property, giving the final blur kernel model:
k(i,j) = 0 if k(i,j) < μ·max(k(:)), and k(i,j) otherwise;  ∫k dxdy = 1
where (i,j) is a blur-kernel coordinate, max(k(:)) is the maximum pixel value of the blur kernel, μ is a threshold coefficient, and ∫k dxdy = 1 denotes normalization of the blur kernel;
(204) the restored image model is constructed as:
min_L ‖k ⊗ L − B‖₂² + λ_L1‖∇L‖_α^α + λ_L2‖∇L − ∇SF(L*)‖₂²
where λ_L1‖∇L‖_α^α is the hyper-Laplacian prior term, λ_L2‖∇L − ∇SF(L*)‖₂² is the edge-preserving regularization term, and SF(L*) is the sharp image processed by the shock filter;
(205) the restored image is corrected for the non-negativity of its pixels, giving the final restored image model: L(x,y) = 0 if L(x,y) < 0, and L(x,y) otherwise;
(206) construct a linear multi-scale pyramid for the motion-blurred unmanned-aerial-vehicle image. At each resolution layer, solve the final blur kernel model and the restored image model by optimization with a half-quadratic variable-splitting strategy until the iteration count is reached, giving the blur kernel and restored image of that layer; from these per-layer solutions, obtain the clear restored image of the motion-blurred image.
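Step (206) rests on half-quadratic variable splitting. As a self-contained toy (ours, not the patent's solver), the 1-D problem min_L ‖L − B‖² + λ‖DL‖₁ can be split with an auxiliary variable w ≈ DL: w is updated by soft-thresholding and L by a circular least-squares solve in the Fourier domain. The patent applies the same alternation to the full blur-kernel and image models at every pyramid level:

```python
import numpy as np

def hqs_tv_denoise(B, lam=0.5, beta=1.0, iters=50):
    """Half-quadratic splitting for min_L ||L-B||^2 + lam*||D L||_1 (1-D).

    D is a circular forward difference.  Each iteration alternates:
      w-step: soft-threshold D L      (closed form for the L1 term)
      L-step: FFT-domain linear solve (closed form for the quadratic)
    """
    n = len(B)
    d = np.zeros(n); d[0] = -1.0; d[1] = 1.0         # difference kernel
    Dh = np.fft.fft(d)
    Bh = np.fft.fft(B)
    L = np.asarray(B, dtype=float).copy()
    for _ in range(iters):
        g = np.real(np.fft.ifft(Dh * np.fft.fft(L)))              # D L
        w = np.sign(g) * np.maximum(np.abs(g) - lam / (2 * beta), 0.0)
        Lh = (Bh + beta * np.conj(Dh) * np.fft.fft(w)) / (1.0 + beta * np.abs(Dh) ** 2)
        L = np.real(np.fft.ifft(Lh))
    return L
```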
Compared with other edge-extraction and sharpening strategies, the edge detection algorithm based on conformal monogenic phase consistency is more robust and faster, and is better suited to estimating the fidelity term of the blur kernel model. The L1 norm is the tightest convex approximation of the L0 norm; both enforce sparsity well, but L0-norm optimization is complex, so the L1 norm is used as the sparsity operator of the blur kernel model, improving the running speed of the algorithm. Compared with the L1 norm, the L2 norm improves the generalization of the model and prevents overfitting. Combining the first- and second-order partial-derivative characteristics of the image, the L2 norms of the first- and second-order partial derivatives of the blur kernel form a multi-mixed regularization term for the smoothness constraint, strengthening the smoothness constraint while further suppressing outliers in the blur kernel. Compared with the double-regularization blur kernel models with sparse-smooth characteristics in other algorithms, this improves the smoothness and fidelity regularization terms of the blur kernel model, and strengthens the noise resistance of the kernel while keeping its estimation accurate.
Among the Gaussian, Laplacian and hyper-Laplacian distributions, the hyper-Laplacian fits best, so the image regularization term is built from a hyper-Laplacian prior in the image-model estimation stage, giving richer edge detail in the restored image. The edge-preserving regularization term compensates for the blurring effect of the total-variation model and effectively addresses the inability of existing methods to recover a high-quality clear image; the smaller the edge-preserving term, the closer the edges of the restored image are to sharp, clear image edges.
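The SF(·) operator in the edge-preserving term is a shock filter; a minimal 1-D Osher-Rudin-style sketch (the patent does not give its discretization) shows the steepening behaviour the regularizer exploits:

```python
import numpy as np

def shock_filter_1d(u, dt=0.1, iters=20):
    """Minimal 1-D shock filter: u_t = -sign(u_xx) * |u_x|.

    Moving each sample against the sign of the local second derivative
    steepens smooth transitions into sharp edges, which is why the
    edge-preserving term pulls the restored image toward SF(L*).
    """
    u = np.asarray(u, dtype=float).copy()
    for _ in range(iters):
        g = np.gradient(u)                           # u_x (one-sided at ends)
        lap = np.gradient(g)                         # u_xx
        u -= dt * np.sign(lap) * np.abs(g)
    return u
```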
Step three: restore images whose blur type is atmospheric blur with an image defogging algorithm based on a mixed prior and guided filtering to obtain a clear restored image;
(301) obtain the dark channel map I_dark(x) and the depth map d_r(x) of the small-unmanned-aerial-vehicle atmospheric blurred image:
I_dark(x) = min_{y∈Ω(x)} min_{c∈{r,g,b}} I^c(y)
d_r(x) = θ₀ + θ₁·I_v(x) + θ₂·I_s(x) + ε
where I^c is a color channel of the blurred image I in RGB color space, Ω(x) is a local region centered at x, I_v and I_s are the brightness and saturation channels in HSV color space, ε is the random error of the depth map represented by a random variable, and θ₀, θ₁, θ₂ are linear coefficients;
(302) redefine the region containing each pixel with a double-constraint region segmentation method, dividing the image into four pixel regions:
I₁: I_dark(x) ≥ α and I_v(x) ≥ β (highlight dense-fog region)
I₂: I_dark(x) ≥ α and I_v(x) < β (non-highlight dense-fog region)
I₃: I_dark(x) < α and I_v(x) ≥ β (highlight non-dense-fog region)
I₄: I_dark(x) < α and I_v(x) < β (non-highlight non-dense-fog region)
where α and β are the region division thresholds;
(303) a new atmospheric light value estimation method is proposed: first extract the pixel coordinates of the top 0.1% of pixels by brightness in the dark channel map and in the depth map; then compare the coordinates extracted from the two maps, keeping a coordinate only if it appears in both and removing it otherwise; finally, take the highest-brightness value at the retained coordinates in the small-unmanned-aerial-vehicle atmospheric blurred image as the atmospheric light value A;
(304) a mixed prior strategy is proposed for estimating the atmospheric transmittance; a coarse atmospheric transmittance map is constructed with the mixed prior model:
t(x) = m·t_dark(x) + n·t_color(x),  n = 1 − m
where t_dark is the coarse transmittance from the dark channel prior, t_color is the coarse transmittance from the color attenuation prior, and m and n are the mixed prior coefficients:
t_dark(x) = 1 − ω·min_{y∈Ω(x)} min_c ( I^c(y) / A )
t_color(x) = e^{−η·d(x)}
where ω is the fidelity coefficient and η is the atmospheric scattering coefficient;
(305) refine the coarse atmospheric transmittance map with a guided filtering algorithm to obtain the fine atmospheric transmittance map t′(x);
(306) substitute the atmospheric light value and the fine atmospheric transmittance map into the atmospheric scattering model to obtain the clear restored image of the atmospheric blurred image:
J(x) = ( I(x) − A ) / min( max( t′(x), t₀ ), t₁ ) + A
where, to avoid image distortion caused by t′(x) being too large or too small, the parameters t₀ and t₁ are introduced to restrict it.
The dark channel prior is not suited to highlight regions but handles dense-fog regions better than other prior algorithms, while the color attenuation prior is not suited to dense-fog regions but resolves restoration distortion in highlight regions well, so the two are strongly complementary. The two priors also share similar prior characteristics and essentially the same implementation approach, so combining them is highly realizable.
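The complementary priors above are blended in step (304). A sketch, assuming a scalar atmospheric light A and a small min-filter window; the weights m, ω, η and the patch size are illustrative values, not ones the patent fixes:

```python
import numpy as np

def mixed_prior_transmission(I, d, A, omega=0.95, eta=1.0, m=0.5, patch=3):
    """Coarse transmittance t = m*t_dark + (1-m)*t_color.

    t_dark follows the dark channel prior, 1 - omega * min-filter(I/A);
    t_color follows the color attenuation prior, exp(-eta * d), with d
    the depth map of step (301).
    """
    h, w, _ = I.shape
    r = patch // 2
    norm = I / A                                     # scale by atmospheric light
    t_dark = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            win = norm[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1, :]
            t_dark[i, j] = 1.0 - omega * win.min()   # dark channel of the patch
    t_color = np.exp(-eta * d)
    return m * t_dark + (1.0 - m) * t_color
```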
The magnitude of the dark channel value approximately describes the fog density in the image: the larger the value, the denser the fog. The brightness channel value directly represents image brightness: the larger the value, the brighter the pixel. The method therefore divides the image into highlight and dense-fog regions with a double constraint formed from the dark channel map and the brightness channel map, which has a clear theoretical basis.
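The double constraint described above can be written directly from the two maps. The sketch below labels pixels 1-4 for I₁-I₄; the ≥/< orientation is our reading of the description (large dark-channel value implies dense fog, large brightness implies highlight):

```python
import numpy as np

def segment_regions(dark, v, alpha, beta):
    """Four-way region labels from dark-channel and brightness maps.

    1: highlight dense fog      (dark >= alpha, v >= beta)
    2: non-highlight dense fog  (dark >= alpha, v <  beta)
    3: highlight non-dense fog  (dark <  alpha, v >= beta)
    4: non-highlight non-dense fog
    """
    fog = dark >= alpha
    bright = v >= beta
    labels = np.full(dark.shape, 4, dtype=int)
    labels[fog & bright] = 1
    labels[fog & ~bright] = 2
    labels[~fog & bright] = 3
    return labels
```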
Step four: restore images whose blur type is defocus blur with a blind defocus-blur restoration algorithm based on spectrum preprocessing and an improved Hough transform to obtain a clear restored image;
(401) carrying out fast two-dimensional Fourier transform on the small unmanned aerial vehicle out-of-focus blurred image to obtain a spectrogram of the blurred image;
(402) estimating the pixel transition region from the gray value curve of the spectrogram, and calculating the binarization threshold from the pixel mean values of the transition region along the four diagonal directions of ±45° and ±135°;
(403) carrying out binarization processing and morphological filtering on the frequency spectrum image to obtain a frequency spectrum binary image;
(404) extracting edge information of the spectrum binary image by using a Canny edge detection algorithm to obtain edge details of the spectrum binary image;
(405) calculating the distances from all the edge points to the central point, and storing the distances in the set D;
(406) taking the center point as the center of a circle and taking any distance selected from the D as a radius to make a circle to obtain a candidate circle;
(407) judging whether the number of edge points on the candidate circle is greater than a threshold value, if so, determining the candidate circle as a zero point circle, and keeping the corresponding radius;
(408) repeating steps (406) to (407) until the set D has been fully traversed or the number of zero point circles reaches a preset maximum;
(409) estimating a blur kernel of the defocused blurred image by using an adjacent zero point ratio method for the two detected adjacent zero point circles;
(410) performing iterative restoration on the out-of-focus blurred image by using the restored image model of step (204) to obtain a clear restored image of the out-of-focus blurred image.
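Steps (405) to (408) amount to a radius-only Hough vote over the spectrum edge points; a minimal sketch under assumed names (the voting tolerance `tol` is my own addition):

```python
import numpy as np

def detect_zero_circles(edge_points, center, vote_thresh, tol=1.0, max_circles=10):
    """Radius-only Hough vote: each edge point's distance to the spectrum
    centre is a candidate radius (the set D); a radius lying within `tol`
    of more than `vote_thresh` edge points is kept as a zero-point circle."""
    pts = np.asarray(edge_points, dtype=float)
    d = np.hypot(pts[:, 0] - center[0], pts[:, 1] - center[1])  # the set D
    radii = []
    for r in np.unique(np.round(d)):                    # candidate circles
        if np.sum(np.abs(d - r) <= tol) > vote_thresh:  # enough edge points?
            radii.append(float(r))
        if len(radii) >= max_circles:                   # stop condition
            break
    return sorted(radii)
```

With two adjacent radii returned, the subsequent step estimates the defocus blur kernel from their ratio.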
This completes the restoration of the small unmanned aerial vehicle blurred image.
Those of ordinary skill in the art will understand that: the discussion of any embodiment above is meant to be exemplary only, and is not intended to intimate that the scope of the disclosure, including the claims, is limited to these examples. Any omissions, modifications, substitutions, improvements and the like in the foregoing embodiments are intended to be included within the scope of the present invention within the spirit and principle of the present invention.
Claims (2)
1. A method for restoring a blurred image of a small unmanned aerial vehicle is applied to an image of the small unmanned aerial vehicle, and is characterized by comprising the following steps:
step one: identifying the blur type of the small unmanned aerial vehicle blurred image by using an unmanned aerial vehicle image blur type identification algorithm based on a convolutional neural network, wherein the blur types comprise motion blurred images, atmosphere blurred images and out-of-focus blurred images; step two is executed for a motion blurred image, step three for an atmosphere blurred image, and step four for an out-of-focus blurred image;
step two: restoring the image with the fuzzy type of motion blur by using a motion blur image blind restoration algorithm with mixed characteristic regularization constraint to obtain a clear restored image;
step three: restoring the image with the fuzzy type of atmospheric fuzzy by using an image defogging algorithm based on mixed prior and guided filtering to obtain a clear restored image;
step four: restoring the image with the fuzzy type of out-of-focus fuzzy by using an out-of-focus fuzzy blind restoration algorithm based on frequency spectrum preprocessing and improved Hough transform to obtain a clear restored image;
wherein the second step specifically comprises the following steps:
(201) extracting and sharpening edge information of the motion blurred image by using an edge detection algorithm based on conformal monogenic phase consistency together with a shock filter, to obtain sharpened edge details;
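The patent's sharpening combines conformal monogenic phase consistency with a shock filter; the shock-filter half alone can be sketched as follows (a minimal Osher–Rudin-style iteration, not the patent's full edge detector; names and step sizes are my assumptions):

```python
import numpy as np

def shock_filter(img, iters=10, dt=0.1):
    """Minimal shock filter: each pixel evolves against the sign of the
    Laplacian, so blurred edges steepen toward step edges with iteration."""
    I = img.astype(float).copy()
    for _ in range(iters):
        Ix = np.gradient(I, axis=1)          # central differences
        Iy = np.gradient(I, axis=0)
        grad = np.hypot(Ix, Iy)              # gradient magnitude
        # 5-point Laplacian (periodic wrap via np.roll, adequate for a sketch)
        lap = (np.roll(I, 1, 0) + np.roll(I, -1, 0) +
               np.roll(I, 1, 1) + np.roll(I, -1, 1) - 4 * I)
        I -= dt * np.sign(lap) * grad        # move toward the nearer edge side
    return I
```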
(202) the blur kernel model is constructed as follows:
wherein L is the sharp image, B is the motion blurred image, ∇B is the first-order partial derivative of the motion blurred image, k is the blur kernel, the sharpened edge details supply the data term, ||·||1 and ||·||2 denote the L1 and L2 norms respectively, ∇k and ∇²k denote the first- and second-order partial derivatives of the blur kernel and form the sparse regularization term and the smoothing regularization term respectively, and λk1, λk2, λk3 are the weights of the respective regularization terms;
(203) correcting the blur kernel according to the non-negativity of its pixels and the energy conservation property to obtain the final blur kernel model:
∫k dxdy = 1
wherein (i, j) is a blur kernel coordinate, max(k(:)) is the maximum pixel value of the blur kernel, μ is a threshold coefficient, and ∫k dxdy = 1 denotes normalization of the blur kernel;
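The kernel correction of step (203) is simple enough to state directly; a NumPy sketch (function name and default μ are my own):

```python
import numpy as np

def correct_kernel(k, mu=0.05):
    """Blur-kernel correction: entries below mu * max(k) (including any
    negative ones) are treated as noise and zeroed, then the kernel is
    renormalised so it sums to 1 (discrete form of ∫k dxdy = 1)."""
    k = np.where(k < mu * k.max(), 0.0, k)
    s = k.sum()
    return k / s if s > 0 else k
```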
(204) the restored image model is constructed as follows:
wherein the model comprises a hyper-Laplacian prior term and an edge-preserving regularization term, and SF(L*) is the sharp image processed by a shock filter;
(205) correcting the non-negative characteristic of the restored image to obtain a final restored image model:
(206) constructing a linear multi-scale pyramid for the unmanned aerial vehicle motion blurred image; at each resolution layer, solving the final blur kernel model and the restored image model with a half-quadratic variable splitting strategy until the iteration count is reached, to obtain that layer's blur kernel and restored image; and obtaining the clear restored image of the motion blurred image from the per-layer solutions of the blur kernel and the restored image;
wherein the third step specifically comprises the following steps:
(301) obtaining a dark channel map I_dark(x) and a depth map d_r(x) of the atmosphere blurred image,
wherein I^c is a color channel of the atmospheric blurred image I in RGB color space, Ω(x) is a region centered at x, I^v and I^s are the brightness channel and the saturation channel in HSV color space respectively, ε is the random error of the depth map, represented by a random variable, and θ0, θ1, θ2 are linear coefficients;
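The two formulas of step (301) appear only as images in the source; the standard dark channel prior (He et al.) and color attenuation prior (Zhu et al.) definitions, which match the symbols listed here, are:

```latex
I^{dark}(x) = \min_{y \in \Omega(x)} \Bigl( \min_{c \in \{r,g,b\}} I^{c}(y) \Bigr),
\qquad
d_r(x) = \theta_0 + \theta_1\, I^{v}(x) + \theta_2\, I^{s}(x) + \varepsilon
```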
(302) redefining the region where the pixel points are located by using a double-constraint region segmentation method, and dividing four pixel regions, which is specifically described as follows:
wherein I1 is the highlight dense-fog region, I2 the non-highlight dense-fog region, I3 the highlight non-dense-fog region, I4 the non-highlight non-dense-fog region, and α and β are the region division thresholds;
(303) extracting the pixel coordinates of the top 0.1% brightest pixels in the dark channel map and in the depth map respectively, comparing the two sets of coordinates and retaining only those present in both, and finally taking the highest-brightness value of the atmosphere blurred image at the retained coordinates as the atmospheric light value A;
(304) constructing a coarse atmospheric transmittance map by using the mixed prior model:
t(x) = m·t_dark(x) + n·t_color(x)
n = 1 − m
wherein t_dark is the coarse transmittance obtained from the dark channel prior, t_color is the coarse transmittance obtained from the color attenuation prior, and m and n are the mixed prior coefficients,
t_color(x) = e^(−η·d(x))
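A pixel-wise sketch of the mixed-prior transmission of step (304); the full method takes the channel minimum over a local patch Ω(x), and all names and defaults here are my assumptions:

```python
import numpy as np

def hybrid_transmission(img, A, depth, m=0.5, omega=0.95, eta=1.0):
    """Mixed-prior coarse transmission:
    t_dark  = 1 - omega * min_c(I^c / A^c)   (dark channel prior)
    t_color = exp(-eta * d)                  (color attenuation prior)
    t = m * t_dark + n * t_color, with n = 1 - m."""
    t_dark = 1.0 - omega * (img / A).min(axis=2)   # channel-wise minimum
    t_color = np.exp(-eta * depth)
    return m * t_dark + (1.0 - m) * t_color
```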
wherein ω is the fidelity coefficient and η is the atmospheric scattering coefficient;
(305) optimizing the coarse atmospheric transmittance map by using a guided filtering algorithm to obtain a fine atmospheric transmittance map t′(x);
(306) substituting the atmospheric light value and the fine atmospheric transmittance map into an atmospheric scattering model to obtain a clear restored image of the atmospheric blurred image:
wherein t0 and t1 are introduced clamping parameters;
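The refinement of step (305) uses a guided filter (He et al.); a grey-scale sketch with an integral-image box mean, assuming a single-channel guidance image I and coarse map p (names and defaults are mine):

```python
import numpy as np

def box(x, r):
    """Mean of x over a (2r+1)^2 window, via an integral image with
    edge-replicated padding."""
    xp = np.pad(x, r, mode='edge')
    c = np.cumsum(np.cumsum(xp, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))            # zero row/column for window sums
    n = 2 * r + 1
    return (c[n:, n:] - c[:-n, n:] - c[n:, :-n] + c[:-n, :-n]) / n ** 2

def guided_filter(I, p, r=8, eps=1e-3):
    """Grey-scale guided filter: refine the coarse map p so that its edges
    follow the guidance image I, as in the transmittance refinement."""
    mI, mp = box(I, r), box(p, r)
    cov = box(I * p, r) - mI * mp              # per-window covariance of (I, p)
    var = box(I * I, r) - mI * mI              # per-window variance of I
    a = cov / (var + eps)
    b = mp - a * mI
    return box(a, r) * I + box(b, r)           # locally linear output
```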
wherein the fourth step specifically comprises the following steps:
(401) fast two-dimensional Fourier transform is carried out on the defocused blurred image to obtain a spectrogram of the defocused blurred image;
(402) estimating the pixel transition region from the gray value curve of the spectrogram, and calculating the binarization threshold from the pixel mean values of the transition region along the four diagonal directions of ±45° and ±135°;
(403) carrying out binarization processing and morphological filtering on the frequency spectrum image to obtain a frequency spectrum binary image;
(404) extracting edge information of the spectrum binary image by using a Canny edge detection algorithm to obtain edge details of the spectrum binary image;
(405) calculating the distances from all the edge points to the central point, and storing the distances in the set D;
(406) taking the center point as the center of a circle and taking any distance selected from the D as a radius to make a circle to obtain a candidate circle;
(407) judging whether the number of edge points on the candidate circle is greater than a threshold value, if so, determining the candidate circle as a zero point circle, and keeping the corresponding radius;
(408) repeating steps (406) to (407) until the set D has been fully traversed or the number of zero point circles reaches a preset maximum;
(409) estimating a blur kernel of the defocused blurred image by using an adjacent zero point ratio method for the two detected adjacent zero point circles;
(410) performing iterative restoration on the out-of-focus blurred image by using the restoration image model in the step (204) to obtain a clear restoration image of the out-of-focus blurred image;
thereby completing the restoration of the small unmanned aerial vehicle blurred image.
2. The unmanned aerial vehicle blurred image restoration method according to claim 1, wherein the first step specifically comprises the following steps:
(101) carrying out fast Fourier transform on the small unmanned aerial vehicle blurred image to obtain a spectrogram of the blurred image;
(102) carrying out feature extraction on the spectrogram of the blurred image by using a trained convolutional neural network model to obtain a feature map;
(103) and inputting the feature map into a classifier to obtain the fuzzy type.
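The spectrogram input of step (101) can be sketched as follows (the normalisation choice and function name are my assumptions; the trained CNN of steps (102)–(103) is not reproduced here):

```python
import numpy as np

def blur_spectrogram(img):
    """Centred log-magnitude spectrum normalised to [0, 1] - the input
    image for the blur-type CNN. Motion blur yields parallel dark stripes,
    defocus blur concentric dark rings, and atmospheric blur neither."""
    F = np.fft.fftshift(np.fft.fft2(img))      # DC moved to the centre
    mag = np.log1p(np.abs(F))                  # compress the dynamic range
    return (mag - mag.min()) / (mag.max() - mag.min() + 1e-12)
```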
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810471510.5A CN108765325B (en) | 2018-05-17 | 2018-05-17 | Small unmanned aerial vehicle blurred image restoration method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810471510.5A CN108765325B (en) | 2018-05-17 | 2018-05-17 | Small unmanned aerial vehicle blurred image restoration method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108765325A CN108765325A (en) | 2018-11-06 |
CN108765325B true CN108765325B (en) | 2021-06-29 |
Family
ID=64006814
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810471510.5A Active CN108765325B (en) | 2018-05-17 | 2018-05-17 | Small unmanned aerial vehicle blurred image restoration method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108765325B (en) |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109521556A (en) * | 2018-12-07 | 2019-03-26 | 歌尔科技有限公司 | A kind of electron microscopic wearable device |
CN109584186A (en) * | 2018-12-25 | 2019-04-05 | 西北工业大学 | A kind of unmanned aerial vehicle onboard image defogging method and device |
CN110097521B (en) * | 2019-05-08 | 2023-02-28 | 华南理工大学 | Convolution neural network image restoration method for reflective metal visual detection |
CN110288550B * | 2019-06-28 | 2020-04-24 | PLA Rocket Force University of Engineering | Single-image defogging method for generating countermeasure network based on priori knowledge guiding condition |
CN110400312A (en) * | 2019-07-31 | 2019-11-01 | 北京金山云网络技术有限公司 | Determine the method, apparatus and server of image vague category identifier |
CN110648291B (en) * | 2019-09-10 | 2023-03-03 | 武汉科技大学 | Unmanned aerial vehicle motion blurred image restoration method based on deep learning |
CN110676753B * | 2019-10-14 | 2020-06-23 | Ningxia Baichuan Electric Power Co., Ltd. | Intelligent inspection robot for power transmission line |
CN110874826B (en) * | 2019-11-18 | 2020-07-31 | 北京邮电大学 | Workpiece image defogging method and device applied to ion beam precise film coating |
CN110895141A (en) * | 2019-11-28 | 2020-03-20 | 梁彦云 | Residential space crowding degree analysis platform |
CN111717406B (en) * | 2020-06-17 | 2021-10-01 | 中国人民解放军陆军工程大学 | Unmanned aerial vehicle image acquisition system |
CN111881982A (en) * | 2020-07-30 | 2020-11-03 | 北京环境特性研究所 | Unmanned aerial vehicle target identification method |
CN112465777A (en) * | 2020-11-26 | 2021-03-09 | 华能通辽风力发电有限公司 | Fan blade surface defect identification technology based on video stream |
CN113191982B (en) * | 2021-05-14 | 2024-05-28 | 北京工业大学 | Single image defogging method based on morphological reconstruction and saturation compensation |
CN113822823B (en) * | 2021-11-17 | 2022-03-15 | 武汉工程大学 | Point neighbor restoration method and system for aerodynamic optical effect image space-variant fuzzy core |
CN114842366B (en) * | 2022-07-05 | 2022-09-16 | 山东中宇航空科技发展有限公司 | Stability identification method for agricultural plant protection unmanned aerial vehicle |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2009112710A2 (en) * | 2008-02-11 | 2009-09-17 | Realeyes3D | Method of restoring a blurred image acquired by means of a camera fitted to a communication terminal |
CN104331871A (en) * | 2014-12-02 | 2015-02-04 | 苏州大学 | Image de-blurring method and image de-blurring device |
CN106251301A (en) * | 2016-07-26 | 2016-12-21 | 北京工业大学 | A kind of single image defogging method based on dark primary priori |
CN107369134A (en) * | 2017-06-12 | 2017-11-21 | 上海斐讯数据通信技术有限公司 | A kind of image recovery method of blurred picture |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2009112710A2 (en) * | 2008-02-11 | 2009-09-17 | Realeyes3D | Method of restoring a blurred image acquired by means of a camera fitted to a communication terminal |
CN104331871A (en) * | 2014-12-02 | 2015-02-04 | 苏州大学 | Image de-blurring method and image de-blurring device |
CN106251301A (en) * | 2016-07-26 | 2016-12-21 | 北京工业大学 | A kind of single image defogging method based on dark primary priori |
CN107369134A (en) * | 2017-06-12 | 2017-11-21 | 上海斐讯数据通信技术有限公司 | A kind of image recovery method of blurred picture |
Non-Patent Citations (1)
Title |
---|
Blind restoration of camera-shake blurred images based on L0 sparse prior; Qiu Xiang et al.; Optics and Precision Engineering; 2017-09-15; Vol. 25, No. 9; pp. 2490-2498 *
Also Published As
Publication number | Publication date |
---|---|
CN108765325A (en) | 2018-11-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108765325B (en) | Small unmanned aerial vehicle blurred image restoration method | |
Fu et al. | Removing rain from single images via a deep detail network | |
CN107680054B (en) | Multi-source image fusion method in haze environment | |
CN106157267B (en) | Image defogging transmissivity optimization method based on dark channel prior | |
Gao et al. | Sand-dust image restoration based on reversing the blue channel prior | |
CN111079556A (en) | Multi-temporal unmanned aerial vehicle video image change area detection and classification method | |
CN106204509B (en) | Infrared and visible light image fusion method based on regional characteristics | |
CN106097256B (en) | A kind of video image fuzziness detection method based on Image Blind deblurring | |
CN109377450B (en) | Edge protection denoising method | |
CN114118144A (en) | Anti-interference accurate aerial remote sensing image shadow detection method | |
Yu et al. | Image and video dehazing using view-based cluster segmentation | |
Wang et al. | An efficient method for image dehazing | |
Yousaf et al. | Single Image Dehazing and Edge Preservation Based on the Dark Channel Probability‐Weighted Moments | |
Xiao et al. | Single image rain removal based on depth of field and sparse coding | |
CN112419163B (en) | Single image weak supervision defogging method based on priori knowledge and deep learning | |
CN109635809B (en) | Super-pixel segmentation method for visual degradation image | |
CN113421210B (en) | Surface point Yun Chong construction method based on binocular stereoscopic vision | |
CN105608683B (en) | A kind of single image to the fog method | |
Du et al. | Perceptually optimized generative adversarial network for single image dehazing | |
Kumari et al. | A new fast and efficient dehazing and defogging algorithm for single remote sensing images | |
Cheon et al. | A modified steering kernel filter for AWGN removal based on kernel similarity | |
Xie et al. | DHD-Net: A novel deep-learning-based dehazing network | |
Prasenan et al. | A Study of Underwater Image Pre-processing and Techniques | |
CN110647843B (en) | Face image processing method | |
Wang et al. | Adaptive Bright and Dark Channel Combined with Defogging Algorithm Based on Depth of Field |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||