CN110599419A - Image denoising method for preventing loss of image edge information - Google Patents


Info

Publication number
CN110599419A
Authority
CN
China
Prior art keywords: image, layer, denoising, neural network, network model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910846861.4A
Other languages
Chinese (zh)
Inventor
Tan Jieqing (檀结庆)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei University of Technology
Hefei Polytechnic University
Original Assignee
Hefei Polytechnic University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei Polytechnic University
Priority to CN201910846861.4A
Publication of CN110599419A
Current legal status: Pending

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to an image denoising method for preventing the loss of image edge information, which overcomes the prior-art defect that details are easily lost during denoising. The invention comprises the following steps: constructing and training a denoising convolutional neural network model; acquiring a noise image; and obtaining a denoised result. Using the neural network model, the invention achieves a good image denoising effect, retains more high-frequency image detail, better matches the visual mechanism of the human eye, and improves the quality and visual effect of the image.

Description

Image denoising method for preventing loss of image edge information
Technical Field
The invention relates to the technical field of image processing, in particular to an image denoising method for preventing image edge information from being lost.
Background
With the popularization of digital instruments and digital products, images and videos have become the most common information carriers in human activities. Because they contain a large amount of information about objects, they are the main way people obtain original information about the outside world. However, images are often disturbed by various kinds of noise during acquisition, transmission, and storage, and the quality of the image preprocessing algorithm directly affects subsequent processing such as image segmentation, target recognition, and edge extraction. To obtain a high-quality digital image, it is therefore necessary to perform noise reduction on the image.
Denoising should remove the useless information in the signal while preserving the integrity (i.e., the main characteristics) of the original information as much as possible. Among existing denoising algorithms, some achieve good results on low-dimensional signals but are unsuitable for high-dimensional signal and image processing; some denoise well but lose part of the image's edge information; and others focus on detecting edge information and retaining image detail at the expense of noise removal. How to strike a better balance between noise suppression and detail preservation has therefore become a central difficulty in recent research.
Disclosure of Invention
The invention aims to overcome the prior-art defect that details are easily lost during denoising, and provides an image denoising method that prevents the loss of image edge information.
In order to achieve the purpose, the technical scheme of the invention is as follows:
an image denoising method for preventing image edge information loss comprises the following steps:
constructing and training a denoising convolutional neural network model: constructing a denoising convolutional neural network model, and training it with images from a standard training set;
acquiring a noise image: acquiring an image I containing noise;
obtaining the denoised result: inputting the noisy image I into the trained denoising convolutional neural network model; the first-layer bicubic interpolation function enlarges it by a factor of k to give an intermediate image I′, which is then sent through the second, third, and fourth layers for denoising to obtain the final denoised image O.
The method for constructing and training the denoising convolutional neural network model comprises the following steps:
setting a denoising convolutional neural network model as a four-layer structure, wherein the first layer is a bicubic interpolation function amplification layer, the second layer is a feature extraction layer, the third layer is a nonlinear mapping layer, and the fourth layer is a fusion pairing layer, wherein the second layer, the third layer and the fourth layer are convolutional layers;
for standard image library { R1,R2,L R91The images in (c) are randomly cropped to obtain 24800 image sets { R 'with the size of 32 x 32'1,R′2,L R′24800};
Set 32 x 32 images { R'1,R′2,L R′24800And inputting a denoising convolutional neural network model for training.
Inputting the 32 × 32 image set {R′1, R′2, …, R′24800} into the denoising convolutional neural network model for training comprises the following steps:
Down-sample the image set {R′1, R′2, …, R′24800}; after reduction by a factor of k, the image set {R″1, R″2, …, R″24800} is obtained;
Input the image set {R″1, R″2, …, R″24800} into the first layer of the denoising convolutional neural network model, and use the bicubic interpolation function to enlarge each down-sampled image by a factor of k in turn, obtaining the preprocessed, enlarged image set {R̂1, R̂2, …, R̂24800};
Send the preprocessed, enlarged image set {R̂1, R̂2, …, R̂24800} to the second layer of the denoising convolutional neural network model:
For an image in the input preprocessed, enlarged image set, denoted Y, compute the mapping F1 = max(0, W1 * Y + B1), where W1 and B1 denote the filters and bias respectively; W1 consists of 64 filters of spatial size 9 × 9, and B1 is a 64-dimensional vector;
inputting the high-dimensional vector representing the image block to the third layer for nonlinear mapping:
calculating F2 = max(0, W2 * F1 + B2), where W2 is the filter and B2 the bias; W2 consists of 32 filters of spatial size 1 × 1, and B2 is a 32-dimensional vector;
inputting the feature map set into a fourth layer for fusion pairing:
calculating F3 = W3 * F2 + B3, where W3 is the filter and B3 the bias; W3 is a single filter of spatial size 5 × 5, and B3 is a 1-dimensional vector;
obtaining the optimal values: the denoised images are evaluated against the original image set {R′1, R′2, …, R′24800}; when a denoised image is closest to its original, the corresponding filters and biases are optimal, i.e. the optimal filters {W1, W2, W3} and biases {B1, B2, B3} are obtained.
The optimal filters {W1, W2, W3} and biases {B1, B2, B3} are obtained by minimizing a loss function with the Nadam method; the specific steps are as follows:
the minimization loss function is expressed as follows:

L(Θ) = (1/n) · Σ_{i=1..n} ‖F(R″i; Θ) − R′i‖²,  n = 24800,

where F(R″i; Θ) is any member of the denoised image set, R′i is the corresponding image in the original high-resolution image set {R′1, R′2, …, R′24800}, and Θ = {W1, W2, W3, B1, B2, B3}.
The Nadam method is expressed as follows:

g_t = ∇Θ f(Θ_{t−1})
m_t = μ·m_{t−1} + (1 − μ)·g_t
n_t = v·n_{t−1} + (1 − v)·g_t²
m̂_t = m_t / (1 − μ^t),  n̂_t = n_t / (1 − v^t)
Θ_t = Θ_{t−1} − η·( μ·m̂_t + (1 − μ)·g_t / (1 − μ^t) ) / ( √n̂_t + ε )

where g_t is the gradient of the loss function f with respect to the convolutional network parameters Θ = {W1, W2, W3, B1, B2, B3}; t = 1, 2, 3, …; μ1 = 0.0008, μ2 = 0.001, μ3 = 0.0035, μ = 0.9, v = 0.999, v1 = 0.001, v2 = 0.01, v3 = 0.1; m_t and n_t are the first-order and second-order moment estimates of the gradient, both initialized to 0; m̂_t and n̂_t are the bias-corrected estimates of m_t and n_t; {B1, B2, B3} are initialized to 0; {W1, W2, W3} are initialized to identity matrices; and η = 0.0003 is the learning rate.
Advantageous effects
Compared with the prior art, the image denoising method for preventing the loss of image edge information achieves a good image denoising effect using the neural network model, retains more high-frequency image detail, better matches the visual mechanism of the human eye, and improves the quality and visual effect of the image.
The method first enlarges the image using bicubic interpolation and then trains the neural network model, so the convolutional neural network can learn more texture detail and produce a better denoising effect. This overcomes prior-art shortcomings such as poor high-frequency detail, poor texture and visual quality, and blurred edges in the denoised image.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIGS. 2a and 3a are the noise images to be processed;
FIGS. 2b and 3b are the images of FIGS. 2a and 3a after denoising with the conventional PCA-LPG method;
FIGS. 2c and 3c are the images of FIGS. 2a and 3a after denoising with the method of the present invention.
Detailed Description
So that the above-described features of the present invention can be clearly understood, the invention is described in more detail below with reference to specific embodiments, some of which are illustrated in the accompanying drawings:
as shown in fig. 1, the image denoising method for preventing loss of image edge information according to the present invention includes the following steps:
First, construct and train the denoising convolutional neural network model: construct the model and train it with images from the standard training set. This comprises the following steps:
(1) Set the denoising convolutional neural network model as a four-layer structure: the first layer is a bicubic interpolation amplification layer, the second layer a feature extraction layer, the third layer a nonlinear mapping layer, and the fourth layer a fusion pairing layer; the second, third, and fourth layers are convolutional layers. Because denoising is one sub-step of image processing and is usually followed by further processing, the fast and widely applicable bicubic interpolation method is combined with the convolutional neural network, so that denoising can be completed quickly without increasing overall processing time.
(2) For the standard image library {R1, R2, …, R91}, randomly crop the images to obtain a set of 24800 images of size 32 × 32, {R′1, R′2, …, R′24800}.
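For illustration only, the random cropping step can be sketched as follows with NumPy; the function name, the random-number seed, and the use of grayscale 2-D arrays are assumptions of this sketch, not part of the patent:

```python
import numpy as np

def random_crops(images, n_crops, size=32, seed=0):
    """Randomly crop n_crops patches of size x size from a list of 2-D images."""
    rng = np.random.default_rng(seed)
    patches = []
    for _ in range(n_crops):
        img = images[rng.integers(len(images))]    # pick a library image at random
        i = rng.integers(img.shape[0] - size + 1)  # top-left corner of the crop
        j = rng.integers(img.shape[1] - size + 1)
        patches.append(img[i:i + size, j:j + size])
    return np.stack(patches)

# e.g. a 91-image library would yield a 24800 x 32 x 32 training array:
# patches = random_crops(library, 24800)
```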
(3) Input the set of 32 × 32 images {R′1, R′2, …, R′24800} into the denoising convolutional neural network model for training. Training on a set of 24800 images guarantees the robustness of the algorithm and verifies the effectiveness of the method. By training on a large image set, the correlation between standard images and noise images can be found, yielding better mappings and parameters. Convolutional neural network models are usually used for image reconstruction; applying such a model to image denoising, in combination with bicubic interpolation, produces a better denoising effect.
A1) Down-sample the image set {R′1, R′2, …, R′24800}; after reduction by a factor of k, the image set {R″1, R″2, …, R″24800} is obtained.
A2) Input the image set {R″1, R″2, …, R″24800} into the first layer of the denoising convolutional neural network model, and use the bicubic interpolation function to enlarge each down-sampled image by a factor of k in turn, obtaining the preprocessed, enlarged image set {R̂1, R̂2, …, R̂24800}.
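A minimal sketch of the bicubic rescaling used in this step, written with NumPy. The Keys cubic convolution kernel with a = −0.5 is assumed here (a common choice); the patent does not specify the exact kernel, so this is illustrative only:

```python
import numpy as np

def cubic_kernel(x, a=-0.5):
    # Keys cubic convolution kernel; nonzero only for |x| < 2
    x = np.abs(x)
    out = np.zeros_like(x)
    m1, m2 = x <= 1, (x > 1) & (x < 2)
    out[m1] = (a + 2) * x[m1]**3 - (a + 3) * x[m1]**2 + 1
    out[m2] = a * x[m2]**3 - 5 * a * x[m2]**2 + 8 * a * x[m2] - 4 * a
    return out

def _zoom_rows(arr, k):
    # Resample axis 0 of a 2-D array by factor k (k > 1 enlarges, k < 1 shrinks)
    n = arr.shape[0]
    pos = (np.arange(int(round(n * k))) + 0.5) / k - 0.5  # output coords in input space
    base = np.floor(pos).astype(int)
    out = np.zeros((len(pos), arr.shape[1]))
    for t in range(-1, 3):                         # 4-tap cubic support
        idx = np.clip(base + t, 0, n - 1)          # replicate image borders
        out += cubic_kernel(pos - (base + t))[:, None] * arr[idx]
    return out

def bicubic_zoom(img, k):
    """Separable bicubic rescaling of a 2-D image by factor k."""
    return _zoom_rows(_zoom_rows(img, k).T, k).T
```

The down-sampling (factor 1/k) and the first-layer enlargement (factor k) of step A2 are then both calls to `bicubic_zoom` with reciprocal factors.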
A3) Send the preprocessed, enlarged image set {R̂1, R̂2, …, R̂24800} to the second layer of the denoising convolutional neural network model, which extracts image blocks from the preprocessed image set. Each block is represented as a high-dimensional vector containing a series of feature maps obtained by applying the filters W1 to the block.
For an image in the input preprocessed, enlarged image set, denoted Y, compute the mapping F1 = max(0, W1 * Y + B1), where W1 and B1 denote the filters and bias respectively; W1 consists of 64 filters of spatial size 9 × 9, and B1 is a 64-dimensional vector.
A4) Input the high-dimensional vectors representing the image blocks into the third layer for nonlinear mapping: each high-dimensional vector is mapped to another high-dimensional vector, which forms another set of feature maps obtained by combining the second layer's output with the filters W2.
Calculate F2 = max(0, W2 * F1 + B2), where W2 is the filter and B2 the bias; W2 consists of 32 filters of spatial size 1 × 1, and B2 is a 32-dimensional vector.
A5) Input the feature-map set into the fourth layer for fusion pairing: the image blocks corresponding to the high-dimensional vectors are fused together and reference-paired, and the third layer's output is combined with the filters W3 to obtain the mapping set and the denoised image.
Calculate F3 = W3 * F2 + B3, where W3 is the filter and B3 the bias; W3 is a single filter of spatial size 5 × 5, and B3 is a 1-dimensional vector.
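As an illustrative sketch only (random weights in NumPy, not the trained parameters of the patent), the three convolutional layers F1, F2, F3 can be chained as follows; 'same' zero-padding, an assumption of this sketch, keeps the 32 × 32 patch size through all layers:

```python
import numpy as np

def conv2d_same(x, W, b):
    """Naive 'same' convolution: x (C_in, H, W), W (C_out, C_in, k, k), b (C_out,)."""
    c_out, c_in, k, _ = W.shape
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))  # zero-pad spatial dims
    H, Wd = x.shape[1:]
    out = np.zeros((c_out, H, Wd))
    for o in range(c_out):
        for c in range(c_in):
            for i in range(k):
                for j in range(k):
                    out[o] += W[o, c, i, j] * xp[c, i:i + H, j:j + Wd]
        out[o] += b[o]
    return out

rng = np.random.default_rng(0)
Y = rng.random((1, 32, 32))                                  # one grayscale patch
W1, B1 = rng.normal(0, 0.01, (64, 1, 9, 9)), np.zeros(64)    # 64 filters, 9 x 9
W2, B2 = rng.normal(0, 0.01, (32, 64, 1, 1)), np.zeros(32)   # 32 filters, 1 x 1
W3, B3 = rng.normal(0, 0.01, (1, 32, 5, 5)), np.zeros(1)     # 1 filter, 5 x 5

F1 = np.maximum(0, conv2d_same(Y, W1, B1))   # feature extraction
F2 = np.maximum(0, conv2d_same(F1, W2, B2))  # nonlinear mapping
F3 = conv2d_same(F2, W3, B3)                 # fusion: denoised patch
```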
A6) Obtain the optimal values: the denoised images are evaluated against the original image set {R′1, R′2, …, R′24800}; when a denoised image is closest to its original, the corresponding filters and biases are optimal, i.e. the optimal filters {W1, W2, W3} and biases {B1, B2, B3} are obtained.
Here, a loss function is minimized with the Nadam method to obtain the optimal filters {W1, W2, W3} and biases {B1, B2, B3}:
the minimization loss function is expressed as follows:

L(Θ) = (1/n) · Σ_{i=1..n} ‖F(R″i; Θ) − R′i‖²,  n = 24800,

where F(R″i; Θ) is any member of the denoised image set, R′i is the corresponding image in the original high-resolution image set {R′1, R′2, …, R′24800}, and Θ = {W1, W2, W3, B1, B2, B3};
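The loss above is a plain mean squared error over the training set; a short sketch (the helper name is hypothetical):

```python
import numpy as np

def mse_loss(denoised, originals):
    """L(Theta): mean over the set of the squared L2 distance per image pair."""
    n = len(originals)
    return sum(float(np.sum((d - o) ** 2)) for d, o in zip(denoised, originals)) / n
```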
The Nadam method is expressed as follows:

g_t = ∇Θ f(Θ_{t−1})
m_t = μ·m_{t−1} + (1 − μ)·g_t
n_t = v·n_{t−1} + (1 − v)·g_t²
m̂_t = m_t / (1 − μ^t),  n̂_t = n_t / (1 − v^t)
Θ_t = Θ_{t−1} − η·( μ·m̂_t + (1 − μ)·g_t / (1 − μ^t) ) / ( √n̂_t + ε )

where g_t is the gradient of the loss function f with respect to the convolutional network parameters Θ = {W1, W2, W3, B1, B2, B3}; t = 1, 2, 3, …; μ1 = 0.0008, μ2 = 0.001, μ3 = 0.0035, μ = 0.9, v = 0.999, v1 = 0.001, v2 = 0.01, v3 = 0.1; m_t and n_t are the first-order and second-order moment estimates of the gradient, both initialized to 0; m̂_t and n̂_t are the bias-corrected estimates of m_t and n_t; {B1, B2, B3} are initialized to 0; {W1, W2, W3} are initialized to identity matrices; and η = 0.0003 is the learning rate.
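A sketch of a single Nadam update in NumPy, using only the base hyper-parameters μ = 0.9, v = 0.999, η = 0.0003 from the text; the per-step μ_t, v_t schedule values are omitted, and the small stability constant ε is an assumption of this sketch:

```python
import numpy as np

def nadam_step(theta, grad, m, n, t, mu=0.9, v=0.999, eta=3e-4, eps=1e-8):
    """One Nadam update: Adam moment estimates plus a Nesterov look-ahead on m."""
    m = mu * m + (1 - mu) * grad          # first-moment estimate m_t
    n = v * n + (1 - v) * grad ** 2       # second-moment estimate n_t
    m_hat = m / (1 - mu ** t)             # bias corrections
    n_hat = n / (1 - v ** t)
    m_bar = mu * m_hat + (1 - mu) * grad / (1 - mu ** t)  # Nesterov term
    theta = theta - eta * m_bar / (np.sqrt(n_hat) + eps)
    return theta, m, n

# Minimizing f(theta) = theta**2 (gradient 2*theta) drives theta toward 0:
theta, m, n = np.array(1.0), 0.0, 0.0
for t in range(1, 501):
    theta, m, n = nadam_step(theta, 2 * theta, m, n, t)
```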
Second step, obtaining a noise image: an image I containing noise is acquired.
Third, obtain the denoised result: input the noisy image I into the trained denoising convolutional neural network model; the first-layer bicubic interpolation function enlarges it by a factor of k to give an intermediate image I′, which is then sent through the second, third, and fourth layers for denoising to obtain the final denoised image O.
As shown in FIGS. 2a and 3a, these are the input images containing noise. FIGS. 2b and 3b are the images denoised by the PCA-LPG method, a currently popular sparse-representation method; see [1] Lei Zhang, Weisheng Dong, David Zhang, and Guangming Shi, "Two-stage image denoising by principal component analysis with local pixel grouping," Pattern Recognition, vol. 43, no. 4, pp. 1531-1549, 2010. FIGS. 2c and 3c are the images denoised by the method of the present invention.
From FIGS. 2b and 3b it can be seen that the image denoised by the PCA-LPG method largely retains the overall visual appearance, but the result is blurred; in particular, boundaries and details are not handled well. From FIGS. 2c and 3c it can be seen that the method of the present invention handles detail and boundary regions better and maintains a better visual effect. For example, FIG. 2c is sharper than FIG. 2b, with the texture of the pillars and the boundaries of the steps clearly visible, whereas FIG. 2b is blurred and retains a small amount of noise. In FIG. 3c the butterfly's wing edges and texture are clear, while the boundaries in FIG. 3b are blurred.
From an objective point of view, the peak signal-to-noise ratio (PSNR) is computed according to the formula

PSNR = 10·log10( max² / ( (1/(m·n)) · Σ_i Σ_j ( f(i, j) − f̂(i, j) )² ) ),

where m × n is the size of the image matrix, max = 255, f(i, j) is the original image, and f̂(i, j) is the enlarged (denoised) image. The larger the PSNR, the closer the denoised image is to the original image, that is, the better the visual effect of the denoised image and the higher the resolution.
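The PSNR computation can be sketched directly from this definition (NumPy; the function name is an assumption):

```python
import numpy as np

def psnr(f, f_hat, peak=255.0):
    """Peak signal-to-noise ratio in dB between original f and estimate f_hat."""
    mse = np.mean((np.asarray(f, float) - np.asarray(f_hat, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# A uniform error of 10 gray levels gives MSE = 100 and a PSNR of about 28.13 dB.
```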
Table 1 compares the peak signal-to-noise ratio (PSNR) of the PCA-LPG method and the method of the present invention for FIGS. 2 and 3:

  Image    PCA-LPG method    Method of the invention
  FIG. 2   21.160329         24.116028
  FIG. 3   20.293043         23.983940
As shown in Table 1, comparing the peak signal-to-noise ratios of the denoised images shows that, relative to the prior art, the method of the present invention achieves a markedly higher PSNR and therefore higher image quality when processing different types of objects.
The foregoing shows and describes the general principles, essential features, and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, which are merely illustrative of the principles of the invention, but that various changes and modifications may be made without departing from the spirit and scope of the invention, which fall within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and equivalents thereof.

Claims (4)

1. An image denoising method for preventing loss of image edge information is characterized by comprising the following steps:
11) constructing and training a denoising convolutional neural network model: constructing a denoising convolutional neural network model, and training the denoising convolutional neural network model by using an image in a standard training set;
12) acquisition of a noise image: acquiring an image I containing noise;
13) obtaining the denoised result: inputting the noisy image I into the trained denoising convolutional neural network model; the first-layer bicubic interpolation function enlarges it by a factor of k to give an intermediate image I′, which is then sent through the second, third, and fourth layers for denoising to obtain the final denoised image O.
2. The method for denoising the image to prevent the loss of the image edge information according to claim 1, wherein the constructing and training the denoising convolutional neural network model comprises the following steps:
21) setting a denoising convolutional neural network model as a four-layer structure, wherein the first layer is a bicubic interpolation function amplification layer, the second layer is a feature extraction layer, the third layer is a nonlinear mapping layer, and the fourth layer is a fusion pairing layer, wherein the second layer, the third layer and the fourth layer are convolutional layers;
22) for the standard image library {R1, R2, …, R91}, randomly cropping the images to obtain a set of 24800 images of size 32 × 32, {R′1, R′2, …, R′24800};
23) inputting the set of 32 × 32 images {R′1, R′2, …, R′24800} into the denoising convolutional neural network model for training.
3. The method of claim 2, wherein inputting the 32 × 32 image set {R′1, R′2, …, R′24800} into the denoising convolutional neural network model for training comprises the following steps:
31) down-sampling the image set {R′1, R′2, …, R′24800}; after reduction by a factor of k, the image set {R″1, R″2, …, R″24800} is obtained;
32) inputting the image set {R″1, R″2, …, R″24800} into the first layer of the denoising convolutional neural network model, and using the bicubic interpolation function to enlarge each down-sampled image by a factor of k in turn, obtaining the preprocessed, enlarged image set {R̂1, R̂2, …, R̂24800};
33) sending the preprocessed, enlarged image set {R̂1, R̂2, …, R̂24800} to the second layer of the denoising convolutional neural network model:
for an image in the input preprocessed, enlarged image set, denoted Y, computing the mapping F1 = max(0, W1 * Y + B1), where W1 and B1 denote the filters and bias respectively; W1 consists of 64 filters of spatial size 9 × 9, and B1 is a 64-dimensional vector;
34) inputting the high-dimensional vector representing the image block to the third layer for nonlinear mapping:
calculating F2 = max(0, W2 * F1 + B2), where W2 is the filter and B2 the bias; W2 consists of 32 filters of spatial size 1 × 1, and B2 is a 32-dimensional vector;
35) inputting the feature map set into a fourth layer for fusion pairing:
calculating F3 = W3 * F2 + B3, where W3 is the filter and B3 the bias; W3 is a single filter of spatial size 5 × 5, and B3 is a 1-dimensional vector;
36) obtaining the optimal values: the denoised images are evaluated against the original image set {R′1, R′2, …, R′24800}; when a denoised image is closest to its original, the corresponding filters and biases are optimal, i.e. the optimal filters {W1, W2, W3} and biases {B1, B2, B3} are obtained.
4. The image denoising method of claim 3, wherein a loss function is minimized with the Nadam method to obtain the optimal filters {W1, W2, W3} and biases {B1, B2, B3}, comprising the following steps:
41) the minimization loss function is expressed as follows:

L(Θ) = (1/n) · Σ_{i=1..n} ‖F(R″i; Θ) − R′i‖²,  n = 24800,

where F(R″i; Θ) is any member of the denoised image set, R′i is the corresponding image in the original high-resolution image set {R′1, R′2, …, R′24800}, and Θ = {W1, W2, W3, B1, B2, B3};
42) the Nadam method is expressed as follows:

g_t = ∇Θ f(Θ_{t−1})
m_t = μ·m_{t−1} + (1 − μ)·g_t
n_t = v·n_{t−1} + (1 − v)·g_t²
m̂_t = m_t / (1 − μ^t),  n̂_t = n_t / (1 − v^t)
Θ_t = Θ_{t−1} − η·( μ·m̂_t + (1 − μ)·g_t / (1 − μ^t) ) / ( √n̂_t + ε )

where g_t is the gradient of the loss function f with respect to the convolutional network parameters Θ = {W1, W2, W3, B1, B2, B3}; t = 1, 2, 3, …; μ1 = 0.0008, μ2 = 0.001, μ3 = 0.0035, μ = 0.9, v = 0.999, v1 = 0.001, v2 = 0.01, v3 = 0.1; m_t and n_t are the first-order and second-order moment estimates of the gradient, both initialized to 0; m̂_t and n̂_t are the bias-corrected estimates of m_t and n_t; {B1, B2, B3} are initialized to 0; {W1, W2, W3} are initialized to identity matrices; and η = 0.0003 is the learning rate.
CN201910846861.4A 2019-09-09 2019-09-09 Image denoising method for preventing loss of image edge information Pending CN110599419A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910846861.4A CN110599419A (en) 2019-09-09 2019-09-09 Image denoising method for preventing loss of image edge information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910846861.4A CN110599419A (en) 2019-09-09 2019-09-09 Image denoising method for preventing loss of image edge information

Publications (1)

Publication Number Publication Date
CN110599419A true CN110599419A (en) 2019-12-20

Family

ID=68858135

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910846861.4A Pending CN110599419A (en) 2019-09-09 2019-09-09 Image denoising method for preventing loss of image edge information

Country Status (1)

Country Link
CN (1) CN110599419A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106204468A (en) * 2016-06-27 2016-12-07 深圳市未来媒体技术研究院 A kind of image de-noising method based on ReLU convolutional neural networks
CN109410127A (en) * 2018-09-17 2019-03-01 西安电子科技大学 A kind of image de-noising method based on deep learning and multi-scale image enhancing
CN109658344A (en) * 2018-11-12 2019-04-19 哈尔滨工业大学(深圳) Image de-noising method, device, equipment and storage medium based on deep learning


Non-Patent Citations (1)

Title
Chao Dong, et al., "Image Super-Resolution Using Deep Convolutional Networks," IEEE Transactions on Pattern Analysis and Machine Intelligence. *

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN116703772A (en) * 2023-06-15 2023-09-05 山东财经大学 Image denoising method, system and terminal based on adaptive interpolation algorithm
CN116703772B (en) * 2023-06-15 2024-03-15 山东财经大学 Image denoising method, system and terminal based on adaptive interpolation algorithm

Similar Documents

Publication Publication Date Title
Tian et al. Deep learning on image denoising: An overview
CN109389556B (en) Multi-scale cavity convolutional neural network super-resolution reconstruction method and device
CN108549892B (en) License plate image sharpening method based on convolutional neural network
CN108921800B (en) Non-local mean denoising method based on shape self-adaptive search window
WO2018045602A1 (en) Blur kernel size estimation method and system based on deep learning
CN112819772B (en) High-precision rapid pattern detection and recognition method
CN111209952A (en) Underwater target detection method based on improved SSD and transfer learning
CN109410127A (en) A kind of image de-noising method based on deep learning and multi-scale image enhancing
CN111612741B (en) Accurate reference-free image quality evaluation method based on distortion recognition
CN107730536B (en) High-speed correlation filtering object tracking method based on depth features
CN110458792B (en) Method and device for evaluating quality of face image
CN111612711A (en) Improved picture deblurring method based on generation countermeasure network
CN110443775B (en) Discrete wavelet transform domain multi-focus image fusion method based on convolutional neural network
CN110503140B (en) Deep migration learning and neighborhood noise reduction based classification method
CN112085017B (en) Tea leaf tender shoot image segmentation method based on significance detection and Grabcut algorithm
CN112330613B (en) Evaluation method and system for cytopathology digital image quality
CN110796616A (en) Fractional order differential operator based L0Norm constraint and adaptive weighted gradient turbulence degradation image recovery method
CN112163994A (en) Multi-scale medical image fusion method based on convolutional neural network
Majumder et al. A tale of a deep learning approach to image forgery detection
CN110503608B (en) Image denoising method based on multi-view convolutional neural network
CN117218029A (en) Night dim light image intelligent processing method based on neural network
CN109949334B (en) Contour detection method based on deep reinforced network residual error connection
CN111445437A (en) Method, system and equipment for processing image by skin processing model constructed based on convolutional neural network
CN110599419A (en) Image denoising method for preventing loss of image edge information
CN113673396A (en) Spore germination rate calculation method and device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191220