CN114463196A - Image correction method based on deep learning - Google Patents

Image correction method based on deep learning

Info

Publication number
CN114463196A
CN114463196A
Authority
CN
China
Prior art keywords
image
images
image correction
value
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111623814.7A
Other languages
Chinese (zh)
Other versions
CN114463196B (en)
Inventor
王玥
雷嘉锐
钱常德
孙焕宇
刘�东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiaxing Research Institute of Zhejiang University
Original Assignee
Jiaxing Research Institute of Zhejiang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiaxing Research Institute of Zhejiang University filed Critical Jiaxing Research Institute of Zhejiang University
Priority to CN202111623814.7A priority Critical patent/CN114463196B/en
Publication of CN114463196A publication Critical patent/CN114463196A/en
Application granted granted Critical
Publication of CN114463196B publication Critical patent/CN114463196B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4023Scaling of whole images or parts thereof, e.g. expanding or contracting based on decimating pixels or lines of pixels; based on inserting pixels or lines of pixels
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image correction method based on deep learning, which comprises the following steps: (1) using an image acquisition device with chromatic aberration and a good-quality device, capture two sets of images of (as nearly as possible) the same field of view, to serve as the chromatic aberration images and the reference images; (2) compute the offset between the two shots with a template matching algorithm, crop the two images according to the offset, and then divide the data into a training set and a test set; (3) construct an image correction model comprising a weight prediction network and n learnable 3D lookup tables; (4) input the chromatic aberration image into the network, compare the corrected image with the reference image, and compute a loss function; train with the goal of minimizing the loss function, updating the network parameters; (5) after model training is complete, apply the model to image correction. The method has simple operation steps, requires neither manually setting a large number of parameters nor hand-designing algorithms, and effectively reduces the time spent on manual image processing while ensuring good results.

Description

Image correction method based on deep learning
Technical Field
The invention belongs to the field of image processing, and particularly relates to an image correction method based on deep learning.
Background
In general, when an image is captured with an imaging system, the captured image may suffer from chromatic aberration caused by the lens used, or from poor image quality caused by a mismatch between the imaging hardware and the image pickup hardware. Image quality degradation due to chromatic aberration is a serious problem: light of different wavelengths is refracted differently by a lens, so different colors are displaced relative to each other on the image plane, which also changes the overall brightness of the image. Chromatic aberration becomes more significant when the lens design is simplified because of space constraints, or when a lens with higher magnification and NA is used. Color artifacts may also be introduced into the image by problems such as color crosstalk in the CCD's spectral response.
To reduce chromatic aberration, current practice uses lenses made of specific glass materials, or lenses manufactured with specific processes, in higher-quality image acquisition devices. However, these approaches increase the manufacturing cost of the lens and are difficult to adopt widely in ordinary image acquisition equipment.
For the chromatic aberration produced by ordinary image acquisition devices, algorithmic correction is more effective. In this regard, Chinese patent application No. 200810212608.5 discloses a method of correcting chromatic aberration by image processing: the luminance signal of an input image is analyzed to extract chromatic aberration regions; the color gradients and the luminance gradient are computed to obtain the gradient difference between the color components and the luminance gradient of the input image, which serve as first and second weights of the degree of chromatic aberration; and the chroma of the input image's pixels is corrected according to these two weights, thereby correcting the chromatic aberration of the image.
Chinese patent application No. 201610029519.1 discloses a method for correcting refractive chromatic aberration by image processing: the gradients of the image's green channel in the vertical and horizontal directions are computed and thresholded to keep the high-gradient areas, yielding the regions containing refractive chromatic aberration; different strong-light regions are distinguished by binarized partitioning; and each region is extracted in turn, its boundary expanded appropriately, and corrected with an existing chromatic aberration correction method.
Both of these schemes use traditional image processing and require extensive manual parameter tuning. Their principle is mainly based on processing image edges, so for complex images with many edges and textures the algorithms take longer. Moreover, in heavily textured images the chromatic aberration is less visible and may be alternately covered by texture, which increases the number of parameters the algorithms must adjust. A method that can correct the chromatic aberration of an image quickly and adaptively is therefore needed.
Disclosure of Invention
The invention provides an image correction method based on deep learning with simple operation steps, requiring neither manually setting a large number of parameters nor hand-designing algorithms, and effectively reducing the time spent on manual image processing while ensuring good results. The model runs fast and has few parameters, saving computing resources and enabling real-time correction of chromatic aberration images.
An image correction method based on deep learning comprises the following steps:
(1) capturing a group of chromatic aberration images with the image acquisition device to be corrected, and then capturing another group of images, as reference images, with an image acquisition device of the same magnification and smaller chromatic aberration; the chromatic aberration images and the reference images correspond one to one, forming multiple image pairs;
(2) aligning the chromatic aberration image and the reference image in each image pair, and, after augmentation, dividing the aligned image pairs into a training set and a test set;
(3) constructing an image correction model comprising a weight prediction network and n learnable 3D lookup tables; the 3D lookup tables establish mappings from the chromatic aberration image to a predicted reference image, and the number of output channels of the weight prediction network equals the number of 3D lookup tables;
when an image is input into the image correction model, it is fed through the weight prediction network and the n 3D lookup tables respectively; the feature maps output by the weight prediction network are upsampled by interpolation to the original image size, the predicted reference images output by the 3D lookup tables are weighted by these maps, and the results are added to the original image to obtain the output of the image correction model;
(4) training the image correction model with the training set: inputting the chromatic aberration images of the training set into the model, comparing the model's output images with the reference images to obtain the value of a loss function, and updating the network parameters with the goal of minimizing the loss function;
(5) after training of the image correction model is complete, inputting the image to be corrected into the model to obtain the corrected image.
Further, in step (1), the two groups of captured images should cover most of the colors of the usage scene and the same field-of-view range as much as possible.
In step (2), the specific process of image alignment is as follows (a code sketch follows this list):
(2-1) taking an image region common to the chromatic aberration image and the reference image in each image pair as the template image, and recording its position coordinates (x1, y1) on the chromatic aberration image;
(2-2) using the template image as a sliding window, computing the sum of squared pixel differences at corresponding points for each position of the template on the reference image, as the template matching value;
(2-3) recording the matching values as the template slides over the reference image; the closer a value is to 0, the higher the matching degree;
(2-4) recording the coordinates (x2, y2) of the position where the matching value is closest to 0, and calculating the field-of-view offset between the reference image and the chromatic aberration image as Δx and Δy;
(2-5) cropping the images according to the offset, keeping the overlapping part of the two image fields.
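As a concrete illustration, steps (2-1) to (2-5) correspond to sum-of-squared-differences template matching, available in OpenCV as TM_SQDIFF. The following is a minimal sketch under stated assumptions, not the patent's own code: the template position (x1, y1) and size (th, tw) are chosen by the user, and both images are assumed to have the same shape.

```python
import cv2

def align_pair(aberrated, reference, x1, y1, th, tw):
    """Align one image pair by TM_SQDIFF template matching (sketch)."""
    template = aberrated[y1:y1 + th, x1:x1 + tw]
    # Sum of squared pixel differences at every template position (step 2-2).
    scores = cv2.matchTemplate(reference, template, cv2.TM_SQDIFF)
    _, _, min_loc, _ = cv2.minMaxLoc(scores)
    x2, y2 = min_loc                       # matching value closest to 0 (step 2-4)
    dx, dy = x2 - x1, y2 - y1              # field-of-view offset (dx, dy)
    h, w = aberrated.shape[:2]
    # Keep only the overlapping part of the two fields of view (step 2-5).
    ab = aberrated[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    ref = reference[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    return ab, ref, (dx, dy)
```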
In step (3), the input of the 3D lookup table is a color image with three RGB color channels and a resolution of 224 × 224; the image is mapped through the 3D lookup table with trilinear interpolation, producing a three-channel output.
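One way to realize a learnable 3D lookup table applied with trilinear interpolation in PyTorch (the framework used in the embodiment below) is grid_sample over a 5-D volume. This is a sketch under stated assumptions: the grid size M = 33 and the identity initialization are illustrative choices, not values given in the patent.

```python
import torch
import torch.nn.functional as F

class LearnableLUT3D(torch.nn.Module):
    """A learnable 3D LUT applied with trilinear interpolation (sketch)."""

    def __init__(self, m=33):
        super().__init__()
        # LUT volume (3, M, M, M): output RGB indexed by (b, g, r) grid axes.
        ramp = torch.linspace(0.0, 1.0, m)
        b, g, r = torch.meshgrid(ramp, ramp, ramp)  # depth=B, height=G, width=R
        # Identity initialization: the LUT initially returns its input color.
        self.lut = torch.nn.Parameter(torch.stack([r, g, b], dim=0))

    def forward(self, img):
        # img: (N, 3, H, W) with values in [0, 1].
        n = img.shape[0]
        # grid_sample wants coordinates in [-1, 1], last dim ordered (x, y, z)
        # = (R, G, B) to match the (width, height, depth) axes of the volume.
        grid = (img.permute(0, 2, 3, 1) * 2.0 - 1.0).unsqueeze(1)  # (N,1,H,W,3)
        lut = self.lut.unsqueeze(0).expand(n, -1, -1, -1, -1)      # (N,3,M,M,M)
        out = F.grid_sample(lut, grid, mode='bilinear',  # trilinear on 5-D input
                            padding_mode='border', align_corners=True)
        return out.squeeze(2)                                      # (N, 3, H, W)
```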
The weight prediction network consists of, connected in sequence: an UpsamplingBilinear2d bilinear resampling layer; a 1 × 1 convolutional layer with stride 1 and 32 output channels; a ReLU activation layer; an InstanceNorm2d layer; an UpsamplingBilinear2d layer; a 1 × 1 convolutional layer with stride 1 and 64 output channels; a ReLU activation layer; an InstanceNorm2d layer; an UpsamplingBilinear2d layer; a 1 × 1 convolutional layer with stride 1 and 128 output channels; a ReLU activation layer; a Dropout layer; and a 1 × 1 convolutional layer with stride 1 and 3 output channels. Passing a picture through the weight prediction network yields n feature maps downsampled 8× from the original image.
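Read literally, that layer sequence can be assembled in PyTorch as below. One assumption is made explicit: to produce the 8×-downsampled feature maps mentioned above, each bilinear resampling layer is given scale_factor=0.5 (three halvings give 8×); the dropout probability is likewise illustrative.

```python
import torch.nn as nn

def make_weight_net(n_luts=3, p_drop=0.5):
    """Weight prediction network sketch; n_luts output maps, 8x downsampling."""
    half = dict(scale_factor=0.5, mode='bilinear', align_corners=True)
    return nn.Sequential(
        nn.Upsample(**half),                                   # 1/2 resolution
        nn.Conv2d(3, 32, kernel_size=1, stride=1), nn.ReLU(), nn.InstanceNorm2d(32),
        nn.Upsample(**half),                                   # 1/4 resolution
        nn.Conv2d(32, 64, kernel_size=1, stride=1), nn.ReLU(), nn.InstanceNorm2d(64),
        nn.Upsample(**half),                                   # 1/8 resolution
        nn.Conv2d(64, 128, kernel_size=1, stride=1), nn.ReLU(), nn.Dropout(p_drop),
        nn.Conv2d(128, n_luts, kernel_size=1, stride=1),       # one map per LUT
    )
```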
In step (4), the loss function is

$$\mathrm{Loss}=L_{mse}+\lambda_{s}R_{s}+\lambda_{m}R_{m}$$

$$R_{s}=\sum_{c\in\{r,g,b\}}\sum_{i,j,k}\Big(\big\|o^{c}_{(i+1,j,k)}-o^{c}_{(i,j,k)}\big\|^{2}+\big\|o^{c}_{(i,j+1,k)}-o^{c}_{(i,j,k)}\big\|^{2}+\big\|o^{c}_{(i,j,k+1)}-o^{c}_{(i,j,k)}\big\|^{2}\Big)+\sum_{n}\omega_{n}$$

$$R_{m}=\sum_{c\in\{r,g,b\}}\sum_{i,j,k}\Big[g\big(o^{c}_{(i,j,k)}-o^{c}_{(i+1,j,k)}\big)+g\big(o^{c}_{(i,j,k)}-o^{c}_{(i,j+1,k)}\big)+g\big(o^{c}_{(i,j,k)}-o^{c}_{(i,j,k+1)}\big)\Big]$$

where L_mse is the mean square value of the deviation between the predicted and reference images; λ_s and λ_m are two coefficients controlling the influence of R_s and R_m on training, with λ_s = 0.0001 and λ_m = 10; g(·) is the standard ReLU function; o^c_(i,j,k) is the R/G/B value output by the 3D lookup table at grid point (i, j, k); and ω_n is the mean square value of the n-th image feature map learned by the weight prediction network.
During training, parameters are optimized with the Adam algorithm, setting the learning rate and the exponential decay rates β1 and β2 of the first- and second-order moment estimates; in each parameter propagation, the network weights are updated with the goal of minimizing the loss function.
During training, at the end of each epoch the chromatic aberration images in the test set are input into the model for forward propagation, the output images are compared with the corresponding reference images, and the PSNR value is calculated to judge the network's effect in real time. The PSNR value is given by

$$\mathrm{PSNR}=10\cdot\log_{10}\!\left(\frac{MAX_{I}^{2}}{\mathrm{MSE}}\right)$$

where MSE is the mean square value of the deviation between the predicted and reference images, and MAX_I is the maximum value of the image gray scale.
Compared with the prior art, the invention has the following beneficial effects:
1. The invention can generate high-quality images with simple data preparation, allowing field-of-view offsets between the captured image pairs. Once the network is trained, corrected images of good quality can be obtained through the network from an image acquisition device with chromatic aberration, reducing the cost of the lens imaging device.
2. The invention directly improves the image quality from the angle of the image without considering system design and matching factors, reduces manual design and has higher practical value.
3. The network constructed by the invention is simple, the image generation is faster, the average processing speed of a single image reaches 0.0109s under the acceleration of the NVIDIA GeForce RTX2070SUPER display card, and the real-time image generation is facilitated.
Drawings
FIG. 1 is a flowchart of an image correction method based on deep learning according to the present invention;
FIG. 2 is a network structure diagram of the image correction model in the present invention.
Detailed Description
The invention will be described in further detail below with reference to the drawings and examples, which are intended to facilitate the understanding of the invention without limiting it in any way.
The embodiment of the invention corrects the chromatic aberration of pictures taken through a microscope objective. Chromatic aberration can occur when shooting through a microscope objective: the lens refracts light of different wavelengths differently, producing colored fringes on the image acquisition component; it comprises longitudinal and lateral chromatic aberration. In the actual image, longitudinal chromatic aberration appears as different colors on the outer and inner rings of image edges, while lateral chromatic aberration appears as different colors at different positions along the horizontal or vertical direction. In addition, the image background shows a certain color deviation or brightness depending on the material or NA of the objective lens.
The correction method of this embodiment runs on a Win10 system, with Python 3.6.10, PyTorch 1.4.0, CUDA 10.2, and cudnn 7.6.5.32. The overall implementation flow is shown in FIG. 1; the specific implementation steps are as follows:
First, build the picture database: capture a group of lower-quality images with the image acquisition device to be corrected; then capture another group of higher-quality images, as reference images, with an image acquisition device of the same magnification and smaller chromatic aberration. The two groups of captured images should cover most of the colors of the usage scene and the same field-of-view range as much as possible. The paired data sets are divided into training sets and test sets.
Second, align the captured chromatic aberration images with the reference images: match the two images with a template matching algorithm, calculate the horizontal and vertical offsets Δx and Δy between the image fields, crop the images according to the offsets, and keep the overlapping part of the two image fields as the network's data set.
The specific process is as follows:
(2-1) Take an image region common to the lower-quality image (the image to be corrected) and the corresponding reference image as the template image, and record the template image's position coordinates (x1, y1) on the image to be corrected.
And (2-2) taking the template as a sliding window, and respectively calculating the square sum of pixel differences of corresponding points when the template is at different positions on the reference image as a template matching value.
(2-3) recording the matching value of the template in the process of sliding on the reference image, wherein the closer the value is to 0, the higher the matching degree is.
(2-4) Record the coordinates (x2, y2) of the position where the matching value is closest to 0, and calculate the field-of-view offset between the reference image and the image to be corrected as Δx and Δy.
Third, augment the data: randomly scale the two sets of paired images, then randomly crop them to 224 × 224. Flip each image horizontally and vertically, each with probability 0.5, and randomly rotate it by 90 degrees. Finally, normalize the data to values in [0, 1].
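A paired augmentation routine matching this step might look as follows. Identical geometric parameters must be applied to both images of a pair so they stay aligned; the scaling range (0.8, 1.2) and the rotation probability are illustrative assumptions, since the text only says "randomly scaled" and "randomly rotated".

```python
import random
import torchvision.transforms.functional as TF

def augment_pair(aberrated, reference, size=224):
    """Paired augmentation sketch; inputs are PIL images of equal size."""
    s = random.uniform(0.8, 1.2)               # assumed scale range
    w, h = aberrated.size
    new_hw = [max(size, int(h * s)), max(size, int(w * s))]
    aberrated, reference = TF.resize(aberrated, new_hw), TF.resize(reference, new_hw)
    # Random 224 x 224 crop at the same location in both images.
    top = random.randint(0, new_hw[0] - size)
    left = random.randint(0, new_hw[1] - size)
    aberrated = TF.crop(aberrated, top, left, size, size)
    reference = TF.crop(reference, top, left, size, size)
    if random.random() < 0.5:                  # horizontal flip, p = 0.5
        aberrated, reference = TF.hflip(aberrated), TF.hflip(reference)
    if random.random() < 0.5:                  # vertical flip, p = 0.5
        aberrated, reference = TF.vflip(aberrated), TF.vflip(reference)
    if random.random() < 0.5:                  # random 90-degree rotation
        aberrated, reference = TF.rotate(aberrated, 90), TF.rotate(reference, 90)
    # to_tensor also normalizes pixel values into [0, 1].
    return TF.to_tensor(aberrated), TF.to_tensor(reference)
```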
Fourth, construct the image correction model; the network structure of the model is shown in FIG. 2. The model's input is the image processed in the second step, and the number n of 3D lookup tables and weight prediction network channels is set to 3. The input image is duplicated and fed into the convolutional network and the lookup tables, yielding 3 downsampled feature maps and 3 images looked up through the 3D lookup tables. The feature maps are upsampled by interpolation to the original image size, and the upsampled maps and the looked-up images are combined by Hadamard product and summation to obtain the network's output image.
In the invention, an interpolation upsampling operation is added after the downsampling: the feature maps are upsampled into weight maps of the same size as the input image, for element-wise multiplication with the output images of the 3D lookup tables. An identity mapping is added between the network's input and output, as sketched below.
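Combining the pieces, the forward pass (weight maps upsampled to full resolution, Hadamard product with each LUT output, summation, plus the identity branch) can be sketched as follows, reusing the LearnableLUT3D and make_weight_net sketches given earlier; how the patent actually packages these modules is an assumption.

```python
import torch
import torch.nn.functional as F

class CorrectionModel(torch.nn.Module):
    """Image correction model sketch: weight net + n learnable 3D LUTs."""

    def __init__(self, n_luts=3):
        super().__init__()
        self.weight_net = make_weight_net(n_luts)
        self.luts = torch.nn.ModuleList(LearnableLUT3D() for _ in range(n_luts))

    def forward(self, img):                     # img: (N, 3, H, W) in [0, 1]
        weights = self.weight_net(img)          # (N, n, H/8, W/8)
        weights = F.interpolate(weights, size=img.shape[2:],
                                mode='bilinear', align_corners=True)
        out = img                               # identity mapping branch
        for i, lut in enumerate(self.luts):
            # Hadamard product of the i-th weight map and the i-th LUT output.
            out = out + weights[:, i:i + 1] * lut(img)
        return out
```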
Fifth, construct the loss function, which is set to

$$\mathrm{Loss}=L_{mse}+\lambda_{s}R_{s}+\lambda_{m}R_{m}$$

$$R_{s}=\sum_{c\in\{r,g,b\}}\sum_{i,j,k}\Big(\big\|o^{c}_{(i+1,j,k)}-o^{c}_{(i,j,k)}\big\|^{2}+\big\|o^{c}_{(i,j+1,k)}-o^{c}_{(i,j,k)}\big\|^{2}+\big\|o^{c}_{(i,j,k+1)}-o^{c}_{(i,j,k)}\big\|^{2}\Big)+\sum_{n}\omega_{n}$$

$$R_{m}=\sum_{c\in\{r,g,b\}}\sum_{i,j,k}\Big[g\big(o^{c}_{(i,j,k)}-o^{c}_{(i+1,j,k)}\big)+g\big(o^{c}_{(i,j,k)}-o^{c}_{(i,j+1,k)}\big)+g\big(o^{c}_{(i,j,k)}-o^{c}_{(i,j,k+1)}\big)\Big]$$

where L_mse is the mean square value of the deviation between the predicted and reference images; λ_s and λ_m are two coefficients controlling the influence of R_s and R_m on training, with λ_s = 0.0001 and λ_m = 10; g(·) is the standard ReLU function; o^c_(i,j,k) is the R/G/B value output by the 3D lookup table at grid point (i, j, k); and ω_n is the mean square value of the n-th image feature map output by the weight prediction network.
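Under this reconstruction of R_s and R_m (forward differences along the LUT's three color axes, with g(·) = ReLU penalizing any decrease), the two regularizers can be computed per LUT as follows; the ω_n term would be added separately from the weight maps. A sketch, not the patent's own code:

```python
import torch
import torch.nn.functional as F

def lut_regularizers(lut):
    """R_s and R_m for one 3D LUT tensor of shape (3, M, M, M) (sketch)."""
    d_r = lut[:, :, :, 1:] - lut[:, :, :, :-1]   # differences along the R axis
    d_g = lut[:, :, 1:, :] - lut[:, :, :-1, :]   # differences along the G axis
    d_b = lut[:, 1:, :, :] - lut[:, :-1, :, :]   # differences along the B axis
    r_s = (d_r ** 2).sum() + (d_g ** 2).sum() + (d_b ** 2).sum()
    # g(.) is ReLU: penalize entries where the LUT output decreases.
    r_m = F.relu(-d_r).sum() + F.relu(-d_g).sum() + F.relu(-d_b).sum()
    return r_s, r_m
```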
Sixth, set the model optimization algorithm: optimize the parameters with the Adam algorithm, setting the learning rate and the exponential decay rates β1 and β2 of the first- and second-order moment estimates. During training, the network weights are updated in each parameter propagation with the goal of minimizing the loss function.
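A minimal training loop for this step might look as follows. The learning rate, β values, epoch count, and the train_loader are illustrative placeholders, not values fixed by the patent; λ_s and λ_m are the coefficients given in the fifth step.

```python
import torch

model = CorrectionModel(n_luts=3)
# lr and betas are illustrative; the patent does not state its values.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))
num_epochs = 100                                       # illustrative

for epoch in range(num_epochs):
    for aberrated, reference in train_loader:          # hypothetical DataLoader
        optimizer.zero_grad()
        output = model(aberrated)
        loss = torch.mean((output - reference) ** 2)   # L_mse term
        for lut in model.luts:
            r_s, r_m = lut_regularizers(lut.lut)
            loss = loss + 1e-4 * r_s + 10.0 * r_m      # lambda_s, lambda_m
        loss.backward()
        optimizer.step()
```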
Seventh, at the end of each epoch, input the lower-quality images of the test set into the network for forward propagation, compare the output images with the good-quality images, and calculate the PSNR value to judge the network's effect in real time:

$$\mathrm{PSNR}=10\cdot\log_{10}\!\left(\frac{MAX_{I}^{2}}{\mathrm{MSE}}\right)$$

where MSE is the mean square value of the deviation between the predicted and reference images, and MAX_I is the maximum value of the image gray scale.
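The evaluation can use a direct implementation of the PSNR formula above; max_i = 1.0 assumes the images normalized to [0, 1] in the third step.

```python
import torch

def psnr(pred, ref, max_i=1.0):
    """PSNR between predicted and reference images (max_i=1.0 for [0,1] data)."""
    mse = torch.mean((pred - ref) ** 2)
    return 10.0 * torch.log10(max_i ** 2 / mse)
```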
Eighth, iterate and adjust parameters: repeat the operations of the sixth and seventh steps until the network is stable, thereby obtaining the trained network.
Ninth, using the trained network, input the picture to be processed and propagate it forward to obtain the image after chromatic aberration correction.
To verify the effect of the invention, this embodiment collected multiple groups of H&E-stained tumor sections using an OLYMPUS 20× objective with NA 0.8 and another 20× objective with NA 0.75. The average PSNR values obtained with the above image chromatic aberration correction algorithm are shown in the following table, indicating that the image quality is greatly improved.
TABLE 1 Comparison of results

|      | Original image | Corrected image |
|------|----------------|-----------------|
| PSNR | 16.70          | 25.96           |
The average processing time for a single image was measured to be 0.0109 s.
The above experimental results show that the corrected images generated by this method are of higher quality, greatly improving on the original images, and the calculated PSNR values demonstrate the effectiveness of the method.
The embodiments described above are intended to illustrate the technical solutions and advantages of the present invention, and it should be understood that the above-mentioned embodiments are only specific embodiments of the present invention, and are not intended to limit the present invention, and any modifications, additions and equivalents made within the scope of the principles of the present invention should be included in the scope of the present invention.

Claims (8)

1. An image correction method based on deep learning is characterized by comprising the following steps:
(1) capturing a group of chromatic aberration images with the image acquisition device to be corrected, and then capturing another group of images, as reference images, with an image acquisition device of the same magnification and smaller chromatic aberration; the chromatic aberration images and the reference images correspond one to one, forming multiple image pairs;
(2) aligning the chromatic aberration image and the reference image in each image pair, and, after augmentation, dividing the aligned image pairs into a training set and a test set;
(3) constructing an image correction model comprising a weight prediction network and n learnable 3D lookup tables; the 3D lookup tables establish mappings from the chromatic aberration image to a predicted reference image, and the number of output channels of the weight prediction network equals the number of 3D lookup tables;
when an image is input into the image correction model, it is fed through the weight prediction network and the n 3D lookup tables respectively; the feature maps output by the weight prediction network are upsampled by interpolation to the original image size, the predicted reference images output by the 3D lookup tables are weighted by these maps, and the results are added to the original image to obtain the output of the image correction model;
(4) training the image correction model with the training set: inputting the chromatic aberration images of the training set into the model, comparing the model's output images with the reference images to obtain the value of a loss function, and updating the network parameters with the goal of minimizing the loss function;
(5) after training of the image correction model is complete, inputting the image to be corrected into the model to obtain the corrected image.
2. The image correction method based on deep learning of claim 1, wherein in step (2), the specific process of performing image alignment is as follows:
(2-1) taking an image region common to the chromatic aberration image and the reference image in each image pair as the template image, and recording its position coordinates (x1, y1) on the chromatic aberration image;
(2-2) using the template image as a sliding window, computing the sum of squared pixel differences at corresponding points for each position of the template on the reference image, as the template matching value;
(2-3) recording the matching values as the template slides over the reference image, where the closer a value is to 0, the higher the matching degree;
(2-4) recording the coordinates (x2, y2) of the position where the matching value is closest to 0, and calculating the field-of-view offset between the reference image and the chromatic aberration image as Δx and Δy;
(2-5) cropping the images according to the offset, keeping the overlapping part of the two image fields.
3. The deep learning-based image correction method according to claim 1, wherein in step (3), the input of the 3D lookup table is a color image with three RGB color channels and a resolution of 224 × 224; the image is mapped through the 3D lookup table with trilinear interpolation, producing a three-channel output.
4. The image correction method based on deep learning according to claim 1, wherein in step (3), the structure of the weight prediction network comprises, connected in sequence: an UpsamplingBilinear2d bilinear upsampling layer; a 1 × 1 convolutional layer with stride 1 and 32 output channels; a ReLU activation layer; an InstanceNorm2d layer; an UpsamplingBilinear2d layer; a 1 × 1 convolutional layer with stride 1 and 64 output channels; a ReLU activation layer; an InstanceNorm2d layer; an UpsamplingBilinear2d layer; a 1 × 1 convolutional layer with stride 1 and 128 output channels; a ReLU activation layer; a Dropout layer; and a 1 × 1 convolutional layer with stride 1 and 3 output channels; passing a picture through the weight prediction network yields n feature maps downsampled 8× from the original image.
5. The deep learning-based image correction method according to claim 1, wherein in step (4), the loss function is

$$\mathrm{Loss}=L_{mse}+\lambda_{s}R_{s}+\lambda_{m}R_{m}$$

$$R_{s}=\sum_{c\in\{r,g,b\}}\sum_{i,j,k}\Big(\big\|o^{c}_{(i+1,j,k)}-o^{c}_{(i,j,k)}\big\|^{2}+\big\|o^{c}_{(i,j+1,k)}-o^{c}_{(i,j,k)}\big\|^{2}+\big\|o^{c}_{(i,j,k+1)}-o^{c}_{(i,j,k)}\big\|^{2}\Big)+\sum_{n}\omega_{n}$$

$$R_{m}=\sum_{c\in\{r,g,b\}}\sum_{i,j,k}\Big[g\big(o^{c}_{(i,j,k)}-o^{c}_{(i+1,j,k)}\big)+g\big(o^{c}_{(i,j,k)}-o^{c}_{(i,j+1,k)}\big)+g\big(o^{c}_{(i,j,k)}-o^{c}_{(i,j,k+1)}\big)\Big]$$

where L_mse is the mean square error between the predicted and reference images; λ_s and λ_m are two coefficients controlling the influence of R_s and R_m on training, with λ_s = 0.0001 and λ_m = 10; g(·) is the standard ReLU function; o^c_(i,j,k) is the R/G/B value output by the 3D lookup table at grid point (i, j, k); and ω_n is the mean square value of the n-th image feature map learned by the weight prediction network.
6. The image correction method based on deep learning according to claim 5, wherein in the training process, parameter optimization is performed with the Adam algorithm, setting the learning rate and the exponential decay rates β1 and β2 of the first- and second-order moment estimates; during training, the network weights are updated with the objective of minimizing the loss function in each parameter propagation.
7. The method according to claim 5, wherein in step (4), during the training process, at the end of each epoch the chromatic aberration images in the test set are input into the model and propagated forward; the output images are compared with the corresponding reference images and the PSNR value is calculated, so as to judge the network effect in real time.
8. The deep learning-based image correction method according to claim 7, wherein the PSNR value is calculated as:

$$\mathrm{PSNR}=10\cdot\log_{10}\!\left(\frac{MAX_{I}^{2}}{\mathrm{MSE}}\right)$$

where MSE is the mean square value of the deviation between the predicted and reference images, and MAX_I is the maximum value of the image gray scale.
CN202111623814.7A 2021-12-28 2021-12-28 Image correction method based on deep learning Active CN114463196B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111623814.7A CN114463196B (en) 2021-12-28 2021-12-28 Image correction method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111623814.7A CN114463196B (en) 2021-12-28 2021-12-28 Image correction method based on deep learning

Publications (2)

Publication Number Publication Date
CN114463196A true CN114463196A (en) 2022-05-10
CN114463196B CN114463196B (en) 2023-07-25

Family

ID=81408554

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111623814.7A Active CN114463196B (en) 2021-12-28 2021-12-28 Image correction method based on deep learning

Country Status (1)

Country Link
CN (1) CN114463196B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115802173A (en) * 2023-02-06 2023-03-14 北京小米移动软件有限公司 Image processing method and device, electronic equipment and storage medium
CN117649661A (en) * 2024-01-30 2024-03-05 青岛超瑞纳米新材料科技有限公司 Carbon nanotube preparation state image processing method

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108492271A (en) * 2018-03-26 2018-09-04 中国电子科技集团公司第三十八研究所 A kind of automated graphics enhancing system and method for fusion multi-scale information
CN110728637A (en) * 2019-09-21 2020-01-24 天津大学 Dynamic dimming backlight diffusion method for image processing based on deep learning
CN111915484A (en) * 2020-07-06 2020-11-10 天津大学 Reference image guiding super-resolution method based on dense matching and self-adaptive fusion
CN111988593A (en) * 2020-08-31 2020-11-24 福州大学 Three-dimensional image color correction method and system based on depth residual optimization
CN112562019A (en) * 2020-12-24 2021-03-26 Oppo广东移动通信有限公司 Image color adjusting method and device, computer readable medium and electronic equipment
CN112581373A (en) * 2020-12-14 2021-03-30 北京理工大学 Image color correction method based on deep learning
CN113066017A (en) * 2021-02-27 2021-07-02 华为技术有限公司 Image enhancement method, model training method and equipment
CN113297937A (en) * 2021-05-17 2021-08-24 杭州朗和科技有限公司 Image processing method, device, equipment and medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108492271A (en) * 2018-03-26 2018-09-04 中国电子科技集团公司第三十八研究所 A kind of automated graphics enhancing system and method for fusion multi-scale information
CN110728637A (en) * 2019-09-21 2020-01-24 天津大学 Dynamic dimming backlight diffusion method for image processing based on deep learning
CN111915484A (en) * 2020-07-06 2020-11-10 天津大学 Reference image guiding super-resolution method based on dense matching and self-adaptive fusion
CN111988593A (en) * 2020-08-31 2020-11-24 福州大学 Three-dimensional image color correction method and system based on depth residual optimization
CN112581373A (en) * 2020-12-14 2021-03-30 北京理工大学 Image color correction method based on deep learning
CN112562019A (en) * 2020-12-24 2021-03-26 Oppo广东移动通信有限公司 Image color adjusting method and device, computer readable medium and electronic equipment
CN113066017A (en) * 2021-02-27 2021-07-02 华为技术有限公司 Image enhancement method, model training method and equipment
CN113297937A (en) * 2021-05-17 2021-08-24 杭州朗和科技有限公司 Image processing method, device, equipment and medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HUI ZENG ET AL.: "Learning Image-Adaptive 3D Lookup Tables for High Performance Photo Enhancement in Real-Time", arXiv:2009.14468v1 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115802173A (en) * 2023-02-06 2023-03-14 北京小米移动软件有限公司 Image processing method and device, electronic equipment and storage medium
CN117649661A (en) * 2024-01-30 2024-03-05 青岛超瑞纳米新材料科技有限公司 Carbon nanotube preparation state image processing method
CN117649661B (en) * 2024-01-30 2024-04-12 青岛超瑞纳米新材料科技有限公司 Carbon nanotube preparation state image processing method

Also Published As

Publication number Publication date
CN114463196B (en) 2023-07-25

Similar Documents

Publication Publication Date Title
CN112288658B (en) Underwater image enhancement method based on multi-residual joint learning
CN108549892B (en) License plate image sharpening method based on convolutional neural network
CN114463196B (en) Image correction method based on deep learning
CN108288256B (en) Multispectral mosaic image restoration method
CN110288550B (en) Single-image defogging method for generating countermeasure network based on priori knowledge guiding condition
CN110176023B (en) Optical flow estimation method based on pyramid structure
CN113298810A (en) Trace detection method combining image enhancement and depth convolution neural network
CN112270691B (en) Monocular video structure and motion prediction method based on dynamic filter network
CN111598775B (en) Light field video time domain super-resolution reconstruction method based on LSTM network
CN111652815B (en) Mask plate camera image restoration method based on deep learning
CN111754433B (en) Defogging method for aerial image
CN114998141A (en) Space environment high dynamic range imaging method based on multi-branch network
CN116681606A (en) Underwater uneven illumination image enhancement method, system, equipment and medium
CN115731146A (en) Multi-exposure image fusion method based on color gradient histogram feature light stream estimation
CN113284061A (en) Underwater image enhancement method based on gradient network
CN111932452A (en) Infrared image convolution neural network super-resolution method based on visible image enhancement
CN113935917B (en) Optical remote sensing image thin cloud removing method based on cloud image operation and multiscale generation countermeasure network
CN112419163A (en) Single image weak supervision defogging method based on priori knowledge and deep learning
CN106683044B (en) Image splicing method and device of multi-channel optical detection system
Yamaguchi et al. Image demosaicking via chrominance images with parallel convolutional neural networks
CN116433525A (en) Underwater image defogging method based on edge detection function variation model
CN112819707B (en) End-to-end anti-blocking effect low-illumination image enhancement method
CN113724139B (en) Unsupervised infrared single-image super-resolution method for generating countermeasure network based on double discriminators
CN109672874A (en) A kind of consistent three-dimensional video-frequency color calibration method of space-time
CN113191959B (en) Digital imaging system limit image quality improving method based on degradation calibration

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant