CN109859120A - Image defogging method based on multi-scale residual network - Google Patents

Image defogging method based on multi-scale residual network

Info

Publication number
CN109859120A
Authority
CN
China
Prior art keywords
image
residual network
multi-scale
fog-free images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910015947.2A
Other languages
Chinese (zh)
Other versions
CN109859120B (en)
Inventor
秦勇
曹志威
谢征宇
柳青红
赵汝豪
吴云鹏
马小平
张赫
黄永辉
杨怀志
闫香玲
孙雨萌
贾星威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jiaotong University
Original Assignee
Beijing Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jiaotong University
Priority to CN201910015947.2A
Publication of CN109859120A
Application granted
Publication of CN109859120B
Legal status: Active


Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention provides an image dehazing method based on a multi-scale residual network. The method comprises: acquiring fog-free images under different scenes to form a fog-free image dataset; extracting the depth information of the fog-free images and, according to the depth information, applying haze of different concentrations to the fog-free images to obtain hazy images, all of the hazy images obtained from the fog-free images forming a training dataset; constructing a multi-scale residual network, inputting the training dataset into the multi-scale residual network and training it to obtain a trained image dehazing model; and inputting a hazy image to be processed into the trained image dehazing model, which outputs the fog-free image corresponding to the hazy image to be processed. The method of the present invention can better handle haze images of different concentrations and scales, alleviates the problem of limited training data, and obtains better results with less training data.

Description

Image defogging method based on multi-scale residual network
Technical field
The invention belongs to the technical field of image processing, and in particular relates to an image dehazing method based on a multi-scale residual network.
Background technique
In a foggy environment, random media such as suspended particles, fog and haze are present in the atmosphere, so light reflected from object surfaces is scattered on its way to the camera; the light deviates from its original propagation path and is attenuated, producing a hazy image. Hazy images are characterized by low visibility, dull colors and low contrast.
Currently, there are mainly four types of image dehazing methods:
The first is the restoration method based on a physical model: by estimating information such as depth and global atmospheric light, a physical model of the hazy image is built to achieve dehazing. In this kind of method, the model is strongly affected by the estimation or prediction of the parameters;
The second is the image enhancement method based on a non-physical model: the noise of the hazy image is removed and the contrast of the image is improved so as to recover a clear image. Representative enhancement-based dehazing algorithms include HE, AHE, CLAHE, Retinex and wavelet-transform algorithms, but the dehazing effect of such methods is poor, and the dehazed image may suffer from information loss and color distortion;
The third is optics-based image dehazing: on the basis of other algorithms, optical design exploits the different penetrating power of light at different wavebands, purposefully filtering and selecting infrared light with stronger penetration for imaging, so that a clear image can be obtained under bad weather such as haze, smoke and steam. The disadvantage of this method is that it relies on special optical equipment;
The fourth is the image dehazing method based on deep learning: based on convolutional neural networks, a large number of hazy images and their corresponding clear images are learned, and the resulting model has a good dehazing effect. The disadvantage of this method is that the model is strongly affected by the image dataset; for hazy images of different concentrations or for certain special scenes, a large number of images need to be re-collected for training, and the training period is long.
Summary of the invention
The embodiments of the present invention provide an image dehazing method based on a multi-scale residual network to overcome the problems of the prior art.
To achieve the above objective, the present invention adopts the following technical solution.
An image dehazing method based on a multi-scale residual network, comprising:
Step 1: acquiring fog-free images under different scenes to form a fog-free image dataset;
Step 2: extracting the depth information of the fog-free images in the fog-free image dataset, applying haze of different concentrations to the fog-free images according to the depth information of the fog-free images to obtain hazy images, and forming a training dataset from each hazy image obtained from the fog-free images;
Step 3: constructing a multi-scale residual network, inputting the training dataset into the multi-scale residual network, training the multi-scale residual network, and obtaining a trained image dehazing model;
Step 4: inputting a hazy image to be processed into the image dehazing model, the image dehazing model outputting the fog-free image corresponding to the hazy image to be processed.
Further, the multi-scale residual network constructed in step 3 has 41 convolutional layers, 1 down-sampling operation and 1 up-sampling operation; the 41 convolutional layers comprise 5 single convolutional layers and 18 residual blocks.
Further, inputting the training dataset into the multi-scale residual network constructed in step 3, training the multi-scale residual network and obtaining the trained image dehazing model comprises:
Step 3.1: inputting a hazy image from the training dataset into the multi-scale residual network, and performing a convolution operation on the hazy image with a 7*7 convolution kernel with stride 1 to obtain the first-layer output F1;
Step 3.2: first down-sampling the first-layer output F1 with a 2*2 convolution kernel, then feeding the down-sampled result into three residual-block groups built with convolution kernels of different sizes, named Group1, Group2 and Group3, to obtain the results F2^1, F2^2 and F2^3 respectively;
Step 3.3: concatenating F2^1 and F2^2 and feeding the concatenated result into the next convolutional layer to obtain F3^1; likewise, concatenating F2^2 and F2^3 and feeding the result into the next convolutional layer to obtain F3^2; the PyTorch concatenation functions used in this step are:
torch.cat((F2_1, F2_2), 1)
torch.cat((F2_2, F2_3), 1);
Step 3.4: concatenating F3^1, F2^2 and F3^2 and feeding the concatenated result into a 3*3 convolutional layer with stride 1, then up-sampling the output of the 3*3 convolutional layer with a 2*2 convolution kernel, the result of the up-sampling being F4; the PyTorch concatenation function used in this step is:
torch.cat((F3_1, F2_2, F3_2), 1);
Step 3.5: performing a convolution operation on F4 with a 7*7 convolution kernel with stride 1 to obtain F5, and applying the tanh activation function to F5 to obtain the trained image dehazing model of the multi-scale residual network.
Further, Group1, Group2 and Group3 each have 6 residual blocks; each residual block comprises two convolutional layers, two normalization functions and one PReLU activation function; the convolution kernel sizes of Group1, Group2 and Group3 are 1*1, 3*3 and 5*5 respectively, all with stride 1.
Further, after F2^1 and F2^2, and F2^2 and F2^3, are concatenated respectively in step 3.3, the convolution kernel sizes of the following convolutional layers are 1*1 and 5*5 respectively, both with stride 1.
Further, the loss function Loss of the image dehazing model is calculated as:
Loss = 1.8 * loss_MAE + 1.6 * loss_MSE
where loss_MAE is the image mean absolute error and loss_MSE is the image mean squared error.
As can be seen from the technical solutions provided by the above embodiments, the image dehazing method of the embodiments of the present invention uses multi-scale convolution kernels and a deep residual neural network to capture more image information more effectively, and thus better handles haze images of different concentrations and scales; the composite loss function better measures the discrepancy between the model's predictions and the ground truth, minimizes the model's loss function as far as possible, and improves the robustness of the model.
Additional aspects and advantages of the present invention will be set forth in part in the following description; they will become apparent from the description, or may be learned by practice of the present invention.
Brief description of the drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; a person of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a process flow chart of an image dehazing method based on a multi-scale residual network provided by an embodiment of the present invention.
Fig. 2 is an overall schematic diagram of a multi-scale residual network provided by an embodiment of the present invention.
Fig. 3 is a schematic diagram of the residual block structure of a multi-scale residual network provided by an embodiment of the present invention.
Fig. 4 is a comparison of the dehazing effect of existing algorithms on test images.
Fig. 5 is a comparison of the dehazing effect of the method of the embodiment of the present invention on low-concentration hazy images.
Fig. 6 is a comparison of the dehazing effect of the method of the embodiment of the present invention on high-concentration hazy images.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary, are intended only to explain the present invention, and are not to be construed as limiting the claims.
Those skilled in the art will understand that, unless expressly stated otherwise, the singular forms "a", "an", "the" and "said" used herein may also include the plural forms. It should be further understood that the wording "comprising" used in the specification of the present invention indicates the presence of the stated features, integers, steps, operations, elements and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. It should be understood that when an element is said to be "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intermediate elements may be present. In addition, "connected" or "coupled" as used herein may include a wireless connection or coupling. The wording "and/or" as used herein includes any unit of and all combinations of one or more of the associated listed items.
Those skilled in the art will understand that, unless otherwise defined, all terms used herein (including technical and scientific terms) have the same meaning as commonly understood by a person of ordinary skill in the art to which the present invention belongs. It should also be understood that terms such as those defined in general dictionaries should be interpreted as having meanings consistent with their meanings in the context of the prior art and, unless defined as herein, are not to be interpreted in an idealized or overly formal sense.
To facilitate understanding of the embodiments of the present invention, further explanation is given below with reference to the accompanying drawings by taking several specific embodiments as examples; the embodiments do not constitute a limitation on the embodiments of the present invention.
The embodiment of the present invention provides an image dehazing method based on a multi-scale residual network. Through the combination of multi-scale convolution kernels and the loss function, the method alleviates the problem of limited training data and better handles haze images of different concentrations and scales.
The processing flow of an image dehazing method based on a multi-scale residual network provided by an embodiment of the present invention is shown in Fig. 1 and comprises the following steps:
Step S110: acquiring fog-free images under different scenes to form a fog-free image dataset.
Fog-free images under different scenes are captured with a camera, and all fog-free images form the fog-free image dataset. The different scenes here include outdoor scenes such as streets, buildings and woods, and the covered time range is from after sunrise to before sunset.
Step S120: extracting the depth information of the fog-free images in the fog-free image dataset, applying haze of different concentrations to the fog-free images according to the depth information to obtain hazy images, and forming the training dataset from all hazy images obtained from the fog-free images. Specifically, the present invention uses the DCNF-FCSP algorithm (Deep Convolutional Neural Fields with Fully Convolutional networks and Superpixel Pooling) to extract the depth information of the fog-free images, and obtains hazy images from the fog-free images according to the haze degradation model.
The haze degradation model is as follows:
I(x) = J(x) t(x) + A (1 - t(x))
t(x) = e^(-β d(x))
where I denotes the hazy image, J the fog-free image, A the global atmospheric light, t the transmission, β the atmospheric scattering coefficient and d the scene depth.
In the above formulas, a random global atmospheric light A = [k, k, k] is generated, with k in the range [0.7, 1.0]. J is known, d is extracted by the DCNF-FCSP algorithm, and β ranges over [0.5, 1.0]; changing the value of β yields hazy images of different concentrations.
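For illustration only, the degradation model above can be applied as follows; this is a minimal NumPy sketch in which the depth map is assumed to be already extracted (for example by the DCNF-FCSP step) and normalized, and the function name is purely illustrative:

import numpy as np

def synthesize_haze(clear, depth, rng=None):
    # clear: H x W x 3 float array in [0, 1]; depth: H x W float array (larger = farther).
    rng = np.random.default_rng() if rng is None else rng
    k = rng.uniform(0.7, 1.0)                 # global atmospheric light A = [k, k, k]
    beta = rng.uniform(0.5, 1.0)              # scattering coefficient; larger beta gives denser haze
    t = np.exp(-beta * depth)[..., None]      # transmission t(x) = exp(-beta * d(x))
    hazy = clear * t + k * (1.0 - t)          # I(x) = J(x) t(x) + A (1 - t(x))
    return np.clip(hazy, 0.0, 1.0)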
Step S130: constructing the multi-scale residual network, inputting the training dataset into the multi-scale residual network, training the multi-scale residual network, and obtaining the trained image dehazing model.
Step S140: inputting the hazy image to be processed into the trained image dehazing model, which outputs the clear fog-free image corresponding to the hazy image to be processed.
Fig. 2 is an overall schematic diagram of a multi-scale residual network provided by an embodiment of the present invention, and Fig. 3 shows the structure of its residual blocks. The multi-scale residual network constructed in step S130 has 41 convolutional layers, 1 down-sampling operation and 1 up-sampling operation; the 41 convolutional layers comprise 5 single convolutional layers and 18 residual blocks.
Step S130 specifically comprises the following steps (for illustration, a PyTorch sketch of the resulting network is given after step 4 below):
Step 3.1: a hazy image from the training dataset is input into the multi-scale residual network; a convolution operation with a 7*7 convolution kernel and stride 1 is applied to the hazy image, giving the first-layer output F1.
Step 3.2: the first-layer output F1 is first down-sampled with a 2*2 convolution kernel; the down-sampled result is fed into three residual-block groups built with convolution kernels of different sizes, named Group1, Group2 and Group3, yielding the results F2^1, F2^2 and F2^3 respectively.
Group1, Group2 and Group3 each contain 6 residual blocks, and each residual block comprises two convolutional layers, two normalization functions and one PReLU activation function;
The convolution kernel sizes of Group1, Group2 and Group3 are 1*1, 3*3 and 5*5 respectively, all with stride 1;
Step 3.3: F2^1 and F2^2 are concatenated, and the concatenated result is fed into the next convolutional layer, giving F3^1; likewise, F2^2 and F2^3 are concatenated, and the result is fed into the next convolutional layer, giving F3^2. The PyTorch concatenation functions used in this step are:
torch.cat((F2_1, F2_2), 1)
torch.cat((F2_2, F2_3), 1);
After F2^1 and F2^2, and F2^2 and F2^3, are concatenated respectively, the convolution kernel sizes of the following convolutional layers are 1*1 and 5*5 respectively, both with stride 1.
Step 3.4: F3^1, F2^2 and F3^2 are concatenated, and the concatenated result is fed into a 3*3 convolutional layer with stride 1; the output of this 3*3 convolutional layer is up-sampled with a 2*2 convolution kernel, and the result of the up-sampling is F4. The PyTorch concatenation function used in this step is:
torch.cat((F3_1, F2_2, F3_2), 1);
Step 3.5: a convolution operation with a 7*7 convolution kernel and stride 1 is applied to F4, giving F5; the tanh activation function is applied to F5, yielding the trained image dehazing model of the multi-scale residual network, which contains the non-linear relationship between hazy images and fog-free images;
Step 4: the image dehazing model trained on the multi-scale residual network is used to process hazy images of different concentrations, obtaining the clear fog-free images corresponding to the hazy images to be processed.
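For illustration only, a minimal PyTorch sketch of the network described in steps 3.1-3.5 is given below. The feature-channel width, the use of BatchNorm2d for the two normalization functions, the layer ordering inside the residual block, the padding values, and the use of a strided convolution and a transposed convolution for the 2*2 down-sampling and up-sampling are assumptions, not details fixed by the description above:

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    # Two convolutional layers, two normalizations and one PReLU (cf. Fig. 3).
    def __init__(self, channels, kernel_size):
        super().__init__()
        pad = kernel_size // 2
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size, stride=1, padding=pad),
            nn.BatchNorm2d(channels),
            nn.PReLU(),
            nn.Conv2d(channels, channels, kernel_size, stride=1, padding=pad),
            nn.BatchNorm2d(channels),
        )
    def forward(self, x):
        return x + self.body(x)

class MultiScaleResidualNet(nn.Module):
    # 5 single convolutional layers + 3 groups * 6 residual blocks * 2 convolutions = 41 convolutional layers.
    def __init__(self, channels=16):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 7, stride=1, padding=3)                     # step 3.1: 7*7 convolution
        self.down = nn.Conv2d(channels, channels, 2, stride=2)                         # step 3.2: 2*2 down-sampling
        self.group1 = nn.Sequential(*[ResidualBlock(channels, 1) for _ in range(6)])   # Group1, kernel 1*1
        self.group2 = nn.Sequential(*[ResidualBlock(channels, 3) for _ in range(6)])   # Group2, kernel 3*3
        self.group3 = nn.Sequential(*[ResidualBlock(channels, 5) for _ in range(6)])   # Group3, kernel 5*5
        self.fuse12 = nn.Conv2d(2 * channels, channels, 1, stride=1)                   # step 3.3: 1*1 after cat(F2_1, F2_2)
        self.fuse23 = nn.Conv2d(2 * channels, channels, 5, stride=1, padding=2)        # step 3.3: 5*5 after cat(F2_2, F2_3)
        self.fuse_all = nn.Conv2d(3 * channels, channels, 3, stride=1, padding=1)      # step 3.4: 3*3 convolution
        self.up = nn.ConvTranspose2d(channels, channels, 2, stride=2)                  # step 3.4: 2*2 up-sampling
        self.tail = nn.Conv2d(channels, 3, 7, stride=1, padding=3)                     # step 3.5: 7*7 convolution
    def forward(self, x):
        f1 = self.head(x)
        d = self.down(f1)
        f2_1, f2_2, f2_3 = self.group1(d), self.group2(d), self.group3(d)
        f3_1 = self.fuse12(torch.cat((f2_1, f2_2), 1))
        f3_2 = self.fuse23(torch.cat((f2_2, f2_3), 1))
        f4 = self.up(self.fuse_all(torch.cat((f3_1, f2_2, f3_2), 1)))
        return torch.tanh(self.tail(f4))                                               # step 3.5: tanh activation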
Specifically, the loss function Loss of the image dehazing model is calculated as:
Loss = 1.8 * loss_MAE + 1.6 * loss_MSE
where loss_MAE is the image mean absolute error and loss_MSE is the image mean squared error.
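For illustration, a minimal PyTorch sketch of this composite loss (the function name is illustrative):

import torch.nn.functional as F

def dehaze_loss(pred, target):
    loss_mae = F.l1_loss(pred, target)    # image mean absolute error (loss_MAE)
    loss_mse = F.mse_loss(pred, target)   # image mean squared error (loss_MSE)
    return 1.8 * loss_mae + 1.6 * loss_mse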
The loss function measures the difference between predicted values and true values; the error between the predicted value of a sample and the true value of its label is back-propagated to guide the learning and optimization of the network parameters.
The image dehazing process of the present invention is as follows: first, the hazy image is read; then the trained image dehazing model is loaded, and according to the non-linear relationship between hazy and fog-free images established in the model, the hazy image is restored to a fog-free image, finally yielding a clear fog-free image.
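For illustration, a minimal PyTorch sketch of this dehazing process; the checkpoint file name and the image-loading helper are purely illustrative assumptions, and MultiScaleResidualNet refers to the sketch given after step 4 above:

import torch

model = MultiScaleResidualNet()
model.load_state_dict(torch.load("dehaze_model.pth"))   # hypothetical checkpoint path
model.eval()
with torch.no_grad():
    hazy = load_hazy_image("hazy.png")                   # hypothetical helper returning a 1 x 3 x H x W tensor in [0, 1]
    dehazed = model(hazy)                                # corresponding fog-free image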
Specifically, the learning rate is set to 0.0001 and the maximum number of iterations to 10,000, and training is carried out under the PyTorch framework.
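For illustration, a minimal training-loop sketch under these settings; the Adam optimizer and the training-pair iterable are assumptions, and MultiScaleResidualNet and dehaze_loss refer to the sketches above:

import torch

model = MultiScaleResidualNet()
optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)   # learning rate 0.0001
for step, (hazy, clear) in enumerate(training_pairs):         # training_pairs: hypothetical iterable of (hazy, clear) batches
    if step >= 10000:                                         # maximum of 10,000 iterations
        break
    loss = dehaze_loss(model(hazy), clear)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()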
To verify the effectiveness and superiority of the present invention, 150 outdoor test images from the CHINAMM 2018 Dehaze competition dataset were chosen for verification. The test images are not included in the training of the model. Four images were selected for testing and compared with existing algorithms, specifically the dark channel prior dehazing algorithm, the Non-local image dehazing algorithm, the MSCNN algorithm, the color attenuation prior dehazing algorithm and the DehazeNet algorithm.
Fig. 4 compares the dehazing effect of the existing algorithms on the test images, Fig. 5 compares the dehazing effect of the method of the embodiment of the present invention on low-concentration hazy images, and Fig. 6 compares the dehazing effect of the method of the embodiment of the present invention on high-concentration hazy images. The present invention uses the peak signal-to-noise ratio (PSNR: Peak Signal to Noise Ratio) and structural similarity (SSIM: structural similarity) to measure the image dehazing ability of the various algorithms: a larger PSNR indicates that the output image is more similar to the original image, and the maximum SSIM is 1, with values closer to 1 indicating that the output image is more similar to the original image. Table 1 shows the average PSNR and SSIM over the test images.
Table 1. Comparison of the average PSNR and SSIM of the dehazing algorithms on the test images
As can be seen from Table 1, the average PSNR of the present invention is much larger than that of the other algorithms, and the average SSIM of the present invention is also larger than that of the other algorithms, demonstrating that the present invention achieves a better dehazing effect.
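For reference, the two metrics can be computed with scikit-image as follows (a minimal sketch; the channel_axis argument assumes scikit-image 0.19 or later):

from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(dehazed, reference):
    # Both images: H x W x 3 arrays with the same value range.
    psnr = peak_signal_noise_ratio(reference, dehazed)
    ssim = structural_similarity(reference, dehazed, channel_axis=-1)
    return psnr, ssim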
In conclusion the image defogging method of the embodiment of the present invention, which passes through, utilizes multiple dimensioned convolution kernel and depth residual error nerve Network can more effectively capture the more information of image, preferably the mist figure under processing various concentration and different scale;Pass through Sharp assembling loss function can preferably measure the predicted value of model and the inconsistent degree of true value, make the loss function of model It minimizes as far as possible, improves the robustness of model.
The image defogging method of the embodiment of the present invention is compared with other methods, to the amount of training data requirement under special scenes It is small, it can preferably handle the mist figure under various concentration and different scale.By the combination of multiple dimensioned convolution kernel and loss function, It solves the problems, such as that training data is less, better effect is obtained with less training data, is suitable for various concentration and different rulers Mist figure under degree, preferably the mist figure under processing various concentration and different scale.
Those of ordinary skill in the art will appreciate that the drawings are schematic diagrams of one embodiment, and the modules or processes in the drawings are not necessarily required for implementing the present invention.
From the above description of the embodiments, those skilled in the art can clearly understand that the present invention can be implemented by means of software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product can be stored in a storage medium, such as ROM/RAM, a magnetic disk or an optical disc, and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods described in the embodiments of the present invention or in certain parts of the embodiments.
The embodiments in this specification are described in a progressive manner; the same and similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the other embodiments. In particular, for device or system embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference may be made to the description of the method embodiments for the relevant parts. The device and system embodiments described above are merely illustrative; the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment. Those of ordinary skill in the art can understand and implement this without creative effort.
The foregoing is only a preferred embodiment of the present invention, but the scope of protection of the present invention is not limited thereto. Any change or substitution that can be easily conceived by a person skilled in the art within the technical scope disclosed by the present invention shall be covered by the scope of protection of the present invention. Therefore, the scope of protection of the present invention shall be subject to the scope of protection of the claims.

Claims (6)

1. An image dehazing method based on a multi-scale residual network, characterized by comprising:
Step 1: acquiring fog-free images under different scenes to form a fog-free image dataset;
Step 2: extracting the depth information of the fog-free images in the fog-free image dataset, applying haze of different concentrations to the fog-free images according to the depth information of the fog-free images to obtain hazy images, and forming a training dataset from each hazy image obtained from the fog-free images;
Step 3: constructing a multi-scale residual network, inputting the training dataset into the multi-scale residual network, training the multi-scale residual network, and obtaining a trained image dehazing model;
Step 4: inputting a hazy image to be processed into the image dehazing model, the image dehazing model outputting the fog-free image corresponding to the hazy image to be processed.
2. The method according to claim 1, characterized in that the multi-scale residual network constructed in step 3 has 41 convolutional layers, 1 down-sampling operation and 1 up-sampling operation, the 41 convolutional layers comprising 5 single convolutional layers and 18 residual blocks.
3. The method according to claim 2, characterized in that inputting the training dataset into the multi-scale residual network constructed in step 3, training the multi-scale residual network and obtaining the trained image dehazing model comprises:
Step 3.1: inputting a hazy image from the training dataset into the multi-scale residual network, and performing a convolution operation on the hazy image with a 7*7 convolution kernel with stride 1 to obtain the first-layer output F1;
Step 3.2: first down-sampling the first-layer output F1 with a 2*2 convolution kernel, then feeding the down-sampled result into three residual-block groups built with convolution kernels of different sizes, named Group1, Group2 and Group3, to obtain the results F2^1, F2^2 and F2^3 respectively;
Step 3.3: concatenating F2^1 and F2^2 and feeding the concatenated result into the next convolutional layer to obtain F3^1; likewise, concatenating F2^2 and F2^3 and feeding the result into the next convolutional layer to obtain F3^2; the PyTorch concatenation functions used in this step are:
torch.cat((F2_1, F2_2), 1)
torch.cat((F2_2, F2_3), 1);
Step 3.4: concatenating F3^1, F2^2 and F3^2 and feeding the concatenated result into a 3*3 convolutional layer with stride 1, then up-sampling the output of the 3*3 convolutional layer with a 2*2 convolution kernel, the result of the up-sampling being F4; the PyTorch concatenation function used in this step is:
torch.cat((F3_1, F2_2, F3_2), 1);
Step 3.5: performing a convolution operation on F4 with a 7*7 convolution kernel with stride 1 to obtain F5, and applying the tanh activation function to F5 to obtain the trained image dehazing model of the multi-scale residual network.
4. The method according to claim 3, characterized in that Group1, Group2 and Group3 each have 6 residual blocks, each residual block comprising two convolutional layers, two normalization functions and one PReLU activation function, and the convolution kernel sizes of Group1, Group2 and Group3 being 1*1, 3*3 and 5*5 respectively, all with stride 1.
5. The method according to claim 4, characterized in that after F2^1 and F2^2, and F2^2 and F2^3, are concatenated respectively in step 3.3, the convolution kernel sizes of the following convolutional layers are 1*1 and 5*5 respectively, both with stride 1.
6. The method according to any one of claims 1 to 5, characterized in that the loss function Loss of the image dehazing model is calculated as:
Loss = 1.8 * loss_MAE + 1.6 * loss_MSE
where loss_MAE is the image mean absolute error and loss_MSE is the image mean squared error.
CN201910015947.2A 2019-01-08 2019-01-08 Image defogging method based on multi-scale residual error network Active CN109859120B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910015947.2A CN109859120B (en) 2019-01-08 2019-01-08 Image defogging method based on multi-scale residual error network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910015947.2A CN109859120B (en) 2019-01-08 2019-01-08 Image defogging method based on multi-scale residual error network

Publications (2)

Publication Number Publication Date
CN109859120A true CN109859120A (en) 2019-06-07
CN109859120B CN109859120B (en) 2021-03-02

Family

ID=66894186

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910015947.2A Active CN109859120B (en) 2019-01-08 2019-01-08 Image defogging method based on multi-scale residual error network

Country Status (1)

Country Link
CN (1) CN109859120B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111667421A (en) * 2020-05-25 2020-09-15 武汉大学 Image defogging method
CN111772619A (en) * 2020-06-19 2020-10-16 厦门纳龙科技有限公司 Electrocardiogram heart beat identification method, terminal device and storage medium
CN111861923A (en) * 2020-07-21 2020-10-30 济南大学 Target identification method and system based on lightweight residual error network image defogging
CN112184577A (en) * 2020-09-17 2021-01-05 西安理工大学 Single image defogging method based on multi-scale self-attention generation countermeasure network
CN112365414A (en) * 2020-11-04 2021-02-12 天津大学 Image defogging method based on double-path residual convolution neural network
CN112488943A (en) * 2020-12-02 2021-03-12 北京字跳网络技术有限公司 Model training and image defogging method, device and equipment
CN112785517A (en) * 2021-01-08 2021-05-11 南京邮电大学 Image defogging method and device based on high-resolution representation
CN113962901A (en) * 2021-11-16 2022-01-21 中国矿业大学(北京) Mine image dust removing method and system based on deep learning network
CN114581861A (en) * 2022-03-02 2022-06-03 北京交通大学 Track area identification method based on deep learning convolutional neural network

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108230264A (en) * 2017-12-11 2018-06-29 华南农业大学 A kind of single image to the fog method based on ResNet neural networks
CN108376387A (en) * 2018-01-04 2018-08-07 复旦大学 Image deblurring method based on polymerization expansion convolutional network
CN108447036A (en) * 2018-03-23 2018-08-24 北京大学 A kind of low light image Enhancement Method based on convolutional neural networks
CN108564549A (en) * 2018-04-20 2018-09-21 福建帝视信息科技有限公司 A kind of image defogging method based on multiple dimensioned dense connection network
CN108710830A (en) * 2018-04-20 2018-10-26 浙江工商大学 A kind of intensive human body 3D posture estimation methods for connecting attention pyramid residual error network and equidistantly limiting of combination
CN108923984A (en) * 2018-07-16 2018-11-30 西安电子科技大学 Space-time video compress cognitive method based on convolutional network
CN109063710A (en) * 2018-08-09 2018-12-21 成都信息工程大学 Based on the pyramidal 3D CNN nasopharyngeal carcinoma dividing method of Analysis On Multi-scale Features
CN109146810A (en) * 2018-08-08 2019-01-04 国网浙江省电力有限公司信息通信分公司 A kind of image defogging method based on end-to-end deep learning

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108230264A (en) * 2017-12-11 2018-06-29 华南农业大学 A kind of single image to the fog method based on ResNet neural networks
CN108376387A (en) * 2018-01-04 2018-08-07 复旦大学 Image deblurring method based on polymerization expansion convolutional network
CN108447036A (en) * 2018-03-23 2018-08-24 北京大学 A kind of low light image Enhancement Method based on convolutional neural networks
CN108564549A (en) * 2018-04-20 2018-09-21 福建帝视信息科技有限公司 A kind of image defogging method based on multiple dimensioned dense connection network
CN108710830A (en) * 2018-04-20 2018-10-26 浙江工商大学 A kind of intensive human body 3D posture estimation methods for connecting attention pyramid residual error network and equidistantly limiting of combination
CN108923984A (en) * 2018-07-16 2018-11-30 西安电子科技大学 Space-time video compress cognitive method based on convolutional network
CN109146810A (en) * 2018-08-08 2019-01-04 国网浙江省电力有限公司信息通信分公司 A kind of image defogging method based on end-to-end deep learning
CN109063710A (en) * 2018-08-09 2018-12-21 成都信息工程大学 Based on the pyramidal 3D CNN nasopharyngeal carcinoma dividing method of Analysis On Multi-scale Features

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
CHRISTIAN LEDIG ET AL.: "Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network", 《ARXIV》 *
COSMIN ANCUTI ET AL.: "NTIRE 2018 Challenge on Image Dehazing: Methods and Results", 《2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS》 *
HE ZHANG ET AL.: "Multi-scale Single Image Dehazing using Perceptual Pyramid Deep Network", 《2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS》 *
HOU JIANG ET AL.: "Multi-Scale Residual Convolutional Neural Network for Haze Removal of Remote Sensing Images", 《REMOTE SENSING》 *
JUNCHENG LI ET AL.: "Multi-scale Residual Network for Image Super-Resolution", 《COMPUTER VISION – ECCV 2018》 *
KAIMING HE ET AL.: "Deep Residual Learning for Image Recognition", 《ARXIV》 *
MANJUN QIN ET AL.: "Dehazing for Multispectral Remote Sensing Images Based on a Convolutional Neural Network With the Residual Architecture", 《IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING》 *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111667421B (en) * 2020-05-25 2022-07-19 武汉大学 Image defogging method
CN111667421A (en) * 2020-05-25 2020-09-15 武汉大学 Image defogging method
CN111772619A (en) * 2020-06-19 2020-10-16 厦门纳龙科技有限公司 Electrocardiogram heart beat identification method, terminal device and storage medium
CN111861923A (en) * 2020-07-21 2020-10-30 济南大学 Target identification method and system based on lightweight residual error network image defogging
CN112184577A (en) * 2020-09-17 2021-01-05 西安理工大学 Single image defogging method based on multi-scale self-attention generation countermeasure network
CN112184577B (en) * 2020-09-17 2023-05-26 西安理工大学 Single image defogging method based on multiscale self-attention generation countermeasure network
CN112365414A (en) * 2020-11-04 2021-02-12 天津大学 Image defogging method based on double-path residual convolution neural network
CN112365414B (en) * 2020-11-04 2022-11-08 天津大学 Image defogging method based on double-path residual convolution neural network
CN112488943A (en) * 2020-12-02 2021-03-12 北京字跳网络技术有限公司 Model training and image defogging method, device and equipment
CN112488943B (en) * 2020-12-02 2024-02-02 北京字跳网络技术有限公司 Model training and image defogging method, device and equipment
CN112785517A (en) * 2021-01-08 2021-05-11 南京邮电大学 Image defogging method and device based on high-resolution representation
CN112785517B (en) * 2021-01-08 2023-03-14 南京邮电大学 Image defogging method and device based on high-resolution representation
CN113962901A (en) * 2021-11-16 2022-01-21 中国矿业大学(北京) Mine image dust removing method and system based on deep learning network
CN113962901B (en) * 2021-11-16 2022-08-23 中国矿业大学(北京) Mine image dust removing method and system based on deep learning network
CN114581861A (en) * 2022-03-02 2022-06-03 北京交通大学 Track area identification method based on deep learning convolutional neural network

Also Published As

Publication number Publication date
CN109859120B (en) 2021-03-02

Similar Documents

Publication Publication Date Title
CN109859120A (en) Image defogging method based on multiple dimensioned residual error network
Yang et al. Single image deraining: From model-based to data-driven and beyond
Ancuti et al. Ntire 2019 image dehazing challenge report
CN108230278B (en) Image raindrop removing method based on generation countermeasure network
CN110570363A (en) Image defogging method based on Cycle-GAN with pyramid pooling and multi-scale discriminator
CN110570371A (en) image defogging method based on multi-scale residual error learning
CN109146810A (en) A kind of image defogging method based on end-to-end deep learning
CN110414670A (en) A kind of image mosaic tampering location method based on full convolutional neural networks
CN111080567A (en) Remote sensing image fusion method and system based on multi-scale dynamic convolution neural network
CN105981050B (en) For extracting the method and system of face characteristic from the data of facial image
CN113450288B (en) Single image rain removing method and system based on deep convolutional neural network and storage medium
CN113344806A (en) Image defogging method and system based on global feature fusion attention network
CN112767466B (en) Light field depth estimation method based on multi-mode information
CN109801232A (en) A kind of single image to the fog method based on deep learning
CN108734675A (en) Image recovery method based on mixing sparse prior model
CN110738622A (en) Lightweight neural network single image defogging method based on multi-scale convolution
CN110782458B (en) Object image 3D semantic prediction segmentation method of asymmetric coding network
Guan et al. Srdgan: learning the noise prior for super resolution with dual generative adversarial networks
CN110634103A (en) Image demosaicing method based on generation of countermeasure network
CN115311186B (en) Cross-scale attention confrontation fusion method and terminal for infrared and visible light images
Qian et al. FAOD‐Net: a fast AOD‐Net for dehazing single image
CN113034388B (en) Ancient painting virtual repair method and construction method of repair model
CN113628143A (en) Weighted fusion image defogging method and device based on multi-scale convolution
CN117350923A (en) Panchromatic and multispectral remote sensing image fusion method based on GAN and transducer
CN116703750A (en) Image defogging method and system based on edge attention and multi-order differential loss

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant