CN110264423A - Method for enhancing the visual effect of images based on a fully convolutional network - Google Patents

Method for enhancing the visual effect of images based on a fully convolutional network

Info

Publication number
CN110264423A
Authority
CN
China
Prior art keywords
image
network model
post-processing
context
Prior art date
Legal status
Pending
Application number
CN201910534020.XA
Other languages
Chinese (zh)
Inventor
杨梦宁
李小斌
李亚涛
汪涵
向刚
Current Assignee
Chongqing Michiro Science And Technology Co Ltd
Original Assignee
Chongqing Michiro Science And Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Chongqing Michiro Science And Technology Co Ltd
Priority to CN201910534020.XA
Publication of CN110264423A
Current legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/90 - Dynamic range modification of images or parts thereof
    • G06T5/92 - Dynamic range modification of images or parts thereof based on global image properties
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10004 - Still image; Photographic image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a method for enhancing the visual effect of images based on a fully convolutional network. The steps include: S1, acquiring images, where all acquired high-definition images constitute an original image set; S2, post-processing the above images to enhance their visual effect and obtaining the processed images, where all processed images constitute a post-processed image set; S3, establishing a context aggregation network model based on full convolution, feeding the paired sequence of original and post-processed images into the network, training in a supervised manner, and updating the parameters of the context aggregation network model to obtain the trained network model; S4, inputting the image to be processed into the trained network model to obtain an image with enhanced visual effect. The method of the present invention is simple and effective; the network model used has the advantage of cross-resolution training and testing, requires a short training time, and places relatively low demands on hardware.

Description

Method for enhancing the visual effect of images based on a fully convolutional network
Technical field
The present invention relates to the technical field of image processing, and in particular to a method for enhancing the visual effect of images based on a fully convolutional network.
Background technique
As the economy and society continue to develop and science and technology keep advancing, the cost of acquiring and storing images keeps falling, and digital images and their processing have gradually penetrated every corner of our life and work. Digital image processing refers to the methods and techniques of processing images by computer, such as denoising, enhancement, restoration, segmentation, and feature extraction. Image enhancement is an important branch of digital image processing; its purpose is to improve the visual effect of an image for a given application.
Many methods are currently used to enhance images, such as the traditional bilateral filter and Gaussian filter operators, but they can only apply a uniform change to a whole image or a region of it, and are generally used as tools in portrait post-processing. Beyond this, many researchers at home and abroad have studied image enhancement with such operators. For example, Lore K G, Akintayo A, Sarkar S, et al. proposed a feature recognition method for low-light images based on a deep autoencoder, which adaptively brightens high-dynamic-range images without over-amplifying or saturating the brighter parts. Li Weikai, Wang Zhengxia, Jiang Wei, et al. established a new adaptive fractional-order partial differential image enhancement model with gradient and gray value as parameters. The model remedies the shortcoming of traditional algorithms in enhancing dark regions: the average gradient after enhancement is noticeably improved, and the visual effect of the image is improved well.
In recent years, deep learning has achieved many successes in image enhancement. For example, Tao L, Zhu C, and Xiang G proposed a CNN-based low-light image enhancement method in which the LLCNN neural network, designed with the residual idea, uses multi-scale feature maps to avoid the vanishing-gradient problem and is trained with an SSIM loss; experimental results show that it can adaptively enhance the contrast of low-light images.
Although the above methods can also enhance the visual effect of images, they all share a disadvantage: when processing images, especially images at 4K resolution and above, their hardware requirements are too high and training takes too long.
Summary of the invention
In view of the above problems in the prior art, the object of the present invention is to provide a method that is simple and enhances the visual effect of images well.
To achieve the above object, the present invention adopts the following technical scheme: a method for enhancing the visual effect of images based on a fully convolutional network, whose steps include:
S1: acquiring images, where all acquired original images constitute an original image set x, denoted x = {x_1, x_2, x_3, …, x_n};
S2: post-processing the above images to enhance their visual effect, without modifying the image composition, and obtaining the processed images; all processed images constitute a post-processed image set X, denoted X = {X_1, X_2, X_3, …, X_n}, where X_t is the post-processed version of x_t, t ∈ [1, n];
S3: establishing a context aggregation network model based on full convolution; during training of the context aggregation network model, the image sequence {<x_1, X_1>, <x_2, X_2>, …, <x_n, X_n>} composed of original images without post-processing and their corresponding post-processed images is taken as input, and the parameters of the context aggregation network model are updated in a supervised manner to obtain the trained network model;
S4: inputting the image to be processed into the trained network model to obtain an image with enhanced visual effect.
As an improvement, the post-processing in S2 refers to adjusting the color, saturation, contrast, and brightness of the image.
As an improvement, if the images acquired in S1 are portraits, the post-processing in S2 refers to adjusting the color, saturation, contrast, and brightness of the image and performing skin smoothing.
As an improvement, the context aggregation network model based on full convolution established in step S3 is as follows:
Suppose the context aggregation network model has d layers in total, denoted {L_0, L_1, …, L_d}, where the first layer L_0 and the last layer L_d have dimensions q × p × 3; the first layer L_0 represents the input image, the last layer L_d represents the output image, and q × p is the resolution;
Each intermediate layer L_s has dimensions q × p × w, with 1 ≤ s ≤ d−1, where w is the number of channels of each intermediate layer. The content of the intermediate layer L_s is computed from the previous layer L_{s−1} as shown in formula (1):
L_s^i = Φ(Ψ_s(b_s^i + Σ_j (L_{s−1}^j *_{r_s} K_s^{i,j}))) (1);
where L_s^i denotes the i-th channel of layer s, b_s^i denotes the i-th bias of layer s, and K_s^{i,j} denotes the j-th channel of the i-th convolution kernel of layer s; the operator *_{r_s} denotes dilated convolution with dilation rate r_s, which increases with depth: r_s = 2^{s−1} for 1 ≤ s ≤ d−2, and r_s = 1 for layer L_{d−1}; the output layer L_d uses three 1 × 1 convolution kernels to project the final feature layer into the RGB color space;
Φ is the LReLU activation function, as shown in formula (2):
LReLU(x) = max(αx, x), α = 0.2; (2);
where max is the maximum function;
Ψ_s is an adaptive normalization function, as shown in formula (3):
Ψ_s(x) = λ_s x + μ_s BN(x) (3);
where λ_s, μ_s ∈ R are weights learned through backpropagation of the neural network, and BN denotes batch normalization;
Image pairs, i.e., original images without post-processing and their corresponding post-processed images, are input into the context aggregation network model, the input traversing the original image set x and the post-processed image set X; the context aggregation network model performs backpropagation according to the loss function and updates the parameters of the context aggregation network model. Let the number of parameter updates of the context aggregation network model be T; the loss function is shown in formula (4):
ℓ = Σ_t (1/N_t) ||F(x_t) − X_t||² (4);
where x_t denotes an original image without post-processing input to the network model, X_t denotes the target image post-processed by a professional retoucher, x_t and X_t having the same resolution; N_t is the number of pixels of image x_t; and F(x_t) denotes the enhanced image obtained from the established neural network model.
Compared with the prior art, the present invention has at least the following advantages:
The method provided by the invention is simple and effective; the network model used has the advantage of cross-resolution training and testing, requires a short training time, and places relatively low demands on hardware. Experimental results show that the network model established by the method can effectively learn the mapping from untreated original images to target photos modified by professionals, and the trained network model can be used directly to enhance the visual effect of images.
Detailed description of the invention
Fig. 1 is the overall flowchart of the method of the present invention.
Fig. 2 is the main framework of the context aggregation network based on full convolution used in the method of the present invention.
Fig. 3, Fig. 4, and Fig. 5 are experimental result images of the method of the present invention.
Specific embodiment
In order to make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is described in more detail below with reference to the embodiments and the accompanying drawings.
The images described in the present invention refer to images at 4K resolution and above.
The present invention establishes a context aggregation network model based on a fully convolutional network and dilated convolution to enhance the visual effect of portrait photos. The high-definition original photos taken directly by the camera and the target photos obtained after professional retouching of these photos are used as the input of the network, and the network directly learns the mapping from the original images to the target photos. Because the network architecture is based on a fully convolutional network and dilated convolution, the network can train and test across resolutions: depending on the hardware available during training, the pictures can be downscaled by a factor of two or even three for training (or not scaled at all) and then tested at the original resolution.
Referring to Fig. 1 and Fig. 2, Fig. 2 shows the network architecture used in the method of the present invention. The first layer and the second-to-last layer are ordinary convolutions with 3 × 3 kernels; the layers from the second to the third-from-last are dilated convolutions; the last layer uses a 1 × 1 convolution kernel and directly projects the final feature layer linearly into the RGB color space. The network used in the method of the present invention has 9 layers in total.
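For illustration only, the following is a minimal PyTorch sketch of such a 9-layer context aggregation network, consistent with formulas (1) to (3) below; the channel width w = 24 and the module names are assumptions, since the patent does not fix them.

```python
import torch
import torch.nn as nn

class AdaptiveNorm(nn.Module):
    # Formula (3): Psi_s(x) = lambda_s * x + mu_s * BN(x), lambda_s and mu_s learned.
    def __init__(self, channels):
        super().__init__()
        self.lam = nn.Parameter(torch.tensor(1.0))
        self.mu = nn.Parameter(torch.tensor(0.0))
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x):
        return self.lam * x + self.mu * self.bn(x)

class CAN(nn.Module):
    # d = 9 convolutional layers: 3x3 convolutions with dilation rates
    # 1, 2, 4, ..., 64 (r_s = 2^(s-1)), then dilation 1, then a 1x1 projection
    # to RGB. The width w = 24 is an assumption, not fixed by the patent.
    def __init__(self, w=24):
        super().__init__()
        layers, in_ch = [], 3
        for rate in [1, 2, 4, 8, 16, 32, 64, 1]:
            layers += [
                nn.Conv2d(in_ch, w, kernel_size=3, padding=rate, dilation=rate),
                AdaptiveNorm(w),
                nn.LeakyReLU(0.2),  # formula (2): LReLU(x) = max(0.2x, x)
            ]
            in_ch = w
        layers.append(nn.Conv2d(w, 3, kernel_size=1))  # linear projection to RGB
        self.net = nn.Sequential(*layers)

    def forward(self, x):  # x: (N, 3, q, p), any resolution (fully convolutional)
        return self.net(x)
```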
A method for enhancing the visual effect of images based on a fully convolutional network, whose steps include:
S1: acquiring images, where all acquired original images constitute an original image set x, denoted x = {x_1, x_2, x_3, …, x_n}; the images may be professional portrait photographs taken by a photographer;
S2: post-processing the above images to enhance their visual effect, without modifying the image composition, and obtaining the processed images; all processed images constitute a post-processed image set X, denoted X = {X_1, X_2, X_3, …, X_n}, where X_t is the post-processed version of x_t, t ∈ [1, n]. Post-processing here refers to adjusting the color, saturation, contrast, and brightness of the image; if the images acquired in S1 are portraits, the post-processing in S2 also includes skin smoothing;
S3: establishing a context aggregation network (Context Aggregation Network) model based on full convolution; during training of the context aggregation network model, the image sequence {<x_1, X_1>, <x_2, X_2>, …, <x_n, X_n>} composed of original images without post-processing and their corresponding post-processed images is taken as input, and the parameters of the context aggregation network model are updated in a supervised manner to obtain the trained network model;
A fully convolutional network (Fully Convolutional Network, FCN) architecture converts the fully connected layers of a traditional CNN into convolutional layers, so that all layers are convolutional, hence the name fully convolutional network. One benefit of this network is that it can accept input images of arbitrary size, without requiring all training and test images to have the same size; in other words, it can train and test pictures across resolutions. On top of the fully convolutional network, the convolution operations are replaced with dilated convolutions. The main advantages of dilated convolution are that the number of kernel parameters does not change, which means the amount of computation does not change, while the receptive field increases greatly, making it possible to learn global information better. Dilated convolution was proposed to replace pooling layers: pooling loses information and reduces precision, yet simply removing pooling would shrink the receptive field and prevent learning global features, and enlarging the convolution kernels instead would certainly lead to a computational disaster; using dilated convolution is therefore the best choice.
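The receptive-field growth can be checked with a short calculation (a sketch; it follows from stacking 3 × 3 kernels with dilation rates r_s = 2^(s−1), each layer widening the receptive field by 2·r_s pixels):

```python
# Receptive field of a stack of 3x3 dilated convolutions with r_s = 2^(s-1).
def receptive_field(num_layers: int) -> int:
    rf = 1
    for s in range(1, num_layers + 1):
        rf += 2 * 2 ** (s - 1)  # each layer adds 2 * r_s pixels
    return rf

for n in range(1, 8):
    print(n, receptive_field(n))
# prints 3, 7, 15, 31, 63, 127, 255: exponential growth at a constant parameter count
```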
Specifically, the context aggregation network model based on full convolution established in step S3 is as follows:
Suppose the context aggregation network model has d layers in total, denoted {L_0, L_1, …, L_d}, where the first layer L_0 and the last layer L_d have dimensions q × p × 3; the first layer L_0 represents the input image, the last layer L_d represents the output image, and q × p is the resolution, i.e., the resolution of the input image;
The first layer and the second-to-last layer are ordinary convolutions; the layers from the second to the third-from-last are dilated convolutions; the last layer uses a 1 × 1 convolution kernel.
Each intermediate layer L_s has dimensions q × p × w, with 1 ≤ s ≤ d−1, where w is the number of channels of each intermediate layer, i.e., the number of feature maps. The content of the intermediate layer L_s is computed from the previous layer L_{s−1} as shown in formula (1):
L_s^i = Φ(Ψ_s(b_s^i + Σ_j (L_{s−1}^j *_{r_s} K_s^{i,j}))) (1);
where L_s^i denotes the i-th channel of layer s, b_s^i denotes the i-th bias of layer s, and K_s^{i,j} denotes the j-th channel of the i-th convolution kernel of layer s; in a specific implementation K_s^{i,j} can be a 3 × 3 convolution kernel whose number of channels equals that of the previous layer; the operator *_{r_s} denotes dilated convolution with dilation rate r_s, which increases with depth: r_s = 2^{s−1} for 1 ≤ s ≤ d−2, and r_s = 1 for layer L_{d−1}; the output layer L_d uses three 1 × 1 convolution kernels to project the final feature layer into the RGB color space;
Φ is the LReLU activation function, as shown in formula (2):
LReLU(x) = max(αx, x), α = 0.2; (2);
where max is the maximum function;
Ψ_s is an adaptive normalization function, as shown in formula (3):
Ψ_s(x) = λ_s x + μ_s BN(x) (3);
where λ_s, μ_s ∈ R are weights learned through backpropagation of the neural network, and BN denotes batch normalization (Batch Normalization);
Because the training is supervised, image pairs, i.e., original images without post-processing and their corresponding post-processed images, are input into the context aggregation network model, the input traversing the original image set x and the post-processed image set X; the context aggregation network model performs backpropagation according to the loss function and updates the parameters of the context aggregation network model. Let the number of parameter updates of the context aggregation network model be T, i.e., the parameters of the context aggregation network model are updated T times. When the context aggregation network model starts training, initial values must be assigned to its parameters; these initial values are usually empirical values. In order to make the trained model as effective as possible without overfitting, the inventors found through repeated experiments and data analysis that the number of parameter updates should be T = 180·n, i.e., n training images are input, the parameters are updated once per input picture, and this cycle is repeated 180 times, with the learning rate set to 0.0001. After T updates, the established context aggregation network model enhances the visual effect of processed images very well.
The parameters of the context aggregation network model are λ_s, μ_s, and the convolution kernels K_s^{i,j} with their biases b_s^i. The loss function is shown in formula (4):
ℓ = Σ_t (1/N_t) ||F(x_t) − X_t||² (4);
where x_t denotes an original image without post-processing input to the network model, X_t denotes the target image post-processed by a professional retoucher, x_t and X_t having the same resolution; N_t is the number of pixels of image x_t; and F(x_t) denotes the enhanced image obtained from the established neural network model.
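A minimal training-loop sketch of the above supervised update (illustrative only: the Adam optimizer and the data-handling details are assumptions not fixed by the patent):

```python
import torch
import torch.nn.functional as F

# model: the CAN sketch above; pairs: list of (x_t, X_t) tensors of shape
# (1, 3, q, p), where x_t is the original image and X_t the retouched target.
def train(model, pairs, epochs=180, lr=1e-4, device="cuda"):
    model = model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)  # optimizer choice is an assumption
    for _ in range(epochs):           # 180 cycles over n images: T = 180 * n updates
        for x_t, X_t in pairs:        # one parameter update per input picture
            x_t, X_t = x_t.to(device), X_t.to(device)
            loss = F.mse_loss(model(x_t), X_t)  # formula (4): per-pixel squared error
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```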
S4: the image to be processed is input into the trained network model to obtain an image with enhanced visual effect.
In the present invention, the model takes the sequence of original images without post-processing and their corresponding post-processed images {<x_1, X_1>, <x_2, X_2>, …, <x_n, X_n>} as input and learns the mapping from original images to target images; this mapping is the trained network model. A new image without post-processing, taken directly by the camera, is then input into the trained model to obtain the corresponding network-processed image.
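A corresponding inference sketch under the same assumptions; because the network is fully convolutional, the new image can be processed at full resolution even if training used downscaled images:

```python
import torch

@torch.no_grad()
def enhance(model, image):  # image: (1, 3, H, W) float tensor in [0, 1]
    model.eval()
    return model(image).clamp(0.0, 1.0)  # clip to the valid intensity range
```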
Experimental testing
Experimental data set
The experimental data set consists of 800 ultra-high-definition portrait photos and the 800 corresponding ultra-high-definition portraits retouched by a professional retoucher, which are learned from directly; 80 photos are used for testing. The resolution is 4K or above, and the number of network layers is d = 9.
Evaluation indices
The present invention uses the two most common objective indices for image enhancement: peak signal-to-noise ratio PSNR (Peak Signal-to-Noise Ratio) and structural similarity SSIM (Structural Similarity Index).
Peak signal-to-noise ratio PSNR (dB) is a pixel-domain evaluation method; it is simple to compute and is currently the most common and most widely used objective image quality index. It is an error-sensitivity-based image quality evaluation built on the error between corresponding pixels, calculated as shown in (a):
PSNR = 10 · log10((2^n − 1)² / MSE) (a);
where MSE is the mean square error between the image to be evaluated X and the reference target image Y, a measure of the degree of difference between the estimator and the quantity being estimated, calculated as shown in (b):
MSE = (1/(M·N)) Σ_{i=1}^{M} Σ_{j=1}^{N} [f′(i, j) − f(i, j)]² (b);
where f′(i, j) is the image to be evaluated, f(i, j) is the reference target image, and M and N are the length and width of the image. In formula (a), n is the number of bits per pixel, generally 8, i.e., 256 gray levels. The unit of PSNR is dB, and a larger value indicates less distortion.
The above is the calculation for grayscale images. For color images, there are usually three methods: (1) compute the PSNR of the three RGB channels separately and take the average; (2) compute the MSE of the three RGB channels and then take the average; (3) convert the picture to YCbCr format and compute the PSNR of the Y component, i.e., the luminance component, only. The second and third methods are more common; the method of the present invention uses the third.
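A sketch of the third method (Y-channel PSNR), assuming 8-bit images and BT.601 luma coefficients:

```python
import numpy as np

def psnr_y(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """PSNR on the luminance (Y) channel of two uint8 RGB images (formulas (a), (b))."""
    def luma(img):
        r, g, b = img[..., 0], img[..., 1], img[..., 2]
        return 0.299 * r + 0.587 * g + 0.114 * b  # BT.601 luma
    a = luma(img_a.astype(np.float64))
    b = luma(img_b.astype(np.float64))
    mse = np.mean((a - b) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no distortion
    return 10.0 * np.log10(255.0 ** 2 / mse)
```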
Structural similarity SSIM is a full-reference image quality evaluation index that measures image similarity in terms of brightness, contrast, and structure. The calculation formula is shown in (c):
SSIM(x, y) = [l(x, y)]^α [c(x, y)]^β [s(x, y)]^γ (c);
where l(x, y) is the luminance comparison, c(x, y) is the contrast comparison, and s(x, y) is the structure comparison; their formulas are shown in (d), (e), (f):
l(x, y) = (2 μ_x μ_y + c1) / (μ_x² + μ_y² + c1) (d);
c(x, y) = (2 σ_x σ_y + c2) / (σ_x² + σ_y² + c2) (e);
s(x, y) = (σ_xy + c3) / (σ_x σ_y + c3) (f);
where x is the target picture, y is the test picture, μ_x and μ_y are the means of x and y, σ_x and σ_y are the standard deviations of x and y, and σ_xy is the covariance of x and y. c1, c2, and c3 are constants that prevent a zero denominator from causing a system error. In general, α, β, and γ can each be set to 1 and c3 = c2/2, which reduces SSIM to formula (g):
SSIM(x, y) = ((2 μ_x μ_y + c1)(2 σ_xy + c2)) / ((μ_x² + μ_y² + c1)(σ_x² + σ_y² + c2)) (g);
SSIM is a number between 0 and 1; the larger it is, the smaller the gap between the output image and the undistorted image.
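A single-window sketch of formula (g), assuming 8-bit grayscale inputs and the customary constants c1 = (0.01·255)² and c2 = (0.03·255)²; practical implementations usually compute SSIM over local windows and average the result:

```python
import numpy as np

def ssim_global(x: np.ndarray, y: np.ndarray) -> float:
    """Simplified SSIM of formula (g), computed once over a whole grayscale image."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * 255) ** 2
    c2 = (0.03 * 255) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return (((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2))
            / ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))
```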
Experimental results and analysis
The present invention uses 800 ultra-high-definition portrait photos and the 800 corresponding ultra-high-definition portraits retouched by a professional retoucher as network input, learns from them directly, and tests with 80 photos.
The learning rate is set to 0.0001. Considering learning time and hardware constraints, the original photos and the post-processed target photos are reduced three-fold in both length and width before being input into the network for learning; training runs for 180 cycles and takes 44 hours in total.
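A sketch of the three-fold reduction applied before training (the use of OpenCV and the interpolation method are assumptions; the patent does not specify a library):

```python
import cv2  # assumption: OpenCV used for scaling

# Downscale a 4K+ training photo three-fold in each dimension; testing can still
# run at the original resolution because the network is fully convolutional.
def downscale_for_training(img):  # img: HxWx3 uint8 array
    h, w = img.shape[:2]
    return cv2.resize(img, (w // 3, h // 3), interpolation=cv2.INTER_AREA)
```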
For error evaluation, the most commonly used PSNR and SSIM are used for the final image assessment, evaluating the error between the photos obtained by the network and the target photos obtained after post-processing by a professional retoucher. Table 1 lists the error values of ten of the photos and the average over all test photos.
Table 1: Comparison between photos obtained by the method of the present invention and the target photos
Image_name PSNR SSIM
1B8A0269.jpg 31.36042957 0.929473715
1B8A0438.jpg 27.28562706 0.950078301
1B8A3917.jpg 31.04367569 0.967865515
5L1B5130.jpg 25.49614188 0.882033029
DSC_1224.jpg 26.87598072 0.946764911
DSC_1390.jpg 24.85806285 0.934533647
DSC_2123.jpg 34.89320932 0.948763413
DSC_2260.jpg 29.35932286 0.878251359
DSC_5842.jpg 29.88075312 0.892711911
DSC_7659.jpg 28.75876102 0.981277603
…… …… ……
Average 27.98119641 0.93117534
From the table above, the average PSNR is 27.98, showing that the photos obtained by the network are still in very high definition and not distorted; the average SSIM is 0.9312, showing that the network can accurately learn the mapping between original images and target photos.
Since PSNR and SSIM only compute the numerical difference between images, they involve almost no human perceptual factors and do not consider visual redundancy in images; the human eye has certain thresholds for image distortion, and when the distortion is below the threshold humans cannot perceive it, which can make subjective evaluation results differ considerably from these scores. Therefore, Fig. 3 compares the original image, the target image, and the result image, where the original image is the unmodified photo obtained directly from camera imaging, the target image is the photo obtained after post-processing by a professional retoucher, and the result image is the network-modified photo obtained by using the original image as the network input.
Fig. 3, Fig. 4, and Fig. 5 show that the result images obtained by processing the original images with the method of the present invention are visually almost indistinguishable from the target images obtained after processing by a professional retoucher. The brightness, color, contrast, and skin-smoothing effect are similar to those of the target images, demonstrating that the method of the present invention can effectively enhance the visual effect of pictures.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that the technical solution of the present invention may be modified or equivalently replaced without departing from the spirit and scope of the technical solution of the present invention, all of which should be covered by the scope of the claims of the present invention.

Claims (4)

1. A method for enhancing the visual effect of images based on a fully convolutional network, characterized in that the steps include:
S1: acquiring images, where all acquired original images constitute an original image set x, denoted x = {x_1, x_2, x_3, …, x_n};
S2: post-processing the above images to enhance their visual effect, without modifying the image composition, and obtaining the processed images; all processed images constitute a post-processed image set X, denoted X = {X_1, X_2, X_3, …, X_n}, where X_t is the post-processed version of x_t, t ∈ [1, n];
S3: establishing a context aggregation network model based on full convolution; during training of the context aggregation network model, the image sequence {<x_1, X_1>, <x_2, X_2>, …, <x_n, X_n>} composed of original images without post-processing and their corresponding post-processed images is taken as input, and the parameters of the context aggregation network model are updated in a supervised manner to obtain the trained network model;
S4: inputting the image to be processed into the trained network model to obtain an image with enhanced visual effect.
2. The method for enhancing the visual effect of images based on a fully convolutional network according to claim 1, characterized in that the post-processing in S2 refers to adjusting the color, saturation, contrast, and brightness of the image.
3. The method for enhancing the visual effect of images based on a fully convolutional network according to claim 1, characterized in that if the images acquired in S1 are portraits, the post-processing in S2 refers to adjusting the color, saturation, contrast, and brightness of the image and performing skin smoothing.
4. The method for enhancing the visual effect of images based on a fully convolutional network according to claim 1, 2, or 3, characterized in that the context aggregation network model based on full convolution established in step S3 is as follows:
suppose the context aggregation network model has d layers in total, denoted {L_0, L_1, …, L_d}, where the first layer L_0 and the last layer L_d have dimensions q × p × 3; the first layer L_0 represents the input image, the last layer L_d represents the output image, and q × p is the resolution;
each intermediate layer L_s has dimensions q × p × w, with 1 ≤ s ≤ d−1, where w is the number of channels of each intermediate layer; the content of the intermediate layer L_s is computed from the previous layer L_{s−1} as shown in formula (1):
L_s^i = Φ(Ψ_s(b_s^i + Σ_j (L_{s−1}^j *_{r_s} K_s^{i,j}))) (1);
where L_s^i denotes the i-th channel of layer s, b_s^i denotes the i-th bias of layer s, and K_s^{i,j} denotes the j-th channel of the i-th convolution kernel of layer s; the operator *_{r_s} denotes dilated convolution with dilation rate r_s, which increases with depth: r_s = 2^{s−1} for 1 ≤ s ≤ d−2, and r_s = 1 for layer L_{d−1}; the output layer L_d uses three 1 × 1 convolution kernels to project the final feature layer into the RGB color space;
Φ is the LReLU activation function, as shown in formula (2):
LReLU(x) = max(αx, x), α = 0.2; (2);
where max is the maximum function;
Ψ_s is an adaptive normalization function, as shown in formula (3):
Ψ_s(x) = λ_s x + μ_s BN(x) (3);
where λ_s, μ_s ∈ R are weights learned through backpropagation of the neural network, and BN denotes batch normalization;
image pairs, i.e., original images without post-processing and their corresponding post-processed images, are input into the context aggregation network model, the input traversing the original image set x and the post-processed image set X; the context aggregation network model performs backpropagation according to the loss function and updates the parameters of the context aggregation network model; let the number of parameter updates of the context aggregation network model be T; the loss function is shown in formula (4):
ℓ = Σ_t (1/N_t) ||F(x_t) − X_t||² (4);
where x_t denotes an original image without post-processing input to the network model, X_t denotes the target image post-processed by a professional retoucher, x_t and X_t having the same resolution; N_t is the number of pixels of image x_t; and F(x_t) denotes the enhanced image obtained from the established neural network model.
CN201910534020.XA 2019-06-19 2019-06-19 Method for enhancing the visual effect of images based on a fully convolutional network Pending CN110264423A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910534020.XA CN110264423A (en) 2019-06-19 2019-06-19 Method for enhancing the visual effect of images based on a fully convolutional network


Publications (1)

Publication Number Publication Date
CN110264423A (en) 2019-09-20

Family

ID=67919540

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910534020.XA Pending CN110264423A (en) 2019-06-19 2019-06-19 Method for enhancing the visual effect of images based on a fully convolutional network

Country Status (1)

Country Link
CN (1) CN110264423A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111445437A (en) * 2020-02-25 2020-07-24 杭州火烧云科技有限公司 Method, system and equipment for processing image by skin processing model constructed based on convolutional neural network
CN113139909A (en) * 2020-01-19 2021-07-20 杭州喔影网络科技有限公司 Image enhancement method based on deep learning
CN115100061A (en) * 2022-06-28 2022-09-23 重庆长安汽车股份有限公司 Image beautifying method, device, equipment and medium


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108364269A (en) * 2018-03-08 2018-08-03 深圳市唯特视科技有限公司 A kind of whitepack photo post-processing method based on intensified learning frame
CN109191389A (en) * 2018-07-31 2019-01-11 浙江杭钢健康产业投资管理有限公司 A kind of x-ray image adaptive local Enhancement Method
CN109740586A (en) * 2018-12-19 2019-05-10 南京华科和鼎信息科技有限公司 A kind of anti-dazzle certificate automatic reading system and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
QIFENG CHEN ET AL.: "Fast Image Processing with Fully-Convolutional Networks", 2017 IEEE International Conference on Computer Vision *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190920