CN109325928A - Image reconstruction method, apparatus and device - Google Patents
Image reconstruction method, apparatus and device
- Publication number
- CN109325928A (application number CN201811188731.8A)
- Authority
- CN
- China
- Prior art keywords
- image
- training sample
- training
- convolutional neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Abstract
The embodiments of the present invention provide an image reconstruction method, apparatus and device. The method comprises: obtaining an image to be processed, where the image to be processed is a low-quality image whose image quality does not satisfy a preset condition; and inputting the image to be processed into a pre-trained convolutional neural network model to obtain a reconstructed image corresponding to the image to be processed, where the reconstructed image is a high-quality image whose image quality satisfies the preset condition. The trained convolutional neural network model is obtained by training with multiple first training samples and, for each first training sample, a second training sample obtained by applying preset processing to it, where each first training sample is a high-quality image whose image quality satisfies the preset condition and each second training sample is a low-quality image whose image quality does not satisfy the preset condition. In this way, the complexity of the computation process can be reduced.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to an image reconstruction method, apparatus and device.
Background art
Owing to past limitations of software and hardware, the production, transmission and storage of multimedia files such as pictures and videos can degrade their quality. Typical examples are the distortion introduced by outdated photographic equipment and the compression artifacts introduced by video transcoding. Nowadays, major Internet companies repeatedly republish each other's multimedia files, and the resulting repeated compression of pictures and videos further degrades their quality. Meanwhile, as software and hardware continue to improve, people want higher-quality multimedia files, for example high-definition remasters of classic films and games, and super-resolution of low-resolution videos.
In the prior art, image reconstruction eliminates the influence of multiple factors on image quality, and each factor must be handled separately when the image quality is to be improved. For example, to eliminate the influence of both noise and compression on image quality, the image is first denoised by filtering and then decompressed in a manner corresponding to the compression mode; or the image is first decompressed in a manner corresponding to the compression mode and then denoised by filtering.
However, in the course of implementing the present invention, the inventor found that the prior art has at least the following problem: when processing an image whose quality has been affected by multiple factors, handling each factor separately makes the computation process complex.
Summary of the invention
The purpose of the embodiments of the present invention is to provide an image reconstruction method, apparatus and device that reduce the complexity of the computation process. The specific technical solution is as follows:
In a first aspect, an embodiment of the present invention provides an image reconstruction method, comprising:
obtaining an image to be processed, where the image to be processed is a low-quality image whose image quality does not satisfy a preset condition;
inputting the image to be processed into a pre-trained convolutional neural network model to obtain a reconstructed image corresponding to the image to be processed, where the reconstructed image is a high-quality image whose image quality satisfies the preset condition; the trained convolutional neural network model is obtained by training with multiple first training samples and, for each first training sample, a second training sample obtained by applying preset processing to it; each first training sample is a high-quality image whose image quality satisfies the preset condition, and each second training sample is a low-quality image whose image quality does not satisfy the preset condition.
Optionally, after obtaining the image to be processed, the method further comprises:
dividing the image to be processed into multiple small images to be processed, where the aspect ratio of each small image to be processed is a preset aspect ratio;
and inputting the image to be processed into the pre-trained convolutional neural network model to obtain the reconstructed image corresponding to the image to be processed comprises:
inputting the multiple small images to be processed into the convolutional neural network model to obtain multiple reconstructed small images corresponding to the multiple small images to be processed;
and stitching the multiple reconstructed small images to obtain the reconstructed image corresponding to the image to be processed.
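The optional split-and-stitch flow can be sketched in a few lines. The nested-list image representation, the 2 × 2 tile size and the identity stand-in for the pre-trained model are all assumptions for illustration; the patent does not fix concrete values.

```python
# Minimal sketch of the split-and-stitch step: divide the image to be
# processed into small images, run each through the model, stitch back.

def split_into_tiles(image, tile_h, tile_w):
    """Divide an image (list of rows) into tiles of tile_h x tile_w."""
    tiles = []
    for top in range(0, len(image), tile_h):
        for left in range(0, len(image[0]), tile_w):
            tile = [row[left:left + tile_w] for row in image[top:top + tile_h]]
            tiles.append((top, left, tile))
    return tiles

def stitch_tiles(tiles, height, width):
    """Reassemble reconstructed tiles into a full image."""
    out = [[0] * width for _ in range(height)]
    for top, left, tile in tiles:
        for r, row in enumerate(tile):
            out[top + r][left:left + len(row)] = row
    return out

# Stand-in for the pre-trained CNN: here it just returns its input.
reconstruct_tile = lambda tile: tile

image = [[r * 4 + c for c in range(4)] for r in range(4)]
tiles = [(t, l, reconstruct_tile(tile)) for t, l, tile in split_into_tiles(image, 2, 2)]
restored = stitch_tiles(tiles, 4, 4)
```

With the identity stand-in, stitching the tiles back reproduces the input exactly; in the real scheme each tile would be replaced by its reconstructed counterpart before stitching.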
Optionally, the convolutional neural network model is obtained by the following training steps:
obtaining multiple first training samples;
converting, by deterioration processing, each of the multiple first training samples into a corresponding second training sample;
and inputting each first training sample and its corresponding second training sample into a preset convolutional neural network model, and training the preset convolutional neural network model to obtain the trained convolutional neural network model.
Optionally, after obtaining the multiple first training samples, the method further comprises:
dividing each first training sample into multiple first training small images, where the aspect ratio of each first training small image is the preset aspect ratio;
converting, by the deterioration processing, each of the multiple first training samples into the corresponding second training sample comprises:
converting, by the deterioration processing, the multiple first training small images corresponding to each first training sample into corresponding second training small images;
and inputting each first training sample and its corresponding second training sample into the preset convolutional neural network model and training it comprises:
inputting the first training small images corresponding to each first training sample, together with the second training small image corresponding to each first training small image, into the preset convolutional neural network model, and training the preset convolutional neural network model to obtain the trained convolutional neural network model.
Optionally, after converting, by the deterioration processing, the multiple first training samples into the corresponding second training samples, the method further comprises:
for each first training sample, applying enhancement processing to the first training sample and its corresponding second training sample;
and inputting each first training sample and its corresponding second training sample into the preset convolutional neural network model and training it comprises:
inputting the enhanced first training samples and second training samples into the preset convolutional neural network model, and training the preset convolutional neural network model to obtain the trained convolutional neural network model.
Optionally, the preset convolutional neural network model comprises multiple convolutional layers and one deconvolution layer; alternatively,
the preset convolutional neural network model comprises multiple residual units and one deconvolution layer, where each residual unit consists of two convolutional layers and one skip connection.
Optionally, the deterioration processing comprises downsampling and the addition of multiple different forms of noise, the different forms of noise including Gaussian noise, salt-and-pepper noise and coding noise.
In a second aspect, an embodiment of the present invention provides an image reconstruction apparatus, comprising:
a first obtaining module, configured to obtain an image to be processed, where the image to be processed is a low-quality image whose image quality does not satisfy a preset condition;
and a determining module, configured to input the image to be processed into a pre-trained convolutional neural network model to obtain a reconstructed image corresponding to the image to be processed, where the reconstructed image is a high-quality image whose image quality satisfies the preset condition; the trained convolutional neural network model is obtained by training with multiple first training samples and, for each first training sample, a second training sample obtained by applying preset processing to it; each first training sample is a high-quality image whose image quality satisfies the preset condition, and each second training sample is a low-quality image whose image quality does not satisfy the preset condition.
Optionally, the apparatus further comprises:
a first segmentation module, configured to divide the image to be processed into multiple small images to be processed, where the aspect ratio of each small image to be processed is a preset aspect ratio;
and the determining module comprises:
a determining submodule, configured to input the multiple small images to be processed into the convolutional neural network model to obtain multiple reconstructed small images corresponding to the multiple small images to be processed;
and a stitching submodule, configured to stitch the multiple reconstructed small images to obtain the reconstructed image corresponding to the image to be processed.
Optionally, the apparatus further comprises:
a second obtaining module, configured to obtain multiple first training samples;
a conversion module, configured to convert, by deterioration processing, each of the multiple first training samples into a corresponding second training sample;
and a training module, configured to input each first training sample and its corresponding second training sample into a preset convolutional neural network model, and to train the preset convolutional neural network model to obtain the trained convolutional neural network model.
Optionally, the apparatus further comprises a second segmentation module, configured to divide each first training sample into multiple first training small images, where the aspect ratio of each first training small image is the preset aspect ratio;
the conversion module is specifically configured to convert, by the deterioration processing, the multiple first training small images corresponding to each first training sample into corresponding second training small images;
and the training module is specifically configured to input the first training small images corresponding to each first training sample, together with the second training small image corresponding to each first training small image, into the preset convolutional neural network model, and to train the preset convolutional neural network model to obtain the trained convolutional neural network model.
Optionally, the apparatus further comprises an enhancement module, configured to apply, for each first training sample, enhancement processing to the first training sample and its corresponding second training sample;
and the training module is specifically configured to input the enhanced first training samples and second training samples into the preset convolutional neural network model, and to train the preset convolutional neural network model to obtain the trained convolutional neural network model.
Optionally, the preset convolutional neural network model comprises multiple convolutional layers and one deconvolution layer; alternatively,
the preset convolutional neural network model comprises multiple residual units and one deconvolution layer, where each residual unit consists of two convolutional layers and one skip connection.
Optionally, the deterioration processing comprises downsampling and the addition of multiple different forms of noise, the different forms of noise including Gaussian noise, salt-and-pepper noise and coding noise.
In a third aspect, an embodiment of the present invention provides an image reconstruction device, comprising a processor, a communication interface, a memory and a communication bus, where the processor, the communication interface and the memory communicate with one another through the communication bus;
the memory is configured to store a computer program;
and the processor is configured to implement the method steps of the first aspect when executing the program stored in the memory.
In another aspect of the present invention, a computer-readable storage medium is further provided, the computer-readable storage medium storing instructions that, when run on a computer, cause the computer to execute the method steps of the first aspect.
In yet another aspect, an embodiment of the present invention further provides a computer program product comprising instructions that, when run on a computer, cause the computer to execute the method steps of the first aspect.
With the image reconstruction method, apparatus and device provided by the embodiments of the present invention, an image to be processed can be obtained, where the image to be processed is a low-quality image whose image quality does not satisfy a preset condition; the image to be processed is input into a pre-trained convolutional neural network model to obtain a reconstructed image corresponding to the image to be processed, where the reconstructed image is a high-quality image whose image quality satisfies the preset condition. The trained convolutional neural network model is obtained by training with multiple first training samples and, for each first training sample, a second training sample obtained by applying preset processing to it; each first training sample is a high-quality image whose image quality satisfies the preset condition, and each second training sample is a low-quality image whose image quality does not satisfy the preset condition. In this way, when an image whose quality has been affected by multiple factors is processed, the pre-trained convolutional neural network model handles all the factors at the same time and converts the low-quality image into a high-quality image, which reduces the complexity of the computation process.
Brief description of the drawings
In order to explain the technical solutions of the embodiments of the present invention or of the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below.
Fig. 1 is a flowchart of an image reconstruction method provided by an embodiment of the present invention;
Fig. 2 is a flowchart of training a convolutional neural network model provided by an embodiment of the present invention;
Fig. 3(a) is a structural diagram of a preset convolutional neural network model provided by an embodiment of the present invention;
Fig. 3(b) is another structural diagram of a preset convolutional neural network model provided by an embodiment of the present invention;
Fig. 4 is a schematic diagram of the training process of a convolutional neural network model provided by an embodiment of the present invention;
Fig. 5 is another flowchart of an image reconstruction method provided by an embodiment of the present invention;
Fig. 6 is a schematic flowchart of a specific embodiment provided by an embodiment of the present invention;
Fig. 7(a) is a schematic diagram of an image to be processed in an embodiment of the present invention;
Fig. 7(b) is a schematic diagram of a reconstructed image in an embodiment of the present invention;
Fig. 8 is a structural schematic diagram of an image reconstruction apparatus provided by an embodiment of the present invention;
Fig. 9 is a structural schematic diagram of an image reconstruction device provided by an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below with reference to the drawings in the embodiments of the present invention.
As software and hardware continue to improve, people want access to higher-quality multimedia files, such as high-definition remasters of classic films and games and super-resolution of low-resolution videos. These demands have not yet been fully met, for two main reasons. First, the categories and degrees of quality loss in real multimedia files are difficult to measure with a definite mathematical formula and can only be roughly distinguished by assumption. Second, human perception of the high quality of multimedia files carries some uncertainty, and there is considerable disagreement in identifying high quality.
The low quality of an image may be caused by multiple factors, such as the influence of noise, the resolution limits of the capture device itself, and the influence of compression on image information.
In the prior art, image reconstruction eliminates the influence of multiple factors on image quality, and each factor must be handled separately when the image quality is to be improved. For example, to eliminate the influence of both noise and compression on image quality, the image is first denoised by filtering and then decompressed in a manner corresponding to the compression mode; or the image is first decompressed in a manner corresponding to the compression mode and then denoised by filtering. When an image whose quality has been affected by multiple factors is processed, handling each factor separately makes the computation process complex.
To reduce the computational complexity, an embodiment of the present invention provides an image reconstruction method. A convolutional neural network model is trained in advance. When an image needs to be handled, the image to be processed is obtained; it is a low-quality image, degraded by multiple factors, whose image quality does not satisfy a preset condition. The reconstructed image corresponding to the image to be processed can then be obtained directly through the convolutional neural network model. That is, during image reconstruction, the multiple factors that lower the image quality are handled simultaneously by the convolutional neural network model, which reduces the computational complexity.
The image reconstruction method provided by the embodiments of the present invention is described in detail below.
The image reconstruction method provided by the embodiments of the present invention can be applied to an electronic device. Specifically, the electronic device may be a smartphone, a tablet computer, a laptop or a desktop computer.
An embodiment of the present invention provides an image reconstruction method, as shown in Fig. 1, comprising:
S101: obtaining an image to be processed.
The image to be processed is a low-quality image whose image quality does not satisfy a preset condition.
The image to be processed may be a low-quality image caused by multiple factors, that is, factors that affect image quality. Specifically, these may include several of downsampling, Gaussian blur, Gaussian noise, salt-and-pepper noise and coding noise, where coding noise may be noise caused by JPEG compression and the like.
The preset condition may be that the resolution reaches a preset threshold; specifically, the preset threshold can be determined according to the actual situation, for example 4K resolution. Alternatively, the preset condition may be that the sharpness reaches a preset standard, which can likewise be determined according to the actual situation, or that the richness of detail satisfies a condition, and so on.
Specifically, the image to be processed may be an image whose quality is to be improved, such as an image whose resolution is to be raised, a frame of a classic film shot with obsolete equipment, an image to be enhanced, or an image from which noise is to be removed.
When the image quality needs to be improved, the electronic device obtains the image to be processed.
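As a small illustration of the resolution-based preset condition mentioned above, the check might look as follows. The 4K threshold (3840 × 2160) is only the example value given in the text; sharpness- or detail-based conditions would need different checks.

```python
# Hypothetical check for the preset condition: an image counts as high
# quality only if its resolution reaches the preset threshold.

PRESET_W, PRESET_H = 3840, 2160   # example 4K threshold (assumed values)

def satisfies_preset_condition(width, height):
    return width >= PRESET_W and height >= PRESET_H

def is_low_quality(width, height):
    """An image to be processed is one that fails the preset condition."""
    return not satisfies_preset_condition(width, height)
```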
S102: inputting the image to be processed into a pre-trained convolutional neural network model to obtain a reconstructed image corresponding to the image to be processed.
The reconstructed image is a high-quality image whose image quality satisfies the preset condition.
The trained convolutional neural network model is obtained by training with multiple first training samples and, for each first training sample, a second training sample obtained by applying preset processing to it, where each first training sample is a high-quality image whose image quality satisfies the preset condition and each second training sample is a low-quality image whose image quality does not satisfy the preset condition. The preset processing applied to a first training sample to obtain the second training sample may be the addition of noise, compression, blurring and so on.
The convolutional neural network model is trained in advance, so that the trained model learns the mapping from low-quality images to high-quality images. The image to be processed is then input into the convolutional neural network model, and the corresponding reconstructed image is obtained. The training process of the convolutional neural network model is described in detail in the following embodiments and is not repeated here.
In this embodiment of the present invention, when an image whose quality has been affected by multiple factors is processed, the pre-trained convolutional neural network model handles all the factors at the same time and converts the low-quality image into a high-quality image, which reduces the complexity of the computation process.
In the embodiment of the present invention, the image to be processed is handled by the pre-trained convolutional neural network model to obtain the corresponding reconstructed image. To make the scheme clearer, on the basis of the above embodiments, the present invention also provides the steps by which the convolutional neural network model is obtained by training. Specifically, as shown in Fig. 2, the steps may comprise:
S201: obtaining multiple first training samples.
A first training sample is a high-quality image whose image quality satisfies the preset condition. For example, high-definition 4K photographic data can be obtained as the first training samples.
S202: converting, by deterioration processing, the multiple first training samples into corresponding second training samples.
Specifically, the deterioration processing may include downsampling and the addition of multiple different forms of noise, including Gaussian noise, salt-and-pepper noise and coding noise.
Applying deterioration processing to a first training sample can be understood as simulating the various influences an image may be subject to, such as noise, compression and blur, for example simulating repeated compression artifacts; that is, simulating the lower-quality images to be processed that arise in practical applications.
A second training sample is a low-quality image whose image quality does not satisfy the preset condition.
Specifically, degradation processing may be applied to each first training sample as follows:
(1) downsample the first training sample by a factor of 2;
(2) add global Gaussian noise with parameter sigma in the range 1-10 to the image obtained in (1);
(3) add salt-and-pepper noise to 20% of the pixels of the image obtained in (2), the salt-and-pepper noise taking random values of 180-255 on the RGB channels;
(4) apply a Gaussian blur with a 3 × 3 kernel and strength 0.5-1.5 to the image obtained in (3);
(5) apply JPEG compression to the image obtained in (4): with probability 0.6 at quality 40-70, with probability 0.3 at quality 20-40, and with probability 0.1 at quality 10-20.
Here, JPEG compression is applied 1-3 times according to the image-quality distribution of the application scenario, simulating the JPEG blocking artifacts of real low-quality images; the Gaussian noise and salt-and-pepper noise, together with the Gaussian blur, simulate the film grain of real old movies.
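As a concrete illustration, the five degradation steps can be sketched in NumPy as below. The block-average downsampling, the exact noise draws, and the choice to represent JPEG re-encoding only by drawing a quality level (rather than round-tripping through a real encoder such as Pillow's) are simplifying assumptions of this sketch, not details taken from the embodiment.

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade(img):
    """Turn a high-quality RGB image (H, W, 3, uint8) into a low-quality
    second training sample, following steps (1)-(5) above."""
    h, w, c = img.shape
    # (1) 2x downsampling, here by averaging 2x2 blocks
    small = img[:h - h % 2, :w - w % 2].astype(float)
    small = small.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))
    # (2) global Gaussian noise, sigma drawn from [1, 10]
    small += rng.normal(0.0, rng.uniform(1, 10), small.shape)
    # (3) salt-and-pepper noise on 20% of the pixels, values 180-255 per RGB channel
    mask = rng.random(small.shape[:2]) < 0.2
    small[mask] = rng.integers(180, 256, (int(mask.sum()), c))
    # (4) separable 3x3 Gaussian blur, strength (sigma) drawn from [0.5, 1.5]
    s = rng.uniform(0.5, 1.5)
    k = np.exp(-np.arange(-1.0, 2.0) ** 2 / (2 * s * s))
    k /= k.sum()
    pad = np.pad(small, ((1, 1), (0, 0), (0, 0)), mode="edge")
    small = sum(k[i] * pad[i:i + small.shape[0]] for i in range(3))
    pad = np.pad(small, ((0, 0), (1, 1), (0, 0)), mode="edge")
    small = sum(k[j] * pad[:, j:j + small.shape[1]] for j in range(3))
    # (5) draw a JPEG quality band: 40-70 (p=0.6), 20-40 (p=0.3), 10-20 (p=0.1)
    lo, hi = [(40, 70), (20, 40), (10, 20)][rng.choice(3, p=[0.6, 0.3, 0.1])]
    quality = int(rng.integers(lo, hi + 1))
    return np.clip(small, 0, 255).astype(np.uint8), quality
```

Applied to a 48 × 48 first training sample, this yields a 24 × 24 degraded counterpart together with the quality level at which a full pipeline would then JPEG-compress it.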
S203: input each first training sample and its corresponding second training sample into a preset convolutional neural network model, train the preset convolutional neural network model, and obtain a trained convolutional neural network model.
By training the preset convolutional neural network model, the model learns the mapping from low-quality images to high-quality images.
Specifically, for a sample pair, i.e. a first training sample and its corresponding second training sample, input to the preset convolutional neural network model, the first training sample can be understood as the reference ground truth. The preset convolutional neural network model may contain parameters to be fitted; by feeding sample pairs into the model, these parameters are adjusted so that the model's output for the second training sample approaches the first training sample as closely as possible. For example, when the cost function between the model's output and the first training sample converges, the parameters are fixed, and the preset model containing these determined parameters is the trained convolutional neural network model. The parameters to be fitted may include the number of hidden layers, the number of hidden-layer neurons, the batch size, the learning rate, and/or the number of iterations, etc.
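The fit-until-the-cost-converges loop described above can be illustrated with a deliberately tiny stand-in: a single linear layer in place of the CNN, trained by gradient descent on the mean-squared error between its output for the degraded sample and the ground-truth reference. The learning rate, tolerance, and iteration cap are illustrative assumptions.

```python
import numpy as np

def train(pairs, lr=0.1, tol=1e-6, max_iter=5000):
    """Fit parameters (here just w, b) so that the output for each degraded
    sample approaches its reference, stopping once the cost stops decreasing.
    `pairs` is a list of (reference, degraded) arrays."""
    w, b = 1.0, 0.0                       # parameters to be fitted
    prev = float("inf")
    for _ in range(max_iter):
        grad_w = grad_b = cost = 0.0
        for hq, lq in pairs:
            err = (w * lq + b) - hq       # model output vs. ground truth
            cost += (err ** 2).mean()
            grad_w += 2 * (err * lq).mean()
            grad_b += 2 * err.mean()
        if prev - cost < tol:             # cost function has converged
            break
        prev = cost
        w -= lr * grad_w / len(pairs)
        b -= lr * grad_b / len(pairs)
    return w, b
```

With pairs generated by a known degradation, the fitted parameters recover the inverse mapping, which is exactly the role the trained CNN plays at full scale.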
To improve the performance of the trained convolutional neural network model, in one implementation of the embodiment of the present invention, the preset convolutional neural network model may include multiple convolutional layers and one deconvolution layer.
The convolutional layers extract features of neighboring pixels and mainly determine the image-quality-enhancement function; the deconvolution layer learns the mapping from low-resolution to high-resolution representations and mainly determines the super-resolution function.
Specifically, as shown in Fig. 3(a), the preset convolutional neural network model includes six convolutional layers (conv_layer) and one deconvolution layer, each convolutional layer being followed by an activation layer (conv_relu_layer). The input feeds into the convolutional layers, the six convolutional layers are connected in sequence and then to the deconvolution layer, and the model's output for the second training sample is compared against the first training sample; training is complete when this difference converges, e.g. reaches a minimum.
In another implementation, as shown in Fig. 3(b), the preset convolutional neural network model may include multiple residual units and one deconvolution layer, each residual unit consisting of two convolutional layers (res_conv_layer) and one skip ("direct") connection. This model likewise compares its output for the second training sample against the first training sample, and training is complete when the difference converges, e.g. reaches a minimum.
A preset neural network model of this structure is intended to learn the difference between the low-quality and high-quality images, rather than a direct mapping from low-quality to high-quality; at the same depth, it improves training speed compared with the structure of Fig. 3(a).
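The skip connection of Fig. 3(b) can be made concrete with a minimal single-channel sketch; the kernels, padding scheme, and ReLU placement here are illustrative assumptions rather than the embodiment's exact layers.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def conv3x3(x, k):
    """'Same' 3x3 convolution on a single-channel image via edge padding;
    a stand-in for one res_conv_layer."""
    p = np.pad(x, 1, mode="edge")
    return sum(k[i, j] * p[i:i + x.shape[0], j:j + x.shape[1]]
               for i in range(3) for j in range(3))

def residual_unit(x, k1, k2):
    """Two convolutional layers plus the direct (skip) connection: the unit
    outputs its input plus a learned correction, so training only has to fit
    the difference between low- and high-quality images."""
    return x + conv3x3(relu(conv3x3(x, k1)), k2)
```

Because the input is added back unchanged, a unit whose second kernel is all zeros is exactly the identity, which is what makes the difference-learning formulation easy to optimize.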
In this way, once an image to be processed is obtained, its corresponding reconstructed image can be produced directly by the convolutional neural network model. When processing an image whose quality has been degraded by multiple factors, all of those factors are handled at once and the low-quality image is converted into a high-quality one, which reduces the complexity of the computation. Performing super-resolution reconstruction and denoising with a convolutional neural network (CNN) model is more widely applicable and has fewer side effects than conventional methods. By realistically simulating the noise of the application scenario, such as the repeated-compression noise of JPEG coding and the film grain of old movies, a good denoising effect is achieved.
In an optional embodiment of the present invention, in order to improve training speed, after step S201 (obtaining multiple first training samples), the method may further include:
splitting each first training sample into multiple first training patches,
where the aspect ratio of a first training patch is a preset aspect ratio,
which may be 4/3, 16/9, 1, etc.
Specifically, a first training sample may be split according to its size into multiple first training patches, e.g. patches of size N × N. As a simple illustration, a first training sample of size 48 × 48 is split into 144 first training patches of size 4 × 4.
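The 48 × 48 → 144 patches of 4 × 4 split above is a straightforward reshape; a sketch, assuming the side lengths divide evenly by the patch size:

```python
import numpy as np

def split_into_patches(img, n):
    """Split an (H, W, C) image into non-overlapping n x n patches,
    returned in row-major order as an array of shape (H*W/n^2, n, n, C)."""
    h, w, c = img.shape
    return (img.reshape(h // n, n, w // n, n, c)
               .transpose(0, 2, 1, 3, 4)
               .reshape(-1, n, n, c))
```

The same routine, applied before degradation, produces the first training patches; applied after, it produces their second-training-patch counterparts.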
Accordingly, step S202 (converting the multiple first training samples into corresponding second training samples through degradation processing) may include:
converting, through the degradation processing, the multiple first training patches of each first training sample into corresponding second training patches.
The processing of a first training patch is similar to the processing of a first training sample in step S202 above and is not repeated here; the difference is that the first training patches are processed individually, yielding a second training patch for each first training patch.
Step S203 (inputting each first training sample and its corresponding second training sample into the preset convolutional neural network model, training it, and obtaining the trained convolutional neural network model) may include:
inputting the first training patches of each first training sample, together with the second training patch corresponding to each first training patch, into the preset convolutional neural network model and training it to obtain the trained convolutional neural network model.
The training process is as in step S203 of the above embodiment and is not repeated here.
In short, each first training sample is split into multiple first training patches; degradation processing is applied to the first training patches to obtain a second training patch for each; and each first training patch together with its corresponding second training patch is input as a training pair into the preset convolutional neural network model for training, yielding the trained convolutional neural network model, as shown in Fig. 4.
In addition, in order to improve the robustness of the convolutional neural network model, in an optional embodiment of the present invention, after step S202 (converting the multiple first training samples into corresponding second training samples through degradation processing), the method may further include:
for each first training sample, applying enhancement (augmentation) processing to the first training sample and its corresponding second training sample.
The enhancement processing may include flipping, translating, etc., the first training sample and its corresponding second training sample.
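Crucially, the same random transform must be applied to both members of a pair so they stay aligned. A minimal sketch; the shift range is an illustrative assumption, and a cyclic shift stands in for translation here:

```python
import numpy as np

def augment_pair(hq, lq, rng):
    """Apply identical random flips and a small translation to a
    high-quality sample and its degraded counterpart."""
    if rng.random() < 0.5:                    # horizontal flip
        hq, lq = hq[:, ::-1], lq[:, ::-1]
    if rng.random() < 0.5:                    # vertical flip
        hq, lq = hq[::-1], lq[::-1]
    dy, dx = rng.integers(-2, 3, size=2)      # translation (cyclic shift)
    hq = np.roll(hq, (dy, dx), axis=(0, 1))
    lq = np.roll(lq, (dy, dx), axis=(0, 1))
    return hq, lq
```

Since both arrays receive the same draws, any spatial correspondence between reference and degraded sample is preserved.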
Accordingly, step S203 (inputting each first training sample and its corresponding second training sample into the preset convolutional neural network model, training it, and obtaining the trained convolutional neural network model) may include:
inputting the enhanced first and second training samples into the preset convolutional neural network model and training it to obtain the trained convolutional neural network model.
The training process is similar to the embodiment of Fig. 2 and is not repeated here.
In this way, the convolutional neural network model obtained by training in the embodiment of the present invention can still handle an image to be processed that has been flipped or translated, improving the robustness of the image reconstruction process of the embodiment of the present invention.
On the basis of the above embodiments, in order to improve computation speed, in an optional embodiment of the present invention, as shown in Fig. 5, after step S101 (obtaining the image to be processed), the method may further include:
S103: splitting the image to be processed into multiple patches to be processed.
The aspect ratio of a patch to be processed is the preset aspect ratio, which corresponds to the pre-trained convolutional neural network model; that is, the preset aspect ratio matches the aspect ratio of images the pre-trained model can handle. Specifically, the preset aspect ratio may be 4/3, 16/9, etc.
Specifically, the image to be processed may be split according to its size into multiple patches of size N × N; or, to eliminate seams in subsequent stitching, redundancy may be included, e.g. splitting the image into patches of size (N + k) × (N + k), where N and k can be determined according to actual needs. As a simple illustration, an image to be processed of size 48 × 48 is split into 144 patches of size 4 × 4; or, with added redundancy k = 2, into 64 patches of size 6 × 6.
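A sketch of the redundant split: each patch carries an N × N core plus a k-pixel border of overlap used to hide seams at stitching time. The borders at the image edge are padded with zeros, matching step S602 of the embodiment below; this tiling produces (H/N)·(W/N) patches, e.g. 144 overlapping 6 × 6 patches for a 48 × 48 image with N = 4 and k = 2.

```python
import numpy as np

def split_with_overlap(img, n, k):
    """Cut an (H, W, C) image into (n+k) x (n+k) patches whose n x n cores
    tile the image; the k-pixel border is redundancy for seamless stitching.
    Image edges are zero-padded."""
    h, w, c = img.shape
    a, b = k // 2, k - k // 2
    padded = np.pad(img, ((a, b), (a, b), (0, 0)))   # zero padding
    return np.stack([padded[y:y + n + k, x:x + n + k]
                     for y in range(0, h, n)
                     for x in range(0, w, n)])
```

After reconstruction, cropping the k border off each output patch before stitching removes the redundant pixels and with them any boundary artifacts.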
Accordingly, step S102 (inputting the image to be processed into the pre-trained convolutional neural network model to obtain the reconstructed image corresponding to the image to be processed) may include:
S1021: inputting the multiple patches to be processed into the convolutional neural network model to obtain the multiple reconstructed patches respectively corresponding to the multiple patches to be processed.
The image to be processed is a low-quality image whose image quality does not meet the preset condition, so the patches to be processed are likewise low-quality images. Inputting the patches into the convolutional neural network model yields their corresponding reconstructed patches, which are high-quality images whose image quality meets the preset condition.
S1022: stitching the multiple reconstructed patches to obtain the reconstructed image corresponding to the image to be processed.
The position in the image to be processed of the patch corresponding to each reconstructed patch is known, so the reconstructed patches are stitched according to those positions. For example, if a patch occupies position 1 in the image to be processed, its corresponding reconstructed patch is placed at position 1 in the reconstructed image.
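The position-preserving stitch of S1022 is the inverse of the split; a sketch for non-overlapping patches in row-major order (with redundant patches, the k border would first be cropped from each):

```python
import numpy as np

def stitch(patches, rows, cols):
    """Reassemble reconstructed patches of shape (rows*cols, n, n, C) into a
    full (rows*n, cols*n, C) image, placing each patch at the position its
    source patch occupied."""
    n, c = patches.shape[1], patches.shape[3]
    return (patches.reshape(rows, cols, n, n, c)
                   .transpose(0, 2, 1, 3, 4)
                   .reshape(rows * n, cols * n, c))
```

The reshape/transpose pair simply undoes the split, so no per-patch bookkeeping beyond row-major order is needed.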
By splitting the image to be processed into patches and then processing each patch with the convolutional neural network model to obtain its corresponding reconstructed patch, the computation speed can be improved.
The embodiment is aimed at low-quality scenarios. The embodiment of the present invention can upscale the image to be processed to high definition at 2× and remove most common noise, such as compression noise and blur. The embodiment of the present invention can also be carried over from image processing to video processing, and can be applied to other fields such as image enhancement and image compression. It is mainly applicable to video denoising, including scenarios such as 1080p-to-4K conversion, high-definition remastering of old movies, and cover-image quality enhancement. In an optional embodiment of the present invention, when applied to image denoising, the convolutional layers of the convolutional neural network model may process the image to be processed, after which the resulting reconstructed image is scaled so that the processed image has the same size as the original.
In a specific embodiment, as shown in Fig. 6:
S601: obtain the image to be processed.
S602: zero-pad the edges of the image to be processed, then split the image.
S603: obtain multiple patches to be processed.
S604: input the multiple patches to be processed into the pre-trained convolutional neural network model to obtain the reconstructed patch corresponding to each patch.
S605: stitch the multiple reconstructed patches to obtain the stitched full image, i.e. the reconstructed image corresponding to the image to be processed.
Specifically, the reconstruction effect is shown in Fig. 7(a) and Fig. 7(b): Fig. 7(a) is a schematic diagram of the image to be processed and Fig. 7(b) a schematic diagram of the reconstructed image. To the naked eye, Fig. 7(b) is clearer than Fig. 7(a), i.e. Fig. 7(b) has higher image quality.
The embodiment of the present invention can be applied to denoising scenarios; compared with traditional filtering-based denoising, the method provided by the embodiment is more widely applicable and more effective. In 4K film production from lower-resolution sources, the effect is especially pronounced in scenarios with distinct lines and color blocks, such as animation, while other scenarios are guaranteed to be free of side effects. In the restoration of old movies, it currently performs well on film grain and video-coding compression noise, and for video compression it can achieve high-quality recovery of downsampled footage.
The embodiment of the present invention also provides an image reconstruction apparatus, as shown in Fig. 8, including:
a first obtaining module 801 for obtaining an image to be processed, the image to be processed being a low-quality image whose image quality does not meet a preset condition;
a determining module 802 for inputting the image to be processed into a pre-trained convolutional neural network model to obtain the reconstructed image corresponding to the image to be processed, where the reconstructed image is a high-quality image whose image quality meets the preset condition; the trained convolutional neural network model is obtained by training with multiple first training samples and the second training sample obtained from each first training sample through preset processing, a first training sample being a high-quality image whose image quality meets the preset condition and a second training sample being a low-quality image whose image quality does not meet the preset condition.
In the embodiment of the present invention, an image whose quality has been degraded by multiple factors can be processed by the pre-trained convolutional neural network model, addressing all factors at once and converting the low-quality image into a high-quality one, thereby reducing the complexity of the computation.
Optionally, the apparatus further includes:
a first segmentation module for splitting the image to be processed into multiple patches to be processed, the aspect ratio of a patch to be processed being a preset aspect ratio;
and the determining module 802 includes:
a determining submodule for inputting the multiple patches to be processed into the convolutional neural network model to obtain the multiple reconstructed patches respectively corresponding to the multiple patches to be processed;
a stitching submodule for stitching the multiple reconstructed patches to obtain the reconstructed image corresponding to the image to be processed.
Optionally, the apparatus further includes:
a second obtaining module for obtaining multiple first training samples;
a conversion module for converting the multiple first training samples into corresponding second training samples through degradation processing;
a training module for inputting each first training sample and its corresponding second training sample into a preset convolutional neural network model and training the preset convolutional neural network model to obtain a trained convolutional neural network model.
Optionally, the apparatus further includes a second segmentation module for splitting each first training sample into multiple first training patches, the aspect ratio of a first training patch being the preset aspect ratio;
the conversion module is specifically configured to convert, through degradation processing, the multiple first training patches of each first training sample into corresponding second training patches;
the training module is specifically configured to input the first training patches of each first training sample and the second training patch corresponding to each first training patch into the preset convolutional neural network model and train it to obtain the trained convolutional neural network model.
Optionally, the apparatus further includes an enhancement module for applying, for each first training sample, enhancement processing to the first training sample and its corresponding second training sample;
the training module is specifically configured to input the enhanced first and second training samples into the preset convolutional neural network model and train it to obtain the trained convolutional neural network model.
Optionally, the preset convolutional neural network model includes multiple convolutional layers and one deconvolution layer; or,
the preset convolutional neural network model includes multiple residual units and one deconvolution layer, a residual unit consisting of two convolutional layers and one skip connection.
Optionally, the degradation processing includes downsampling and the addition of noise in a variety of forms, the noise in a variety of forms including Gaussian noise, salt-and-pepper noise, and coding noise.
It should be noted that the image reconstruction apparatus provided by the embodiment of the present invention is an apparatus applying the above image reconstruction method, so all embodiments of the above image reconstruction method apply to the apparatus and achieve the same or similar beneficial effects.
The embodiment of the present invention also provides an image reconstruction device, as shown in Fig. 9, including a processor 901, a communication interface 902, a memory 903, and a communication bus 904, where the processor 901, the communication interface 902, and the memory 903 communicate with each other via the communication bus 904.
The memory 903 is configured to store a computer program;
the processor 901, when executing the program stored in the memory 903, implements the method steps of the above image reconstruction method.
In the embodiment of the present invention, an image whose quality has been degraded by multiple factors can be processed by the pre-trained convolutional neural network model, addressing all factors at once and converting the low-quality image into a high-quality one, thereby reducing the complexity of the computation.
The communication bus mentioned for the above image reconstruction device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For convenience it is represented by only one thick line in the figure, which does not mean there is only one bus or one type of bus.
The communication interface is used for communication between the above image reconstruction device and other devices.
The memory may include random access memory (RAM) and may also include non-volatile memory, for example at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The above processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In another embodiment provided by the present invention, a computer-readable storage medium is also provided, in which instructions are stored; when run on a computer, the instructions cause the computer to execute the method steps of the above image reconstruction method.
In the embodiment of the present invention, an image whose quality has been degraded by multiple factors can be processed by the pre-trained convolutional neural network model, addressing all factors at once and converting the low-quality image into a high-quality one, thereby reducing the complexity of the computation.
In another embodiment provided by the present invention, a computer program product containing instructions is also provided; when run on a computer, it causes the computer to execute the method steps of the above image reconstruction method.
In the embodiment of the present invention, an image whose quality has been degraded by multiple factors can be processed by the pre-trained convolutional neural network model, addressing all factors at once and converting the low-quality image into a high-quality one, thereby reducing the complexity of the computation.
In the above embodiments, implementation may be wholly or partly by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wired means (e.g. coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless means (e.g. infrared, radio, microwave). The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server or data center integrating one or more usable media. The usable medium may be a magnetic medium (e.g. floppy disk, hard disk, magnetic tape), an optical medium (e.g. DVD), or a semiconductor medium (e.g. a solid-state disk (SSD)), etc.
It should be noted that, herein, relational terms such as first and second are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes it.
The embodiments in this specification are described in a related manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, for the apparatus, device, computer-readable storage medium, and computer program product embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and relevant parts may be found in the partial description of the method embodiments.
The foregoing is merely a description of preferred embodiments of the present invention and is not intended to limit the protection scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention is included in the protection scope of the present invention.
Claims (15)
1. An image reconstruction method, characterized by comprising:
obtaining an image to be processed, the image to be processed being a low-quality image whose image quality does not meet a preset condition;
inputting the image to be processed into a pre-trained convolutional neural network model to obtain a reconstructed image corresponding to the image to be processed, wherein the reconstructed image is a high-quality image whose image quality meets the preset condition; the trained convolutional neural network model is obtained by training with multiple first training samples and a second training sample obtained from each first training sample through preset processing, the first training sample being a high-quality image whose image quality meets the preset condition and the second training sample being a low-quality image whose image quality does not meet the preset condition.
2. The method according to claim 1, characterized in that, after the obtaining an image to be processed, the method further comprises:
splitting the image to be processed into multiple patches to be processed, the aspect ratio of a patch to be processed being a preset aspect ratio;
and the inputting the image to be processed into the pre-trained convolutional neural network model to obtain the reconstructed image corresponding to the image to be processed comprises:
inputting the multiple patches to be processed into the convolutional neural network model to obtain multiple reconstructed patches respectively corresponding to the multiple patches to be processed;
stitching the multiple reconstructed patches to obtain the reconstructed image corresponding to the image to be processed.
3. The method according to claim 1, characterized in that the step of training to obtain the convolutional neural network model comprises:
obtaining multiple first training samples;
converting the multiple first training samples into corresponding second training samples through degradation processing;
inputting each first training sample and its corresponding second training sample into a preset convolutional neural network model and training the preset convolutional neural network model to obtain a trained convolutional neural network model.
4. The method according to claim 3, characterized in that, after the obtaining multiple first training samples, the method further comprises:
splitting each first training sample into multiple first training patches, the aspect ratio of a first training patch being the preset aspect ratio;
the converting the multiple first training samples into corresponding second training samples through degradation processing comprises:
converting, through the degradation processing, the multiple first training patches corresponding to each first training sample into corresponding second training patches;
and the inputting each first training sample and its corresponding second training sample into the preset convolutional neural network model and training the preset convolutional neural network model to obtain the trained convolutional neural network model comprises:
inputting the first training patches corresponding to each first training sample and the second training patch corresponding to each first training patch into the preset convolutional neural network model and training the preset convolutional neural network model to obtain the trained convolutional neural network model.
5. The method according to claim 3, characterized in that, after the converting the multiple first training samples into corresponding second training samples through degradation processing, the method further comprises:
for each first training sample, applying enhancement processing to the first training sample and its corresponding second training sample;
and the inputting each first training sample and its corresponding second training sample into the preset convolutional neural network model and training the preset convolutional neural network model to obtain the trained convolutional neural network model comprises:
inputting the enhanced first training sample and second training sample into the preset convolutional neural network model and training the preset convolutional neural network model to obtain the trained convolutional neural network model.
6. The method according to claim 3, wherein the preset convolutional neural network model comprises multiple convolutional layers and one deconvolution layer; or,
the preset convolutional neural network model comprises multiple residual units and one deconvolution layer, each residual unit consisting of two convolutional layers and one skip-connection (direct-connected) layer.
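The residual unit of the second variant can be illustrated without any deep-learning framework: two filtering layers plus an identity path added back onto the input. A single-channel numpy sketch, assuming odd kernel sizes and a ReLU between the two layers (the claim does not specify the nonlinearity); `conv2d_same` computes the cross-correlation CNNs use:

```python
import numpy as np

def conv2d_same(x, kernel):
    """Single-channel 2-D cross-correlation with zero padding ('same' size)."""
    kh, kw = kernel.shape
    pad = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(x, dtype=np.float64)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * pad[i:i + x.shape[0], j:j + x.shape[1]]
    return out

def residual_unit(x, k1, k2):
    """Two convolutional layers plus an identity 'direct-connected'
    (skip) path, as in the claimed residual unit: y = x + f(x)."""
    return x + conv2d_same(np.maximum(conv2d_same(x, k1), 0), k2)
```

The final deconvolution layer of the claimed model would then upsample the feature map back to the target resolution; it is omitted here to keep the sketch short.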
7. The method according to any one of claims 3 to 6, wherein the degradation processing comprises downsampling and adding multiple forms of noise, the multiple forms of noise comprising Gaussian noise, salt-and-pepper noise, and coding noise.
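The degradation processing just defined can be sketched directly: downsample, then inject Gaussian and salt-and-pepper noise. A minimal numpy version; the parameter values (`scale`, `sigma`, `sp_ratio`) are illustrative, strided slicing stands in for whatever resampling filter is actually used, and coding noise (e.g. JPEG compression artifacts, which would require an image codec) is omitted:

```python
import numpy as np

def degrade(patch, scale=2, sigma=10.0, sp_ratio=0.01, rng=None):
    """Degrade a high-quality patch into a low-quality one:
    downsample, then add Gaussian and salt-and-pepper noise."""
    if rng is None:
        rng = np.random.default_rng(0)
    # Downsample by striding (stand-in for any resampling filter).
    low = patch[::scale, ::scale].astype(np.float64)
    # Additive Gaussian noise.
    low += rng.normal(0.0, sigma, low.shape)
    # Salt-and-pepper noise: force a small fraction of pixels to 0 or 255.
    mask = rng.random(low.shape)
    low[mask < sp_ratio / 2] = 0.0
    low[mask > 1 - sp_ratio / 2] = 255.0
    return np.clip(low, 0, 255).astype(np.uint8)
```

Applying several noise forms to the same sample makes the trained model robust to mixed real-world degradations rather than to a single one.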
8. An image reconstruction apparatus, comprising:
a first obtaining module, configured to obtain an image to be processed, the image to be processed being a low-quality image whose image quality does not satisfy a preset condition; and
a determining module, configured to input the image to be processed into a pre-trained convolutional neural network model to obtain a reconstructed image corresponding to the image to be processed, wherein the reconstructed image is a high-quality image whose image quality satisfies the preset condition;
wherein the trained convolutional neural network model is obtained by training on a plurality of first training samples and, for each first training sample, a second training sample obtained by applying preset processing to that first training sample; the first training samples are high-quality images whose image quality satisfies the preset condition, and the second training samples are low-quality images whose image quality does not satisfy the preset condition.
9. The apparatus according to claim 8, further comprising:
a first segmentation module, configured to segment the image to be processed into a plurality of patches to be processed, each patch to be processed having a preset aspect ratio;
wherein the determining module comprises:
a determining submodule, configured to input the plurality of patches to be processed into the convolutional neural network model to obtain a plurality of reconstructed patches corresponding to the plurality of patches to be processed; and
a splicing submodule, configured to splice the plurality of reconstructed patches into the reconstructed image corresponding to the image to be processed.
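The segment-then-splice flow of claim 9 reduces to tiling and untiling the image. A minimal numpy sketch assuming non-overlapping patches and image dimensions that are exact multiples of the patch size (the claims do not specify how boundaries are handled); the function names are illustrative:

```python
import numpy as np

def split_patches(img, ph, pw):
    """Split an image into non-overlapping ph-by-pw patches
    (image dimensions assumed to be exact multiples), row-major order."""
    h, w = img.shape
    return [img[i:i + ph, j:j + pw]
            for i in range(0, h, ph)
            for j in range(0, w, pw)]

def stitch_patches(patches, h, w):
    """Reassemble patches (in the same row-major order) into an h-by-w image."""
    ph, pw = patches[0].shape
    out = np.empty((h, w), dtype=patches[0].dtype)
    k = 0
    for i in range(0, h, ph):
        for j in range(0, w, pw):
            out[i:i + ph, j:j + pw] = patches[k]
            k += 1
    return out
```

Splitting and stitching are exact inverses, so running every patch through the model and stitching the outputs yields a full-size reconstructed image.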
10. The apparatus according to claim 8, further comprising:
a second obtaining module, configured to obtain the plurality of first training samples;
a conversion module, configured to convert, through degradation processing, the plurality of first training samples into corresponding second training samples; and
a training module, configured to input each first training sample and its corresponding second training sample into a preset convolutional neural network model, train the preset convolutional neural network model, and obtain the trained convolutional neural network model.
11. The apparatus according to claim 10, further comprising: a second segmentation module, configured to segment each first training sample into a plurality of first training patches, each first training patch having the preset aspect ratio;
wherein the conversion module is specifically configured to convert, through the degradation processing, the plurality of first training patches corresponding to each first training sample into corresponding second training patches; and
the training module is specifically configured to input the first training patches corresponding to each first training sample, together with the second training patch corresponding to each first training patch, into the preset convolutional neural network model, train the preset convolutional neural network model, and obtain the trained convolutional neural network model.
12. The apparatus according to claim 10, further comprising: an enhancement module, configured to perform, for each first training sample, enhancement processing on the first training sample and its corresponding second training sample;
wherein the training module is specifically configured to input the enhancement-processed first training samples and second training samples into the preset convolutional neural network model, train the preset convolutional neural network model, and obtain the trained convolutional neural network model.
13. The apparatus according to claim 10, wherein the preset convolutional neural network model comprises multiple convolutional layers and one deconvolution layer; or,
the preset convolutional neural network model comprises multiple residual units and one deconvolution layer, each residual unit consisting of two convolutional layers and one skip-connection (direct-connected) layer.
14. The apparatus according to any one of claims 10 to 13, wherein the degradation processing comprises downsampling and adding multiple forms of noise, the multiple forms of noise comprising Gaussian noise, salt-and-pepper noise, and coding noise.
15. An image reconstruction device, comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another via the communication bus;
the memory is configured to store a computer program; and
the processor is configured to execute the program stored in the memory to implement the method steps of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811188731.8A CN109325928A (en) | 2018-10-12 | 2018-10-12 | A kind of image rebuilding method, device and equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109325928A true CN109325928A (en) | 2019-02-12 |
Family
ID=65261978
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811188731.8A Pending CN109325928A (en) | 2018-10-12 | 2018-10-12 | A kind of image rebuilding method, device and equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109325928A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102360498A (en) * | 2011-10-27 | 2012-02-22 | 江苏省邮电规划设计院有限责任公司 | Reconstruction method for image super-resolution |
CN103489173A (en) * | 2013-09-23 | 2014-01-01 | 百年金海科技有限公司 | Video image super-resolution reconstruction method |
US20180268284A1 (en) * | 2017-03-15 | 2018-09-20 | Samsung Electronics Co., Ltd. | System and method for designing efficient super resolution deep convolutional neural networks by cascade network training, cascade network trimming, and dilated convolutions |
CN107358576A (en) * | 2017-06-24 | 2017-11-17 | 天津大学 | Depth map super resolution ratio reconstruction method based on convolutional neural networks |
CN107945146A (en) * | 2017-11-23 | 2018-04-20 | 南京信息工程大学 | A kind of space-time Satellite Images Fusion method based on depth convolutional neural networks |
Non-Patent Citations (1)
Title |
---|
Li Sumei et al.: "Depth Map Super-Resolution Reconstruction Based on Convolutional Neural Networks", Acta Optica Sinica *
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111724330A (en) * | 2019-03-19 | 2020-09-29 | 广州金山移动科技有限公司 | Image processing method and device and computer readable storage medium |
CN109996085A (en) * | 2019-04-30 | 2019-07-09 | 北京金山云网络技术有限公司 | Model training method, image processing method, device and electronic equipment |
CN109996085B (en) * | 2019-04-30 | 2021-05-14 | 北京金山云网络技术有限公司 | Model training method, image processing method and device and electronic equipment |
US20220156884A1 (en) * | 2019-05-06 | 2022-05-19 | Sony Group Corporation | Electronic device, method and computer program |
US11521011B2 (en) | 2019-06-06 | 2022-12-06 | Samsung Electronics Co., Ltd. | Method and apparatus for training neural network model for enhancing image detail |
WO2020246861A1 (en) * | 2019-06-06 | 2020-12-10 | Samsung Electronics Co., Ltd. | Method and apparatus for training neural network model for enhancing image detail |
CN110503619A (en) * | 2019-06-27 | 2019-11-26 | 北京奇艺世纪科技有限公司 | Image processing method, device and readable storage medium storing program for executing |
CN110503619B (en) * | 2019-06-27 | 2021-09-03 | 北京奇艺世纪科技有限公司 | Image processing method, device and readable storage medium |
CN110264407A (en) * | 2019-06-28 | 2019-09-20 | 北京金山云网络技术有限公司 | Image Super-resolution model training and method for reconstructing, device, equipment and storage medium |
CN110264407B (en) * | 2019-06-28 | 2023-01-03 | 北京金山云网络技术有限公司 | Image super-resolution model training and reconstruction method, device, equipment and storage medium |
CN112288638A (en) * | 2019-07-27 | 2021-01-29 | 华为技术有限公司 | Image enhancement apparatus and system |
CN112581363A (en) * | 2019-09-29 | 2021-03-30 | 北京金山云网络技术有限公司 | Image super-resolution reconstruction method and device, electronic equipment and storage medium |
CN110930333A (en) * | 2019-11-22 | 2020-03-27 | 北京金山云网络技术有限公司 | Image restoration method and device, electronic equipment and computer-readable storage medium |
CN112861836B (en) * | 2019-11-28 | 2022-04-22 | 马上消费金融股份有限公司 | Text image processing method, text and card image quality evaluation method and device |
CN112861836A (en) * | 2019-11-28 | 2021-05-28 | 马上消费金融股份有限公司 | Text image processing method, text and card image quality evaluation method and device |
CN111080528A (en) * | 2019-12-20 | 2020-04-28 | 北京金山云网络技术有限公司 | Image super-resolution and model training method, device, electronic equipment and medium |
CN111080528B (en) * | 2019-12-20 | 2023-11-07 | 北京金山云网络技术有限公司 | Image super-resolution and model training method and device, electronic equipment and medium |
CN113254680A (en) * | 2020-02-10 | 2021-08-13 | 北京百度网讯科技有限公司 | Cover page graph processing method of multimedia information, client and electronic equipment |
CN113254680B (en) * | 2020-02-10 | 2023-07-25 | 北京百度网讯科技有限公司 | Cover map processing method of multimedia information, client and electronic equipment |
CN111681165A (en) * | 2020-06-02 | 2020-09-18 | 上海闻泰信息技术有限公司 | Image processing method, image processing device, computer equipment and computer readable storage medium |
CN113688752A (en) * | 2021-08-30 | 2021-11-23 | 厦门美图宜肤科技有限公司 | Face pigment detection model training method, device, equipment and storage medium |
WO2023029233A1 (en) * | 2021-08-30 | 2023-03-09 | 厦门美图宜肤科技有限公司 | Face pigment detection model training method and apparatus, device, and storage medium |
CN113688752B (en) * | 2021-08-30 | 2024-02-02 | 厦门美图宜肤科技有限公司 | Training method, device, equipment and storage medium for face color detection model |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109325928A (en) | A kind of image rebuilding method, device and equipment | |
Chang et al. | Spatial-adaptive network for single image denoising | |
Liu et al. | A generalized framework for edge-preserving and structure-preserving image smoothing | |
CN108022212B (en) | High-resolution picture generation method, generation device and storage medium | |
CN110136066B (en) | Video-oriented super-resolution method, device, equipment and storage medium | |
CN107403415B (en) | Compressed depth map quality enhancement method and device based on full convolution neural network | |
JP6961139B2 (en) | An image processing system for reducing an image using a perceptual reduction method | |
CN110675336A (en) | Low-illumination image enhancement method and device | |
RU2706891C1 (en) | Method of generating a common loss function for training a convolutional neural network for converting an image into an image with drawn parts and a system for converting an image into an image with drawn parts | |
TW202040986A (en) | Method for video image processing and device thereof | |
CN112991231B (en) | Single-image super-image and perception image enhancement joint task learning system | |
CN113066027B (en) | Screen shot image moire removing method facing to Raw domain | |
CN110830808A (en) | Video frame reconstruction method and device and terminal equipment | |
CN111079764A (en) | Low-illumination license plate image recognition method and device based on deep learning | |
CN113298728B (en) | Video optimization method and device, terminal equipment and storage medium | |
Guan et al. | Srdgan: learning the noise prior for super resolution with dual generative adversarial networks | |
Byun et al. | BitNet: Learning-based bit-depth expansion | |
CN110717864A (en) | Image enhancement method and device, terminal equipment and computer readable medium | |
Tan et al. | Low-light image enhancement with geometrical sparse representation | |
Peter et al. | Deep spatial and tonal data optimisation for homogeneous diffusion inpainting | |
CN113222856A (en) | Inverse halftone image processing method, terminal equipment and readable storage medium | |
CN113298740A (en) | Image enhancement method and device, terminal equipment and storage medium | |
Luo et al. | A fast denoising fusion network using internal and external priors | |
TW202217742A (en) | Image quality improvement method and image processing apparatus using the same | |
KR102083166B1 (en) | Image processing method and apparatus |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190212 |