CN107578455A - Arbitrary-size sample texture synthesis method based on convolutional neural networks

Arbitrary-size sample texture synthesis method based on convolutional neural networks

Info

Publication number
CN107578455A
CN107578455A
Authority
CN
China
Prior art keywords
neural networks
convolutional neural
texture image
Prior art date
Legal status
Granted
Application number
CN201710781915.4A
Other languages
Chinese (zh)
Other versions
CN107578455B (en)
Inventor
宋彬
吴科永
郭洁
蔡秀霞
秦浩
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201710781915.4A priority Critical patent/CN107578455B/en
Publication of CN107578455A publication Critical patent/CN107578455A/en
Application granted granted Critical
Publication of CN107578455B publication Critical patent/CN107578455B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an arbitrary-size sample texture synthesis method based on convolutional neural networks. Its steps are: (1) input a 512 × 512 texture image to be processed; (2) construct and train a convolutional neural network; (3) partition the texture image to be processed; (4) generate the composite image matrix of the texture image to be processed; (5) generate the composite image of the texture image to be processed. By introducing convolutional neural networks into the field of texture image synthesis, the invention overcomes the prior-art shortcomings that best matching tends to yield locally optimal results and cannot synthesize texture images of arbitrary size; the contours of the synthesized texture images are clearer, and the results look more natural.

Description

Arbitrary-size sample texture synthesis method based on convolutional neural networks
Technical field
The invention belongs to the technical field of image processing, and further relates to an arbitrary-size sample texture synthesis method based on convolutional neural networks in the field of natural image processing. The invention applies convolutional neural networks to all irregular texture images and can be used to synthesize texture images of arbitrary size.
Background technology
At present, texture synthesis has become an important research topic in the field of image processing. Depending on the basic unit chosen during sample-based synthesis, texture synthesis methods under the MRF (Markov Random Field) model can be roughly divided into two classes: pixel-based texture synthesis and patch-based texture synthesis. Each has its strengths: pixel-based methods are good at capturing local texture detail but struggle to reproduce coherent global features, while patch-based methods can capture texture features over a larger range but are weaker at handling patch boundaries. The quality of a texture synthesis method depends on the type of the given texture sample (random texture, structural texture, discrete-element texture, etc.) and on the synthesis strategy used, so studying the characteristics and classification of textures is essential for texture synthesis research. In addition, judging the quality of synthesis results requires appropriate scales and standards of measurement. Researchers have studied texture categories in depth and proposed many types, such as static textures, globally varying textures, dynamic textures, and discrete-element textures.
The patent application "An image texture synthesis method based on best matching" filed by Tianjin Polytechnic University (application number 201410112095.6, publication number CN103839271A) discloses a new best-match-based image texture synthesis method. The method considers not only color similarity but also adds gradient-structure information, using the color difference and gradient structure of the texture as the similarity measure between two matching blocks; it analyzes in depth the influence of the best-match block size on synthesis and adaptively determines the block size for different textures, so as to improve the speed and quality of texture synthesis. The remaining shortcoming of this method is that best matching extracts relatively few features and tends to yield locally optimal results, and mismatches easily occur between pixels, so the synthesized texture image is prone to blurring; moreover, the method cannot synthesize texture images of arbitrary size.
The paper "Self Tuning Texture Optimization" by Alexandre Kaspar et al. (Computer Graphics Forum, 2015, 34(2):349-359) discloses a method based on Euclidean distances between different blocks. It targets high-resolution real-world textures, which usually contain texture detail at multiple scales and are difficult for current methods to synthesize. The method first computes the Euclidean distance for each color channel of the input image, and then solves for the matches between blocks. Its advantage is that it proposes a general, self-correcting, non-parametric texture synthesis method that synthesizes textures by introducing several key improvements, demonstrating comparatively strong synthesis capability. Its remaining shortcoming is that using Euclidean distance in the pixel domain tends to produce synthesized textures containing many broken structures, and for low-resolution input images the algorithm cannot complete texture synthesis.
Summary of the invention
The object of the invention is to overcome the above defects of the prior art by proposing an arbitrary-size sample texture synthesis method based on convolutional neural networks. During texture synthesis, the invention extracts the global features of the texture image and obtains more texture information, so that the finally synthesized texture image is more natural; at the same time, the invention can synthesize texture images of arbitrary size.
The specific steps of the present invention are as follows:
(1) input a 512 × 512 texture image to be processed;
(2) construct and train a convolutional neural network:
(2a) construct a convolutional neural network with 7 layers;
(2b) input texture pictures into the convolutional neural network and train it until the loss function value of its output layer is less than or equal to 0.0001, obtaining the trained convolutional neural network;
(3) partition the texture image to be processed:
(3a) input the texture image to be processed into the first 5 layers of the trained convolutional neural network, obtaining the feature maps of the first 5 layers;
(3b) in each layer of the convolutional neural network, multiply the feature-map matrix of the texture image to be processed by its own transpose to form the Gram matrix;
(3c) according to the following formula, generate the sub-block matrix of the texture image to be processed:

$$\arg\min_{y}\left\{ s\sum_{r=1}^{5} w_{r}\,\frac{1}{4N_{r}^{2}M_{r}^{2}}\left(G^{r}-y\right)^{2}\right\}$$

where $\arg\min_{y}$ denotes taking the value of y that minimizes the expression in braces, y denotes the sub-block matrix of the texture image to be processed, min denotes the minimum-value operation, s denotes the sub-block weight coefficient of the texture image to be processed with s ∈ {1000, 2000}, ∈ denotes set membership, Σ denotes summation, $w_r$ denotes the weight of layer r of the trained convolutional neural network, $N_r$ and $M_r$ denote respectively the number of rows and columns of the layer-r feature vectors of the trained network, and $G^r$ denotes the Gram matrix of layer r of the trained network;
(3d) when s = 1000, generate sub-block matrix 1 of the texture image to be processed; in array-scan order, place its entries one by one into the corresponding composite-image positions of the texture image to be processed, obtaining sub-block 1 of the texture image to be processed;
(3e) when s = 2000, generate sub-block matrix 2 of the texture image to be processed; in array-scan order, place its entries one by one into the corresponding composite-image positions of the texture image to be processed, obtaining sub-block 2 of the texture image to be processed;
(4) according to the following formula, generate the composite image matrix of the texture image to be processed:

$$\arg\min_{T}\left\{\lambda\sum_{q=1}^{5} w_{q}\,\frac{1}{4N_{q}^{2}M_{q}^{2}}\left(G^{q}-T\right)^{2}+\frac{1-\lambda}{2}\sum_{q=1}^{5} w_{q}\left(F^{q}-T\right)^{2}\right\}$$

where $\arg\min_{T}$ denotes taking the value of T that minimizes the expression in braces, T denotes the composite image matrix of the texture image to be processed, min denotes the minimum-value operation, λ denotes a model parameter with λ ∈ [0, 1], Σ denotes summation, $w_q$ denotes the weight of layer q of the trained convolutional neural network, $N_q$ and $M_q$ denote respectively the number of rows and columns of the layer-q feature vectors, $G^q$ denotes the Gram matrix of sub-block 1 of the texture image to be processed at layer q of the trained network, and $F^q$ denotes the feature-map matrix of sub-block 2 of the texture image to be processed at layer q of the trained network;
(5) generate the composite image of the texture image to be processed:
in array-scan order, place the composite image matrix of the texture image to be processed into the corresponding composite-image positions, obtaining the composite image of the texture image to be processed.
Compared with the prior art, the present invention has the following advantages:
First, because the present invention uses a 7-layer convolutional neural network, the multi-layer network learns features by itself and learns more texture features. This overcomes the prior-art shortcoming that best matching tends to yield locally optimal results and cannot synthesize texture images of arbitrary size, so the present invention can obtain a globally optimal solution and can synthesize texture images of arbitrary size.
Second, because the present invention generates the composite image matrix of the texture image by computing the Gram matrices of the feature vectors in each layer of the convolutional neural network, it obtains the global statistical features of the texture image. This overcomes the prior-art shortcomings that Euclidean distance in the pixel domain tends to produce synthesized textures with many broken structures and cannot synthesize textures from low-resolution input images, so the present invention can effectively suppress noise, obtain rich texture detail, and enhance the clarity of the texture image.
Brief description of the drawings
Fig. 1 is the flow chart of the present invention;
Fig. 2 is the Stone texture test image used by the present invention in the simulation experiments;
Fig. 3 is the 200 × 200 Stone synthesized texture image obtained by the present invention in the simulation experiments;
Fig. 4 is the 800 × 800 Stone synthesized texture image obtained by the present invention in the simulation experiments;
Fig. 5 is the 1024 × 1024 Stone synthesized texture image obtained by the present invention in the simulation experiments.
Detailed description of the embodiments
The present invention is further described below in conjunction with the accompanying drawings.
The steps of the present invention are further described with reference to Fig. 1.
Step 1: input a 512 × 512 texture image to be processed.
Step 2: construct and train the convolutional neural network.
Construct a convolutional neural network with 7 layers. The structure of this 7-layer network is, in order: convolutional layer conv1_1, convolutional layer conv2_1, convolutional layer conv3_1, pooling layer pool4, convolutional layer conv5_1, fully connected layer fc6, and classification layer softmax7.
The steps for constructing the 7-layer convolutional neural network are as follows (a code sketch of the resulting network follows the 9th step):
1st step: input the texture map of 512 × 512 pixels into convolutional layer conv1_1; with 64 convolution kernels, perform a convolution with block size 3 × 3 pixels and stride 1 pixel, obtaining 64 feature maps of 510 × 510 pixels.
2nd step: input the 64 feature maps output by conv1_1 into convolutional layer conv2_1; with 128 convolution kernels, perform a convolution with block size 3 × 3 pixels and stride 1 pixel, obtaining 128 feature maps of 508 × 508 pixels.
3rd step: input the 128 feature maps output by conv2_1 into convolutional layer conv3_1; with 256 convolution kernels, perform a convolution with block size 3 × 3 pixels and stride 1 pixel, obtaining 256 feature maps with a resolution of 506 × 506 pixels.
4th step: input the 256 feature maps output by conv3_1 into pooling layer pool4 and apply max pooling with pooling blocks of 2 × 2 pixels and stride 2 pixels, obtaining 256 feature maps with a resolution of 253 × 253 pixels.
5th step: input the 256 feature maps output by pool4 into convolutional layer conv5_1; with 512 convolution kernels, perform a convolution with block size 3 × 3 pixels and stride 1 pixel, obtaining 512 feature maps with a resolution of 251 × 251 pixels.
6th step: input the 512 feature maps output by conv5_1 into fully connected layer fc6; activate each pixel according to the following formula to obtain the pixel values of the activated feature maps, and arrange the activated feature maps, in their arranged order, into a 1-dimensional vector, obtaining a 1 × 3136-dimensional feature:

$$f(x)=\frac{e^{x}-e^{-x}}{e^{x}+e^{-x}}$$

where f(x) denotes the value of a pixel of the activated feature map, x denotes the value of the pixel before activation, and e denotes the natural constant.
7th step: input the feature vector output by fc6 into classification layer softmax7, obtaining the classification label of the texture map.
8th step: according to the following formula, compute the probability of the classification label output by softmax7, and output the probability of each classification label:

$$p(\beta=t\mid\alpha;\theta)=e^{\theta}$$

where p(·) denotes the probability of the classification label output by softmax7, β denotes the αth feature map output by fully connected layer fc6 of the convolutional neural network, t denotes the classification label value output by softmax7 with t ∈ {1, 2, ..., 20}, | denotes the conditioning symbol, e denotes the natural constant, and θ denotes a model parameter.
9th step: according to the following formula, compute the loss function of classification layer softmax7:

$$J(\theta)=-\frac{1}{m}\left[\sum_{i=1}^{m} i\,\log\frac{\sum_{i=1}^{m} i e^{\theta}}{m}+(1-i)\log\left(1-\frac{\sum_{i=1}^{m} i e^{\theta}}{m}\right)\right]$$

where J(θ) denotes the loss function, m denotes the number of texture samples, e denotes the natural constant, and θ denotes a model parameter.
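To make the layer arithmetic above concrete, the following is a minimal PyTorch sketch of the described 7-layer network, not the patented implementation itself: the text does not state how the activated conv5_1 features are reduced to the 1 × 3136-dimensional vector, so the linear projection in fc6 below is an assumption, as is reading t ∈ {1, 2, ..., 20} as 20 texture classes.

```python
# A minimal PyTorch sketch of the 7-layer network described above.
# The fc6 projection to 3136 dimensions and the 20 output classes are
# assumptions filling gaps in the description; layer sizes follow the text.
import torch
import torch.nn as nn

class TextureNet(nn.Module):
    def __init__(self, num_classes=20, fc_dim=3136):
        super().__init__()
        self.conv1_1 = nn.Conv2d(3, 64, kernel_size=3, stride=1)     # 512 -> 510
        self.conv2_1 = nn.Conv2d(64, 128, kernel_size=3, stride=1)   # 510 -> 508
        self.conv3_1 = nn.Conv2d(128, 256, kernel_size=3, stride=1)  # 508 -> 506
        self.pool4   = nn.MaxPool2d(kernel_size=2, stride=2)         # 506 -> 253
        self.conv5_1 = nn.Conv2d(256, 512, kernel_size=3, stride=1)  # 253 -> 251
        self.fc6     = nn.Linear(512 * 251 * 251, fc_dim)            # assumed projection
        self.softmax7 = nn.Linear(fc_dim, num_classes)

    def forward(self, x):
        x = self.conv1_1(x)
        x = self.conv2_1(x)
        x = self.conv3_1(x)
        x = self.pool4(x)
        x = self.conv5_1(x)
        x = torch.tanh(x.flatten(1))   # fc6 activation f(x) = (e^x - e^-x)/(e^x + e^-x)
        x = self.fc6(x)                # 1 x 3136 feature vector
        return torch.softmax(self.softmax7(x), dim=1)  # classification-label probabilities
```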
The steps for training the convolutional neural network are as follows (a training-loop sketch follows the 3rd step):
1st step: in the forward propagation stage, input samples into the convolutional neural network and compute the corresponding actual output; in this stage, information is transformed stage by stage from the input layer of the network and transmitted to its output layer.
2nd step: in the back-propagation stage, compute the difference between the actual output of the network and the ideal output corresponding to the sample labels, and adjust the weights of the network by back-propagation using the method of error minimization.
3rd step: repeat the 1st and 2nd steps until the loss function of the output after classification layer softmax7 satisfies J(θ) ≤ 0.0001, obtaining the trained convolutional neural network.
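A hedged sketch of this three-step procedure follows; the dataset loader, the SGD optimizer, and the learning rate are assumptions, since the text specifies only the stopping rule J(θ) ≤ 0.0001.

```python
# Training-loop sketch: forward propagation, error back-propagation, and
# weight updates repeated until the softmax7 loss reaches the threshold.
# The optimizer and learning rate are assumed; the text does not name them.
import torch
import torch.nn as nn

def train(net, loader, lr=1e-3, loss_threshold=1e-4):
    criterion = nn.NLLLoss()  # net outputs probabilities, so feed it log-probs
    optimizer = torch.optim.SGD(net.parameters(), lr=lr)
    while True:
        for samples, labels in loader:
            probs = net(samples)                    # 1st step: forward propagation
            loss = criterion(torch.log(probs + 1e-12), labels)
            optimizer.zero_grad()
            loss.backward()                         # 2nd step: back-propagation
            optimizer.step()                        # adjust the network weights
            if loss.item() <= loss_threshold:       # 3rd step: stopping rule
                return net
```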
Step 3: partition the texture image to be processed.
Input the texture image to be processed into the first 5 layers of the trained convolutional neural network, obtaining the feature maps of the first 5 layers.
In each layer of the convolutional neural network, the feature-map matrix of the texture image to be processed is multiplied by its own transpose to form the Gram matrix.
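As a small illustration, a sketch of this per-layer Gram computation follows, under the assumption that each layer's feature maps are given as a channels × height × width tensor.

```python
# Gram matrix of one layer: reshape the feature maps into an N_r x M_r
# matrix (channels x spatial positions) and multiply it by its transpose.
import torch

def gram_matrix(feature_maps):
    n, h, w = feature_maps.shape        # N_r channels, spatial size h x w
    f = feature_maps.reshape(n, h * w)  # M_r = h * w columns
    return f @ f.t()                    # G^r = F F^T, an N_r x N_r matrix
```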
According to the following formula, generate the sub-block matrix of the texture image to be processed:

$$\arg\min_{y}\left\{ s\sum_{r=1}^{5} w_{r}\,\frac{1}{4N_{r}^{2}M_{r}^{2}}\left(G^{r}-y\right)^{2}\right\}$$

where $\arg\min_{y}$ denotes taking the value of y that minimizes the expression in braces, y denotes the sub-block matrix of the texture image to be processed, min denotes the minimum-value operation, s denotes the sub-block weight coefficient of the texture image to be processed with s ∈ {1000, 2000}, ∈ denotes set membership, Σ denotes summation, $w_r$ denotes the weight of layer r of the trained convolutional neural network, $N_r$ and $M_r$ denote respectively the number of rows and columns of the layer-r feature vectors of the trained network, and $G^r$ denotes the Gram matrix of layer r of the trained network.
When s = 1000, generate sub-block matrix 1 of the texture image to be processed; in array-scan order, place its entries one by one into the corresponding composite-image positions of the texture image to be processed, obtaining sub-block 1 of the texture image to be processed.
The array-scan order is as follows:
read each element of the matrix from left to right and from top to bottom.
When s = 2000, generate sub-block matrix 2 of the texture image to be processed; in array-scan order, place its entries one by one into the corresponding composite-image positions of the texture image to be processed, obtaining sub-block 2 of the texture image to be processed (see the sketch below).
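One plausible reading of this sub-block generation is gradient-based minimization of the weighted Gram-matrix objective above; the sketch below renders that reading in PyTorch. Treating y as a trainable image whose layer Gram matrices are matched against the sample's G^r, the random initialization, and the Adam optimizer are all interpretive assumptions, not details fixed by the text.

```python
# Hedged sketch of the sub-block objective: minimise
# s * sum_r w_r / (4 N_r^2 M_r^2) * ||G^r - Gram_r(y)||^2 by gradient descent.
import torch

def synthesize_subblock(extract_features, target_grams, w, s, steps=200, lr=0.05):
    # extract_features(img) is assumed to return the 5 feature-map tensors
    # (shape (1, N_r, H, W)) from the first five layers of the trained net.
    y = torch.rand(1, 3, 512, 512, requires_grad=True)  # assumed random init
    opt = torch.optim.Adam([y], lr=lr)
    for _ in range(steps):
        loss = torch.zeros(())
        for r, f in enumerate(extract_features(y)):
            n, m = f.shape[1], f.shape[2] * f.shape[3]
            fm = f.reshape(n, m)
            g = fm @ fm.t()                              # layer-r Gram matrix of y
            loss = loss + s * w[r] * ((g - target_grams[r]) ** 2).sum() / (4 * n**2 * m**2)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return y.detach()
```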
Step 4: according to the following formula, generate the composite image matrix of the texture image to be processed:

$$\arg\min_{T}\left\{\lambda\sum_{q=1}^{5} w_{q}\,\frac{1}{4N_{q}^{2}M_{q}^{2}}\left(G^{q}-T\right)^{2}+\frac{1-\lambda}{2}\sum_{q=1}^{5} w_{q}\left(F^{q}-T\right)^{2}\right\}$$

where $\arg\min_{T}$ denotes taking the value of T that minimizes the expression in braces, T denotes the composite image matrix of the texture image to be processed, min denotes the minimum-value operation, λ denotes a model parameter with λ ∈ [0, 1], Σ denotes summation, $w_q$ denotes the weight of layer q of the trained convolutional neural network, $N_q$ and $M_q$ denote respectively the number of rows and columns of the layer-q feature vectors, $G^q$ denotes the Gram matrix of sub-block 1 of the texture image to be processed at layer q of the trained network, and $F^q$ denotes the feature-map matrix of sub-block 2 of the texture image to be processed at layer q of the trained network.
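The two-term objective of this step can be written directly as a loss function. The sketch below assumes, as in step 3, that per-layer features of the candidate T are available, and combines the λ-weighted Gram term against sub-block 1 with the (1 − λ)/2-weighted feature term against sub-block 2.

```python
# Sketch of the step-4 objective combining a Gram-matrix term (weight λ)
# and a feature-map term (weight (1 - λ)/2), to be minimised over T.
import torch

def combined_loss(feats_T, grams_sub1, feats_sub2, w, lam):
    loss = torch.zeros(())
    for q, f in enumerate(feats_T):                  # first five layers
        n, m = f.shape[1], f.shape[2] * f.shape[3]
        fm = f.reshape(n, m)
        g = fm @ fm.t()                              # Gram matrix of T at layer q
        loss = loss + lam * w[q] * ((g - grams_sub1[q]) ** 2).sum() / (4 * n**2 * m**2)
        loss = loss + (1 - lam) / 2 * w[q] * ((f - feats_sub2[q]) ** 2).sum()
    return loss
```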
Step 5: generate the composite image of the texture image to be processed.
In array-scan order, place the composite image matrix of the texture image to be processed into the corresponding composite-image positions, obtaining the composite image of the texture image to be processed.
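The array-scan placement itself is simple; a NumPy sketch follows, with an assumed output canvas and offsets, since the text does not spell out the tiling coordinates.

```python
# Write a composite-image matrix into the output canvas in array-scan
# order: left to right within each row, rows from top to bottom.
import numpy as np

def place_in_scan_order(block, canvas, top=0, left=0):
    h, w = block.shape[:2]
    for i in range(h):                 # top to bottom
        for j in range(w):             # left to right within each row
            canvas[top + i, left + j] = block[i, j]
    return canvas

canvas = np.zeros((1024, 1024, 3))     # e.g. assembling the 1024 x 1024 result
```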
The effect of the present invention can be further illustrated by the following simulation experiments.
1. Simulation conditions:
The simulation environment of the present invention is:
Software: Ubuntu 14.04, IPython 2.7
Processor: Intel Xeon(R) CPU × 8
Memory: 125.9 GB
The image used in the simulation experiments of the present invention is shown in Fig. 2. The image comes from a standard image library.
2. Simulation content:
The simulation of the present invention is divided into three experiments.
Simulation experiment 1: using the present invention with the texture image of Fig. 2 as input, synthesize a 200 × 200 texture image; the result is shown in Fig. 3.
Simulation experiment 2: using the present invention with the texture image of Fig. 2 as input, synthesize an 800 × 800 texture image; the result is shown in Fig. 4.
Simulation experiment 3: using the present invention with the texture image of Fig. 2 as input, synthesize a 1024 × 1024 texture image; the result is shown in Fig. 5.
3. Analysis of simulation results:
As can be seen from Fig. 3, Fig. 4, and Fig. 5, the synthesized texture images obtained by the present invention show no signs of blurring, and the texture edges of all three synthesized sizes are clear. This shows that the present invention can effectively suppress noise, can synthesize texture images of arbitrary size over a wide range, and produces synthesized textures of good quality.

Claims (5)

1. An arbitrary-size sample texture synthesis method based on convolutional neural networks, characterized by comprising the following steps:
(1) input a 512 × 512 texture image to be processed;
(2) construct and train a convolutional neural network:
(2a) construct a convolutional neural network with 7 layers;
(2b) input texture pictures into the convolutional neural network and train it until the loss function value of its output layer is less than or equal to 0.0001, obtaining the trained convolutional neural network;
(3) partition the texture image to be processed:
(3a) input the texture image to be processed into the first 5 layers of the trained convolutional neural network, obtaining the feature maps of the first 5 layers;
(3b) in each layer of the convolutional neural network, multiply the feature-map matrix of the texture image to be processed by its own transpose to form the Gram matrix;
(3c) according to the following formula, generate the sub-block matrix of the texture image to be processed:
$$\arg\min_{y}\left\{ s\sum_{r=1}^{5} w_{r}\,\frac{1}{4N_{r}^{2}M_{r}^{2}}\left(G^{r}-y\right)^{2}\right\}$$
where $\arg\min_{y}$ denotes taking the value of y that minimizes the expression in braces, y denotes the sub-block matrix of the texture image to be processed, min denotes the minimum-value operation, s denotes the sub-block weight coefficient of the texture image to be processed with s ∈ {1000, 2000}, ∈ denotes set membership, Σ denotes summation, $w_r$ denotes the weight of layer r of the trained convolutional neural network, $N_r$ and $M_r$ denote respectively the number of rows and columns of the layer-r feature vectors of the trained network, and $G^r$ denotes the Gram matrix of layer r of the trained network;
(3d) when s = 1000, generate sub-block matrix 1 of the texture image to be processed; in array-scan order, place its entries one by one into the corresponding composite-image positions of the texture image to be processed, obtaining sub-block 1 of the texture image to be processed;
(3e) when s = 2000, generate sub-block matrix 2 of the texture image to be processed; in array-scan order, place its entries one by one into the corresponding composite-image positions of the texture image to be processed, obtaining sub-block 2 of the texture image to be processed;
(4) according to the following formula, generate the composite image matrix of the texture image to be processed:
$$\arg\min_{T}\left\{\lambda\sum_{q=1}^{5} w_{q}\,\frac{1}{4N_{q}^{2}M_{q}^{2}}\left(G^{q}-T\right)^{2}+\frac{1-\lambda}{2}\sum_{q=1}^{5} w_{q}\left(F^{q}-T\right)^{2}\right\}$$
where $\arg\min_{T}$ denotes taking the value of T that minimizes the expression in braces, T denotes the composite image matrix of the texture image to be processed, min denotes the minimum-value operation, λ denotes a model parameter with λ ∈ [0, 1], Σ denotes summation, $w_q$ denotes the weight of layer q of the trained convolutional neural network, $N_q$ and $M_q$ denote respectively the number of rows and columns of the layer-q feature vectors, $G^q$ denotes the Gram matrix of sub-block 1 of the texture image to be processed at layer q of the trained network, and $F^q$ denotes the feature-map matrix of sub-block 2 of the texture image to be processed at layer q of the trained network;
(5) generate the composite image of the texture image to be processed:
in array-scan order, place the composite image matrix of the texture image to be processed into the corresponding composite-image positions, obtaining the composite image of the texture image to be processed.
2. The arbitrary-size sample texture synthesis method based on convolutional neural networks according to claim 1, characterized in that: the structure of the 7-layer convolutional neural network described in step (2a) is, in order: convolutional layer conv1_1, convolutional layer conv2_1, convolutional layer conv3_1, pooling layer pool4, convolutional layer conv5_1, fully connected layer fc6, and classification layer softmax7.
3. The arbitrary-size sample texture synthesis method based on convolutional neural networks according to claim 1, characterized in that the steps for constructing the 7-layer convolutional neural network described in step (2a) are as follows:
1st step: input the texture map of 512 × 512 pixels into convolutional layer conv1_1; with 64 convolution kernels, perform a convolution with block size 3 × 3 pixels and stride 1 pixel, obtaining 64 feature maps of 510 × 510 pixels;
2nd step: input the 64 feature maps output by conv1_1 into convolutional layer conv2_1; with 128 convolution kernels, perform a convolution with block size 3 × 3 pixels and stride 1 pixel, obtaining 128 feature maps of 508 × 508 pixels;
3rd step: input the 128 feature maps output by conv2_1 into convolutional layer conv3_1; with 256 convolution kernels, perform a convolution with block size 3 × 3 pixels and stride 1 pixel, obtaining 256 feature maps with a resolution of 506 × 506 pixels;
4th step: input the 256 feature maps output by conv3_1 into pooling layer pool4 and apply max pooling with pooling blocks of 2 × 2 pixels and stride 2 pixels, obtaining 256 feature maps with a resolution of 253 × 253 pixels;
5th step: input the 256 feature maps output by pool4 into convolutional layer conv5_1; with 512 convolution kernels, perform a convolution with block size 3 × 3 pixels and stride 1 pixel, obtaining 512 feature maps with a resolution of 251 × 251 pixels;
6th step: input the 512 feature maps output by conv5_1 into fully connected layer fc6; activate each pixel according to the following formula to obtain the pixel values of the activated feature maps, and arrange the activated feature maps, in their arranged order, into a 1-dimensional vector, obtaining a 1 × 3136-dimensional feature:
$$f(x)=\frac{e^{x}-e^{-x}}{e^{x}+e^{-x}}$$
where f(x) denotes the value of a pixel of the activated feature map, x denotes the value of the pixel before activation, and e denotes the natural constant;
7th step: input the feature vector output by fc6 into classification layer softmax7, obtaining the classification label of the texture map;
8th step: according to the following formula, compute the probability of the classification label output by softmax7, and output the probability of each classification label:

$$p(\beta=t\mid\alpha;\theta)=e^{\theta}$$

where p(·) denotes the probability of the classification label output by softmax7, β denotes the αth feature map output by fully connected layer fc6 of the convolutional neural network, t denotes the classification label value output by softmax7 with t ∈ {1, 2, ..., 20}, | denotes the conditioning symbol, e denotes the natural constant, and θ denotes a model parameter;
9th step: according to the following formula, compute the loss function of classification layer softmax7:

$$J(\theta)=-\frac{1}{m}\left[\sum_{i=1}^{m} i\,\log\frac{\sum_{i=1}^{m} i e^{\theta}}{m}+(1-i)\log\left(1-\frac{\sum_{i=1}^{m} i e^{\theta}}{m}\right)\right]$$

where J(θ) denotes the loss function of classification layer softmax7, m denotes the number of texture samples, e denotes the natural constant, and θ denotes a model parameter.
4. The arbitrary-size sample texture synthesis method based on convolutional neural networks according to claim 1, characterized in that the steps for training the convolutional neural network described in step (2b) are as follows:
1st step: in the forward propagation stage, input samples into the convolutional neural network and compute the corresponding actual output; in this stage, information is transformed stage by stage from the input layer of the network and transmitted to its output layer;
2nd step: in the back-propagation stage, compute the difference between the actual output of the network and the ideal output corresponding to the sample labels, and adjust the weights of the network by back-propagation using the method of error minimization;
3rd step: repeat the 1st and 2nd steps until the loss function of the output after classification layer softmax7 satisfies J(θ) ≤ 0.0001.
5. The arbitrary-size sample texture synthesis method based on convolutional neural networks according to claim 1, characterized in that: the array-scan order described in step (3d), step (3e), and step (5) is to read each element of the matrix from left to right and from top to bottom.
CN201710781915.4A 2017-09-02 2017-09-02 Arbitrary-size sample texture synthesis method based on convolutional neural networks Active CN107578455B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710781915.4A CN107578455B (en) 2017-09-02 2017-09-02 Arbitrary-size sample texture synthesis method based on convolutional neural networks

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710781915.4A CN107578455B (en) 2017-09-02 2017-09-02 Arbitrary-size sample texture synthesis method based on convolutional neural networks

Publications (2)

Publication Number Publication Date
CN107578455A true CN107578455A (en) 2018-01-12
CN107578455B CN107578455B (en) 2019-11-01

Family

ID=61031158

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710781915.4A Active CN107578455B (en) 2017-09-02 2017-09-02 Arbitrary-size sample texture synthesis method based on convolutional neural networks

Country Status (1)

Country Link
CN (1) CN107578455B (en)



Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106650653A (en) * 2016-12-14 2017-05-10 广东顺德中山大学卡内基梅隆大学国际联合研究院 Method for building deep learning based face recognition and age synthesis joint model

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Ivan Ustyuzhaninov et al.: "Texture synthesis using shallow convolutional networks with random filters", arXiv:1606.00021 *
Leon A. Gatys et al.: "Texture synthesis using convolutional neural networks", Advances in Neural Information Processing Systems 28 *
Xiuxia Cai et al.: "Combining inconsistent textures using convolutional neural networks", Journal of Visual Communication and Image Representation *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108280861A (en) * 2018-01-22 2018-07-13 厦门启尚科技有限公司 A kind of method that picture carries out intellectual search circular treatment
CN108280861B (en) * 2018-01-22 2021-08-27 厦门启尚科技有限公司 Method for intelligently searching and circularly processing pictures
CN110517335A (en) * 2018-02-07 2019-11-29 深圳市腾讯计算机***有限公司 A kind of dynamic texture video generation method, device, server and storage medium
CN110458919A (en) * 2018-02-07 2019-11-15 深圳市腾讯计算机***有限公司 A kind of dynamic texture video generation method, device, server and storage medium
CN108986058B (en) * 2018-06-22 2021-11-19 华东师范大学 Image fusion method for brightness consistency learning
CN108986058A (en) * 2018-06-22 2018-12-11 华东师范大学 The image interfusion method of lightness Consistency Learning
CN110298899A (en) * 2019-06-10 2019-10-01 天津大学 One kind being based on the matched image texture synthetic method of convolutional neural networks characteristic pattern
CN110298899B (en) * 2019-06-10 2023-04-07 天津大学 Image texture synthesis method based on convolutional neural network feature map matching
CN110599530A (en) * 2019-09-03 2019-12-20 西安电子科技大学 MVCT image texture enhancement method based on double regular constraints
CN110599530B (en) * 2019-09-03 2022-03-04 西安电子科技大学 MVCT image texture enhancement method based on double regular constraints
CN112541856A (en) * 2020-12-07 2021-03-23 重庆邮电大学 Medical image style migration method combining Markov field and Graham matrix characteristics
CN112541856B (en) * 2020-12-07 2022-05-03 重庆邮电大学 Medical image style migration method combining Markov field and Graham matrix characteristics
CN112967209A (en) * 2021-04-23 2021-06-15 上海大学 Endoscope image blood vessel texture enhancement method based on multiple sampling

Also Published As

Publication number Publication date
CN107578455B (en) 2019-11-01

Similar Documents

Publication Publication Date Title
CN107578455A Arbitrary-size sample texture synthesis method based on convolutional neural networks
Zhu et al. Data Augmentation using Conditional Generative Adversarial Networks for Leaf Counting in Arabidopsis Plants.
CN111753828B (en) Natural scene horizontal character detection method based on deep convolutional neural network
CN105657402B (en) A kind of depth map restoration methods
CN110210362A (en) A kind of method for traffic sign detection based on convolutional neural networks
CN103927531B (en) It is a kind of based on local binary and the face identification method of particle group optimizing BP neural network
CN110633661A (en) Semantic segmentation fused remote sensing image target detection method
CN108399362A (en) A kind of rapid pedestrian detection method and device
CN108154192A (en) High Resolution SAR terrain classification method based on multiple dimensioned convolution and Fusion Features
CN103413151B (en) Hyperspectral image classification method based on figure canonical low-rank representation Dimensionality Reduction
CN108491849A (en) Hyperspectral image classification method based on three-dimensional dense connection convolutional neural networks
CN110533721A (en) A kind of indoor objects object 6D Attitude estimation method based on enhancing self-encoding encoder
CN106682569A (en) Fast traffic signboard recognition method based on convolution neural network
CN107256246A (en) PRINTED FABRIC image search method based on convolutional neural networks
CN110728324B (en) Depth complex value full convolution neural network-based polarimetric SAR image classification method
CN107563442A (en) Hyperspectral image classification method based on sparse low-rank regular graph qualified insertion
CN105894045A (en) Vehicle type recognition method with deep network model based on spatial pyramid pooling
CN107229918A (en) A kind of SAR image object detection method based on full convolutional neural networks
CN107316004A (en) Space Target Recognition based on deep learning
CN113160062B (en) Infrared image target detection method, device, equipment and storage medium
CN106778852A (en) A kind of picture material recognition methods for correcting erroneous judgement
CN107944428A (en) A kind of indoor scene semanteme marking method based on super-pixel collection
CN107209942A (en) Method for checking object and image indexing system
CN108564111A (en) A kind of image classification method based on neighborhood rough set feature selecting
CN109472757A (en) It is a kind of that logo method is gone based on the image for generating confrontation neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant