CN107516290A - Image conversion network acquisition method, apparatus, computing device and storage medium - Google Patents

Image conversion network acquisition method, apparatus, computing device and storage medium

Info

Publication number
CN107516290A
CN107516290A (application CN201710574583.2A)
Authority
CN
China
Prior art keywords
image
network
style
sample
conversion network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710574583.2A
Other languages
Chinese (zh)
Other versions
CN107516290B (en)
Inventor
Shen Falong (申发龙)
Yan Shuicheng (颜水成)
Zeng Gang (曾钢)
Cheng Bin (程斌)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Qihoo Technology Co Ltd
Original Assignee
Beijing Qihoo Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Qihoo Technology Co Ltd filed Critical Beijing Qihoo Technology Co Ltd
Priority to CN201710574583.2A priority Critical patent/CN107516290B/en
Publication of CN107516290A publication Critical patent/CN107516290A/en
Application granted granted Critical
Publication of CN107516290B publication Critical patent/CN107516290B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/04Context-preserving transformations, e.g. by using an importance map
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image conversion network acquisition method, apparatus, computing device and computer storage medium. The image conversion network acquisition method is performed on the basis of a trained first network and includes: obtaining a first image and a second image; inputting the first image and the second image separately into the first network, and performing a weighting operation in the weighting operation layer of the first network according to preset fusion weights, to obtain a second network corresponding to the style resulting from fusing the first image and the second image. The technical solution provided by the invention uses the trained first network to quickly obtain an image conversion network corresponding to the fused style of two style images, effectively improving the efficiency of obtaining an image conversion network and optimizing the image conversion network processing mode.

Description

Image conversion network acquisition method, apparatus, computing device and storage medium
Technical field
The present invention relates to the technical field of image processing, and in particular to an image conversion network acquisition method and apparatus, a computing device and a computer storage medium.
Background technology
With image stylization processing techniques, the style of a style image can be transferred onto everyday photographs, giving the images a better visual effect. In the prior art, a given style image is fed directly into a neural network, and a large number of content images are then used as sample images; an image conversion network corresponding to the given style image is obtained through many training iterations, and that conversion network is then used to perform style conversion on input content images. This requires a very long training time, so the efficiency of obtaining an image conversion network is low. In addition, it is also difficult with the prior art to obtain an image conversion network corresponding to the style obtained by fusing two style images.
Summary of the invention
In view of the above problems, the present invention is proposed in order to provide an image conversion network acquisition method, apparatus, computing device and computer storage medium that overcome the above problems or at least partially solve them.
According to an aspect of the invention, an image conversion network acquisition method is provided. The method is performed on the basis of a trained first network and includes:
obtaining a first image and a second image;
inputting the first image and the second image separately into the first network, and performing a weighting operation in the weighting operation layer of the first network according to preset fusion weights, to obtain a second network corresponding to the style resulting from fusing the first image and the second image.
Further, inputting the first image and the second image separately into the first network and performing a weighting operation in the weighting operation layer of the first network according to preset fusion weights to obtain the second network corresponding to the fused style of the first image and the second image further comprises:
inputting the first image into the first network and performing one forward propagation operation in the first network to determine the weighting operation layer data corresponding to the first image;
inputting the second image into the first network and performing one forward propagation operation in the first network to determine the weighting operation layer data corresponding to the second image;
according to the preset fusion weights, performing a weighting operation in the weighting operation layer of the first network on the weighting operation layer data corresponding to the first image and the weighting operation layer data corresponding to the second image, to obtain the second network corresponding to the style resulting from fusing the first image and the second image.
Further, the weighting operation layer is a bottleneck layer; the bottleneck layer is the level among the convolutional layers of the first network that has the smallest vector dimension.
Further, the weighting operation layer data is vector data.
Further, the sample images used to train the first network include multiple first sample images stored in a style image library and multiple second sample images stored in a content image library.
Further, the training process of the first network is completed through multiple iterations; in one iteration, one first sample image is extracted from the style image library, at least one second sample image is extracted from the content image library, and the training of the first network is carried out using the one first sample image and the at least one second sample image.
Further, during the multiple iterations, one extracted first sample image is kept fixed while at least one second sample image is extracted in turn; after the second sample images in the content image library have been exhausted, the next first sample image is used and at least one second sample image is again extracted in turn.
Further, the training process of the first network is completed through multiple iterations, where one iteration includes:
generating, using a third network corresponding to the style of the first sample image, a third sample image corresponding to the second sample image;
obtaining a first network loss function according to the style loss between the third sample image and the first sample image and the content loss between the third sample image and the second sample image, and carrying out the training of the first network using the first network loss function.
Further, the training step of the first network includes:
extracting one first sample image from the style image library and extracting at least one second sample image from the content image library;
inputting the first sample image into the first network to obtain a third network corresponding to the style of the first sample image;
using the third network corresponding to the style of the first sample image, generating a corresponding third sample image for each of the at least one second sample image;
obtaining a first network loss function according to the style loss between the at least one third sample image and the first sample image and the content loss between the at least one third sample image and the corresponding second sample image, and updating the weight parameters of the first network according to the first network loss function;
performing the training step of the first network iteratively until a predetermined convergence condition is met.
Further, the predetermined convergence condition includes: the number of iterations reaching a preset number of iterations; and/or the output value of the first network loss function being less than a preset threshold; and/or the visual effect parameter of the third sample image corresponding to the second sample image reaching a preset visual effect parameter.
Further, inputting the first sample image into the first network to obtain the third network corresponding to the style of the first sample image further comprises:
extracting style texture features from the first sample image;
inputting the style texture features into the first network to obtain the third network corresponding to the style texture features.
Further, the first network is a meta network obtained by training a neural network, and the second network is an image conversion network.
Further, the third network is a conversion network obtained during the training process of the first network.
Further, the method is performed by a terminal.
According to another aspect of the invention, an image stylization fusion processing method is provided. The method includes:
using the second network obtained by the above image conversion network acquisition method to perform stylization processing on a third image to be processed, obtaining a fourth image corresponding to the third image.
According to another aspect of the invention, an image stylization intensity adjustment method is provided. The method includes:
using the second network obtained by the above image conversion network acquisition method to perform stylization processing on either one of the first image and the second image, obtaining a corresponding fifth image.
According to another aspect of the invention, an image conversion network acquisition apparatus is provided. The apparatus runs on the basis of a trained first network and includes:
an acquisition module adapted to obtain a first image and a second image;
a mapping module adapted to input the first image and the second image separately into the first network and, according to preset fusion weights, perform a weighting operation in the weighting operation layer of the first network to obtain a second network corresponding to the style resulting from fusing the first image and the second image.
Further, the mapping module is further adapted to:
input the first image into the first network and perform one forward propagation operation in the first network to determine the weighting operation layer data corresponding to the first image;
input the second image into the first network and perform one forward propagation operation in the first network to determine the weighting operation layer data corresponding to the second image;
according to the preset fusion weights, perform a weighting operation in the weighting operation layer of the first network on the weighting operation layer data corresponding to the first image and the weighting operation layer data corresponding to the second image, to obtain the second network corresponding to the style resulting from fusing the first image and the second image.
Further, the weighting operation layer is a bottleneck layer; the bottleneck layer is the level among the convolutional layers of the first network that has the smallest vector dimension.
Further, the weighting operation layer data is vector data.
Further, the sample images used to train the first network include multiple first sample images stored in a style image library and multiple second sample images stored in a content image library.
Further, the apparatus also includes a first network training module; the training process of the first network is completed through multiple iterations;
the first network training module is adapted to: in one iteration, extract one first sample image from the style image library, extract at least one second sample image from the content image library, and carry out the training of the first network using the one first sample image and the at least one second sample image.
Further, the first network training module is further adapted to:
keep one extracted first sample image fixed while extracting at least one second sample image in turn; after the second sample images in the content image library have been exhausted, switch to the next first sample image and again extract at least one second sample image in turn.
Further, the apparatus also includes a first network training module; the training process of the first network is completed through multiple iterations;
the first network training module is adapted to: in one iteration, use a third network corresponding to the style of the first sample image to generate a third sample image corresponding to the second sample image;
obtain a first network loss function according to the style loss between the third sample image and the first sample image and the content loss between the third sample image and the second sample image, and carry out the training of the first network using the first network loss function.
Further, the apparatus also includes a first network training module;
the first network training module includes:
an extraction unit adapted to extract one first sample image from the style image library and extract at least one second sample image from the content image library;
a generation unit adapted to input the first sample image into the first network to obtain a third network corresponding to the style of the first sample image;
a processing unit adapted to use the third network corresponding to the style of the first sample image to generate a corresponding third sample image for each of the at least one second sample image;
an updating unit adapted to obtain a first network loss function according to the style loss between the at least one third sample image and the first sample image and the content loss between the at least one third sample image and the corresponding second sample image, and to update the weight parameters of the first network according to the first network loss function;
the first network training module runs iteratively until a predetermined convergence condition is met.
Further, the predetermined convergence condition includes: the number of iterations reaching a preset number of iterations; and/or the output value of the first network loss function being less than a preset threshold; and/or the visual effect parameter of the third sample image corresponding to the second sample image reaching a preset visual effect parameter.
Further, the generation unit is further adapted to:
extract style texture features from the first sample image;
input the style texture features into the first network to obtain the third network corresponding to the style texture features.
Further, the first network is a meta network obtained by training a neural network, and the second network is an image conversion network.
Further, the third network is a conversion network obtained during the training process of the first network.
According to another aspect of the invention, a terminal is provided, including the above image conversion network acquisition apparatus.
According to another aspect of the invention, an image stylization fusion processing apparatus is provided. The apparatus includes:
a first processing module adapted to use the second network obtained by the above image conversion network acquisition apparatus to perform stylization processing on a third image to be processed, obtaining a fourth image corresponding to the third image.
According to another aspect of the invention, an image stylization intensity adjustment apparatus is provided. The apparatus includes:
a second processing module adapted to use the second network obtained by the above image conversion network acquisition apparatus to perform stylization processing on either one of the first image and the second image, obtaining a corresponding fifth image.
According to another aspect of the invention, a computing device is provided, including a processor, a memory, a communication interface and a communication bus, where the processor, the memory and the communication interface communicate with one another through the communication bus;
the memory is configured to store at least one executable instruction, and the executable instruction causes the processor to perform the operations corresponding to the above image conversion network acquisition method.
According to another aspect of the invention, a computer storage medium is provided, in which at least one executable instruction is stored, and the executable instruction causes a processor to perform the operations corresponding to the above image conversion network acquisition method.
According to another aspect of the invention, a computing device is provided, including a processor, a memory, a communication interface and a communication bus, where the processor, the memory and the communication interface communicate with one another through the communication bus;
the memory is configured to store at least one executable instruction, and the executable instruction causes the processor to perform the operations corresponding to the above image stylization fusion processing method.
According to another aspect of the invention, a computer storage medium is provided, in which at least one executable instruction is stored, and the executable instruction causes a processor to perform the operations corresponding to the above image stylization fusion processing method.
According to another aspect of the invention, a computing device is provided, including a processor, a memory, a communication interface and a communication bus, where the processor, the memory and the communication interface communicate with one another through the communication bus;
the memory is configured to store at least one executable instruction, and the executable instruction causes the processor to perform the operations corresponding to the above image stylization intensity adjustment method.
According to yet another aspect of the invention, a computer storage medium is provided, in which at least one executable instruction is stored, and the executable instruction causes a processor to perform the operations corresponding to the above image stylization intensity adjustment method.
According to the technical solution provided by the invention, a first image and a second image are obtained, the first image and the second image are then input separately into a first network, and a weighting operation is performed in the weighting operation layer of the first network according to preset fusion weights, so that a second network corresponding to the style resulting from fusing the first image and the second image is obtained. Using the trained first network, the technical solution provided by the invention can quickly obtain an image conversion network corresponding to the fused style of two style images, effectively improving the efficiency of obtaining an image conversion network and optimizing the image conversion network processing mode.
The above description is only an overview of the technical solution of the invention. In order that the technical means of the invention may be understood more clearly and practiced according to the contents of the specification, and in order that the above and other objects, features and advantages of the invention may become more apparent, specific embodiments of the invention are set forth below.
Brief description of the drawings
Various other advantages and benefits will become clear to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The accompanying drawings are only for the purpose of showing the preferred embodiments and are not to be considered as limiting the invention. Throughout the drawings, the same reference numerals denote the same parts. In the drawings:
Fig. 1 shows a schematic flowchart of an image conversion network acquisition method according to an embodiment of the invention;
Fig. 2 shows a schematic flowchart of a network training method according to an embodiment of the invention;
Fig. 3 shows an example set of results of performing stylization processing on content images using image conversion networks, obtained with the invention, corresponding to the styles of given style images;
Fig. 4 shows a schematic flowchart of an image conversion network acquisition method according to another embodiment of the invention;
Fig. 5a shows a schematic flowchart of an image stylization fusion processing method according to an embodiment of the invention;
Fig. 5b shows an example set of results of performing stylization processing on a third image using second networks, obtained with the invention, corresponding to the style resulting from fusing a first image and a second image;
Fig. 6a shows a schematic flowchart of an image stylization intensity adjustment method according to an embodiment of the invention;
Fig. 6b shows an example set of results of performing stylization processing on the second image using second networks, obtained with the invention, corresponding to the style resulting from fusing the first image and the second image;
Fig. 7 shows a structural block diagram of an image conversion network acquisition apparatus according to an embodiment of the invention;
Fig. 8 shows a connection block diagram of an image conversion network acquisition apparatus and an image stylization fusion processing apparatus according to another embodiment of the invention;
Fig. 9 shows a connection block diagram of an image conversion network acquisition apparatus and an image stylization intensity adjustment apparatus according to another embodiment of the invention;
Fig. 10 shows a schematic structural diagram of a computing device according to an embodiment of the invention.
Detailed description of the embodiments
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although exemplary embodiments of the disclosure are shown in the drawings, it should be understood that the disclosure may be implemented in various forms and should not be limited by the embodiments set forth herein. Rather, these embodiments are provided so that the disclosure will be understood more thoroughly and its scope can be fully conveyed to those skilled in the art.
Fig. 1 shows a schematic flowchart of an image conversion network acquisition method according to an embodiment of the invention. The method is performed by a terminal on the basis of a trained first network. As shown in Fig. 1, the method includes the following steps:
Step S100: obtaining a first image and a second image.
The first image and the second image are two style images with different styles; for ease of distinction, these two style images are referred to in the invention as the first image and the second image. Specifically, the first image and the second image may be style images of any style and are not limited to style images of certain specific styles.
Step S101: inputting the first image and the second image separately into the first network, and performing a weighting operation in the weighting operation layer of the first network according to preset fusion weights, to obtain a second network corresponding to the style resulting from fusing the first image and the second image.
In a specific embodiment of the invention, the first network is a meta network obtained by training a neural network, and the second network is an image conversion network. The first network is trained in advance; specifically, the sample images used to train the first network include multiple first sample images stored in a style image library and multiple second sample images stored in a content image library, where the first sample images are style sample images and the second sample images are content sample images. The trained first network is well suited to any style image and any content image. After the first image is input into the first network, an image conversion network corresponding to the style of the first image can be obtained quickly by mapping; likewise, after the second image is input into the first network, an image conversion network corresponding to the style of the second image can also be obtained quickly by mapping. Therefore, in order to obtain an image conversion network corresponding to the style resulting from fusing the first image and the second image, the obtained first image and second image can be input separately into the first network in step S101, a weighting operation is performed in the weighting operation layer of the first network according to the preset fusion weights, and the weighting operation result is then substituted back into the first network, so that the second network corresponding to the fused style of the first image and the second image is obtained quickly.
Those skilled in the art may set the preset fusion weights according to actual needs, which is not limited here. In a specific embodiment of the invention, the preset fusion weights may include a preset fusion weight corresponding to the first image and a preset fusion weight corresponding to the second image. Assuming the preset fusion weight corresponding to the first image is 0.8 and the preset fusion weight corresponding to the second image is 0.2, the style resulting from fusing the first image and the second image blends 80% of the style of the first image with 20% of the style of the second image.
Specifically, the training process of the first network is completed through multiple iterations. Optionally, in one iteration, one first sample image is extracted from the style image library, at least one second sample image is extracted from the content image library, and the training of the first network is carried out using the one first sample image and the at least one second sample image.
Optionally, one iteration includes: generating, using a third network corresponding to the style of the first sample image, a third sample image corresponding to the second sample image; obtaining a first network loss function according to the style loss between the third sample image and the first sample image and the content loss between the third sample image and the second sample image, and carrying out the training of the first network using the first network loss function.
The third network is a conversion network obtained during the training process of the first network. Specifically, both the second network and the third network are themselves image conversion networks, but they differ in that the second network is an image conversion network obtained in actual application that corresponds to the fused style of two style images, whereas the third network is an image conversion network obtained during the training process of the first network that corresponds to the style of a single style image.
According to the image conversion network acquisition method provided by this embodiment of the invention, a first image and a second image are obtained, the first image and the second image are then input separately into a first network, and a weighting operation is performed in the weighting operation layer of the first network according to preset fusion weights, so that a second network corresponding to the style resulting from fusing the first image and the second image is obtained. Using the trained first network, the technical solution provided by the invention can quickly obtain an image conversion network corresponding to the fused style of two style images, effectively improving the efficiency of obtaining an image conversion network and optimizing the image conversion network processing mode.
Fig. 2 shows a schematic flowchart of a network training method according to an embodiment of the invention. As shown in Fig. 2, the training step of the first network includes the following steps:
Step S200: extracting one first sample image from the style image library and extracting at least one second sample image from the content image library.
In a specific training process, the style image library stores 100,000 first sample images and the content image library stores 100,000 second sample images, where the first sample images are style images and the second sample images are content images. In step S200, one first sample image is extracted from the style image library and at least one second sample image is extracted from the content image library. Those skilled in the art may set the number of second sample images according to actual needs, which is not limited here.
Step S201: inputting the first sample image into the first network to obtain a third network corresponding to the style of the first sample image.
In a specific embodiment of the invention, the first network is a meta network obtained by training a neural network. For example, the neural network may be a VGG-16 convolutional neural network. Specifically, in step S201, style texture features are extracted from the first sample image, the extracted style texture features are then input into the first network, and a forward propagation operation is performed in the first network to obtain the third network corresponding to the style texture features.
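The mapping in step S201 can be pictured with a minimal PyTorch-style sketch. Everything here — the module name, the feature dimension, the bottleneck size, and the tiny two-layer conversion network — is an assumption made for illustration; the patent only states that style texture features (e.g. from VGG-16) are fed through the first network, whose bottleneck layer has the smallest vector dimension, to produce the third network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MetaNetwork(nn.Module):
    """Illustrative meta network ("first network"): maps style texture features
    to the weights of a small image conversion network ("third network")."""
    def __init__(self, feat_dim=1920, bottleneck_dim=128):
        super().__init__()
        # The bottleneck is the layer with the smallest vector dimension.
        self.encode = nn.Linear(feat_dim, bottleneck_dim)
        self.decode = nn.Linear(bottleneck_dim, 864)  # 864 = parameter count of the toy conversion net below

    def forward(self, style_features):
        z = torch.relu(self.encode(style_features))   # bottleneck data for this style image
        return self.decode(z), z                      # predicted conversion-network weights, bottleneck vector

def build_conversion_net(weights):
    """Reshape a predicted weight vector into a tiny two-layer conversion network
    (layer sizes are illustrative only)."""
    weights = weights.flatten()
    conv1_w = weights[:432].view(16, 3, 3, 3)
    conv2_w = weights[432:].view(3, 16, 3, 3)

    def convert(image):  # image: (N, 3, H, W) tensor in [0, 1]
        h = torch.relu(F.conv2d(image, conv1_w, padding=1))
        return torch.sigmoid(F.conv2d(h, conv2_w, padding=1))
    return convert
```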
Step S202: using the third network corresponding to the style of the first sample image, generating a corresponding third sample image for each of the at least one second sample image.
After the third network corresponding to the style of the first sample image has been obtained, the third network can be used to generate, for each of the at least one second sample image, a corresponding third sample image. The third sample image is a style transfer image corresponding to the second sample image and has a style consistent with that of the first sample image. If 8 second sample images were extracted in step S200, then in step S202 a corresponding third sample image is generated for each of the 8 second sample images, i.e. one third sample image is generated for each second sample image.
Step S203: obtaining a first network loss function according to the style loss between the at least one third sample image and the first sample image and the content loss between the at least one third sample image and the corresponding second sample image, and updating the weight parameters of the first network according to the first network loss function.
Those skilled in the art may set the specific content of the first network loss function according to actual needs, which is not limited here. In a specific embodiment, the first network loss function may be:
L(θ) = λc · CP(I, Ic) + λs · SP(I, Is)
where Ic is the second sample image, Is is the first sample image, I is the third sample image, CP is the perception function for perceiving content differences, SP is the perception function for perceiving style differences, CP(I, Ic) is the content loss between the third sample image and the corresponding second sample image, SP(I, Is) is the style loss between the third sample image and the first sample image, θ is the weight parameter of the first network, λc is the preset content loss weight, and λs is the preset style loss weight. A back propagation operation is performed according to the above first network loss function, and the weight parameter θ of the first network is updated according to the result of the operation.
In a specific training process, the first network is a meta network obtained by training a neural network, and the third network is a conversion network obtained during the training process of the first network. The first network is trained using a stochastic gradient descent algorithm. The specific training process includes:
1. Set the number of iterations k for one first sample image and the number m of second sample images Ic. For example, k may be set to 20 and m to 8, meaning that during the training of the meta network, 20 iterations are performed for each first sample image, and in each iteration 8 second sample images Ic are extracted from the content image library.
2. Extract one first sample image Is from the style image library and keep it fixed.
3. Input the first sample image Is into the first network N(·; θ), and perform a feed-forward propagation operation in the first network N(·; θ) to obtain the third network w corresponding to the style of the first sample image Is. The mapping relation between the third network w and the first network N(·; θ) is: w ← N(Is; θ).
4. Input the m second sample images Ic, which may be denoted Ic1, …, Icm.
5. Using the third network w, generate a corresponding third sample image I for each second sample image Ic.
6. Update the weight parameter θ of the first network according to the first network loss function.
The first network loss function is specifically:
L(θ) = (1/m) Σj [ λc · CP(Ij, Icj) + λs · SP(Ij, Is) ]
where Ij is the third sample image generated for the j-th second sample image Icj. In the first network loss function, λc is the preset content loss weight and λs is the preset style loss weight.
Step S204: performing the training step of the first network iteratively until a predetermined convergence condition is met.
Those skilled in the art may set the predetermined convergence condition according to actual needs, which is not limited here. For example, the predetermined convergence condition may include: the number of iterations reaching a preset number of iterations; and/or the output value of the first network loss function being less than a preset threshold; and/or the visual effect parameter of the third sample image corresponding to the second sample image reaching a preset visual effect parameter. Specifically, whether the predetermined convergence condition is met can be judged by checking whether the number of iterations has reached the preset number of iterations, by checking whether the output value of the first network loss function is less than the preset threshold, or by checking whether the visual effect parameter of the third sample image corresponding to the second sample image has reached the preset visual effect parameter. In step S204, the training step of the first network is performed iteratively until the predetermined convergence condition is met, thereby obtaining the trained first network.
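The numbered procedure above can be condensed into the following training-loop sketch. MetaNetwork and build_conversion_net are the illustrative helpers from the earlier sketch; extract_style_features, content_loss and style_loss are simplified pixel-level stand-ins for the VGG-16 based perceptual terms CP and SP, and the hyperparameter values are assumptions rather than values given by the patent.

```python
import random
import torch
import torch.nn.functional as F

def gram(x):                                   # (N, C, H, W) -> (N, C, C) Gram matrix
    n, c, h, w = x.shape
    f = x.view(n, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def extract_style_features(style_image):       # stand-in: real features come from VGG-16 activations;
    return gram(style_image).flatten(1)         # MetaNetwork's feat_dim must match this output size

def content_loss(I, I_c):                      # stand-in for CP: real version compares VGG feature maps
    return F.mse_loss(I, I_c)

def style_loss(I, I_style):                    # stand-in for SP: real version compares Gram matrices of VGG features
    return F.mse_loss(gram(I), gram(I_style))

def train_meta_network(meta_net, style_library, content_library,
                       k=20, m=8, lambda_c=1.0, lambda_s=250.0, lr=1e-3):
    optimizer = torch.optim.SGD(meta_net.parameters(), lr=lr)   # stochastic gradient descent
    for style_image in style_library:               # fix one first sample image Is at a time
        feat_s = extract_style_features(style_image)
        for _ in range(k):                          # k iterations per first sample image
            weights, _ = meta_net(feat_s)           # w <- N(Is; theta): one feed-forward pass
            convert = build_conversion_net(weights)
            loss = 0.0
            for I_c in random.sample(content_library, m):   # m second sample images
                I = convert(I_c)                            # third sample image (style transfer result)
                loss = loss + lambda_c * content_loss(I, I_c) + lambda_s * style_loss(I, style_image)
            optimizer.zero_grad()
            (loss / m).backward()                   # back propagation
            optimizer.step()                        # update the weight parameter theta
```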
It is worth noting that, in order to improve the stability of the first network during training, during the multiple iterations the invention keeps one extracted first sample image fixed while extracting at least one second sample image in turn; after the second sample images in the content image library have been exhausted, the next first sample image is used and at least one second sample image is again extracted in turn.
By first fixing a first sample image and continuously switching the second sample images, a first network suited to that first sample image and any second sample image can be trained efficiently; the next first sample image is then used and the second sample images are again switched continuously, so that a first network suited to both first sample images and any second sample image is obtained. This process is repeated until the first sample images in the style image library and the second sample images in the content image library have all been extracted, at which point a first network suited to any first sample image and any second sample image has been trained, which is equivalent to training a first network suited to any style image and any content image. This effectively shortens the time required to train the first network and improves its training efficiency.
Since the trained first network is well suited to any style image and any content image, the first network can be used to quickly map a given style image to its corresponding image conversion network, instead of directly training a neural network to obtain the image conversion network; compared with the prior art, this greatly improves the speed of obtaining an image conversion network. In addition, using the first network, a weighting operation can be performed in the weighting operation layer of the first network according to preset fusion weights, so that an image conversion network corresponding to the style resulting from fusing two style images can also be obtained quickly.
Fig. 3 shows an example set of results of performing stylization processing on content images using image conversion networks, obtained with the invention, corresponding to the styles of given style images. As shown in Fig. 3, the images in the first column of Fig. 3 are style images, the images in the first row are content images, and the remaining images in Fig. 3 are the style transfer images obtained after stylization processing. The image in the second row, second column is the style transfer image obtained by performing stylization processing on the content image in the first row, second column using the image conversion network corresponding to the style of the style image in the second row, first column; the image in the second row, third column is the style transfer image obtained by performing stylization processing on the content image in the first row, third column using the image conversion network corresponding to the style of the style image in the second row, first column, and so on. As shown in Fig. 3, the style transfer images take on the styles of the corresponding style images.
The advantages of the method provided by the invention are illustrated below by comparison with two prior-art image stylization processing methods. Table 1 shows the comparison between this method and the two prior-art image stylization processing methods.
Table 1
Method                  | Applicable styles | Time to obtain a conversion network | Time per style transfer image
Gatys et al. (2015)     | any style         | no conversion network obtained      | 9.52 s
Johnson et al. (2016)   | single style      | 4 h                                 | 0.015 s
Method of the invention | any style         | 0.022 s                             | 0.015 s
As shown in Table 1, Gatys et al. published the paper "A Neural Algorithm of Artistic Style" in 2015. The method proposed in that paper cannot obtain an image conversion network, but it can be applied to any style; it takes 9.52 s to obtain a corresponding style transfer image.
Johnson et al. published the paper "Perceptual Losses for Real-Time Style Transfer and Super-Resolution" at the European Conference on Computer Vision in 2016. The method proposed in that paper takes 4 h to obtain a corresponding image conversion network and is only applicable to a single style, but it takes only 0.015 s to obtain a corresponding style transfer image.
Compared with the above two methods, the method provided by the invention is not only applicable to any style, but also takes only 0.022 s to obtain a corresponding image conversion network and only 0.015 s to obtain a corresponding style transfer image, effectively improving the speed of obtaining an image conversion network and the efficiency of obtaining style transfer images.
Fig. 4 shows a schematic flowchart of an image conversion network acquisition method according to another embodiment of the invention. The method is performed by a terminal on the basis of a trained first network. As shown in Fig. 4, the method includes the following steps:
Step S400: obtaining a first image and a second image.
The first image and the second image are two style images with different styles. The first image and the second image may be style images of any style and are not limited to style images of certain specific styles. The first image and the second image may be style images from a website or style images shared by other users, which is not limited here.
Step S401: inputting the first image into the first network, and performing one forward propagation operation in the first network to determine the weighting operation layer data corresponding to the first image.
Since the first network has been trained, it is well suited to any style image and any content image. After the first image is input into the first network, no further training is needed for the first image; only one forward propagation operation needs to be performed in the first network to quickly determine the weighting operation layer data in the weighting operation layer corresponding to the first image. The weighting operation layer data may be vector data.
Specifically, the weighting operation layer may be a bottleneck layer, in which case the weighting operation layer data is bottleneck layer data. The bottleneck layer is the level among the convolutional layers of the first network that has the smallest vector dimension. The transfer function of the bottleneck layer may be linear or nonlinear. The bottleneck layer plays a crucial role in the first network: it prevents the trivial one-to-one identity mapping between input and output that would otherwise be easy to realize, forcing the first network to encode and compress the input style image and then decode and decompress it to produce an estimate of the style image after the bottleneck layer. The bottleneck layer therefore has a noise-filtering effect, and the bottleneck layer data in the bottleneck layer contains the essential information of the style image. Thus, in step S401, after the first image is input into the first network, the determined weighting operation layer data corresponding to the first image contains the essential information of the first image.
Step S402: inputting the second image into the first network, and performing one forward propagation operation in the first network to determine the weighting operation layer data corresponding to the second image.
Since the trained first network is well suited to any style image and any content image, after the second image is input into the first network, no further training is needed for the second image; only one forward propagation operation needs to be performed in the first network to quickly determine the weighting operation layer data in the weighting operation layer corresponding to the second image, where this weighting operation layer data contains the essential information of the second image.
Step S403: according to the preset fusion weights, performing a weighting operation in the weighting operation layer of the first network on the weighting operation layer data corresponding to the first image and the weighting operation layer data corresponding to the second image, to obtain the second network corresponding to the style resulting from fusing the first image and the second image.
Specifically, since the weighting operation layer data corresponding to the first image contains the essential information of the first image and the weighting operation layer data corresponding to the second image contains the essential information of the second image, in step S403 a weighting operation is performed in the weighting operation layer of the first network on the two sets of weighting operation layer data according to the preset fusion weights, and the weighting operation result is then substituted back into the first network, so that the second network corresponding to the style resulting from fusing the first image and the second image is obtained. This also demonstrates the spatial continuity of the network manifold.
In a specific embodiment of the invention, the preset fusion weights may include a preset fusion weight corresponding to the first image and a preset fusion weight corresponding to the second image. Specifically, the weighting operation layer data corresponding to the first image in the weighting operation layer of the first network can first be multiplied by the preset fusion weight corresponding to the first image, the weighting operation layer data corresponding to the second image can be multiplied by the preset fusion weight corresponding to the second image, and the two products are then summed, so that the second network corresponding to the fused style of the first image and the second image is obtained. Assuming the preset fusion weight corresponding to the first image is 0.8 and the preset fusion weight corresponding to the second image is 0.2, the style resulting from fusing the first image and the second image blends 80% of the style of the first image with 20% of the style of the second image.
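A minimal sketch of steps S401-S403, again under the illustrative MetaNetwork defined earlier (the weighting operation layer is taken to be its bottleneck layer, and the 0.8/0.2 split mirrors the example in the text):

```python
import torch

def fuse_styles(meta_net, feat_first, feat_second, w_first=0.8, w_second=0.2):
    with torch.no_grad():
        _, z1 = meta_net(feat_first)               # one forward pass: bottleneck data of the first image
        _, z2 = meta_net(feat_second)              # one forward pass: bottleneck data of the second image
        z_fused = w_first * z1 + w_second * z2     # weighting operation in the bottleneck layer
        weights = meta_net.decode(z_fused)         # substitute the result back into the first network
    return build_conversion_net(weights)           # second network: conversion network for the fused style
```

The point of blending at the bottleneck is that, in this sketch, the bottleneck vector is the only quantity that changes between styles, so a weighted sum of two bottleneck vectors still decodes to a valid conversion network.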
According to the image conversion network acquisition method provided by this embodiment of the invention, the trained first network can be used to quickly determine the weighting operation layer data corresponding to two style images, and an image conversion network corresponding to the style resulting from fusing the two style images can be obtained easily from the weighting operation layer data corresponding to the two style images, further improving the efficiency of obtaining an image conversion network.
The invention also provides an image stylization fusion processing method, which includes: using the second network obtained by the image conversion network acquisition method provided by the invention to perform stylization processing on a third image to be processed, obtaining a fourth image corresponding to the third image.
Fig. 5a shows a schematic flowchart of an image stylization fusion processing method according to an embodiment of the invention. As shown in Fig. 5a, the method includes the following steps:
Step S500: obtaining a third image to be processed.
When the user wants to process another image, other than the first image and the second image, into an image that has both the style of the first image and the style of the second image, that image can be obtained in step S500. To distinguish it from the first image and the second image above, in the invention this additional image to be processed is referred to as the third image to be processed.
Step S501: using the second network corresponding to the style resulting from fusing the first image and the second image, performing stylization processing on the third image to be processed, obtaining a fourth image corresponding to the third image.
How the image conversion network acquisition method provided by the invention obtains the second network corresponding to the style resulting from fusing the first image and the second image has been described in detail in the above embodiments of the image conversion network acquisition method and will not be repeated here.
After the third image to be processed is obtained, the second network corresponding to the fused style of the first image and the second image is used to perform stylization processing on the third image to be processed. The fourth image obtained after the stylization processing is the style transfer image corresponding to the third image, and this style transfer image has the style resulting from fusing the first image and the second image. For example, when the second network is an image conversion network corresponding to a fusion of 80% of the style of the first image and 20% of the style of the second image, the fourth image obtained by performing stylization processing with that image conversion network has 80% of the style of the first image and 20% of the style of the second image.
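As a usage sketch, relying entirely on the hypothetical helpers from the previous sketches rather than on anything named in the patent, the fusion processing of steps S500-S501 amounts to:

```python
second_net = fuse_styles(meta_net,
                         extract_style_features(first_image),
                         extract_style_features(second_image),
                         w_first=0.8, w_second=0.2)
fourth_image = second_net(third_image)   # fourth image: 80% first-image style, 20% second-image style
```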
Fig. 5 b show corresponding with the style after the fusion of the first image and the second image the obtained using the present invention Two networks carry out the result example set figure of stylized processing to the 3rd image, wherein, in the group figure shown in Fig. 5 b, at first Image shown in the upper right corner of figure is the first image, is the second image in the image shown in the upper right corner of last figure, and group Other images in figure are to utilize gained after the stylized processing of the second network progress obtained based on different default blending weights Multiple 4th images arrived, multiple 4th images have the style of the first image and the style of the second image of different accountings.By Fig. 5 b can intuitively find out, in this group of figure, the style of the first image accounts for possessed by the 4th shown from left to right image Than gradually decreasing, and the accounting of the style of its second image gradually increases.
The image stylization method for amalgamation processing provided according to embodiments of the present invention, realizes the fusion to image style, Stylization easily and quickly can be carried out to image using the corresponding image switching network of style after the fusion with two images Processing, the Style Transfer image with the style after fusion is obtained, improve the efficiency of image stylization processing, optimize image Stylized processing mode.
The invention also provides an image stylization intensity adjustment method, which includes: using the second network obtained by the image conversion network acquisition method provided by the invention to perform stylization processing on either one of the first image and the second image, obtaining a corresponding fifth image.
Fig. 6a shows a schematic flowchart of an image stylization intensity adjustment method according to an embodiment of the invention. As shown in Fig. 6a, the method includes the following steps:
Step S600: obtaining the second image.
Assuming the user wants to process the second image into an image that has the style of the first image at a certain stylization intensity, the second image can be obtained in step S600.
Step S601: using the second network corresponding to the style resulting from fusing the first image and the second image, performing stylization processing on the second image, obtaining a corresponding fifth image.
How the image conversion network acquisition method provided by the invention obtains the second network corresponding to the style resulting from fusing the first image and the second image has been described in detail in the above embodiments of the image conversion network acquisition method and will not be repeated here.
After the second image is obtained, the second network corresponding to the fused style of the first image and the second image is used to perform stylization processing on the second image. The fifth image obtained after the stylization processing is the style transfer image corresponding to the second image; this style transfer image has the style of the first image and satisfies the preset stylization intensity.
The specific magnitude of the stylization intensity is related to the preset fusion weights used when generating the second network with the image conversion network acquisition method. Assuming the user wants to process the second image into an image that has the style of the first image at a stylization intensity of 60%, then in step S601 the second network obtained by the above image conversion network acquisition method with a preset fusion weight of 0.6 for the first image and 0.4 for the second image is used to perform stylization processing on the second image. This second network is the image conversion network corresponding to a fusion of 60% of the style of the first image and 40% of the style of the second image, so the fifth image corresponding to the second image has 60% of the style of the first image and 40% of the style of the second image. Equivalently, the fifth image retains 40% of the style of the original second image, i.e. the fifth image has the style of the first image at a stylization intensity of 60%, thereby realizing adjustment of the image stylization intensity.
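Under the same assumptions as the earlier sketches, the intensity adjustment described above is simply the fusion applied to the second image itself:

```python
second_net = fuse_styles(meta_net,
                         extract_style_features(first_image),
                         extract_style_features(second_image),
                         w_first=0.6, w_second=0.4)
fifth_image = second_net(second_image)   # fifth image: first-image style at 60% intensity
```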
Fig. 6 b show corresponding with the style after the fusion of the first image and the second image the obtained using the present invention Two networks carry out the result example set figure of stylized processing to the second image, wherein, in the group figure shown in Fig. 6 b, at first Image shown in the upper right corner of figure is the second image, is the first image in the image shown in the upper right corner of last figure, and group Other images in figure are to utilize gained after the stylized processing of the second network progress obtained based on different default blending weights Multiple 5th images arrived, multiple 5th images have different stylized intensity.Can intuitively it be found out by Fig. 6 b, in the group figure In, the accounting of the style of the first image gradually increases possessed by the 5th shown from left to right image, i.e. its stylized intensity Gradually increase.
The image stylization intensity adjustment method provided according to embodiments of the present invention, is realized to image stylization intensity Regulation, can be easily and quickly to the two figures using the corresponding image switching network of style after the fusion with two images Any one image as in carries out stylized processing, obtains the Style Transfer image with corresponding stylized intensity, improves The efficiency of image stylization processing, optimizes image stylization processing mode.
Fig. 7 shows a structural block diagram of an image conversion network acquisition apparatus according to an embodiment of the invention. The apparatus runs on the basis of a trained first network. As shown in Fig. 7, the apparatus includes an acquisition module 711 and a mapping module 712.
The acquisition module 711 is adapted to obtain a first image and a second image.
The first image and the second image are two style images with different styles. The first image and the second image may be style images of any style and are not limited to style images of certain specific styles.
The mapping module 712 is adapted to input the first image and the second image separately into the first network and, according to preset fusion weights, perform a weighting operation in the weighting operation layer of the first network to obtain a second network corresponding to the style resulting from fusing the first image and the second image.
Specifically, the sample images used to train the first network include multiple first sample images stored in a style image library and multiple second sample images stored in a content image library. The mapping module 712 inputs the first image and the second image obtained by the acquisition module 711 separately into the first network, performs a weighting operation in the weighting operation layer of the first network according to the preset fusion weights, and then substitutes the weighting operation result back into the first network, so that the second network corresponding to the fused style of the first image and the second image is obtained quickly.
According to the image conversion network acquisition apparatus provided by this embodiment of the invention, the acquisition module obtains a first image and a second image, and the mapping module inputs the first image and the second image separately into the first network and performs a weighting operation in the weighting operation layer of the first network according to preset fusion weights, obtaining a second network corresponding to the style resulting from fusing the first image and the second image. Using the trained first network, the technical solution provided by the invention can quickly obtain an image conversion network corresponding to the fused style of two style images, effectively improving the efficiency of obtaining an image conversion network and optimizing the image conversion network processing mode.
Fig. 8 shows a connection block diagram of an image switching network acquisition device and an image stylization fusion processing device according to another embodiment of the present invention, in which the image switching network acquisition device runs based on a trained first network. As shown in Fig. 8, the image switching network acquisition device 810 includes an acquisition module 811, a first network training module 812 and a mapping module 813, and the image stylization fusion processing device 820 includes a first processing module 821.
The acquisition module 811 in the image switching network acquisition device 810 is adapted to obtain the first image and the second image.
The first network training module 812 is adapted to train the first network.
The training process of the first network is completed through multiple iterations. The first network training module 812 is adapted to, in one iteration, extract one first sample image from the style image library, extract at least one second sample image from the content image library, and train the first network using the one first sample image and the at least one second sample image.
Optionally, the first network training module 812 is adapted to, during one iteration: generate, using a third network corresponding to the style of the first sample image, a third sample image corresponding to the second sample image; obtain a first network loss function according to the style loss between the third sample image and the first sample image and the content loss between the third sample image and the second sample image; and train the first network using the first network loss function.
In a specific embodiment, the first network training module 812 may include an extraction unit 8121, a generation unit 8122, a processing unit 8123 and an updating unit 8124.
Specifically, the extraction unit 8121 is adapted to extract one first sample image from the style image library and at least one second sample image from the content image library.
The generation unit 8122 is adapted to input the first sample image into the first network to obtain the third network corresponding to the style of the first sample image.
The third network is a switching network obtained during the training process of the first network. The generation unit 8122 is further adapted to extract style texture features from the first sample image and input the style texture features into the first network to obtain the third network corresponding to the style texture features.
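As a rough, hypothetical illustration of how a first network could map style texture features to the parameters of a third network, the following sketch uses a single bottleneck layer followed by a fully connected head; the layer types, sizes and names are assumptions made for the example only:

```python
import torch
import torch.nn as nn

class FirstNetworkHead(nn.Module):
    """Illustrative head of a metanetwork: style texture features in,
    flattened parameters of a small image switching network out."""
    def __init__(self, feature_dim=1920, bottleneck_dim=128, target_param_count=9216):
        super().__init__()
        self.bottleneck = nn.Linear(feature_dim, bottleneck_dim)      # ranking operation layer
        self.weight_head = nn.Linear(bottleneck_dim, target_param_count)

    def forward(self, style_features):
        z = torch.relu(self.bottleneck(style_features))               # bottleneck vector
        return self.weight_head(z)                                    # parameters of the "third network"
```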
The processing unit 8123 is adapted to generate, using the third network corresponding to the style of the first sample image, a third sample image corresponding to each of the at least one second sample image.
The updating unit 8124 is adapted to obtain the first network loss function according to the style loss between the at least one third sample image and the first sample image and the content loss between the at least one third sample image and the corresponding second sample image, and to update the weight parameters of the first network according to the first network loss function. Those skilled in the art may set the specific content of the first network loss function according to actual needs, which is not limited here. In a specific embodiment, the first network loss function may be:

L(θ) = λ_c · CP(I, I_c) + λ_s · SP(I, I_s)

where I_c is the second sample image, I_s is the first sample image, I is the third sample image, CP is a perception function that perceives content differences, SP is a perception function that perceives style differences, CP(I, I_c) is the content loss between the third sample image and the corresponding second sample image, SP(I, I_s) is the style loss between the third sample image and the first sample image, θ is the weight parameter of the neural network, λ_c is the preset content loss weight, and λ_s is the preset style loss weight.
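A sketch of such a loss, using mean-squared feature differences for the content perception term CP and Gram-matrix differences for the style perception term SP (a common choice that is assumed here, not prescribed by the disclosure), could look as follows; the feature maps are taken from a fixed perception network chosen by the caller:

```python
import torch
import torch.nn.functional as F

def gram_matrix(feature_map):
    """Gram matrix used by the style perception term SP (illustrative)."""
    b, c, h, w = feature_map.shape
    f = feature_map.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def first_network_loss(feats_third, feats_second, feats_first,
                       lambda_c=1.0, lambda_s=250.0):
    """feats_* are lists of feature maps of the third, second and first sample
    images from a fixed perception network (e.g. selected VGG layers); the
    layer choice and the weights lambda_c, lambda_s are placeholders."""
    content_loss = F.mse_loss(feats_third[-1], feats_second[-1])        # CP(I, I_c)
    style_loss = sum(F.mse_loss(gram_matrix(a), gram_matrix(b))
                     for a, b in zip(feats_third, feats_first))         # SP(I, I_s)
    return lambda_c * content_loss + lambda_s * style_loss
```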
The first network training module 812 runs iteratively until a predetermined convergence condition is met. Specifically, the predetermined convergence condition includes: the number of iterations reaches a preset number of iterations; and/or the output value of the first network loss function is less than a predetermined threshold; and/or the visual effect parameter of the third sample image corresponding to the second sample image reaches a preset visual effect parameter.
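By way of illustration only, such a predetermined convergence test might be expressed as follows; the concrete iteration count and threshold are placeholders rather than values from this disclosure:

```python
def converged(iteration, loss_value, max_iterations=40000, loss_threshold=1e-3):
    """Illustrative convergence check: preset iteration count reached,
    and/or the first network loss falls below a predetermined threshold."""
    return iteration >= max_iterations or loss_value < loss_threshold
```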
The first network training module 812 is further adapted to: keep the extracted first sample image fixed while alternately extracting at least one second sample image; and, after the second sample images in the content image library have been extracted, switch to the next first sample image and again alternately extract at least one second sample image. In this way, a first network applicable to arbitrary style images and arbitrary content images can be trained efficiently, which effectively reduces the time required to train the first network and improves the training efficiency of the first network.
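The sampling schedule described above can be sketched as follows; the data sources and the update function are placeholders to be supplied by the caller:

```python
def training_schedule(style_image_library, content_image_library, train_step):
    """Illustrative schedule: fix one first sample (style) image, traverse the
    content image library in batches, then move on to the next style image."""
    for first_sample in style_image_library:           # one style image at a time
        for second_samples in content_image_library:   # at least one content image per step
            train_step(first_sample, second_samples)   # one training iteration
```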
The mapping module 813 is adapted to: input the first image into the first network and perform a forward propagation operation in the first network to determine the ranking operation layer data corresponding to the first image; input the second image into the first network and perform a forward propagation operation in the first network to determine the ranking operation layer data corresponding to the second image; and, according to the preset blending weight, perform a weighted operation in the ranking operation layer of the first network on the ranking operation layer data corresponding to the first image and the ranking operation layer data corresponding to the second image, so as to obtain the second network corresponding to the style after the fusion of the first image and the second image.
In a specific embodiment of the present invention, the first network is a metanetwork obtained by training a neural network, and the second network is an image switching network. Specifically, the ranking operation layer is a bottleneck layer, which is the level with the smallest vector dimension among the convolutional layers of the first network. The ranking operation layer data may be vector data.
The first processing module 821 in the image stylization fusion processing device 820 is adapted to perform stylized processing on a pending third image using the second network obtained by the image switching network acquisition device 810, so as to obtain a fourth image corresponding to the third image.
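Applying the second network in the first processing module then amounts to a single forward pass; a sketch, assuming the second network is an ordinary callable transform network (an assumption of this example):

```python
import torch

def stylize_pending_image(second_network, third_image):
    """Run the fused-style image switching network on a pending third image
    to obtain the corresponding fourth image (illustrative names)."""
    with torch.no_grad():
        return second_network(third_image)
```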
The technical solution provided by the present invention can quickly determine, using a trained first network, the ranking operation layer data corresponding to two style images, and can then easily obtain, from the ranking operation layer data corresponding to the two style images, the image switching network corresponding to the style after the fusion of the two style images, further improving the efficiency of obtaining an image switching network. In addition, an image can be stylized easily and quickly using the image switching network corresponding to the fused style of the two images, so as to obtain a style transfer image with the fused style. This realizes the fusion of image styles, improves the efficiency of image stylization processing and optimizes the image stylization processing mode.
Fig. 9 shows a connection block diagram of an image switching network acquisition device and an image stylization intensity adjustment device according to another embodiment of the present invention. As shown in Fig. 9, the image switching network acquisition device in this embodiment is the image switching network acquisition device 810 shown in Fig. 8 and is not described again here. The image stylization intensity adjustment device 920 includes a second processing module 921.
The second processing module 921 is adapted to perform stylized processing on either of the first image and the second image using the second network obtained by the image switching network acquisition device 810, so as to obtain a corresponding fifth image.
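Combining this with the blending-weight fusion, the intensity adjustment can be sketched as sweeping the preset blending weight and applying each resulting second network to one of the two style images; this builds on the hypothetical fuse_styles sketch given earlier, and all other names are placeholders:

```python
import torch

def intensity_sweep(meta_net, first_image, second_image,
                    weights=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Produce fifth images of increasing stylization intensity by varying
    the preset blending weight (illustrative sketch)."""
    fifth_images = []
    with torch.no_grad():
        for w in weights:
            second_network = fuse_styles(meta_net, first_image, second_image,
                                         blend_weight=w)
            fifth_images.append(second_network(second_image))  # stylize one of the two images
    return fifth_images
```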
According to the technical solution provided by the present invention, the adjustment of image stylization intensity is realized. Using the image switching network corresponding to the style after the fusion of two images, either of the two images can be stylized easily and quickly to obtain a style transfer image with the corresponding stylization intensity, which improves the efficiency of image stylization processing and optimizes the image stylization processing mode.
The present invention also provides a terminal that includes the above image switching network acquisition device. The terminal may be a mobile phone, a PAD, a computer, a camera device, or the like.
The present invention also provides a non-volatile computer storage medium that stores at least one executable instruction, and the executable instruction can execute the image switching network acquisition method in any of the above method embodiments. The computer storage medium may be a storage card of a mobile phone, a storage card of a PAD, a disk of a computer, a storage card of a camera device, or the like.
Fig. 10 shows a schematic structural diagram of a computing device according to an embodiment of the present invention. The specific embodiments of the present invention do not limit the specific implementation of the computing device. The computing device may be a mobile phone, a PAD, a computer, a camera device, a server, or the like.
As shown in Fig. 10, the computing device may include a processor (processor) 1002, a communications interface (Communications Interface) 1004, a memory (memory) 1006 and a communication bus 1008, wherein the processor 1002, the communications interface 1004 and the memory 1006 communicate with each other through the communication bus 1008.
The communications interface 1004 is used for communicating with network elements of other devices, such as clients or other servers.
The processor 1002 is used for executing a program 1010, and may specifically perform the relevant steps in the above embodiments of the image switching network acquisition method.
Specifically, the program 1010 may include program code, and the program code includes computer operation instructions.
The processor 1002 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention. The one or more processors included in the computing device may be processors of the same type, such as one or more CPUs, or processors of different types, such as one or more CPUs and one or more ASICs.
The memory 1006 is used for storing the program 1010. The memory 1006 may include a high-speed RAM memory, and may also include a non-volatile memory, for example at least one disk memory.
The program 1010 may specifically be used to cause the processor 1002 to execute the image switching network acquisition method in any of the above method embodiments. For the specific implementation of each step in the program 1010, reference may be made to the corresponding description of the corresponding steps and units in the above image switching network acquisition embodiments, which is not repeated here. Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the devices and modules described above may refer to the corresponding process descriptions in the foregoing method embodiments, and are not repeated here.
The present invention also provides a non-volatile computer storage medium that stores at least one executable instruction, and the executable instruction can execute the image stylization fusion processing method in any of the above method embodiments. The computer storage medium may be a storage card of a mobile phone, a storage card of a PAD, a disk of a computer, a storage card of a camera device, or the like.
The present invention also provides a computing device, including a processor, a memory, a communications interface and a communication bus, wherein the processor, the memory and the communications interface communicate with each other through the communication bus; the memory is used for storing at least one executable instruction, and the executable instruction causes the processor to perform operations corresponding to the above image stylization fusion processing method. The computing device may be a mobile phone, a PAD, a computer, a camera device, or the like. The structural schematic diagram of this computing device is the same as that of the computing device shown in Fig. 10 and is not repeated here.
The present invention also provides a non-volatile computer storage medium that stores at least one executable instruction, and the executable instruction can execute the image stylization intensity adjustment method in any of the above method embodiments. The computer storage medium may be a storage card of a mobile phone, a storage card of a PAD, a disk of a computer, a storage card of a camera device, or the like.
The present invention also provides a computing device, including a processor, a memory, a communications interface and a communication bus, wherein the processor, the memory and the communications interface communicate with each other through the communication bus; the memory is used for storing at least one executable instruction, and the executable instruction causes the processor to perform operations corresponding to the above image stylization intensity adjustment method. The computing device may be a mobile phone, a PAD, a computer, a camera device, or the like. The structural schematic diagram of this computing device is the same as that of the computing device shown in Fig. 10 and is not repeated here.
The algorithms and displays provided herein are not inherently related to any particular computer, virtual system or other device. Various general-purpose systems may also be used with the teachings herein. The structure required to construct such systems is apparent from the above description. Moreover, the present invention is not directed to any particular programming language. It should be understood that the content of the invention described herein may be implemented using various programming languages, and the above description of a specific language is given to disclose the best mode of carrying out the invention.
In the specification provided here, numerous specific details are set forth. It should be understood, however, that embodiments of the present invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques are not shown in detail so as not to obscure the understanding of this description.
Similarly, it should be understood that, in order to streamline the disclosure and aid the understanding of one or more of the various inventive aspects, in the above description of exemplary embodiments of the present invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof. However, this method of disclosure is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, the inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Therefore, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of the present invention.
Those skilled in the art will appreciate that the modules in the device of an embodiment may be adaptively changed and arranged in one or more devices different from that embodiment. The modules, units or components in an embodiment may be combined into one module, unit or component, and may furthermore be divided into a plurality of sub-modules, sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by an alternative feature serving the same, an equivalent or a similar purpose.
Furthermore, those skilled in the art will appreciate that, although some embodiments described herein include certain features included in other embodiments rather than other features, combinations of features of different embodiments are meant to be within the scope of the present invention and to form different embodiments. For example, in the following claims, any one of the claimed embodiments may be used in any combination.
The various component embodiments of the present invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components according to the embodiments of the present invention. The present invention may also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for performing part or all of the methods described herein. Such a program implementing the present invention may be stored on a computer-readable medium, or may take the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the present invention, and that those skilled in the art may design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claims. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The present invention may be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second and third does not indicate any order; these words may be interpreted as names.

Claims (10)

1. An image switching network acquisition method, the method being performed based on a trained first network, the method comprising:
obtaining a first image and a second image;
inputting the first image and the second image separately into the first network, performing a weighted operation in a ranking operation layer of the first network according to a preset blending weight, and obtaining a second network corresponding to a style after the fusion of the first image and the second image.
2. An image stylization fusion processing method, the method comprising:
performing stylized processing on a pending third image using a second network obtained by the method according to claim 1, to obtain a fourth image corresponding to the third image.
3. An image stylization intensity adjustment method, the method comprising:
performing stylized processing on either of the first image and the second image using a second network obtained by the method according to claim 1, to obtain a corresponding fifth image.
4. An image switching network acquisition device, the device running based on a trained first network, the device comprising:
an acquisition module, adapted to obtain a first image and a second image;
a mapping module, adapted to input the first image and the second image separately into the first network, perform a weighted operation in a ranking operation layer of the first network according to a preset blending weight, and obtain a second network corresponding to a style after the fusion of the first image and the second image.
5. A terminal, comprising the image switching network acquisition device according to claim 4.
6. An image stylization fusion processing device, the device comprising:
a first processing module, adapted to perform stylized processing on a pending third image using a second network obtained by the device according to claim 4, to obtain a fourth image corresponding to the third image.
7. An image stylization intensity adjustment device, the device comprising:
a second processing module, adapted to perform stylized processing on either of the first image and the second image using a second network obtained by the device according to claim 4, to obtain a corresponding fifth image.
8. A computing device, comprising: a processor, a memory, a communications interface and a communication bus, wherein the processor, the memory and the communications interface communicate with each other through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction causes the processor to perform operations corresponding to the image switching network acquisition method according to claim 1.
9. A computer storage medium, wherein at least one executable instruction is stored in the storage medium, and the executable instruction causes a processor to perform operations corresponding to the image switching network acquisition method according to claim 1.
10. A computing device, comprising: a processor, a memory, a communications interface and a communication bus, wherein the processor, the memory and the communications interface communicate with each other through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction causes the processor to perform operations corresponding to the image stylization fusion processing method according to claim 2.
CN201710574583.2A 2017-07-14 2017-07-14 Image conversion network acquisition method and device, computing equipment and storage medium Active CN107516290B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710574583.2A CN107516290B (en) 2017-07-14 2017-07-14 Image conversion network acquisition method and device, computing equipment and storage medium

Publications (2)

Publication Number Publication Date
CN107516290A true CN107516290A (en) 2017-12-26
CN107516290B CN107516290B (en) 2021-03-19

Family

ID=60721877

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710574583.2A Active CN107516290B (en) 2017-07-14 2017-07-14 Image conversion network acquisition method and device, computing equipment and storage medium

Country Status (1)

Country Link
CN (1) CN107516290B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102682420A (en) * 2012-03-31 2012-09-19 北京百舜华年文化传播有限公司 Method and device for converting real character image to cartoon-style image
CN103198469A (en) * 2011-10-31 2013-07-10 卡西欧计算机株式会社 Image processing apparatus and image processing method
CN106778928A (en) * 2016-12-21 2017-05-31 广州华多网络科技有限公司 Image processing method and device

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11276207B2 (en) 2018-01-26 2022-03-15 Tencent Technology (Shenzhen) Company Limited Image processing method, storage medium, and computer device
WO2019144855A1 (en) * 2018-01-26 2019-08-01 腾讯科技(深圳)有限公司 Image processing method, storage medium, and computer device
CN108733439A (en) * 2018-03-26 2018-11-02 西安万像电子科技有限公司 Image processing method and device
CN109035318A (en) * 2018-06-14 2018-12-18 西安电子科技大学 A kind of conversion method of image style
CN109035318B (en) * 2018-06-14 2021-11-30 西安电子科技大学 Image style conversion method
CN109146825B (en) * 2018-10-12 2020-11-27 深圳美图创新科技有限公司 Photography style conversion method, device and readable storage medium
CN109146825A (en) * 2018-10-12 2019-01-04 深圳美图创新科技有限公司 Photography style conversion method, device and readable storage medium storing program for executing
CN111080527A (en) * 2019-12-20 2020-04-28 北京金山云网络技术有限公司 Image super-resolution method and device, electronic equipment and storage medium
CN111080527B (en) * 2019-12-20 2023-12-05 北京金山云网络技术有限公司 Image super-resolution method and device, electronic equipment and storage medium
CN111784566A (en) * 2020-07-01 2020-10-16 北京字节跳动网络技术有限公司 Image processing method, migration model training method, device, medium and equipment
CN111784566B (en) * 2020-07-01 2022-02-08 北京字节跳动网络技术有限公司 Image processing method, migration model training method, device, medium and equipment
WO2022193077A1 (en) * 2021-03-15 2022-09-22 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for image segmentation
CN113111791A (en) * 2021-04-16 2021-07-13 深圳市格灵人工智能与机器人研究院有限公司 Image filter conversion network training method and computer readable storage medium
CN113111791B (en) * 2021-04-16 2024-04-09 深圳市格灵人工智能与机器人研究院有限公司 Image filter conversion network training method and computer readable storage medium
CN113850712A (en) * 2021-09-03 2021-12-28 北京达佳互联信息技术有限公司 Training method of image style conversion model, and image style conversion method and device

Also Published As

Publication number Publication date
CN107516290B (en) 2021-03-19

Similar Documents

Publication Publication Date Title
CN107516290A (en) Image switching network acquisition methods, device, computing device and storage medium
CN107392842A (en) Image stylization processing method, device, computing device and computer-readable storage medium
CN107277391A (en) Image switching network processing method, server, computing device and storage medium
US11704547B2 (en) Transposing neural network matrices in hardware
US20200334536A1 (en) Performing kernel striding in hardware
EP3380992B1 (en) Generating images using neural networks
CN106471526A (en) Process image using deep neural network
CN107392316A (en) Network training method, device, computing device and computer-readable storage medium
CN107766936A (en) Artificial neural networks, artificial neuron and the control method of artificial neuron
Wang et al. From weighted potential game to weighted harmonic game
CN109214543B (en) Data processing method and device
CN109903100A (en) A kind of customer churn prediction technique, device and readable storage medium storing program for executing
CN106373112A (en) Image processing method, image processing device and electronic equipment
CN109145107A (en) Subject distillation method, apparatus, medium and equipment based on convolutional neural networks
CN110009644B (en) Method and device for segmenting line pixels of feature map
CN106169961A (en) The network parameter processing method and processing device of neutral net based on artificial intelligence
CN109117475A (en) A kind of method and relevant device of text rewriting
CN112966729B (en) Data processing method and device, computer equipment and storage medium
CN114462582A (en) Data processing method, device and equipment based on convolutional neural network model
CN109697511B (en) Data reasoning method and device and computer equipment
CN112559864B (en) Bilinear graph network recommendation method and system based on knowledge graph enhancement
CN113297310A (en) Method for selecting block chain fragmentation verifier in Internet of things
CN109308194B (en) Method and apparatus for storing data
CN111460108A (en) Intelligent response method and device
CN108769236A (en) Using recommendation method, electronic device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant