CN110689478A - Image stylization processing method and device, electronic equipment and readable medium - Google Patents

Image stylization processing method and device, electronic equipment and readable medium

Info

Publication number
CN110689478A
CN110689478A
Authority
CN
China
Prior art keywords
image
background
processed
stylized
processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910910696.4A
Other languages
Chinese (zh)
Other versions
CN110689478B (en)
Inventor
贾靖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN201910910696.4A priority Critical patent/CN110689478B/en
Publication of CN110689478A publication Critical patent/CN110689478A/en
Priority to PCT/CN2020/113974 priority patent/WO2021057463A1/en
Application granted granted Critical
Publication of CN110689478B publication Critical patent/CN110689478B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/04 Context-preserving transformations, e.g. by using an importance map
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

This disclosure provides an image stylization processing method and device, an electronic device, and a readable medium. The method comprises the following steps: if the complexity of the background of the image to be processed is lower than a complexity threshold, replacing the original background of the image to be processed with a template background to obtain a target image; inputting the target image into a stylized processing model to obtain a preliminary stylized image; and replacing the background of the preliminary stylized image with the original background to obtain a final stylized image. According to the scheme of the embodiments of this disclosure, the complexity of the image background is taken into account when stylizing an image, which improves the fineness and visual appeal of the stylization result.

Description

Image stylization processing method and device, electronic equipment and readable medium
Technical Field
The embodiment of the disclosure relates to the technical field of internet, in particular to an image stylization processing method and device, an electronic device and a readable medium.
Background
Most existing terminal devices are equipped with a photographing function. To meet the personalization needs of end users, such devices generally also provide image processing functions alongside photographing, enabling stylized processing of the images users capture.
At present, when a terminal device stylizes an image, it typically applies a stylization algorithm directly to the whole image through a neural network model. However, because different images vary in content complexity, this approach struggles to meet the stylization needs of images of differing complexity: the result is often coarse and the visual appeal suffers, so improvement is urgently needed.
Disclosure of Invention
The embodiments of this disclosure provide an image stylization processing method and device, an electronic device, and a readable medium, which take the complexity of the image background into account when stylizing an image and improve the fineness and visual appeal of the stylization result.
In a first aspect, an embodiment of the present disclosure provides an image stylization processing method, including:
if the complexity of the background of the image to be processed is lower than the complexity threshold value, replacing the original background of the image to be processed with a template background to obtain a target image;
inputting the target image into a stylized processing model to obtain a preliminary stylized image;
and replacing the background of the preliminary stylized image with the original background to obtain a final stylized image.
In a second aspect, an embodiment of the present disclosure further provides an image stylization processing apparatus, including:
the background replacing module is used for replacing the original background of the image to be processed with the template background to obtain a target image if the background complexity of the image to be processed is lower than the complexity threshold;
the stylized processing module is used for inputting the target image into a stylized processing model to obtain a preliminary stylized processing image;
the background replacing module is further configured to replace the background of the preliminary stylized image with the original background to obtain a final stylized image.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, where the electronic device includes:
one or more processors;
a memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement an image stylization processing method as in any embodiment of the present disclosure.
In a fourth aspect, embodiments of the present disclosure provide a readable medium, on which a computer program is stored, which when executed by a processor, implements an image stylization processing method according to any of the embodiments of the present disclosure.
The embodiments of this disclosure provide an image stylization processing method and device, an electronic device, and a readable medium. It is first judged whether the background complexity of the image to be processed is lower than a complexity threshold; if so, the original background of the image to be processed is replaced with a preset template background to obtain a target image. After the target image is stylized by the stylized processing model, the background of the resulting preliminary stylized image is replaced with the original background to obtain the final stylized image. The scheme of the embodiments thus takes the complexity of the image background into account: when the background complexity of the image to be processed is low, the template background is substituted before stylization, which avoids the coarse results produced by conventional stylization methods on images with low background complexity and improves the fineness and visual appeal of the stylization result.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
FIGS. 1A-1B are schematic diagrams illustrating the stylized processing of an image to be processed according to the prior art;
FIG. 2A is a flowchart illustrating an image stylization processing method according to a first embodiment of the disclosure;
FIGS. 2B-2E are schematic diagrams illustrating the effect of stylizing an image to be processed according to the first embodiment of the disclosure;
FIG. 3 is a flowchart illustrating another image stylization processing method provided in the second embodiment of the disclosure;
FIG. 4 is a flowchart illustrating another method for stylizing an image according to a third embodiment of the disclosure;
FIG. 5 is a schematic structural diagram of an image stylization processing apparatus according to a fourth embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of an electronic device provided in a fifth embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an", and "the" in this disclosure are intended to be illustrative rather than limiting; those skilled in the art will understand them as "one or more" unless the context clearly indicates otherwise. The names of messages or information exchanged between multiple parties in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
Before the embodiments of the present disclosure are described, an application scenario for the image stylization processing method, device, electronic device, and readable medium is introduced. Specifically, the embodiments of the present disclosure are applicable to stylizing images captured by a user through an electronic device (such as a mobile phone, tablet computer, wearable device, or camera), as well as images stored in the device's gallery, so as to generate personalized images for the user. When an image to be processed is stylized, the whole image is usually processed by a pre-trained neural network model; when the complexity of the background part of the image is relatively low, the stylization result is rough. For example, fig. 1A is an image to be processed with low background complexity, and fig. 1B is the effect obtained by applying cartoon-style processing to it according to a conventional method: the overall effect is poor and seriously detracts from the appeal of the cartoon-style result. There is therefore a need to improve existing image stylization methods.
The following describes an image stylization processing method, an image stylization processing apparatus, an electronic device, and a readable medium according to the following embodiments of the disclosure in detail.
Example one
FIG. 2A is a flowchart illustrating an image stylization processing method according to a first embodiment of the disclosure; fig. 2B to 2E are schematic diagrams illustrating the effect of stylizing an image to be processed according to this embodiment. This embodiment is applicable to stylizing an image to be processed, and is particularly suitable when the background complexity of the image is below the complexity threshold. The method may be performed by an image stylization processing apparatus or an electronic device; the apparatus may be implemented in software and/or hardware and may be configured in the electronic device. Optionally, the electronic device may be a terminal device with an image processing function, such as a mobile phone, tablet computer, wearable device, or camera.
Optionally, as shown in fig. 2A, the method in this embodiment may include the following steps:
s201, if the complexity of the background of the image to be processed is lower than the complexity threshold, replacing the original background of the image to be processed with a template background to obtain a target image.
The image to be processed is the image to be stylized in this embodiment; it may be an image currently captured by a camera on the electronic device, or a stored image selected from the device's local gallery by a user's click operation. The original background is the background the image to be processed carries itself; for example, the original background of the image shown in fig. 2B is a gray background. The template background is a preset background image with high complexity. Optionally, there may be one or more template backgrounds; when there are several, background images of different complexities may be set as templates. The complexity threshold is a preset criterion, set on the background complexity, for deciding whether to replace the background of the image to be processed. Background complexity is the complexity of the background region of the image: when the background is a simple solid-color region such as a wall, blue sky, or sea, the complexity is low; when it is a cluttered scene such as a road or a park, the complexity is high.
Optionally, one implementation of judging whether the background complexity of the image to be processed is lower than the complexity threshold is to compute a pixel gray-level histogram over the background region of the image: the wider the distribution of gray values in the histogram, the higher the background complexity. A second possible implementation is to extract texture features from the original background of the image to be processed and judge from them whether the background complexity is lower than the complexity threshold. Specifically, a texture feature extraction algorithm may first be applied to the original background; such algorithms include, but are not limited to, statistical methods, geometric methods, model-based methods, structural analysis methods, and signal processing methods. After the texture features of the original background are extracted, the number of pixels corresponding to them, or their ratio to the total number of background pixels, can be taken as the background complexity of the image, which is then compared against the preset complexity threshold.
In a preferred embodiment, when the second implementation is used, this step may proceed as follows: perform edge detection on the original background of the image to be processed to obtain its texture features; take the proportion of texture-feature pixels in the original background as the background complexity of the image; and judge whether that background complexity is below the complexity threshold. Concretely, edge detection is run on the original background content and the edge detection result is taken as the texture feature of the original image; edge detection algorithms include, but are not limited to, the Canny, Roberts, and Sobel algorithms. The ratio of the number of edge pixels to the total number of background pixels is then computed, taken as the background complexity, and compared with the preset complexity threshold to judge whether the background complexity of the image to be processed is below it.
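As an illustrative sketch of the preferred implementation above (not the patent's exact algorithm), the snippet below estimates background complexity as the ratio of edge pixels to total pixels, using a simple forward-difference gradient in place of the Canny/Roberts/Sobel detectors the text names; the function names and both thresholds are hypothetical.

```python
from typing import List

def edge_pixel_ratio(gray: List[List[int]], grad_threshold: int = 48) -> float:
    """Fraction of pixels whose forward-difference gradient magnitude
    (|gx| + |gy|) exceeds grad_threshold -- a simplified stand-in for a
    real edge detector such as Canny or Sobel."""
    h, w = len(gray), len(gray[0])
    edges = 0
    for y in range(h):
        for x in range(w):
            gx = gray[y][min(x + 1, w - 1)] - gray[y][x]
            gy = gray[min(y + 1, h - 1)][x] - gray[y][x]
            if abs(gx) + abs(gy) > grad_threshold:
                edges += 1
    return edges / (h * w)

def needs_template_background(gray: List[List[int]],
                              complexity_threshold: float = 0.05) -> bool:
    """S201's decision: a low edge-pixel ratio means a simple background,
    so a template background should be swapped in before stylization."""
    return edge_pixel_ratio(gray) < complexity_threshold

# A flat gray "wall" background: no gradients at all.
flat = [[128] * 8 for _ in range(8)]
# A 0/255 checkerboard: nearly every pixel borders a strong edge.
busy = [[255 if (x + y) % 2 else 0 for x in range(8)] for y in range(8)]
```

Here `flat` would trigger the background swap while `busy` would not; a production version would run the detector only over the segmented background region rather than the whole image.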
Optionally, in the general case, if the background complexity is low, stylizing the image to be processed may yield a rough result. Therefore, in this step, when the background complexity of the image is higher than or equal to the complexity threshold, the background is complex enough that a uniform stylization method completes the operation accurately: the image to be processed can be input directly into the stylized processing model, and the model's output is taken as the stylized image. When the background complexity is lower than the threshold, the background is relatively simple and a rough result is likely, so the background must be replaced: the original background of the image to be processed is swapped for a preset template background to obtain the target image. For example, fig. 2B is the image to be processed of this embodiment; its original background is a plain gray background of low complexity. If its background complexity is below the threshold, the gray original background of fig. 2B may be replaced with a preset high-complexity peach-blossom template background, yielding the target image shown in fig. 2C.
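The background-replacement step itself can be sketched as a mask-based composite, assuming a binary foreground mask is already available from the region segmentation described below; the function name and toy data are illustrative, not from the patent.

```python
from typing import List

Pixel = int  # grayscale values for brevity; RGB tuples work the same way

def replace_background(image: List[List[Pixel]],
                       fg_mask: List[List[bool]],
                       template: List[List[Pixel]]) -> List[List[Pixel]]:
    """Build the target image of S201: keep every foreground pixel of the
    image to be processed, and take all background pixels from the
    high-complexity template background (all inputs are the same size)."""
    h, w = len(image), len(image[0])
    return [[image[y][x] if fg_mask[y][x] else template[y][x]
             for x in range(w)]
            for y in range(h)]

# Toy 2x2 example: the top-left and bottom-right pixels are "foreground".
image = [[10, 20], [30, 40]]
mask = [[True, False], [False, True]]
template = [[7, 7], [7, 7]]
target = replace_background(image, mask, template)
```

The same composite, with the roles of the two sources swapped, realizes the background restoration of S203.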
Optionally, whether judging the background complexity or performing the background replacement, the background region and foreground region of the image to be processed must first be determined. This may be done by segmenting the image with an image recognition or image clustering algorithm to obtain its background and foreground regions; by pre-training a background segmentation model and using it to segment the image; or by having the user manually select the background and foreground regions. This embodiment does not limit the choice.
S202, inputting the target image into the stylized processing model to obtain a preliminary stylized processing image.
The stylized processing model can be a pre-trained neural network model which can be used for stylizing the content in the image. The specific training process of the stylized processing model will be described in detail in the following embodiments.
Optionally, in this step, the target image obtained after the background replacement of S201 may be used as an input parameter, and the program code of the pre-trained stylized processing model is called and run; the model then stylizes the content of the input target image according to the stylization algorithm learned during training and outputs the stylized result, i.e., the preliminary stylized image.
For example, the target image shown in fig. 2C is used as input data of the stylized processing model, and program code of the stylized processing model trained in advance is called and run, at this time, the stylized processing model performs comic stylization processing on the target image shown in fig. 2C based on a comic style processing algorithm during training, so as to obtain a preliminary stylized processing image shown in fig. 2D.
Optionally, the stylized processing operation performed on the target image by the stylized processing model in this step may include, but is not limited to, stylization in various filter forms, such as cartoon-style, pixelation-style, sketch-style, and oil-painting-style processing.
And S203, replacing the background of the preliminary stylized image with the original background to obtain a final stylized image.
Optionally, after the preliminary stylized image is obtained, its background is no longer the original background of the image to be processed; to keep the processed image consistent with the original, the current background of the preliminary stylized image must be replaced with the original background in this step. Concretely, background-region identification is performed on the preliminary stylized image to determine its current background (the determination method is the same as that used to determine the original background of the image to be processed, and is not repeated here), and that current background is replaced with the original background of the image to be processed, giving the final stylized image. Illustratively, the current background of the preliminary stylized image shown in fig. 2D is determined, and the original background of the image to be processed shown in fig. 2B is substituted for it, resulting in the final stylized image shown in fig. 2E.
Optionally, if the original background of the image to be processed is used directly to replace the background of the preliminary stylized image, the background portion of the final image will not have undergone any stylization. To improve the consistency and appeal of the result, when replacing the background of the preliminary stylized image, the original background may first be input into a background processing model to obtain a processed original background, which then replaces the background of the preliminary stylized image. The background processing model is a pre-trained neural network model dedicated to stylizing the original background of the image to be processed; the type of stylization it performs is the same as that of the stylized processing model in S202, and the two models are trained similarly, differing only in the algorithm parameters of the trained models. Concretely, the original background removed in S201 is used as input to the background processing model, whose program code is called and run; the model stylizes the input background according to the algorithm learned during training, and the stylized original background then replaces the background of the preliminary stylized image to obtain the final stylized image.
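Putting S201-S203 together, the whole flow can be sketched as below; `stylize` stands in for the stylized processing model and the optional `stylize_background` for the separate background processing model, both of which would be neural networks in the patent (here they are placeholder callables, and all names are illustrative).

```python
from typing import Callable, List, Optional

Image = List[List[int]]

def compose(fg_src: Image, bg_src: Image,
            fg_mask: List[List[bool]]) -> Image:
    """Take foreground pixels from fg_src, background pixels from bg_src."""
    h, w = len(fg_src), len(fg_src[0])
    return [[fg_src[y][x] if fg_mask[y][x] else bg_src[y][x]
             for x in range(w)] for y in range(h)]

def stylize_with_background_swap(
        image: Image,
        fg_mask: List[List[bool]],
        template: Image,
        stylize: Callable[[Image], Image],
        stylize_background: Optional[Callable[[Image], Image]] = None) -> Image:
    target = compose(image, template, fg_mask)   # S201: swap in template
    prelim = stylize(target)                     # S202: stylize whole target
    # S203: restore the original background, optionally stylized separately
    bg = stylize_background(image) if stylize_background else image
    return compose(prelim, bg, fg_mask)

# Placeholder "models": brighten for the main style, darken the background.
def brighten(img: Image) -> Image:
    return [[p + 100 for p in row] for row in img]

def halve(img: Image) -> Image:
    return [[p // 2 for p in row] for row in img]

image = [[10, 20], [30, 40]]
mask = [[True, False], [False, True]]
template = [[0, 0], [0, 0]]
final = stylize_with_background_swap(image, mask, template, brighten, halve)
```

Omitting `stylize_background` reproduces the simpler variant in which the original background is pasted back untouched.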
The embodiment of the disclosure provides an image stylization processing method: it is judged whether the background complexity of the image to be processed is lower than a complexity threshold; if so, the original background of the image to be processed is replaced with a preset template background to obtain a target image. After the target image is stylized by the stylized processing model, the background of the resulting preliminary stylized image is replaced with the original background to obtain the final stylized image. The scheme thus takes the complexity of the image background into account: when the background complexity is low, the template background is substituted before stylization, which avoids the coarse results conventional stylization methods produce on images with low background complexity and improves the fineness and visual appeal of the stylization result.
Example two
FIG. 3 is a flowchart illustrating another image stylization processing method provided in the second embodiment of the present disclosure. This embodiment builds on the alternatives provided by the embodiment above and describes in detail how to train the stylized processing model.
Optionally, as shown in fig. 3, the method in this embodiment may include the following steps:
s301, constructing an initial network model.
Optionally, in this step, a convolutional neural network with a preset number of layers may be constructed in advance according to the requirements of the stylization task, and corresponding initial parameters set for each layer; the initial parameters may include the initial number of channels, loss weight values, channel parameters, the stylization algorithm and its parameters, and so on. This completes the construction of the initial network model. The initial network model constructed here cannot be used directly to stylize an image to be processed; it must first be trained.
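The initial parameters listed above (layer count, channels, loss weights) might be captured in a configuration object like the following; the concrete defaults and the channel-doubling rule are hypothetical conventions, not values from the patent.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class InitialModelConfig:
    """Initial parameters for the S301 network: number of layers,
    per-layer channel counts, and loss weight values."""
    num_layers: int = 5
    base_channels: int = 16
    loss_weights: Dict[str, float] = field(
        default_factory=lambda: {"content": 1.0, "style": 10.0})

    def channels(self) -> List[int]:
        # Double the channel count per layer, a common convnet convention.
        return [self.base_channels * 2 ** i for i in range(self.num_layers)]

cfg = InitialModelConfig()
```

A config like this also makes the S304 adjustment step straightforward, since each adjustable dimension is an explicit field.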
And S302, performing stylized processing training on the initial network model by adopting the sample image data.
The sample image data may be training data required for training the initial network model, and may be composed of a plurality of sets of original images and stylized images of the original images.
Optionally, when performing stylization training on the initial network model constructed in S301, each group of original image and its stylized counterpart in the sample image data may be input into the initial network model in turn as one set of input data, and the relevant initial parameters of the model trained. The specific training process may be similar to the training of existing neural network models for image stylization, and this embodiment does not limit it.
And S303, verifying whether the attribute parameters of the output image of the trained initial network model are larger than a parameter threshold value by adopting verification image data, if not, executing S304, and if so, executing S305.
The verification image data is used to check whether the trained initial network model can perform stylization with high quality. Optionally, when the sample images are collected, a certain proportion (for example, 80%) of the image data may be used as sample image data and the remainder (for example, 20%) as verification image data; alternatively, image data from specially selected shooting scenes may be used. The attribute parameters of the output image are judgment parameters for verifying the stylization effect; they may include, but are not limited to, at least one of a saturation parameter, an edge smoothness parameter, and a shading parameter, each of which can be determined by a fixed formula or algorithm.
Optionally, after the initial network model has been trained on one or more groups of sample image data, verification image data is used to check whether the model trained in S302 can perform high-quality stylization, i.e., whether it can serve as a stylized processing model for actual use. Concretely, the verification images are used as input data and the program code of the trained initial network model is called and run; the model stylizes the input verification images according to the algorithm learned during training and outputs the results, i.e., the verification stylized images. Optionally, when determining the attribute parameters of a verification stylized image, the saturation of the image is calculated as the saturation parameter; the smoothness of edge contours in the image is calculated as the edge smoothness parameter; and the smoothness of regions other than the edge contours is calculated as the shading parameter. When judging whether the calculated attribute parameters exceed the parameter thresholds, a corresponding threshold may be set for each attribute parameter and each parameter compared against its own threshold. If every attribute parameter of the output image of the trained initial network model exceeds its corresponding threshold, the initial network model is fully trained; S305 may be executed, and the trained initial network model is used as the stylized processing model.
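One plausible reading of the saturation check and the all-thresholds-must-pass rule of S303 is sketched below; the patent does not give formulas for the edge smoothness or shading parameters, so only the gating logic and a standard HSV-style saturation are shown, and all names and threshold values are assumptions.

```python
from typing import Dict, List, Tuple

def saturation_param(rgb: List[List[Tuple[int, int, int]]]) -> float:
    """Mean per-pixel saturation (max-min)/max, as in the HSV model --
    an assumed concretization of the text's 'saturation parameter'."""
    total, n = 0.0, 0
    for row in rgb:
        for r, g, b in row:
            mx, mn = max(r, g, b), min(r, g, b)
            total += 0.0 if mx == 0 else (mx - mn) / mx
            n += 1
    return total / n

def passes_verification(params: Dict[str, float],
                        thresholds: Dict[str, float]) -> bool:
    """S303's gate: every attribute parameter must exceed its own
    threshold; otherwise parameter adjustment (S304) follows."""
    return all(params[name] > thresholds[name] for name in thresholds)

# Pure red is fully saturated; pure gray is not saturated at all.
red = [[(255, 0, 0)]]
gray = [[(100, 100, 100)]]
```

With at least two verification groups, as the next paragraph suggests, the gate would simply be applied to each group's parameters in turn.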
Otherwise, it indicates that the stylization processing effect of the initial network model is still unsatisfactory and further optimization is required; in this case, S304 is executed to adjust and optimize the model parameters of the initial network model.
Optionally, in this embodiment, in order to ensure that the trained initial network model is accurately verified, at least two groups of verification image data may be selected to perform the operation of this step on the trained initial network model. Only when the verification results of all the groups of verification image data show that the attribute parameters of the output images are greater than the parameter threshold is S305 executed, taking the trained initial network model as the stylized processing model.
S304, if not, adjusting the model parameters of the initial network model within a preset modification range, and returning the initial network model with the adjusted model parameters to the operation of S302.
Optionally, in this step, when adjusting the model parameters of the initial network model trained in S302, at least one of the following dimensions may be adjusted: the number of layers of the data processing operations performed inside the model, the number of channels of each layer, and the loss weight value. Optionally, when adjusting the model parameters of the initial network model, priorities may be set for the number of layers, the number of channels, and the loss weight value; each time, the model parameter of one dimension is adjusted according to the priorities, within the preset modification range corresponding to that parameter. After the model parameter of one dimension has been modified, S302-S303 are executed again: stylized processing training is performed on the adjusted initial network model with the sample image data, and the verification image data is used to verify whether the attribute parameters of the output images of the trained initial network model are greater than the parameter threshold. Preferably, if the priority order is that the number of layers is higher than the number of channels, and the number of channels is higher than the loss weight value, the number of layers may be adjusted first by increasing it by 1; the initial network model with the increased number of layers is then returned to S302-S303, stylized processing training is performed again with the sample image data, and it is verified whether the attribute parameters of the output images of the trained initial network model are greater than the parameter threshold. If not, the number of channels of the initial network model is adjusted by increasing it by a factor of 1.2, and the operations of S302-S303 are executed again.
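The priority-driven adjustment loop can be sketched as below. This is a hedged illustration: `train_and_verify` is a stand-in for the S302-S303 train-and-check cycle, and the concrete update rules (add one layer, widen channels by a factor of 1.2, shrink the loss weight) follow the examples in this step but are not the only values the disclosure permits.

```python
# Sketch of the S304 adjustment loop: model parameters are modified one
# dimension at a time in priority order (layers > channels > loss weight),
# retraining and re-verifying after each change. train_and_verify is a
# placeholder for S302-S303; the update rules are illustrative.

def adjust_until_valid(params, train_and_verify, max_rounds=10):
    # Each entry: (parameter name, update function), in priority order.
    schedule = [
        ("num_layers", lambda v: v + 1),           # add one layer
        ("num_channels", lambda v: int(v * 1.2)),  # widen by a factor of 1.2
        ("loss_weight", lambda v: v * 0.9),        # illustrative weight tweak
    ]
    for round_idx in range(max_rounds):
        if train_and_verify(params):
            return params                 # verification passed -> S305
        name, update = schedule[round_idx % len(schedule)]
        params[name] = update(params[name])
    return None  # no passing configuration found within the allowed rounds

params = {"num_layers": 3, "num_channels": 16, "loss_weight": 1.0}
# Toy verifier standing in for S302-S303: passes once the model is deep enough.
result = adjust_until_valid(params, lambda p: p["num_layers"] >= 4)
print(result)  # layers were increased first, channels left untouched
```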
Optionally, in this step, when the number of channels of the initial network model is adjusted, the number of channels of the group convolution layer of the initial network model may be preferentially selected to be adjusted.
It should be noted that the preset modification range may be a preset numerical range, or a set of at least two preset optional modification values. Preferably, a numerical modification range may be set for the loss weight value and the number of layers, and a plurality of optional modification values, such as 1.2 times, 1.5 times, and 1.6 times, may be set for the number of channels.
Optionally, when the model parameters of the initial network model are adjusted in this step, one model parameter may be adjusted at a time, and at least two model parameters may also be adjusted at a time, which is not limited in this step.
S305, if so, taking the trained initial network model as the stylized processing model.
Optionally, if the attribute parameters of the output image of the trained initial network model, as verified with the verification image data, are greater than the parameter threshold, it indicates that the training of the initial network model is completed; the program code of the initial network model at this point is packaged and used as the program code of the stylized processing model.
S306, if the complexity of the background of the image to be processed is lower than the complexity threshold, replacing the original background of the image to be processed with the template background to obtain the target image.
S307, inputting the target image into the stylized processing model to obtain a preliminary stylized processing image.
S308, replacing the background of the preliminary stylized processing image with the original background to obtain a final stylized processing image.
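The S306-S308 flow can be sketched as a mask-based composite. This sketch assumes a binary foreground mask is already available (1 = foreground, 0 = background); `stylize` is a stand-in for the trained stylized processing model, and the segmentation step that produces the mask is outside this illustration.

```python
import numpy as np

# Sketch of S306-S308, assuming a per-pixel foreground mask of shape
# (H, W, 1) broadcastable over an (H, W, C) image. stylize is a
# placeholder for the stylized processing model.

def stylize_with_background_swap(image, mask, template_bg, stylize):
    original_bg = image * (1 - mask)                      # keep original background
    target = image * mask + template_bg * (1 - mask)      # S306: swap in template
    preliminary = stylize(target)                         # S307: stylize target image
    return preliminary * mask + original_bg * (1 - mask)  # S308: restore background

img = np.full((2, 2, 3), 10.0)
mask = np.zeros((2, 2, 1)); mask[0, 0, 0] = 1.0           # one foreground pixel
tpl = np.full((2, 2, 3), 5.0)
out = stylize_with_background_swap(img, mask, tpl, lambda x: x * 2)
print(out[0, 0, 0], out[1, 1, 0])  # foreground stylized, background restored
```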
The embodiment of the disclosure provides an image stylization processing method in which a constructed initial network model is trained with sample image data; if verification with the verification image data shows that the multi-dimensional attribute parameters of the stylized processing images output by the trained initial network model are not greater than the parameter thresholds, the multi-dimensional model parameters of the initial network model are adjusted and training is repeated; otherwise, training of the initial network model is complete and the stylized processing model is obtained. In the process of training the initial network model, the verification of multi-dimensional attribute parameters and the adjustment of multi-dimensional model parameters greatly improve the processing quality of the stylized processing model, laying a foundation for the subsequent stylized processing of the image to be processed. When stylizing the image to be processed, if the background complexity of the image to be processed is lower than the complexity threshold, the original background of the image to be processed is replaced with a preset template background, the stylized processing model then performs the stylized processing operation, and the background of the obtained preliminary stylized processing image is replaced with the original background to obtain the final stylized processing image.
Example three
Fig. 4 is a flowchart illustrating another image stylization processing method according to a third embodiment of the present disclosure. The present embodiment is optimized on the basis of the alternatives provided in the above embodiments, and specifically provides a detailed description of inputting the target image into a stylized processing model to obtain a preliminary stylized processing image.
Optionally, as shown in fig. 4, the method in this embodiment may include the following steps:
S401, if the background complexity of the image to be processed is lower than the complexity threshold, replacing the original background of the image to be processed with a template background to obtain a target image.
S402, determining a region to be processed according to the received region selection instruction.
Optionally, in this embodiment, in order to meet the user's personalized requirements for stylizing the image to be processed, the stylized processing may be performed only on the region selected by the user. Specifically, after selecting the image to be processed, the user triggers a region selection instruction through a sliding operation on the image to be processed, wherein the region selection instruction includes the region to be processed selected by the user. Optionally, if the region selection instruction is triggered by a non-closed sliding operation of the user on the image to be processed, the region corresponding to the sliding operation may be used as the region to be processed, and the position coordinates corresponding to the region to be processed are added to the region selection instruction and transmitted to the electronic device; if the region selection instruction is triggered by a closed sliding operation of the user on the image to be processed, the closed frame-selected region corresponding to the sliding operation may be used as the region to be processed, and the position coordinates corresponding to the region to be processed are added to the region selection instruction and transmitted to the electronic device. After receiving the region selection instruction triggered by the user, the electronic device obtains the specific coordinate position of the region to be processed from the region selection instruction.
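The closed versus non-closed distinction above can be sketched by checking whether the stroke's endpoints coincide. This is an illustrative assumption about how the sliding operation might be classified; the distance tolerance is a made-up value, and a real implementation would additionally rasterize the enclosed area for the closed case.

```python
# Sketch of classifying a region selection stroke: if the start and end
# points of the sliding path (nearly) coincide, the stroke is treated as
# closed and the enclosed area is the region to be processed; otherwise
# the path itself is the region. tol is an illustrative tolerance.

def classify_stroke(points, tol=5.0):
    (x0, y0), (xn, yn) = points[0], points[-1]
    closed = ((x0 - xn) ** 2 + (y0 - yn) ** 2) ** 0.5 <= tol
    return "closed" if closed else "open"

print(classify_stroke([(0, 0), (10, 0), (10, 10), (1, 1)]))  # endpoints meet -> closed
print(classify_stroke([(0, 0), (10, 0), (20, 5)]))           # endpoints apart -> open
```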
S403, inputting the target image and the region to be processed into the stylized processing model, and controlling the stylized processing model to stylize the content of the region to be processed of the target image to obtain a preliminary stylized processing image.
Optionally, in this step, the target image obtained in S401 and the region to be processed determined in S402 are used together as input data, and the program code of the stylized processing model is called and run. Based on the input target image and region to be processed, the stylized processing model performs the stylized processing operation only on the content of the region to be processed of the target image, according to the algorithm learned during training; in the resulting preliminary stylized processing image, only the region to be processed shows the stylized processing effect, and the remaining regions do not.
S404, replacing the background of the preliminary stylized image with the original background to obtain a final stylized image.
Optionally, if the region to be processed determined in S402 does not include the background region, this step may replace the background of the preliminary stylized processing image obtained in S403 with the original background of the image to be processed to obtain the final stylized processing image. If the region to be processed determined in S402 includes a background region, this step may determine the position coordinates of the part of the region to be processed corresponding to the background region, input the original background and those position coordinates as input data into a background processing model, and control the background processing model to perform the stylized processing operation only on the content of the coordinate region in the original background, thereby obtaining a processed original background; finally, the background of the preliminary stylized processing image obtained in S403 is replaced with the processed original background to obtain the final stylized processing image.
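The background-overlap case can be sketched as a masked update of the original background. Here `process_bg` is a stand-in for the background processing model, and the region mask encodes the position coordinates of the part of the region to be processed that falls in the background; both are assumptions for the illustration.

```python
import numpy as np

# Sketch of the S404 background-overlap case: only the selected
# coordinates of the original background (region_mask == 1) receive the
# background processing model's output; the rest is left untouched.
# process_bg is a placeholder for the background processing model.

def process_background_region(original_bg, region_mask, process_bg):
    processed = process_bg(original_bg)
    return processed * region_mask + original_bg * (1 - region_mask)

bg = np.full((2, 2), 4.0)
mask = np.zeros((2, 2)); mask[0, 0] = 1.0       # one selected background pixel
out = process_background_region(bg, mask, lambda x: x + 1)
print(out[0, 0], out[1, 1])  # selected pixel processed, others unchanged
```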
The embodiment of the disclosure provides an image stylization processing method in which, if the user triggers a region selection instruction when selecting the image to be processed and the background complexity of the image to be processed is lower than the complexity threshold, the original background of the image to be processed is replaced with a preset template background to obtain a target image; the region to be processed is then determined according to the region selection instruction triggered by the user, the stylized processing model is controlled to stylize only the region to be processed of the target image, and the background of the resulting preliminary stylized processing image is replaced with the original background to obtain the final stylized processing image. According to the scheme of the embodiment of the disclosure, when stylizing the image to be processed, the complexity of the image background is taken into account, improving the fineness and attractiveness of the stylized processing result; moreover, a personalized stylization effect can be generated in response to the user's region selection instruction, improving the interestingness of image stylization.
Example four
Fig. 5 is a schematic structural diagram of an image stylization processing apparatus according to a fourth embodiment of the present disclosure, which is applicable to a case of stylizing an image to be processed, and is particularly applicable to a case of stylizing an image to be processed whose background complexity is lower than a complexity threshold. The apparatus may be implemented by software and/or hardware and integrated in an electronic device executing the method, as shown in fig. 5, the apparatus may include:
a background replacement module 501, configured to replace an original background of an image to be processed with a template background to obtain a target image if it is determined that the background complexity of the image to be processed is lower than a complexity threshold;
a stylized processing module 502, configured to input the target image into a stylized processing model to obtain a preliminary stylized processing image;
the background replacing module 501 is further configured to replace the background of the preliminary stylized image with the original background to obtain a final stylized image.
The embodiment of the disclosure provides an image stylization processing device, which judges whether the background complexity of an image to be processed is lower than a complexity threshold, if so, replaces the original background of the image to be processed with a preset template background to obtain a target image; and after the target image is stylized by the stylized processing model, replacing the background of the primary stylized processing image obtained by processing with the original background to obtain the final stylized processing image. According to the scheme of the embodiment of the disclosure, when the stylized processing is performed on the image, the complexity of the image background is considered, when the complexity of the image background to be processed is low, a method of performing the stylized processing after replacing the template background for the image to be processed is adopted, the situation that the processing result is rough when the conventional stylized processing method is adopted for the image to be processed with low background complexity is avoided, and the fineness and the attractiveness of the stylized processing result of the image are improved.
Further, the apparatus further includes a complexity determination model, when determining that the background complexity of the image to be processed is lower than the complexity threshold, configured to:
extracting texture features of an original background of the image to be processed, and determining whether the background complexity of the image to be processed is lower than a complexity threshold value according to the texture features.
Further, the complexity judgment model is specifically configured to:
carrying out edge detection on the original background of the image to be processed to obtain texture features of the original background;
determining the pixel proportion of the texture features in the original background as the background complexity of the image to be processed;
determining whether a background complexity of the image to be processed is below a complexity threshold.
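The three operations above can be sketched as follows. This is a minimal illustration only: a simple gradient-magnitude threshold stands in for a real edge detector, and both the gradient threshold and the complexity threshold are made-up values, not values from the disclosure.

```python
import numpy as np

# Sketch of the complexity judgment: edge detection yields texture pixels,
# whose proportion of the background is taken as the background complexity,
# which is then compared against the complexity threshold. The 0.1 gradient
# threshold and 0.05 complexity threshold are illustrative assumptions.

def background_complexity(bg: np.ndarray, grad_thresh: float = 0.1) -> float:
    gy, gx = np.gradient(bg.astype(float))       # crude edge detection
    edges = np.hypot(gx, gy) > grad_thresh       # texture-feature pixels
    return float(edges.mean())                   # pixel proportion in background

def is_simple_background(bg: np.ndarray, complexity_thresh: float = 0.05) -> bool:
    return bool(background_complexity(bg) < complexity_thresh)

flat = np.zeros((8, 8))                    # uniform background, no texture
ramp = np.tile(np.arange(8.0), (8, 1))     # strong gradient everywhere
print(is_simple_background(flat))  # True  -> template swap applies
print(is_simple_background(ramp))  # False -> process directly
```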
Further, the apparatus further comprises: a model training module, the model training module comprising:
the training unit is used for carrying out stylized processing training on the constructed initial network model by adopting sample image data;
the verification unit is used for verifying whether the attribute parameters of the output images of the trained initial network model are larger than the parameter threshold value by adopting verification image data; wherein the attribute parameters include: at least one of a saturation parameter, an edge smoothness parameter, and a shading parameter;
and the model determining unit is used for taking the trained initial network model as the stylized processing model if the attribute parameters of the output images of the trained initial network model are larger than the parameter threshold.
Further, the model training module further comprises:
the parameter adjusting unit is used for adjusting the model parameters of the initial network model within a preset modification range if the attribute parameters of the output images of the trained initial network model are less than or equal to a parameter threshold value; the model parameters comprise at least one of the number of layers, the number of channels and the loss weight value of the initial network model;
and the training unit is also used for performing stylization processing training on the initial network model after the model parameters are adjusted by adopting the sample image data.
Further, the stylization processing module 502 is specifically configured to:
determining a region to be processed according to the received region selection instruction;
and inputting the target image and the area to be processed into a stylized processing model, and controlling the stylized processing model to stylize the content of the area to be processed of the target image to obtain a preliminary stylized processing image.
Further, when the background replacement module 501 replaces the background of the preliminary stylized image with the original background, it is specifically configured to:
inputting the original background into a background processing model to obtain a processed original background;
and replacing the background of the preliminarily stylized image with the processed original background.
The image stylization processing apparatus provided by the embodiment of the disclosure is the same as the image stylization processing method provided by each of the above embodiments, and technical details that are not described in detail in the embodiment of the disclosure can be referred to the above embodiments, and the embodiment of the disclosure has the same beneficial effects as the above embodiments.
Example five
Referring now to FIG. 6, a block diagram of an electronic device 600 suitable for use in implementing embodiments of the present disclosure is shown. The electronic device may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet), a PMP (portable multimedia player), a vehicle-mounted terminal (e.g., a car navigation terminal), etc., and a stationary terminal such as a digital TV, a desktop computer, etc. The electronic device 600 shown in fig. 6 is only an example and should not bring any limitations to the function and scope of use of the embodiments of the present disclosure.
As shown in fig. 6, electronic device 600 may include a processing means (e.g., central processing unit, graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic apparatus 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some implementations, the electronic devices may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the internal processes of the electronic device to perform: if the complexity of the background of the image to be processed is lower than the complexity threshold value, replacing the original background of the image to be processed with a template background to obtain a target image; inputting the target image into a stylized processing model to obtain a primary stylized processing image; and replacing the background of the preliminary stylized image with the original background to obtain a final stylized image.
Computer program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including but not limited to an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of an element does not in some cases constitute a limitation on the element itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, there is provided an image stylizing method, including:
if the complexity of the background of the image to be processed is lower than the complexity threshold value, replacing the original background of the image to be processed with a template background to obtain a target image;
inputting the target image into a stylized processing model to obtain a primary stylized processing image;
and replacing the background of the preliminary stylized image with the original background to obtain a final stylized image.
According to one or more embodiments of the present disclosure, the determining that the background complexity of the image to be processed is lower than the complexity threshold in the above method includes:
extracting texture features of an original background of the image to be processed, and determining whether the background complexity of the image to be processed is lower than a complexity threshold value according to the texture features.
According to one or more embodiments of the present disclosure, in the above method, extracting texture features of an original background of an image to be processed, and determining whether a background complexity of the image to be processed is lower than a complexity threshold according to the texture features includes:
carrying out edge detection on the original background of the image to be processed to obtain texture features of the original background;
determining the pixel proportion of the texture features in the original background as the background complexity of the image to be processed;
determining whether a background complexity of the image to be processed is below a complexity threshold.
According to one or more embodiments of the present disclosure, the method further includes, before inputting the target image into a stylized processing model:
carrying out stylized processing training on the constructed initial network model by adopting sample image data;
verifying whether the attribute parameters of the output images of the trained initial network model are larger than a parameter threshold value by adopting verification image data; wherein the attribute parameters include: at least one of a saturation parameter, an edge smoothness parameter, and a shading parameter;
and if so, taking the trained initial network model as the stylized processing model.
According to one or more embodiments of the present disclosure, in the method, after verifying whether the attribute parameter of the trained initial network model output image is greater than a parameter threshold value by using the verification image data, the method further includes:
if not, adjusting the model parameters of the initial network model within a preset modification range; the model parameters comprise at least one of the number of layers, the number of channels and the loss weight value of the initial network model;
and performing stylization processing training on the initial network model after the model parameters are adjusted by adopting the sample image data.
According to one or more embodiments of the present disclosure, in the above method, inputting the target image into a stylized processing model to obtain a preliminary stylized processing image, includes:
determining a region to be processed according to the received region selection instruction;
and inputting the target image and the area to be processed into a stylized processing model, and controlling the stylized processing model to stylize the content of the area to be processed of the target image to obtain a preliminary stylized processing image.
According to one or more embodiments of the present disclosure, in the above method, replacing the background of the preliminary stylized image with the original background includes:
inputting the original background into a background processing model to obtain a processed original background;
and replacing the background of the preliminary stylized image with the processed original background.
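Putting the pieces together, the overall flow of this disclosure — swap in a template background when the original background is simple, stylize, then restore the (processed) original background — can be sketched as below. All callables (`split`, `merge`, `stylize`, `process_background`) are hypothetical stand-ins for the segmentation, compositing, stylized processing model, and background processing model; the fallback when the background is complex is an assumption, since this section only describes the simple-background path.

```python
def stylize_with_background_swap(image, template_bg, complexity, threshold,
                                 split, merge, stylize, process_background):
    """Sketch of the disclosed pipeline for a simple-background image:
    replace the original background with a template background, run the
    stylized processing model, then swap the processed original
    background back in."""
    if complexity >= threshold:
        # Complex background: stylize the image directly (assumed fallback).
        return stylize(image)
    foreground, original_bg = split(image)   # segment foreground / background
    target = merge(foreground, template_bg)  # target image with template bg
    preliminary = stylize(target)            # preliminary stylized image
    styled_fg, _ = split(preliminary)
    # Final stylized image: stylized foreground over the processed original bg.
    return merge(styled_fg, process_background(original_bg))
```

As a toy run, treating an "image" as a `(foreground, background)` pair with identity `split`/`merge`, stylizing `("fg", "plain")` against template `"tmpl"` with a `process_background` that appends `"_proc"` yields `("fg_styled", "plain_proc")` when the complexity is below the threshold.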
According to one or more embodiments of the present disclosure, there is provided an image stylization processing apparatus including:
a background replacement module configured to replace the original background of an image to be processed with a template background to obtain a target image if the background complexity of the image to be processed is lower than a complexity threshold;
a stylization processing module configured to input the target image into a stylized processing model to obtain a preliminary stylized image;
wherein the background replacement module is further configured to replace the background of the preliminary stylized image with the original background to obtain a final stylized image.
According to one or more embodiments of the present disclosure, the apparatus further includes a complexity determination module which, when determining whether the background complexity of the image to be processed is lower than the complexity threshold, is configured to:
extract texture features of the original background of the image to be processed, and determine, according to the texture features, whether the background complexity of the image to be processed is lower than the complexity threshold.
According to one or more embodiments of the present disclosure, the complexity determination module in the above apparatus is specifically configured to:
perform edge detection on the original background of the image to be processed to obtain the texture features of the original background;
determine the proportion of texture-feature pixels in the original background as the background complexity of the image to be processed;
and determine whether the background complexity of the image to be processed is lower than the complexity threshold.
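The edge-based complexity measure described above can be illustrated with the toy implementation below. A simple neighbor-difference gradient stands in for a full edge detector such as Canny (the disclosure does not name a specific detector); the grayscale list-of-lists input and both threshold values are illustrative assumptions.

```python
def background_complexity(background, edge_threshold=30):
    """Detect edge pixels in a grayscale background (2D list of ints) and
    return the proportion of edge pixels as the background complexity."""
    h, w = len(background), len(background[0])
    edge_pixels = 0
    for r in range(h):
        for c in range(w):
            # Horizontal / vertical neighbor differences stand in for edge detection.
            gx = abs(background[r][c] - background[r][c - 1]) if c else 0
            gy = abs(background[r][c] - background[r - 1][c]) if r else 0
            if max(gx, gy) > edge_threshold:
                edge_pixels += 1
    return edge_pixels / (h * w)

def is_simple_background(background, complexity_threshold=0.2):
    """True when the background complexity falls below the threshold."""
    return background_complexity(background) < complexity_threshold
```

A flat background scores 0.0 and counts as simple; a 4x4 checkerboard of 0/255 values scores 15/16, since every pixel except the top-left one differs sharply from a left or upper neighbor.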
According to one or more embodiments of the present disclosure, the apparatus further includes a model training module, the model training module including:
a training unit configured to perform stylization training on a constructed initial network model using sample image data;
a verification unit configured to use verification image data to verify whether an attribute parameter of the images output by the trained initial network model is greater than a parameter threshold, wherein the attribute parameter includes at least one of a saturation parameter, an edge smoothness parameter, and a shading parameter;
and a model determination unit configured to take the trained initial network model as the stylized processing model if the attribute parameter of the images output by the trained initial network model is greater than the parameter threshold.
According to one or more embodiments of the present disclosure, the model training module in the above apparatus further includes:
a parameter adjustment unit configured to adjust model parameters of the initial network model within a preset modification range if the attribute parameter of the images output by the trained initial network model is less than or equal to the parameter threshold, wherein the model parameters include at least one of the number of layers, the number of channels, and the loss weight value of the initial network model;
wherein the training unit is further configured to perform stylization training on the initial network model with the adjusted model parameters using the sample image data.
According to one or more embodiments of the present disclosure, the stylization processing module in the above apparatus is specifically configured to:
determine a region to be processed according to a received region selection instruction;
and input the target image and the region to be processed into the stylized processing model, and control the stylized processing model to stylize the content of the region to be processed of the target image to obtain the preliminary stylized image.
According to one or more embodiments of the present disclosure, when replacing the background of the preliminary stylized image with the original background, the background replacement module in the above apparatus is specifically configured to:
input the original background into a background processing model to obtain a processed original background;
and replace the background of the preliminary stylized image with the processed original background.
According to one or more embodiments of the present disclosure, there is provided an electronic device including:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image stylization processing method according to any embodiment of the present disclosure.
According to one or more embodiments of the present disclosure, there is provided a readable medium on which a computer program is stored, the program, when executed by a processor, implementing the image stylization processing method according to any embodiment of the present disclosure.
The foregoing description is merely illustrative of the preferred embodiments of the present disclosure and of the technical principles employed. Those skilled in the art will appreciate that the scope of the disclosure is not limited to technical solutions formed by the particular combination of features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions in which the above features are replaced with (but not limited to) features having similar functions disclosed herein.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (10)

1. An image stylization processing method, comprising:
if the background complexity of an image to be processed is lower than a complexity threshold, replacing an original background of the image to be processed with a template background to obtain a target image;
inputting the target image into a stylized processing model to obtain a preliminary stylized image;
and replacing the background of the preliminary stylized image with the original background to obtain a final stylized image.
2. The method of claim 1, wherein determining that the background complexity of the image to be processed is below a complexity threshold comprises:
extracting texture features of an original background of the image to be processed, and determining whether the background complexity of the image to be processed is lower than a complexity threshold value according to the texture features.
3. The method of claim 2, wherein extracting texture features of an original background of the image to be processed and determining whether the background complexity of the image to be processed is lower than a complexity threshold according to the texture features comprises:
carrying out edge detection on the original background of the image to be processed to obtain texture features of the original background;
determining the pixel proportion of the texture features in the original background as the background complexity of the image to be processed;
determining whether a background complexity of the image to be processed is below a complexity threshold.
4. The method of claim 1, further comprising, before inputting the target image into the stylized processing model:
performing stylization training on a constructed initial network model using sample image data;
using verification image data to verify whether an attribute parameter of the images output by the trained initial network model is greater than a parameter threshold, wherein the attribute parameter comprises at least one of a saturation parameter, an edge smoothness parameter, and a shading parameter;
and if so, taking the trained initial network model as the stylized processing model.
5. The method of claim 4, further comprising, after verifying whether the attribute parameter of the images output by the trained initial network model is greater than the parameter threshold using the verification image data:
if not, adjusting model parameters of the initial network model within a preset modification range, wherein the model parameters comprise at least one of the number of layers, the number of channels, and the loss weight value of the initial network model;
and performing stylization training on the initial network model with the adjusted model parameters using the sample image data.
6. The method of claim 1, wherein inputting the target image into the stylized processing model to obtain a preliminary stylized image comprises:
determining a region to be processed according to a received region selection instruction;
and inputting the target image and the region to be processed into the stylized processing model, and controlling the stylized processing model to stylize the content of the region to be processed of the target image to obtain the preliminary stylized image.
7. The method of claim 1, wherein replacing the background of the preliminary stylized image with the original background comprises:
inputting the original background into a background processing model to obtain a processed original background;
and replacing the background of the preliminary stylized image with the processed original background.
8. An image stylization processing apparatus, comprising:
a background replacement module configured to replace an original background of an image to be processed with a template background to obtain a target image if the background complexity of the image to be processed is lower than a complexity threshold;
a stylization processing module configured to input the target image into a stylized processing model to obtain a preliminary stylized image;
wherein the background replacement module is further configured to replace the background of the preliminary stylized image with the original background to obtain a final stylized image.
9. An electronic device, comprising:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image stylization processing method of any one of claims 1-7.
10. A readable medium, on which a computer program is stored which, when being executed by a processor, carries out the image stylization processing method according to any one of claims 1 to 7.
CN201910910696.4A 2019-09-25 2019-09-25 Image stylization processing method and device, electronic equipment and readable medium Active CN110689478B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910910696.4A CN110689478B (en) 2019-09-25 2019-09-25 Image stylization processing method and device, electronic equipment and readable medium
PCT/CN2020/113974 WO2021057463A1 (en) 2019-09-25 2020-09-08 Image stylization processing method and apparatus, and electronic device and readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910910696.4A CN110689478B (en) 2019-09-25 2019-09-25 Image stylization processing method and device, electronic equipment and readable medium

Publications (2)

Publication Number Publication Date
CN110689478A true CN110689478A (en) 2020-01-14
CN110689478B CN110689478B (en) 2023-12-01

Family

ID=69110569

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910910696.4A Active CN110689478B (en) 2019-09-25 2019-09-25 Image stylization processing method and device, electronic equipment and readable medium

Country Status (2)

Country Link
CN (1) CN110689478B (en)
WO (1) WO2021057463A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111445383A (en) * 2020-03-20 2020-07-24 北京迈格威科技有限公司 Image parameter adjusting method, device and system
WO2021057463A1 (en) * 2019-09-25 2021-04-01 北京字节跳动网络技术有限公司 Image stylization processing method and apparatus, and electronic device and readable medium
US20230386001A1 (en) * 2021-02-09 2023-11-30 Beijing Zitiao Network Technology Co., Ltd. Image display method and apparatus, and device and medium

Citations (8)

Publication number Priority date Publication date Assignee Title
US20170132459A1 (en) * 2015-11-11 2017-05-11 Adobe Systems Incorporated Enhancement of Skin, Including Faces, in Photographs
CN108154518A * 2017-12-11 2018-06-12 广州华多网络科技有限公司 Image processing method, apparatus, storage medium, and electronic device
CN108961349A * 2018-06-29 2018-12-07 广东工业大学 Stylized image generation method, apparatus, device, and storage medium
US20180373999A1 * 2017-06-26 2018-12-27 Konica Minolta Laboratory U.S.A., Inc. Targeted data augmentation using neural style transfer
CN109344724A * 2018-09-05 2019-02-15 深圳伯奇科技有限公司 Automatic background replacement method, system, and server for ID photos
CN109523460A * 2018-10-29 2019-03-26 北京达佳互联信息技术有限公司 Image style migration method, migration apparatus, and computer-readable storage medium
CN110211017A * 2019-05-15 2019-09-06 北京字节跳动网络技术有限公司 Image processing method, device and electronic equipment
CN110222722A * 2019-05-14 2019-09-10 华南理工大学 Interactive image stylization processing method, system, computing device, and storage medium

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
CN101794454B (en) * 2010-04-08 2012-01-04 西安交通大学 Oil painting stylizing method based on image
CN104869346A (en) * 2014-02-26 2015-08-26 ***通信集团公司 Method and electronic equipment for processing image in video call
CN105989594B (en) * 2015-02-12 2019-02-12 阿里巴巴集团控股有限公司 Image region detection method and device
CN106920212A (en) * 2015-12-24 2017-07-04 掌赢信息科技(上海)有限公司 Method and electronic device for sending stylized video
CN106327448A (en) * 2016-08-31 2017-01-11 上海交通大学 Picture stylization processing method based on deep learning
US10165259B2 (en) * 2017-02-15 2018-12-25 Adobe Systems Incorporated Generating novel views of a three-dimensional object based on a single two-dimensional image
CN107507173B (en) * 2017-08-15 2021-07-27 上海交通大学 No-reference definition evaluation method and system for full-slice image
CN110689478B (en) * 2019-09-25 2023-12-01 北京字节跳动网络技术有限公司 Image stylization processing method and device, electronic equipment and readable medium


Non-Patent Citations (1)

Title
Yang Changqing et al.: "Principles and Applications of Ship-Based Image Processing Technology", 31 May 2015 *


Also Published As

Publication number Publication date
WO2021057463A1 (en) 2021-04-01
CN110689478B (en) 2023-12-01

Similar Documents

Publication Publication Date Title
CN111368685B (en) Method and device for identifying key points, readable medium and electronic equipment
CN111369427B (en) Image processing method, image processing device, readable medium and electronic equipment
CN110189246B (en) Image stylization generation method and device and electronic equipment
WO2021057463A1 (en) Image stylization processing method and apparatus, and electronic device and readable medium
JP2022531639A (en) How to embed information in video, computer equipment and computer programs
CN112182299B (en) Method, device, equipment and medium for acquiring highlight in video
CN113689372B (en) Image processing method, apparatus, storage medium, and program product
CN110211195B (en) Method, device, electronic equipment and computer-readable storage medium for generating image set
CN110689546A (en) Method, device and equipment for generating personalized head portrait and storage medium
CN113034648A (en) Image processing method, device, equipment and storage medium
CN115965840A (en) Image style migration and model training method, device, equipment and medium
CN110570383A (en) image processing method and device, electronic equipment and storage medium
CN115311178A (en) Image splicing method, device, equipment and medium
CN110211017B (en) Image processing method and device and electronic equipment
CN113902636A (en) Image deblurring method and device, computer readable medium and electronic equipment
CN110619602B (en) Image generation method and device, electronic equipment and storage medium
CN111402159B (en) Image processing method, image processing device, electronic equipment and computer readable medium
CN110197459B (en) Image stylization generation method and device and electronic equipment
CN115937020B (en) Image processing method, apparatus, device, medium, and program product
CN111402133A (en) Image processing method, image processing device, electronic equipment and computer readable medium
CN115953597B (en) Image processing method, device, equipment and medium
CN111784726A (en) Image matting method and device
CN115829827A (en) Face image processing method, device, equipment and medium
CN115937356A (en) Image processing method, apparatus, device and medium
CN114422698A (en) Video generation method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant