CN113469876A - Image style migration model training method, image processing method, device and equipment


Info

Publication number: CN113469876A
Application number: CN202110867587.6A
Authority: CN (China)
Prior art keywords: image, area, style, processed, contraction
Legal status: Granted
Other languages: Chinese (zh)
Other versions: CN113469876B (en)
Inventor: 方慕园 (Fang Muyuan)
Current Assignee: Beijing Dajia Internet Information Technology Co Ltd
Original Assignee: Beijing Dajia Internet Information Technology Co Ltd
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202110867587.6A
Publication of CN113469876A
Application granted; publication of CN113469876B
Legal status: Active


Classifications

    • G06T 3/04: Geometric image transformations in the plane of the image; context-preserving transformations, e.g. by using an importance map
    • G06N 3/04: Computing arrangements based on biological models; neural networks; architecture, e.g. interconnection topology
    • G06N 3/08: Neural networks; learning methods
    • G06T 7/11: Image analysis; region-based segmentation
    • G06T 7/194: Segmentation; edge detection involving foreground-background segmentation
    • G06T 2207/20081: Indexing scheme for image analysis; training, learning
    • G06T 2207/30201: Subject of image: human being, person, face
    • Y02T 10/40: Climate change mitigation in transportation; engine management systems


Abstract

The disclosure relates to a training method for an image style migration model, an image processing method, an image processing apparatus, an electronic device, and a storage medium. The training method comprises the following steps: acquiring a sample image, a corresponding image of an area to be processed, and an initial style image, the initial style image being obtained by performing image style migration processing on the area to be processed; contracting the area to be processed in the image of the area to be processed to obtain a contraction area image; obtaining a target style image from the sample image, the initial style image, and the contraction area image, the target style image being obtained by performing image style migration processing on the contraction area in the sample image; and training a neural network model on the sample image, the contraction area image, and the target style image to obtain the image style migration model. A model trained in this way accurately recognizes the extent of the area to be processed of an image, which improves the accuracy of the generated target style image.

Description

Image style migration model training method, image processing method, device and equipment
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a training method for an image style migration model, an image processing method, an image processing apparatus, an electronic device, and a storage medium.
Background
With the development of image processing technology, image stylization techniques have emerged. These techniques generally need to apply different stylization to different areas of an image. For example, in a portrait hair-dyeing task, color conversion needs to be performed on the hair area while all other areas are kept unchanged.
Currently, image stylization techniques typically train neural networks on generated pairs of training pictures. However, because area identification in the paired pictures is inaccurate, the pictures generated by the neural network are prone to flaws, so the accuracy of the generated pictures is low.
Disclosure of Invention
The present disclosure provides a training method for an image style migration model, an image processing method, an image processing apparatus, an electronic device, and a storage medium, so as to at least solve the problem of low accuracy of generated pictures in the related art. The technical scheme of the disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided a training method for an image style migration model, including:
acquiring a sample image, and an image of a region to be processed and an initial style image which correspond to the sample image; the image of the area to be processed is an image corresponding to the area to be processed in the sample image, and the initial style image is an image obtained after the image style migration processing is carried out on the area to be processed in the sample image;
performing contraction processing on the area to be processed in the image of the area to be processed to obtain a contraction area image;
obtaining a target style image corresponding to the contraction area image according to the sample image, the initial style image and the contraction area image; the target style image is an image obtained by performing image style migration processing on a contraction area corresponding to the contraction area image in the sample image;
and training a neural network model according to the sample image, the contraction area image and the target style image to obtain an image style migration model.
In an exemplary embodiment, the training a neural network model according to the sample image, the contraction region image, and the target style image to obtain an image style migration model includes: inputting the sample image and the contraction area image into the neural network model, and performing style migration processing on the contraction area in the sample image through the neural network model to obtain a prediction style image; determining a loss value of the neural network model according to a difference value between the predicted style image and the target style image; and adjusting the model parameters of the neural network model according to the loss value to obtain the image style migration model.
In an exemplary embodiment, the target style image is an image obtained by performing color conversion processing on a contracted area corresponding to the contracted area image in the sample image; in this case, training a neural network model according to the sample image, the contraction area image and the target style image to obtain an image style migration model includes: training the neural network model according to the sample image, the contraction area image and the target style image to obtain a color conversion model.
In an exemplary embodiment, the contracted area image further includes a background area; the background area is an image area except the contraction area in the contraction area image; obtaining a target style image corresponding to the contracted area image according to the sample image, the initial style image and the contracted area image, including: acquiring a first area image corresponding to the contraction area from the initial style image, and acquiring a second area image corresponding to the background area from the sample image; and combining the first area image and the second area image to obtain the target style image.
In an exemplary embodiment, the contracting the to-be-processed region in the to-be-processed region image to obtain a contracted region image includes: determining a contraction amplitude for a region to be processed in the region to be processed image; and based on the contraction amplitude, performing contraction processing on the area edge of the area to be processed in the area image to be processed to obtain the contraction area image.
In an exemplary embodiment, the contracting, based on the contraction amplitude, of the region edge of the region to be processed in the image of the region to be processed to obtain the contracted region image includes: performing erosion processing on the region edge through an erosion algorithm based on the contraction amplitude to obtain the contraction area image; or setting the pixel values of the region edge to zero based on the contraction amplitude to obtain the contraction area image.
In an exemplary embodiment, the initial style image corresponding to the sample image is obtained by: inputting the sample image into a style conversion neural network, and carrying out image style migration processing on the region to be processed in the sample image through the style conversion neural network to obtain the initial style image.
According to a second aspect of the embodiments of the present disclosure, there is provided an image processing method including:
acquiring an image to be processed and an image of a region to be processed corresponding to the image to be processed;
inputting the image to be processed and the image of the area to be processed into an image style migration model, and performing style migration processing on the area to be processed in the image to be processed through the image style migration model to obtain a style migration image; the image style migration model is obtained by the method for training the image style migration model according to any one of the embodiments of the first aspect.
In an exemplary embodiment, the image style migration model is used for performing color conversion processing on the region to be processed; the style migration processing is performed on the to-be-processed area in the to-be-processed image through the image style migration model to obtain a style migration image, and the method comprises the following steps: and performing color conversion processing on the region to be processed in the image to be processed through the image style migration model to obtain the style migration image.
According to a third aspect of the embodiments of the present disclosure, there is provided a training apparatus for an image style migration model, including:
the system comprises a sample image acquisition unit, a processing unit and a processing unit, wherein the sample image acquisition unit is configured to acquire a sample image, an image of a region to be processed corresponding to the sample image and an initial style image; the image of the area to be processed is an image corresponding to the area to be processed in the sample image, and the initial style image is an image obtained after the image style migration processing is carried out on the area to be processed in the sample image;
a contraction image acquisition unit configured to perform contraction processing on the region to be processed in the region to be processed image to obtain a contraction region image;
a target image obtaining unit configured to obtain a target style image corresponding to the contracted area image according to the sample image, the initial style image and the contracted area image; the target style image is an image obtained by performing image style migration processing on a contraction area corresponding to the contraction area image in the sample image;
and the style model training unit is configured to train a neural network model according to the sample image, the contraction area image and the target style image to obtain an image style migration model.
In an exemplary embodiment, the style model training unit is further configured to perform inputting the sample image and the contraction region image into the neural network model, and perform style migration processing on the contraction region in the sample image through the neural network model to obtain a predicted style image; determining a loss value of the neural network model according to a difference value between the predicted style image and the target style image; and adjusting the model parameters of the neural network model according to the loss value to obtain the image style migration model.
In an exemplary embodiment, the target style image is an image obtained by performing color conversion processing on the contraction area; the style model training unit is further configured to train the neural network model according to the sample image, the contraction region image and the target style image, so as to obtain a color conversion model.
In an exemplary embodiment, the contracted area image further includes a background area; the background area is an image area except the contraction area in the contraction area image; the target image acquisition unit is further configured to perform acquisition of a first region image corresponding to the contraction region from the initial style image and acquisition of a second region image corresponding to the background region from the sample image; and combining the first area image and the second area image to obtain the target style image.
In an exemplary embodiment, the contraction image obtaining unit is further configured to perform determining a contraction amplitude for a region to be processed in the region to be processed image; and based on the contraction amplitude, performing contraction processing on the area edge of the area to be processed in the area image to be processed to obtain the contraction area image.
In an exemplary embodiment, the contracted image obtaining unit is further configured to perform erosion processing on the region edge by an erosion algorithm based on the contraction amplitude to obtain the contracted region image; or setting the pixel value of the region edge to be zero based on the contraction amplitude to obtain the contraction region image.
In an exemplary embodiment, the sample image obtaining unit is further configured to perform input of the sample image into a style transformation neural network, and perform image style migration processing on a region to be processed in the sample image through the style transformation neural network to obtain the initial style image.
According to a fourth aspect of the embodiments of the present disclosure, there is provided an image processing apparatus comprising:
the image processing device comprises a to-be-processed image acquisition unit, a processing unit and a processing unit, wherein the to-be-processed image acquisition unit is configured to acquire an image to be processed and an image of a to-be-processed area corresponding to the image to be processed;
the image style migration unit is configured to input the image to be processed and the image of the area to be processed into an image style migration model, and perform style migration processing on the area to be processed in the image to be processed through the image style migration model to obtain a style migration image; the image style migration model is obtained by the training method of the image style migration model according to any one of the embodiments of the first aspect.
In an exemplary embodiment, the image style migration model is used for performing color conversion processing on the region to be processed; the image style migration unit is further configured to perform color conversion processing on the region to be processed in the image to be processed through the image style migration model to obtain the style migration image.
According to a fifth aspect of embodiments of the present disclosure, there is provided an electronic apparatus including: a processor; a memory for storing the processor-executable instructions; wherein the processor is configured to execute the instructions to implement the method for training the image style migration model according to any embodiment of the first aspect or the method for processing the image according to any embodiment of the second aspect.
According to a sixth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium, wherein instructions, when executed by a processor of an electronic device, enable the electronic device to perform the method for training an image style migration model according to any one of the embodiments of the first aspect or the method for processing an image according to any one of the embodiments of the second aspect.
According to a seventh aspect of embodiments of the present disclosure, there is provided a computer program product, comprising a computer program, which when executed by a processor, implements the method for training an image style migration model according to any one of the embodiments of the first aspect, or the method for processing an image according to any one of the embodiments of the second aspect.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
a sample image, an image of a region to be processed corresponding to the sample image, and an initial style image are obtained, where the image of the region to be processed is the image corresponding to the region to be processed in the sample image, and the initial style image is an image obtained by performing image style migration processing on the region to be processed in the sample image; contraction processing is performed on the region to be processed in the image of the region to be processed to obtain a contraction area image; a target style image corresponding to the contraction area image is obtained according to the sample image, the initial style image and the contraction area image, the target style image being an image obtained by performing image style migration processing on the contraction area corresponding to the contraction area image in the sample image; and a neural network model is trained according to the sample image, the contraction area image and the target style image to obtain an image style migration model. Because the region to be processed corresponding to the sample image is contracted before the target style image is constructed from the contraction area image and the sample image, and the neural network model is trained on the sample image, the contraction area image and the target style image, the resulting image style migration model can learn how the region to be processed changes, accurately identify the extent of the region to be processed of an image, and thereby improve the accuracy of the generated target style image.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 is a flow diagram illustrating a method of training an image style migration model in accordance with an exemplary embodiment.
FIG. 2 is a flow diagram illustrating obtaining an image style migration model according to an exemplary embodiment.
FIG. 3 is a flow diagram illustrating obtaining a target style image according to an exemplary embodiment.
FIG. 4 is a flow diagram illustrating an image processing method according to an exemplary embodiment.
Figure 5 is a schematic diagram of a portrait coloring task, according to an exemplary embodiment.
FIG. 6 is a block diagram illustrating an apparatus for training an image style migration model in accordance with an exemplary embodiment.
Fig. 7 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment.
FIG. 8 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Fig. 1 is a flowchart illustrating a training method for an image style migration model according to an exemplary embodiment. The training method is used in a terminal and, as shown in Fig. 1, includes the following steps.
In step S101, a sample image, an image of a region to be processed corresponding to the sample image, and an initial style image are acquired; the image of the area to be processed is an image corresponding to the area to be processed in the sample image, and the initial style image is an image obtained after the image style migration processing is performed on the area to be processed in the sample image.
The terminal may be, but is not limited to, a personal computer, a notebook computer, a smart phone, a tablet computer, or a portable wearable device. The sample image is an image acquired by the terminal in advance for model training that has not undergone image style migration processing; it may be captured by an image acquisition device of the terminal, such as its camera, or downloaded over a network. The sample image carries an image processing area on which image style migration needs to be performed, namely the to-be-processed area of the sample image, and the terminal can perform image style migration processing on this area to obtain the corresponding initial style image. The to-be-processed area image is the image corresponding to the to-be-processed area in the sample image.
Specifically, after the terminal acquires a sample image for training the image style migration model, it can obtain the corresponding to-be-processed area image and initial style image. The to-be-processed area image can be obtained by segmenting the different image areas of the sample image with an image segmentation algorithm, which yields the to-be-processed area that requires image style migration and its corresponding area image. Meanwhile, image style migration processing can be performed on the to-be-processed part of the sample image to obtain the initial style image corresponding to the sample image.
For example, the image style migration model to be trained may be an image stylization model for dyeing the hair in a portrait. The terminal may acquire an undyed portrait in advance as the sample image, with the hair region in the image as the region to be processed. The terminal may then obtain the image corresponding to the hair region through an image segmentation algorithm, for example a mask of the hair region, as the image of the region to be processed. It may also perform dyeing processing on the hair region in the sample image, for example through manual retouching by a designer, to obtain the dyed portrait as the initial style image.
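As a minimal illustration of this step, the sketch below builds a binary hair mask from a generic portrait segmentation network. The name `seg_model` and its per-pixel probability output are assumptions for illustration; the disclosure does not prescribe a particular segmentation algorithm.

```python
import numpy as np

def hair_mask(sample_image: np.ndarray, seg_model, threshold: float = 0.5) -> np.ndarray:
    """Build the to-be-processed area image as a binary hair mask.

    `seg_model` is a hypothetical callable that returns per-pixel hair
    probabilities in [0, 1] for an H x W x 3 input image.
    """
    probs = seg_model(sample_image)                 # H x W float array
    return (probs > threshold).astype(np.float32)   # 1 = hair, 0 = elsewhere
```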
In step S102, a contraction process is performed on the region to be processed in the region to be processed image, so as to obtain a contracted region image.
After the terminal obtains the image of the area to be processed in step S101, it may perform contraction processing on the area to be processed in that image to obtain the contracted image of the area, that is, the contraction area image. The contracted area may be determined by the terminal according to which areas tend to be identified inaccurately. For example, in a hair-dyeing task, inaccurate identification of the edge of the hair area can leave a black fringe in the generated picture after dyeing; the terminal may therefore contract the edge of the hair area of the portrait to obtain the contraction area image. Similarly, in a task of repairing a white area where the white area may be identified inaccurately, the white area can be contracted to obtain the corresponding contraction area image.
In step S103, a target style image corresponding to the contracted area image is obtained from the sample image, the initial style image, and the contracted area image; the target style image is an image obtained by performing image style migration processing on a contraction area corresponding to the contraction area image in the sample image;
in step S104, the neural network model is trained according to the sample image, the contraction region image, and the target style image, so as to obtain an image style migration model.
The target style image is an image obtained by performing image style migration processing on the contraction area corresponding to the contraction area image in the sample image. After the contraction area image is obtained in step S102, the terminal may obtain the target style image corresponding to the contraction area image according to the sample image, the initial style image and the contraction area image, and then train the neural network model on the sample image, the contraction area image and the target style image, thereby obtaining the trained image style migration model.
For example, after obtaining the contraction area image produced by contracting the edge of the hair area of the portrait, the terminal may use the contraction area image, the sample image and the initial style image to obtain an image in which the contracted hair area is dyed as the target style image, and then perform model training with the sample image, the contraction area image and the target style image, thereby obtaining a trained image style migration model for dyeing the hair area of a portrait.
In this training method for an image style migration model, the terminal obtains a sample image, an image of a region to be processed corresponding to the sample image, and an initial style image, where the image of the region to be processed is the image corresponding to the region to be processed in the sample image and the initial style image is obtained by performing image style migration processing on that region; contraction processing is performed on the region to be processed to obtain a contraction area image; a target style image corresponding to the contraction area image is obtained from the sample image, the initial style image and the contraction area image, the target style image being obtained by performing image style migration processing on the contraction area in the sample image; and a neural network model is trained on the sample image, the contraction area image and the target style image to obtain the image style migration model. Because the terminal contracts the to-be-processed area of the sample image and uses the resulting contraction area image together with the sample image to build the target style image, the trained model can learn the change of the to-be-processed area, accurately identify its extent, and thus improve the accuracy of the generated target style image.
In an exemplary embodiment, as shown in fig. 2, the step S104 may further include:
in step S201, the sample image and the contracted region image are input into the neural network model, and the contracted region in the sample image is subjected to the style migration processing by the neural network model, so as to obtain a predicted style image.
The neural network model can perform image style migration processing on the image area of the sample image that corresponds to the contraction area image, according to the input sample image and contraction area image, and output the corresponding predicted style image. For example, if the image style migration model to be trained is an image stylization model for dyeing a portrait, then after the terminal inputs the sample image and a contraction area image (obtained by contracting the hair region of the sample image) into the model, the model can determine the contraction area corresponding to the hair region in the sample image from the contraction area image and perform dyeing processing on that area, thereby obtaining a predicted style image in which the contracted part of the hair region of the sample image is dyed.
In step S202, determining a loss value of the neural network model according to a difference between the predicted style image and the target style image;
in step S203, the model parameters of the neural network model are adjusted according to the loss value, so as to obtain an image style migration model.
After obtaining the predicted style image in step S201, the terminal may compute the difference between the predicted style image and the target style image obtained in step S103, for example by differencing their pixel values, and use this difference as the loss value of the neural network model. The terminal can then adjust the model parameters of the neural network model based on the loss value until training is complete, yielding the image style migration model. For example, a loss threshold may be set for the neural network model and training driven by comparing the obtained loss value against it: when the loss value is greater than the threshold, the model parameters are adjusted again and the loss is re-evaluated; once the loss value falls below the preset threshold, adjustment of the model parameters may be stopped, and the neural network model at that point is taken as the trained image style migration model.
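A minimal sketch of steps S201 to S203 follows, written with PyTorch. The four-channel input (image concatenated with the mask) and the L1 pixel difference are illustrative assumptions; the disclosure only specifies that the model receives the sample image and the contraction area image and that the loss is a difference between the predicted and target style images.

```python
import torch
import torch.nn.functional as F

def train_step(model, optimizer, sample, shrunk_mask, target_style):
    """One iteration of steps S201-S203 (a sketch; the interface is assumed).

    sample, target_style: (N, 3, H, W) float tensors in [0, 1]
    shrunk_mask:          (N, 1, H, W) float tensor, 1 inside the contraction area
    """
    optimizer.zero_grad()
    # S201: style-migrate the contraction area of the sample image.
    predicted = model(torch.cat([sample, shrunk_mask], dim=1))
    # S202: the loss is the pixel-wise difference to the target style image.
    loss = F.l1_loss(predicted, target_style)
    # S203: adjust the model parameters according to the loss value.
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training would repeat this step until the returned loss drops below the preset threshold described above.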
In this embodiment, the terminal trains the neural network model with the difference between the target style image and the predicted style image (produced by the model from the sample image and the contraction region image) as the loss value, so that the trained image style migration model learns the range of the contraction region, accurately identifies the extent of the region to be processed, and thus improves the accuracy of the images it outputs.
In an exemplary embodiment, the target style image is an image obtained by performing color conversion processing on the contracted area; step S104 may further include: and training the neural network model according to the sample image, the contraction area image and the target style image to obtain a color conversion model.
In this embodiment, the image style migration model to be trained may be a color conversion model that performs color conversion on a partial region of an image, namely the region to be processed, and the target style image is the image obtained by color conversion of that region. Specifically, the terminal may train the neural network model with the sample image, the contracted region image, and the target style image obtained by color conversion of the contracted region in the sample image, so as to obtain the color conversion model.
For example, the color conversion model may be a neural network model for performing color conversion on the hair region of a portrait. The terminal may acquire a sample portrait and perform color conversion on its hair region to obtain an initial style image with the hair color converted. At the same time, it may identify the hair region image corresponding to the hair region as the region image to be processed, contract that image to obtain a contracted hair region image, and perform color conversion on the contracted hair region to obtain a target style image. Finally, it trains a color conversion model that realizes color conversion of the hair region.
In this embodiment, the image style migration model may also be a color conversion model for implementing color conversion, and when color conversion of a partial region of an image needs to be implemented, a region range in the image that needs to be color-converted may be accurately identified through the color conversion model, so that accuracy of a generated target style image after color conversion may be improved.
In an exemplary embodiment, the contracted area image further includes a background area; the background area is an image area except the contraction area in the contraction area image; as shown in fig. 3, step S103 may further include:
in step S301, a first region image corresponding to a contracted region is acquired from the initial style image, and a second region image corresponding to a background region is acquired from the sample image.
The contraction area image may be composed of a contraction area and a background area, the background area refers to an image area except for the contraction area in the contraction area image, the first area image refers to an area image partially shown by the contraction area in the initial style image, and the second area image refers to an area image partially shown by the background area in the sample image. Specifically, the terminal may find a region image partially showing the contracted region from the initial style image as a first region image, and find a region image showing a region other than the contracted region from the sample image as a second region image.
For example, the terminal may extract the pixel points in the contraction region from the initial style image, so as to form a first region image using the pixel points extracted from the initial style image, and may also extract the pixel points outside the contraction region from the sample image, so as to form a second region image using the pixel points extracted from the sample image.
In step S302, the first area image and the second area image are combined to obtain a target style image.
After the first area image and the second area image are obtained in step S301, they may be combined, for example by the terminal superimposing the pixel points contained in the first area image and the second area image, so as to obtain the target style image.
For example, the target style image may be calculated by the following formula:
B' = B × M' + A × (1 − M')
where B' denotes the combined target style image; M' denotes the contraction area image, which consists of the contraction area (whose pixels are set to 1) and the background area (whose pixels are set to 0); B denotes the initial style image; and A denotes the sample image. That is, in the target style image B', every region where M' is 1 (the contraction area) copies its pixels from the initial style image B, and the region where M' is 0 (the background area) copies its pixels from the sample image A, thereby generating the target style image B'.
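A direct NumPy rendering of this formula is sketched below, assuming the three images are float arrays scaled to [0, 1] and the mask is single-channel.

```python
import numpy as np

def compose_target(sample: np.ndarray, initial_style: np.ndarray,
                   shrunk_mask: np.ndarray) -> np.ndarray:
    """B' = B * M' + A * (1 - M').

    sample (A), initial_style (B): H x W x 3 float arrays in [0, 1]
    shrunk_mask (M'): H x W float array, 1 in the contraction area, 0 elsewhere
    """
    m = shrunk_mask[..., None]  # broadcast the mask over the colour channels
    return initial_style * m + sample * (1.0 - m)
```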
In this embodiment, the initial style image may be used to obtain a first region image corresponding to the contracted region, and the sample image may be used to obtain a second region image corresponding to a background region outside the contracted region, so as to generate the target style image in a combined manner, which may improve accuracy of the generated target style image and efficiency of generating the target style image.
In an exemplary embodiment, step S102 may further include: determining a contraction amplitude for the region to be processed in the image of the region to be processed; and based on the contraction amplitude, performing contraction processing on the area edge of the area to be processed in the area image to be processed to obtain a contracted area image.
The contraction amplitude is the amplitude by which the region to be processed in the region image needs to be contracted. It may be set by the user as required, or generated randomly by the terminal; the randomly generated contraction amplitude is then used to contract the edge of the region to be processed, thereby obtaining the contraction area image.
For example, for the same sample image A whose corresponding to-be-processed region image is region image a, the terminal may contract the region to be processed in region image a based on the determined contraction amplitude. Given two amplitudes, contraction amplitude 1 and contraction amplitude 2, which may be determined randomly by the terminal, the edge of region image a can be contracted with each of them to obtain region image B and region image C respectively, two different contraction area images for the same sample image A.
In this embodiment, the terminal obtains the contraction area image through the contraction amplitude, and the target style image is then built from it. Because the contraction amplitude can vary from sample to sample, contracting with different amplitudes improves the diversity of the contraction area images and target style images, and hence the diversity of the training samples.
Further, performing contraction processing on the region edge of the region to be processed based on the contraction amplitude to obtain the contraction area image may include: performing erosion processing on the region edge through an erosion algorithm based on the contraction amplitude to obtain the contraction area image; or setting the pixel values of the region edge to zero based on the contraction amplitude to obtain the contraction area image.
In this embodiment, the contraction of the region edge may be implemented with an erosion algorithm, which erodes away a band of the region edge matching the contraction amplitude to produce the contraction area image; alternatively, the pixels of the region edge may be zeroed according to the contraction amplitude to produce the contraction area image.
In this embodiment, whether the terminal contracts the region edge with an erosion algorithm or by zeroing the edge pixels, the contraction area image can be obtained efficiently.
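The sketch below shows both options with OpenCV and NumPy, using a randomly drawn contraction amplitude; the kernel shape and amplitude range are illustrative assumptions.

```python
import cv2
import numpy as np

def shrink_mask(mask: np.ndarray, max_amplitude: int = 15) -> np.ndarray:
    """Contract a binary 0/1 mask inward by a random amplitude.

    Option 1 (used here) is morphological erosion; option 2, zeroing the
    pixels in the band near the region edge, gives the same result and is
    shown for comparison.
    """
    amplitude = np.random.randint(1, max_amplitude + 1)
    kernel = np.ones((2 * amplitude + 1, 2 * amplitude + 1), np.uint8)

    eroded = cv2.erode(mask.astype(np.uint8), kernel)      # option 1

    zeroed = mask.astype(np.uint8).copy()                  # option 2
    zeroed[(mask.astype(np.uint8) - eroded) > 0] = 0       # zero the edge band
    assert np.array_equal(eroded, zeroed)

    return eroded.astype(mask.dtype)
```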
In an exemplary embodiment, the initial style image corresponding to the sample image in step S101 may be obtained by inputting the sample image into a style conversion neural network and performing image style migration processing on the region to be processed in the sample image through the style conversion neural network to obtain the initial style image.
In this embodiment, the style conversion neural network may be an existing image processing model that performs image style migration on the sample image. A style image output by such a model may have flaws due to inaccurate region identification, and the flawed parts can be handled by contracting the region to be processed. To improve the efficiency of producing the initial style image, this embodiment inputs the sample image into the style conversion neural network, which performs image style migration on the sample image to obtain the initial style image.
In this embodiment, the initial style image may be obtained by inputting the sample image into the style conversion neural network. Compared with manual image style migration by a designer, this reduces the designer's workload and improves the acquisition efficiency of the initial style image.
Fig. 4 is a flowchart illustrating an image processing method according to an exemplary embodiment, which is used in a terminal as illustrated in fig. 4, including the following steps.
In step S401, an image to be processed and an image of a region to be processed corresponding to the image to be processed are acquired.
The image to be processed is the original image on which image style migration needs to be performed. When image processing is needed, the image to be processed may be loaded into the terminal; the terminal obtains the image to be processed and determines the image of the area to be processed that corresponds to the area requiring image style migration, for example by segmenting the image areas of the image to be processed with an image segmentation algorithm.
In step S402, inputting an image to be processed and an image of a region to be processed into an image style migration model, and performing style migration processing on the region to be processed in the image to be processed through the image style migration model to obtain a style migration image; the image style migration model is obtained by the training method of the image style migration model as described in any one of the above embodiments.
The terminal can then input the obtained image to be processed and the image of the area to be processed into the trained image style migration model. Because the model was trained on sample images, contraction area images, and target style images obtained by performing image style migration on the contracted areas of the samples, it has learned the extent of the area to be processed. It can therefore accurately identify, from the image of the area to be processed, the area on which image style migration is required, perform the migration on that part of the image to be processed, and output the resulting style migration image.
In this image processing method, the image to be processed and the corresponding image of the area to be processed are obtained; both are input into an image style migration model, which performs style migration processing on the area to be processed in the image to be processed to obtain a style migration image; the image style migration model is obtained by the training method described in any of the above embodiments. Because the model was trained in advance on the sample image, the contraction region image and the target style image, it has learned the change of the region to be processed and can accurately identify its extent, so performing image style migration through this model improves the accuracy of the obtained style migration image.
In an exemplary embodiment, the image style migration model is used for performing color conversion processing on the region to be processed; step S402 may further include: and performing color conversion processing on the region to be processed in the image to be processed through the image style migration model to obtain a style migration image.
In this embodiment, the image style migration model may be used to perform color conversion on a region to be processed in the image to be processed, and the terminal may perform color conversion on the region to be processed in the image to be processed through the image style migration model, so as to obtain the style migration image.
For example, suppose the image style migration model is used to dye the hair of a portrait, and the image to be processed may be any portrait. When the terminal obtains the portrait to be dyed, an image segmentation algorithm can take the hair region of the input portrait as the region to be processed and produce the corresponding hair-region mask as the image of the region to be processed. The portrait and the hair-region mask are input into the image style migration model, which performs color conversion, i.e. dyeing, on the hair region of the portrait, yielding the dyed portrait as the style migration image.
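A short inference sketch follows, reusing the four-channel interface assumed in the training sketch above; at this stage the mask is the full, un-contracted hair mask.

```python
import torch

@torch.no_grad()
def stylize(model, image, mask):
    """Apply the trained style migration model (steps S401-S402, a sketch).

    image: (1, 3, H, W) float tensor; mask: (1, 1, H, W), un-shrunk.
    """
    model.eval()
    return model(torch.cat([image, mask], dim=1))
```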
In this embodiment, the color conversion of the region to be processed is performed through the image style migration model, which reduces flaws in the style migration image after color conversion and improves its accuracy.
In an exemplary embodiment, a stylized-image edge restoration method based on extrapolation is also provided. It can be applied to portrait hair-dyeing tasks, where the hair area must be color-converted while the other areas are left unchanged. The principle is shown in Fig. 5: a mask is extracted from the hair area of the portrait; during training of the neural network, the mask is contracted inward by a random amplitude, and the dyed image is contracted inward in the same way, so the neural network learns that the dyeing range changes along with the mask. This avoids the situation where, because the boundary of the paired images is not located accurately, the edges are incompletely dyed, leaving black fringes in the training data that then appear in the generated images. At inference time the mask is restored to the normal hair range, so the neural network dyes the mask edges as well. The specific process is as follows:
preparing data:
1) Collect a sufficient number of portrait pictures that contain the person's hair area. The pictures may come from channels such as mobile phone cameras, other camera capture, or publicly downloadable internet images.
2) Send each portrait picture Ai in the data set into a style conversion neural network (or have a designer modify the picture manually) to obtain the corresponding style-converted picture Bi. The edge of Bi may be in any of the following states: fully dyed, undyed, or dyed with defects.
3) Process each Ai with an image segmentation algorithm to obtain the corresponding hair region segmentation result Mi.
4) Pair every Ai with its corresponding Bi to obtain a data set D. That is, D is a paired data set comprising a number of original pictures Ai, the corresponding hair segmentations Mi, and the corresponding dyed pictures Bi.
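A minimal sketch of the paired data set D, with illustrative names:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class StylePair:
    """One entry of the paired data set D."""
    original: np.ndarray   # Ai: portrait picture
    hair_mask: np.ndarray  # Mi: hair segmentation, 1 = hair, 0 = elsewhere
    dyed: np.ndarray       # Bi: style-converted (dyed) picture

def build_dataset(originals, masks, dyed_pictures):
    """Step 4): pair every Ai with its Mi and Bi."""
    return [StylePair(a, m, b) for a, m, b in zip(originals, masks, dyed_pictures)]
```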
Training an AI model:
1) Randomly select a black-hair photo Ai from the data set, together with its style-converted picture Bi (whose edge usually has flaws) and its hair segmentation picture Mi (hair area 1, everything else 0).
2) Randomly contract Mi to Mi'. An erosion algorithm can be used, or part of the pixels of Mi can be randomly set to 0.
3) Build a picture whose dyed range matches the range of Mi': generate a new image Bi' in which every region where Mi' is 1 copies its pixels from Bi, and every region where Mi' is 0 copies its pixels from Ai. That is, Bi' = Bi × Mi' + Ai × (1 − Mi').
4) Feed the black-hair photograph Ai together with Mi' into the style conversion neural network G to obtain an output Bi'', and optimize the output Bi'' toward the target Bi' by stochastic gradient descent.
5) Repeat steps 2) to 4) for a number of rounds until the difference between the Bi'' generated by the network G and the target Bi' is less than a certain threshold, then terminate the iteration.
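Putting steps 1) to 5) together, a sketch of the training loop is shown below; it reuses `shrink_mask`, `compose_target` and `train_step` from the earlier sketches, and `to_tensor` is an assumed helper that converts an H x W (x 3) array into an (N, C, H, W) tensor.

```python
import numpy as np

def train(model, optimizer, dataset, loss_threshold=1e-3, max_rounds=100_000):
    """Training loop for steps 1)-5); all names are illustrative."""
    for _ in range(max_rounds):
        pair = dataset[np.random.randint(len(dataset))]              # step 1)
        m_shrunk = shrink_mask(pair.hair_mask)                       # step 2)
        target = compose_target(pair.original, pair.dyed, m_shrunk)  # step 3)
        loss = train_step(model, optimizer,                          # step 4)
                          to_tensor(pair.original),
                          to_tensor(m_shrunk),
                          to_tensor(target))
        if loss < loss_threshold:                                    # step 5)
            break
```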
Deployment model:
6) Fix the parameters of G and deploy G to the equipment where it will actually run, such as capture devices, the cloud, or servers.
7) In use, feed the input picture A and the corresponding hair segmentation M (not contracted) into the neural network G to obtain a picture B' with no black fringe at the edge.
8) Return B' to the user in place of the input image A, or display it on a terminal such as a mobile phone.
In the above embodiment, the mask and the hair-dyeing range fed to the network are randomly contracted during training, while the original mask is used at inference time, which yields well-dyed edges. Compared with traditional hair-dyeing techniques, the black fringe at the edge of the dyed hair is markedly reduced in the technical scheme of this disclosure, and the hair blends with the face more naturally. Moreover, the whole restoration process is performed automatically by machine, saving a large amount of manual labor.
It should be understood that although the steps in the flowcharts of Figs. 1-4 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in Figs. 1-4 may include multiple sub-steps or stages, which are not necessarily performed at the same time or in sequence, but may be performed in turn or alternately with other steps or with at least some sub-steps of other steps.
FIG. 6 is a block diagram illustrating an apparatus for training an image style migration model according to an exemplary embodiment. Referring to Fig. 6, the apparatus includes a sample image obtaining unit 601, a contracted image obtaining unit 602, a target image obtaining unit 603, and a style model training unit 604.
A sample image obtaining unit 601 configured to acquire a sample image, and a to-be-processed region image and an initial style image corresponding to the sample image; the to-be-processed region image is an image corresponding to the region to be processed in the sample image, and the initial style image is an image obtained by performing image style migration processing on the region to be processed in the sample image;
a contracted image obtaining unit 602 configured to perform contraction processing on the region to be processed in the to-be-processed region image to obtain a contracted region image;
a target image obtaining unit 603 configured to obtain a target style image corresponding to the contracted region image from the sample image, the initial style image, and the contracted region image; the target style image is an image obtained by performing image style migration processing on the contraction region corresponding to the contracted region image in the sample image;
and a style model training unit 604 configured to train the neural network model according to the sample image, the contracted region image, and the target style image to obtain an image style migration model.
In an exemplary embodiment, the style model training unit 604 is further configured to input the sample image and the contracted region image into the neural network model, and perform style migration processing on the contraction region in the sample image through the neural network model to obtain a predicted style image; determine a loss value of the neural network model according to the difference between the predicted style image and the target style image; and adjust the model parameters of the neural network model according to the loss value to obtain the image style migration model.
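A single training step of the style model training unit could be sketched as below, in PyTorch. The patent only states that the loss is determined from the difference between the predicted and target style images; the use of MSE here, and the channel-concatenation input convention, are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def train_step(model, optimizer, sample, contracted_mask, target_style):
    """One optimization step: predict a style image, compare it with the target, update the model."""
    optimizer.zero_grad()
    x = torch.cat([sample, contracted_mask], dim=1)   # assumed input convention
    predicted_style = model(x)                        # style migration on the contraction region
    loss = F.mse_loss(predicted_style, target_style)  # difference-based loss (MSE is an assumption)
    loss.backward()                                   # compute gradients of the loss
    optimizer.step()                                  # adjust model parameters per the loss value
    return loss.item()
```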
In an exemplary embodiment, the target style image is an image obtained by performing color conversion processing on the contraction region; the style model training unit 604 is further configured to train the neural network model according to the sample image, the contracted region image, and the target style image to obtain a color conversion model.
In an exemplary embodiment, the contracted region image further includes a background region, namely the image region of the contracted region image other than the contraction region; the target image obtaining unit 603 is further configured to acquire a first region image corresponding to the contraction region from the initial style image, acquire a second region image corresponding to the background region from the sample image, and combine the first region image and the second region image to obtain the target style image.
In an exemplary embodiment, the contracted image obtaining unit 602 is further configured to determine a contraction amplitude for the region to be processed in the to-be-processed region image, and to contract the region edge of the region to be processed based on the contraction amplitude to obtain the contracted region image.
In an exemplary embodiment, the contracted image obtaining unit 602 is further configured to erode the region edge with an erosion algorithm based on the contraction amplitude to obtain the contracted region image, or to set the pixel values at the region edge to zero based on the contraction amplitude to obtain the contracted region image; both options are sketched below.
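Both contraction options can be illustrated with OpenCV and NumPy as follows; using the kernel size to stand in for the contraction amplitude is an assumption, and the function names are illustrative.

```python
import cv2
import numpy as np

def contract_by_erosion(mask, amplitude):
    """Erode the binary mask; the kernel size encodes the contraction amplitude (in pixels)."""
    kernel = np.ones((2 * amplitude + 1, 2 * amplitude + 1), np.uint8)
    return cv2.erode(mask, kernel, iterations=1)

def contract_by_zeroing(mask, amplitude):
    """Set pixels within `amplitude` pixels of the region edge to zero."""
    edge_band = mask - contract_by_erosion(mask, amplitude)  # band along the region boundary
    return np.where(edge_band > 0, 0, mask).astype(mask.dtype)
```

For a strictly binary mask the two variants yield the same result; the choice mainly matters for soft or multi-valued masks.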
In an exemplary embodiment, the sample image obtaining unit 601 is further configured to input the sample image into the neural network model, and to perform image style migration processing on the region to be processed in the sample image through the neural network model to obtain the initial style image.
Fig. 7 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment. Referring to fig. 7, the apparatus includes a to-be-processed image acquisition unit 701 and an image style migration unit 702.
A to-be-processed image acquisition unit 701 configured to acquire an image to be processed and a to-be-processed region image corresponding to the image to be processed;
an image style migration unit 702 configured to input the image to be processed and the to-be-processed region image into an image style migration model, and perform style migration processing on the region to be processed in the image to be processed through the image style migration model to obtain a style migration image; the image style migration model is obtained by the training method of the image style migration model described in any one of the above embodiments.
In an exemplary embodiment, the image style migration model is used for performing color conversion processing on the region to be processed; the image style migration unit 702 is further configured to perform color conversion processing on the region to be processed in the image to be processed through the image style migration model, so as to obtain a style migration image.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
FIG. 8 is a block diagram illustrating an apparatus 800 for image processing model training or for image processing according to an example embodiment. For example, the device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a gaming console, a tablet device, a medical device, a fitness device, a personal digital assistant, and so forth.
Referring to fig. 8, device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 802 may include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation of the device 800. Examples of such data include instructions for any application or method operating on the device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
The power component 806 provides power to the various components of the device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 800.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front-facing camera and/or a rear-facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 800 is in an operating mode, such as a shooting mode or a video mode. Each of the front camera and the rear camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of the device 800. For example, the sensor assembly 814 may detect the open/closed state of the device 800 and the relative positioning of components, such as the display and keypad of the device 800; it may also detect a change in the position of the device 800 or of a component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a change in the temperature of the device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the device 800 and other devices. The device 800 may access a wireless network based on a communication standard, such as WiFi, an operator network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, there is also provided a computer-readable storage medium comprising instructions, such as the memory 804 comprising instructions, executable by the processor 820 of the device 800 to perform the above-described method. For example, the computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product is also provided, comprising a computer program which, when executed by a processor, implements the training method of the image style migration model as defined in any of the above embodiments, or the image processing method as defined in any of the above embodiments.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A training method of an image style migration model is characterized by comprising the following steps:
acquiring a sample image, and an image of a region to be processed and an initial style image which correspond to the sample image; the image of the area to be processed is an image corresponding to the area to be processed in the sample image, and the initial style image is an image obtained after the image style migration processing is carried out on the area to be processed in the sample image;
performing contraction processing on the area to be processed in the area image to be processed to obtain a contracted area image;
obtaining a target style image corresponding to the contraction area image according to the sample image, the initial style image and the contraction area image; the target style image is an image obtained by performing image style migration processing on a contraction area corresponding to the contraction area image in the sample image;
and training a neural network model according to the sample image, the contraction area image and the target style image to obtain an image style migration model.
2. The method of claim 1, wherein training a neural network model from the sample image, the shrinkage region image, and the target style image to obtain an image style migration model comprises:
inputting the sample image and the contraction area image into the neural network model, and performing style migration processing on the contraction area in the sample image through the neural network model to obtain a prediction style image;
determining a loss value of the neural network model according to a difference value between the predicted style image and the target style image;
and adjusting the model parameters of the neural network model according to the loss value to obtain the image style migration model.
3. The method according to claim 1, wherein the target style image is an image obtained by performing color conversion processing on a contracted area corresponding to the contracted area image in the sample image;
training a neural network model according to the sample image, the contraction area image and the target style image to obtain an image style migration model, wherein the training comprises the following steps:
and training the neural network model according to the sample image, the contraction area image and the target style image to obtain a color conversion model.
4. The method of claim 1, wherein the shrink region image further comprises a background region; the background area is an image area except the contraction area in the contraction area image;
obtaining a target style image corresponding to the contracted area image according to the sample image, the initial style image and the contracted area image, including:
acquiring a first area image corresponding to the contraction area from the initial style image, and acquiring a second area image corresponding to the background area from the sample image;
and combining the first area image and the second area image to obtain the target style image.
5. An image processing method, comprising:
acquiring an image to be processed and an image of a region to be processed corresponding to the image to be processed;
inputting the image to be processed and the image of the area to be processed into an image style migration model, and performing style migration processing on the area to be processed in the image to be processed through the image style migration model to obtain a style migration image; the image style migration model is obtained by the training method of the image style migration model according to any one of claims 1 to 4.
6. An apparatus for training an image style migration model, comprising:
the system comprises a sample image acquisition unit, a processing unit and a processing unit, wherein the sample image acquisition unit is configured to acquire a sample image, an image of a region to be processed corresponding to the sample image and an initial style image; the image of the area to be processed is an image corresponding to the area to be processed in the sample image, and the initial style image is an image obtained after the image style migration processing is carried out on the area to be processed in the sample image;
a contraction image acquisition unit configured to perform contraction processing on the region to be processed in the region to be processed image to obtain a contraction region image;
a target image obtaining unit configured to obtain a target style image corresponding to the contracted area image according to the sample image, the initial style image and the contracted area image; the target style image is an image obtained by performing image style migration processing on a contraction area corresponding to the contraction area image in the sample image;
and the style model training unit is configured to train a neural network model according to the sample image, the contraction area image and the target style image to obtain an image style migration model.
7. An image processing apparatus characterized by comprising:
the image processing device comprises a to-be-processed image acquisition unit, a processing unit and a processing unit, wherein the to-be-processed image acquisition unit is configured to acquire an image to be processed and an image of a to-be-processed area corresponding to the image to be processed;
the image style migration unit is configured to input the image to be processed and the image of the area to be processed into an image style migration model, and perform style migration processing on the area to be processed in the image to be processed through the image style migration model to obtain a style migration image; the image style migration model is obtained by the training method of the image style migration model according to any one of claims 1 to 4.
8. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the method of training the image style migration model according to any one of claims 1 to 4 or the method of image processing according to claim 5.
9. A computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the method of training the image style migration model of any one of claims 1 to 4, or the method of image processing of claim 5.
10. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, implements the method of training an image style migration model according to any one of claims 1 to 4, or the method of image processing according to claim 5.
CN202110867587.6A 2021-07-28 2021-07-28 Image style migration model training method, image processing method, device and equipment Active CN113469876B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110867587.6A CN113469876B (en) 2021-07-28 2021-07-28 Image style migration model training method, image processing method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110867587.6A CN113469876B (en) 2021-07-28 2021-07-28 Image style migration model training method, image processing method, device and equipment

Publications (2)

Publication Number Publication Date
CN113469876A 2021-10-01
CN113469876B CN113469876B (en) 2024-01-09

Family

ID=77883277

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110867587.6A Active CN113469876B (en) 2021-07-28 2021-07-28 Image style migration model training method, image processing method, device and equipment

Country Status (1)

Country Link
CN (1) CN113469876B (en)


Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110830706A (en) * 2018-08-08 2020-02-21 Oppo广东移动通信有限公司 Image processing method and device, storage medium and electronic equipment
CN109325954A (en) * 2018-09-18 2019-02-12 北京旷视科技有限公司 Image partition method, device and electronic equipment
CN109523460A (en) * 2018-10-29 2019-03-26 北京达佳互联信息技术有限公司 Moving method, moving apparatus and the computer readable storage medium of image style
CN109934895A (en) * 2019-03-18 2019-06-25 北京海益同展信息科技有限公司 Image local feature moving method and device
WO2020220807A1 (en) * 2019-04-29 2020-11-05 商汤集团有限公司 Image generation method and apparatus, electronic device, and storage medium
CN110222722A (en) * 2019-05-14 2019-09-10 华南理工大学 Interactive image stylization processing method, calculates equipment and storage medium at system
CN111242844A (en) * 2020-01-19 2020-06-05 腾讯科技(深圳)有限公司 Image processing method, image processing apparatus, server, and storage medium
KR102260628B1 (en) * 2020-02-13 2021-06-03 이인현 Image generating system and method using collaborative style transfer technology
CN111598091A (en) * 2020-05-20 2020-08-28 北京字节跳动网络技术有限公司 Image recognition method and device, electronic equipment and computer readable storage medium
CN112348737A (en) * 2020-10-28 2021-02-09 达闼机器人有限公司 Method for generating simulation image, electronic device and storage medium
CN112734627A (en) * 2020-12-24 2021-04-30 北京达佳互联信息技术有限公司 Training method of image style migration model, and image style migration method and device
CN113012185A (en) * 2021-03-26 2021-06-22 影石创新科技股份有限公司 Image processing method, image processing device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN113469876B (en) 2024-01-09


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant