CN115937338B - Image processing method, device, equipment and medium

Info

Publication number: CN115937338B (granted publication of CN115937938A; other version CN115937338A)
Application number: CN202210443508.3A
Authority: CN (China)
Legal status: Active (granted)
Other languages: Chinese (zh)
Inventors: 丁飞; 刘玮
Assignee (original and current): Beijing Zitiao Network Technology Co Ltd
Prior art keywords: image, line draft, sample, feature

Landscapes

  • Image Analysis (AREA)

Abstract

An embodiment of the disclosure relates to an image processing method, apparatus, device, and medium. The method includes: acquiring a line draft and a reference image; extracting line draft features corresponding to the line draft and reference features corresponding to the reference image; determining the semantic correlation between line draft feature points in the line draft features and a plurality of reference feature points in the reference features; and generating a colored image of the line draft based on the semantic correlation, where the color and texture details of each part of the colored image are consistent with those of the reference image. In this way, the dense semantic correlation between the reference image and the line draft is used to color the line draft automatically, the color and texture details of each part of the coloring result stay consistent with the reference image, and the coloring effect is guaranteed while coloring efficiency is improved.

Description

Image processing method, device, equipment and medium
Technical Field
The present disclosure relates to the field of computer applications, and in particular to an image processing method, apparatus, device, and medium.
Background
In general, coloring line drafts against references is an important step in animation, game, and comic creation: a designer first draws a number of images as line drafts to define the details, style, and the like of the subsequent work, and the later coloring work keeps the characters and overall style consistent based on those line drafts.
Conventionally, a line draft is colored manually according to the designer's personal experience and requirements; for example, the line draft is imported into a coloring application, in which the coloring process is carried out manually through the relevant coloring menus.
However, this coloring process depends on manual work; when there are many line drafts or a line draft contains a great deal of detail, coloring takes a long time and coloring efficiency is low.
Disclosure of Invention
To solve the above technical problem, the present disclosure provides an image processing method, apparatus, device, and medium, addressing the low efficiency of line draft coloring.
An embodiment of the present disclosure provides an image processing method that includes: calculating the mean absolute error between each pixel in a second reference image and the corresponding pixel in a sample image to generate a reconstruction loss function; and/or calculating the mean squared error between each pixel in the second reference image and the corresponding pixel in the sample image to generate a style loss function; and/or classifying the second reference image and the sample image to generate an adversarial loss function.
An embodiment of the present disclosure further provides an image processing apparatus, including: an acquisition module for acquiring a line draft and a reference image; an extraction module for extracting line draft features corresponding to the line draft and reference features corresponding to the reference image; a determination module for determining the semantic correlation between line draft feature points in the line draft features and a plurality of reference feature points in the reference features; and a generation module for generating a colored image of the line draft based on the semantic correlation, where the color and texture details of each part of the colored image are consistent with those of the reference image.
An embodiment of the present disclosure further provides an electronic device, including: a processor; and a memory for storing instructions executable by the processor; the processor is configured to read the executable instructions from the memory and execute them to implement the image processing method provided by the embodiments of the present disclosure.
The present disclosure further provides a computer-readable storage medium storing a computer program for executing the image processing method provided by the embodiments of the present disclosure.
Compared with the prior art, the technical solution provided by the embodiments of the present disclosure has the following advantages:
According to the image processing scheme provided by the embodiments of the present disclosure, a line draft and a reference image are acquired, line draft features corresponding to the line draft and reference features corresponding to the reference image are extracted, the semantic correlation between line draft feature points in the line draft features and a plurality of reference feature points in the reference features is determined, and a colored image of the line draft is generated based on the semantic correlation. In this way, the dense semantic correlation between the reference image and the line draft is used to color the line draft automatically, the color and texture details of each part of the coloring result stay consistent with the reference image, and the coloring effect is guaranteed while coloring efficiency is improved.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent from the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and components are not necessarily drawn to scale.
Fig. 1 is a schematic diagram of an image processing scenario provided in an embodiment of the present disclosure;
Fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure;
Fig. 3 is a schematic diagram of another image processing scenario provided by an embodiment of the present disclosure;
Fig. 4 is a schematic diagram of another image processing scenario provided by an embodiment of the present disclosure;
Fig. 5 is a flowchart of another image processing method according to an embodiment of the present disclosure;
Fig. 6 is a schematic diagram of another image processing scenario provided by an embodiment of the present disclosure;
Fig. 7 is a schematic diagram of another image processing scenario provided by an embodiment of the present disclosure;
Fig. 8 is a schematic diagram of another image processing scenario provided by an embodiment of the present disclosure;
Fig. 9 is a flowchart of another image processing method according to an embodiment of the present disclosure;
Fig. 10 is a schematic diagram of another image processing scenario provided by an embodiment of the present disclosure;
Fig. 11 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure; and
Fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., including, but not limited to. The term "based on" is based at least in part on. The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments. Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "one", "a plurality" and "a plurality" in this disclosure are intended to be illustrative rather than limiting, and those of ordinary skill in the art will appreciate that "one or more" is intended to be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
To solve the problem that line draft coloring is inefficient because it depends on manual work, an embodiment of the present disclosure provides an image processing method that colors a line draft automatically using the dense semantic correlation between a reference image and the line draft, keeping the color and texture details of each part of the coloring result consistent with the reference image. For example, as shown in Fig. 1, given a line draft A1 and a color reference image B, the method migrates the colors, textures, and the like of B onto A1 according to their dense semantic correlation to obtain the colored line draft A2. The colors and textures of each part of A2 are consistent with those of the corresponding part of B (colors and textures are indicated in the figure by gray values and fill lines respectively); for example, the eye color and texture of B are consistent with the eye color and texture of A2. No manual operation is required, which improves coloring efficiency while ensuring the coloring effect, and the contours of the colored image stay the same as those of the line draft.
The method is described below in connection with specific examples.
Fig. 2 is a flowchart of an image processing method according to an embodiment of the present disclosure. The method may be performed by an image processing apparatus, which may be implemented in software and/or hardware and is typically integrated in an electronic device. As shown in Fig. 2, the method includes:
Step 201: acquire a line draft and a reference image.
A line draft is an image without color filling that contains the contour details of the picture, such as A1 above, while the reference image is an image that contains color filling. To make semantics-based migration of colors, textures, and the like easier, the line draft generally depicts the same main object as the reference image; for example, if the main object in the line draft is a face, the main object in the corresponding reference image is also a face.
Step 202: extract line draft features corresponding to the line draft and reference features corresponding to the reference image.
In this embodiment, the line draft features and the reference features are extracted in the feature dimension, which ensures richer detail when migrating in that dimension; that is, textures and the like are migrated in addition to colors.
In some possible embodiments, an image coloring model may be trained in advance, and the line draft and the reference image are input into it to obtain the line draft features and the reference features respectively. The image coloring model may be a convolutional model, in which case the line draft features and the reference features are multidimensional features produced by the convolutions.
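As a minimal sketch of this convolutional feature extraction (PyTorch is an assumption; the patent names no framework, and the layer layout, channel counts, and input sizes below are illustrative):

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Toy convolutional encoder; the architecture is illustrative, not from the patent."""
    def __init__(self, in_ch: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, 256, kernel_size=3, stride=2, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)  # (B, 256, H/8, W/8) multidimensional feature map

draft_encoder = Encoder(in_ch=1)  # line draft: single channel
ref_encoder = Encoder(in_ch=3)    # reference image: RGB
draft_feat = draft_encoder(torch.randn(1, 1, 256, 256))
ref_feat = ref_encoder(torch.randn(1, 3, 256, 256))
```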
In other possible embodiments, the line draft features corresponding to the line draft and the reference features corresponding to the reference image may be determined based on histograms.
In this embodiment, texture is treated as one of the characteristics inherent to an image: a pattern formed when gray levels (or colors, in the case of a color image) vary spatially in a certain form, often with a certain periodicity.
Since the pixel gray-level distribution of a texture region has a definite form, and the histogram is a powerful tool for describing the gray-level distribution of an image, describing texture with a histogram is straightforward. Similar textures have similar histograms, and textures with different characteristics correspond to different histograms, which indicates a correspondence between histograms and textures; a histogram or its statistics can therefore be used as an image texture feature. The histogram itself is a vector whose dimension is the number of gray levels counted, so it can be used directly as a feature vector representing the image texture. In this embodiment, histogram features are generated by extracting textures from the line draft and the reference image, and the generated histogram features are used as the corresponding line draft features and reference features.
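As a minimal sketch of using a gray-level histogram as a texture feature vector (NumPy assumed; the bin count and the L1 comparison are illustrative choices):

```python
import numpy as np

def histogram_feature(gray: np.ndarray, levels: int = 256) -> np.ndarray:
    """Normalized gray-level histogram of an image, usable as a texture feature vector."""
    hist, _ = np.histogram(gray, bins=levels, range=(0, levels))
    return hist.astype(np.float64) / max(gray.size, 1)  # normalize so different sizes compare

# Similar textures yield similar histograms, so a small L1 distance suggests similar texture.
a = histogram_feature(np.random.randint(0, 256, (64, 64)))
b = histogram_feature(np.random.randint(0, 256, (64, 64)))
print(np.abs(a - b).sum())
```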
Step 203: determine the semantic correlation between line draft feature points in the line draft features and a plurality of reference feature points in the reference features.
After the line draft features and the corresponding reference features are obtained, the semantic correlation between the line draft feature points and a plurality of reference feature points in the reference features is calculated so that colors remain consistent after migration; for example, color features belonging to the hair in the reference features should be migrated to the hair region of the line draft. The higher the semantic correlation, the greater the probability that the two corresponding feature points belong to the same part.
Step 204: generate a colored image of the line draft based on the semantic correlation, where the color and texture details of each part of the colored image are consistent with those of the reference image.
In one embodiment of the present disclosure, after the semantic correlation is calculated, the colored image of the line draft is generated based on it; that is, semantically related points in the reference features are migrated to the corresponding positions of the line draft features. Because the colors come from the reference image, the coloring effect is guaranteed and the color and texture details of each part of the colored image are consistent with the reference image; and because only colors are migrated, the image details of the line draft are preserved.
In summary, the image processing method of the embodiments of the present disclosure acquires a line draft and a reference image, extracts line draft features corresponding to the line draft and reference features corresponding to the reference image, determines the semantic correlation between line draft feature points in the line draft features and a plurality of reference feature points in the reference features, and generates a colored image of the line draft based on the semantic correlation. In this way, the dense semantic correlation between the reference image and the line draft is used to color the line draft automatically, the color and texture details of each part of the coloring result stay consistent with the reference image, and the coloring effect is guaranteed while coloring efficiency is improved.
In actual execution, the manner of determining the semantic correlation between the line draft feature points in the line draft features and the plurality of reference feature points in the reference features differs across application scenarios, as illustrated below:
In one embodiment of the present disclosure, the similarity between the line draft feature point and each of the plurality of reference feature points is calculated.
It should be emphasized that in this embodiment, when color migration is performed based on the reference image, either a global mode or a local mode may be adopted, depending on the needs of the scenario.
In the global mode, all colors, textures, and the like of the reference image are migrated into the line draft, which suits efficient production of different poses of the same character. With this mode, only some key frames need to be colored in animation production, intermediate line draft frames can be colored in batches, and colored images can serve as reference images for coloring the remaining line drafts, so that not every pose of a character has to be colored in comic creation, game creation, and similar scenarios, greatly reducing production cost.
In the global mode, the plurality of reference feature points are all of the reference feature points in the reference features; that is, the similarity between each line draft feature point and every reference feature point in the reference features is calculated.
In the global mode, as shown in Fig. 3, suppose the reference feature map is a 3x3 matrix and the line draft feature map is also a 3x3 matrix. For each line draft feature point (the figure shows the calculation for only one feature point), the similarity to each reference feature point is calculated; this similarity is the semantic similarity between the corresponding feature points.
The local mode supports migrating only part of the textures of the reference image, so colors from different reference images can be combined, improving the flexibility of coloring.
In the local mode, the plurality of reference feature points are a subset of the reference feature points in the reference features, and they are associated with one or more parts. A part may be any one or more of the hair, eyes, mouth, face, neck, clothing, and so on.
In the local mode, as shown in Fig. 4, suppose the reference feature map is a 3x3 matrix, the line draft feature map T1 is also a 3x3 matrix, and the part to be migrated is the hair. The plurality of reference feature points corresponding to the hair are the gray-marked feature points in the figure, and the similarity between each line draft feature point and each of these points is calculated (the figure shows the calculation for only one feature point); this similarity is the semantic similarity between the corresponding feature points.
In some possible embodiments, a dot product may be taken between the line draft feature point and each of the plurality of reference feature points to obtain the inner product between them, and the inner product is used as the semantic similarity between the corresponding feature points.
In other possible embodiments, the vector angle between the line draft feature point and each of the plurality of reference feature points may be calculated, and the cosine of that angle used as the measure of how semantically different the two feature points are: the closer the cosine is to 1, the closer the angle is to 0 degrees, and the higher the semantic correlation between the two feature points.
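Both similarity measures reduce to simple tensor operations over flattened feature maps. A minimal sketch (PyTorch assumed; the 3x3 maps with 256 channels mirror the toy figures and are illustrative):

```python
import torch
import torch.nn.functional as F

# Feature maps flattened to points: one row per feature point
draft_pts = torch.randn(9, 256)  # e.g. a 3x3 line draft feature map, 256 channels
ref_pts = torch.randn(9, 256)    # e.g. a 3x3 reference feature map

# Inner product between every line draft point and every reference point: (9, 9)
inner = draft_pts @ ref_pts.t()

# Cosine similarity: the same inner product after normalizing each point to unit length
cosine = F.normalize(draft_pts, dim=1) @ F.normalize(ref_pts, dim=1).t()
```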
Further, after the semantic correlation is obtained, the colored image of the line draft is generated based on it.
In this embodiment, a target line draft feature point value is obtained based on the inner products between the line draft feature point and each of the plurality of reference feature points.
The way the target line draft feature point value is obtained from these inner products differs across application scenarios.
In some possible embodiments, the inner product values are normalized, and the normalized results are used as the corresponding target line draft feature point values.
For example, the sum of all the values may be calculated, and the ratio of each value to that sum used as the corresponding target line draft feature point value.
In other possible embodiments, as shown in Fig. 5, obtaining the target line draft feature point value includes the following steps:
Step 501: obtain a weight for each reference feature point based on the inner product between the line draft feature point and that reference feature point.
In this embodiment, a softmax function may be applied to the plurality of inner products to obtain a corresponding plurality of function values that sum to 1, and these function values are used as the weights: a larger inner product indicates a higher correlation between the corresponding line draft feature point and reference feature point, and hence a higher weight.
Alternatively, in this embodiment the inner products may be fed into a deep learning model constructed in advance, and the weight for each reference feature point obtained from the model's output.
Step 502: weight and sum the reference feature points based on the weights to obtain the target line draft feature point value.
In this embodiment, the reference feature points are weighted and summed based on the weights to obtain a target line draft feature point value that reflects the degree of semantic correlation between the reference feature points and the line draft feature point.
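Read this way, steps 501 and 502 amount to a softmax-attention aggregation over the reference feature points. A minimal sketch (PyTorch assumed; the function name and shapes are illustrative):

```python
import torch

def color_transfer_points(draft_pts: torch.Tensor, ref_pts: torch.Tensor) -> torch.Tensor:
    """For each line draft feature point, softmax its inner products with the
    reference feature points into weights (step 501), then return the weighted
    sum of the reference feature points as the target values (step 502)."""
    inner = draft_pts @ ref_pts.t()        # (Nd, Nr) inner products
    weights = torch.softmax(inner, dim=1)  # per draft point, weights sum to 1
    return weights @ ref_pts               # (Nd, C) target line draft feature point values

target = color_transfer_points(torch.randn(9, 256), torch.randn(9, 256))
```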
After the target line draft feature point values are obtained, the values of the line draft feature points are replaced with them to generate the colored image of the line draft. For example, as shown in Fig. 6, if the original values of the line draft feature points in the line draft feature map are a1-a9 and the corresponding target values are b1-b9, then a1-a9 in the line draft feature map are replaced with b1-b9, and the replaced feature map is decoded to obtain the corresponding colored image.
In the global mode, as shown in Fig. 7, the coloring of each part of the colored image A4 is consistent with the corresponding part of the reference image C: for example, the color and texture of the hair of the colored image obtained for the line draft A3 are consistent with the hair of the reference image, the color and texture of the eyes are consistent with the eyes of the reference image, and so on, so the overall coloring of the reference image is migrated onto the line draft.
In the local mode, as shown in Fig. 8, suppose the reference images to be migrated include T3 and T4, where the part migrated from T3 is the hair and the part migrated from T4 is the eyes. The semantic correlation between the reference feature points of T3's hair and the line draft feature points is calculated, and the pixels corresponding to T3's hair feature points are migrated to the hair of the line draft based on that correlation, so that in the resulting colored image (the coloring of other parts is not shown in the figure) the hair is consistent with the hair in T3. Likewise, the semantic correlation between the reference feature points of T4's eyes and the line draft feature points is calculated, the pixels corresponding to T4's eye feature points are migrated to the eyes of the line draft based on that correlation, and the eyes of the resulting colored image are consistent with the eyes in T4.
In summary, the image processing method of the embodiments of the present disclosure can automatically color a line draft based on a reference color image while preserving the detail design in the line draft. In the local mode, the textures and colors of local parts from different reference images can be quickly assembled, greatly saving the time spent on detailing; in the global mode, the overall texture and color are migrated, greatly reducing the time cost of coloring.
Based on the above embodiments, the extraction of the line draft features and the reference features, the determination of the semantic correlation between the line draft feature points and the plurality of reference feature points, and the generation of the colored image of the line draft based on that correlation may all be performed by an image coloring model.
To make the processing performed by the image coloring model clearer to those skilled in the art, the following describes its training process.
In one embodiment of the present disclosure, as shown in Fig. 9, the image coloring model is trained through the following steps:
Step 901: acquire a sample image.
A sample image is understood to be a color image with a coloring effect.
Step 902: determine the sample line draft corresponding to the sample image.
The sample line draft corresponding to the sample image may be obtained through a contour recognition algorithm or the like.
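The patent does not name a specific contour recognition algorithm; as a hedged stand-in, simple edge detection already yields a rough sample line draft (OpenCV assumed):

```python
import cv2

def extract_line_draft(color_img):
    """Approximate a sample line draft from a color sample image via edge detection.
    Canny is a stand-in for the unnamed 'contour recognition algorithm or the like'."""
    gray = cv2.cvtColor(color_img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    return 255 - edges  # dark lines on a white background, like a line draft
```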
Step 903: deform the sample image to obtain a sample reference image.
In this embodiment, to improve the robustness of the trained model parameters, the model is not trained directly on the sample image corresponding to the sample line draft; instead, the sample image, whose color, texture, and the like can be extracted, is deformed to obtain a sample reference image, and the model is trained on that sample reference image.
The way the sample image is deformed into the sample reference image differs across application scenarios, as illustrated below:
In one embodiment of the present disclosure, after a line draft extraction algorithm is called to extract the sample line draft from the sample image, a thin plate spline function is called to apply random deformation to the sample image and obtain the sample reference image. Thin plate spline warping is a common 2D interpolation method that distorts and displaces the pixel positions of an image according to defined fitting and distortion terms. Because the sample reference image is a distorted version of the sample image, there is no strict correspondence between the sample reference image and the sample line draft, which prevents the model parameters from overfitting during training and improves their robustness.
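A minimal sketch of such a random thin plate spline deformation (assumes opencv-contrib-python, which provides the shape module; the 4x4 control grid and jitter magnitude are assumptions):

```python
import cv2
import numpy as np

def random_tps_warp(img: np.ndarray, jitter: float = 0.05) -> np.ndarray:
    """Randomly perturb a grid of control points and warp the image with a
    thin plate spline, producing a deformed sample reference image."""
    h, w = img.shape[:2]
    xs, ys = np.meshgrid(np.linspace(0, w - 1, 4), np.linspace(0, h - 1, 4))
    src = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32)
    noise = np.random.uniform(-jitter, jitter, src.shape) * [w, h]
    dst = (src + noise).astype(np.float32)

    tps = cv2.createThinPlateSplineShapeTransformer()
    matches = [cv2.DMatch(i, i, 0.0) for i in range(len(src))]
    # Shapes are (1, N, 2) float32 arrays; depending on the OpenCV version, the
    # direction of the estimated mapping may need to be swapped.
    tps.estimateTransformation(dst.reshape(1, -1, 2), src.reshape(1, -1, 2), matches)
    return tps.warpImage(img)
```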
In another embodiment of the present disclosure, each part in the sample image (for example, the eye region, the hair region, and so on) may be identified, and a different deformation algorithm randomly matched to each part, including but not limited to local coordinate translation, local rotation, and local scaling; the deformed sample reference image is obtained after the corresponding parts of the sample image are deformed accordingly.
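And a minimal sketch of one such local deformation, applying a random rotation, scale, and translation inside one part's bounding box (OpenCV assumed; the box is a stand-in for a detected part region, and the ranges are illustrative):

```python
import cv2
import numpy as np

def deform_part(img, box):
    """Apply a random local rotation/scale/translation to one part's bounding box.
    box = (x, y, w, h); in practice it would come from a part detector."""
    x, y, w, h = box
    roi = img[y:y + h, x:x + w]
    angle = np.random.uniform(-10, 10)      # degrees
    scale = np.random.uniform(0.9, 1.1)
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
    m[:, 2] += np.random.uniform(-3, 3, 2)  # small random translation
    out = img.copy()
    out[y:y + h, x:x + w] = cv2.warpAffine(roi, m, (w, h), borderMode=cv2.BORDER_REFLECT)
    return out
```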
Step 904: train the image coloring model based on the sample line draft and the sample reference image.
In this embodiment, the image coloring model is trained based on the sample line draft and the sample reference image. In some possible embodiments, the sample line draft features of the sample line draft are extracted; for example, a line draft encoder, which may be a convolutional network, may be trained in advance, and the sample line draft is encoded with this preset encoder to extract the corresponding sample line draft features.
Further, the sample reference features of the sample reference image are extracted. In this embodiment, a reference image encoder, which may also be a convolutional network, is trained in advance, and the sample reference image is encoded with this preset encoder to extract the corresponding sample reference features.
After the sample line draft features and the sample reference features are obtained, the semantic correlation between the sample feature points in the sample line draft features and a plurality of sample reference feature points in the sample reference features is determined. The correlation may be calculated in the same way as on the application side described above; the principle is similar and is not repeated here.
After the semantic correlation is obtained, a second reference image corresponding to the sample line draft is generated based on it: a target sample line draft feature point value is obtained based on the inner products between the sample line draft feature point and each of the plurality of sample reference feature points, and the values of the sample line draft feature points are replaced with the target values to generate the second reference image of the sample line draft. The feature map with the replaced values may be decoded by a preset decoder to obtain the second reference image.
In theory, the coloring of the second reference image should be consistent with the sample reference image. Therefore, to determine whether the current model has finished training, a target loss function is generated based on the sample image, the sample reference image, and the second reference image; the target loss function reflects the difference between the second reference image and the sample image.
Ways to calculate the target loss function include, but are not limited to, one or more of the following three:
In some possible embodiments, the pixel difference between each pixel in the second reference image and the corresponding pixel in the sample image is calculated, and the mean absolute error over all pixel differences is taken to obtain the reconstruction loss function.
In some possible embodiments, the pixel difference between each pixel in the second reference image and the corresponding pixel in the sample image is calculated, and the mean squared error over all pixel differences is taken to obtain the style loss function.
In some possible embodiments, a discriminator model that is trained adversarially with an adversarial loss function is preset; in this embodiment, the second reference image and the sample image are processed by the preset discriminator model to obtain the adversarial loss function, which is used as the target loss function.
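A minimal sketch of the three loss terms (PyTorch assumed; the discriminator interface and the binary cross-entropy form of the adversarial term are illustrative choices, not specified by the patent):

```python
import torch
import torch.nn.functional as F

def reconstruction_loss(second_ref: torch.Tensor, sample: torch.Tensor) -> torch.Tensor:
    return F.l1_loss(second_ref, sample)   # mean absolute error over pixels

def style_loss(second_ref: torch.Tensor, sample: torch.Tensor) -> torch.Tensor:
    return F.mse_loss(second_ref, sample)  # mean squared error over pixels

def adversarial_loss(disc: torch.nn.Module, second_ref: torch.Tensor) -> torch.Tensor:
    # Generator-side term: push the discriminator to classify the generated
    # second reference image as real.
    logits = disc(second_ref)
    return F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
```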
Further, after the target loss function is obtained, the image coloring model is trained by back-propagating it: while the loss value of the target loss function for the image coloring model is greater than a preset loss threshold, the corresponding model parameters are corrected, until the loss value falls below the threshold and the image coloring model is obtained.
For example, as shown in Fig. 10, during training the sample line draft P1 corresponding to a sample image S0 is obtained and the corresponding sample line draft features P2 are extracted; S0 is deformed to obtain the sample reference image S1, and the corresponding sample reference features S2 are extracted. According to the semantic correlation between P2 and S2, the image coloring model migrates the corresponding pixels of the sample reference image into the sample line draft, and the migrated feature map is decoded by a preset decoder or the like to generate the second reference image S3 corresponding to the sample line draft. The loss value of the target loss function between S3 and S0 is then calculated, the model parameters are trained on this loss value, and the image coloring model is obtained once the loss value falls below a certain value.
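Putting the pieces of Fig. 10 together, one schematic training step might look as follows; it reuses the sketches above (`color_transfer_points` and the loss functions), while `draft_encoder`, `ref_encoder`, `decoder`, `disc`, and `opt` are assumed modules and optimizer, so all names are illustrative:

```python
import torch

def train_step(opt, draft_encoder, ref_encoder, decoder, disc,
               sample_img, sample_draft, sample_ref):
    """One schematic optimization step; batch size 1 for clarity."""
    p2 = draft_encoder(sample_draft)          # (1, C, h, w) sample line draft features
    s2 = ref_encoder(sample_ref)              # (1, C, h, w) sample reference features
    c, h, w = p2.shape[1:]
    draft_pts = p2.flatten(2).squeeze(0).t()  # (h*w, C) line draft feature points
    ref_pts = s2.flatten(2).squeeze(0).t()    # (h*w, C) reference feature points
    target = color_transfer_points(draft_pts, ref_pts)    # replaced feature values
    second_ref = decoder(target.t().reshape(1, c, h, w))  # decode to second reference S3
    loss = (reconstruction_loss(second_ref, sample_img)
            + style_loss(second_ref, sample_img)
            + adversarial_loss(disc, second_ref))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```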
In summary, according to the image processing method of the embodiments of the present disclosure, the sample line draft corresponding to a sample image is obtained, the sample image is deformed to obtain a sample reference image, and the coloring model is trained on the deformed sample image and the sample line draft, which ensures the robustness of the trained model parameters and the coloring effect of the trained model.
In order to achieve the above embodiments, the present disclosure also proposes an image processing apparatus.
Fig. 11 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure. The apparatus may be implemented in software and/or hardware and is typically integrated in an electronic device. As shown in Fig. 11, the apparatus includes:
an acquisition module 1110, configured to acquire a line draft and a reference image;
an extraction module 1120, configured to extract line draft features corresponding to the line draft and reference features corresponding to the reference image;
a determination module 1130, configured to determine the semantic correlation between line draft feature points in the line draft features and a plurality of reference feature points in the reference features;
and a generation module 1140, configured to generate a colored image of the line draft based on the semantic correlation, where the color and texture details of each part of the colored image are consistent with those of the reference image.
The image processing apparatus provided by the embodiment of the present disclosure can execute the image processing method provided by any embodiment of the present disclosure, and has functional modules and beneficial effects corresponding to that method.
In summary, the image processing apparatus of the embodiments of the present disclosure acquires a line draft and a reference image, extracts line draft features corresponding to the line draft and reference features corresponding to the reference image, determines the semantic correlation between line draft feature points in the line draft features and a plurality of reference feature points in the reference features, and generates a colored image of the line draft based on the semantic correlation. In this way, the dense semantic correlation between the reference image and the line draft is used to color the line draft automatically, the color and texture details of each part of the coloring result stay consistent with the reference image, and the coloring effect is guaranteed while coloring efficiency is improved.
To implement the above embodiments, the present disclosure further proposes a computer program product including a computer program/instructions which, when executed by a processor, implement the image processing method of the above embodiments.
Fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Referring now in particular to fig. 12, a schematic diagram of a configuration of an electronic device 1200 suitable for use in implementing embodiments of the present disclosure is shown. The electronic device 1200 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, as well as stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 12 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 12, the electronic apparatus 1200 may include a processing device (e.g., a central processor, a graphics processor, etc.) 1201, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 1202 or a program loaded from a storage device 1208 into a Random Access Memory (RAM) 1203. In the RAM 1203, various programs and data required for the operation of the electronic apparatus 1200 are also stored. The processing device 1201, the ROM 1202, and the RAM 1203 are connected to each other through a bus 1204. An input/output (I/O) interface 1205 is also connected to the bus 1204.
In general, the following devices may be connected to the I/O interface 1205: input devices 1206 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 1207 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 1208 including, for example, magnetic tape, hard disk, etc.; and a communication device 1209. The communication means 1209 may allow the electronic device 1200 to communicate wirelessly or by wire with other devices to exchange data. While fig. 12 shows an electronic device 1200 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 1209, or installed from the storage device 1208, or installed from the ROM 1202. When the computer program is executed by the processing apparatus 1201, the above-described functions defined in the image processing method of the embodiment of the present disclosure are performed.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: automatically color a line draft based on a reference color image while preserving the detail design in the line draft; in the local mode, quickly assemble the texture colors of local parts from different reference images, greatly saving the time spent on detailing; and in the global mode, migrate the overall texture and color, greatly reducing the time cost of coloring.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including object oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented in software or in hardware. The names of the units do not, in some cases, constitute a limitation of the units themselves.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In one embodiment of the present disclosure, an image processing method is provided, including:
acquiring a line draft and a reference image;
extracting line draft features corresponding to the line draft and reference features corresponding to the reference image;
determining the semantic correlation between line draft feature points in the line draft features and a plurality of reference feature points in the reference features;
and generating a colored image of the line draft based on the semantic correlation, where the color and texture details of each part of the colored image are consistent with those of the reference image.
In one embodiment of the present disclosure, determining the semantic correlation between a line draft feature point in the line draft features and a plurality of reference feature points in the reference features includes:
calculating the similarity between the line draft feature point and each of the plurality of reference feature points.
In one embodiment of the present disclosure, calculating the similarity between the line draft feature point and each of the plurality of reference feature points includes:
taking the dot product of the line draft feature point and each of the plurality of reference feature points to obtain the inner product between the line draft feature point and each reference feature point.
In one embodiment of the present disclosure, generating a colored image of the line draft based on the semantic correlation includes:
obtaining a target line draft feature point value based on the inner products between the line draft feature point and each of the plurality of reference feature points;
and replacing the values of the line draft feature points with the target line draft feature point values to generate the colored image of the line draft.
In one embodiment of the present disclosure, obtaining the target line draft feature point value based on the inner products between the line draft feature point and each of the plurality of reference feature points includes:
obtaining a weight for each reference feature point based on the inner product between the line draft feature point and that reference feature point;
and weighting and summing the reference feature points based on the weights to obtain the target line draft feature point value.
In one embodiment of the present disclosure, in the global mode, the plurality of reference feature points are all of the reference feature points in the reference features.
In one embodiment of the present disclosure, in the local mode, the plurality of reference feature points are a subset of the reference feature points in the reference features, and the plurality of reference feature points are associated with one or more parts.
In one embodiment of the present disclosure, the parts include: hair, eyes, mouth, face, neck, and clothing.
In one embodiment of the present disclosure, the extracting, determining, and generating are performed by an image coloring model, which is trained through the following steps:
acquiring a sample image;
determining a sample line draft corresponding to the sample image;
deforming the sample image to obtain a sample reference image;
and training the image coloring model based on the sample line draft and the sample reference image.
In one embodiment of the present disclosure, deforming the sample image to obtain the sample reference image includes:
calling a thin plate spline function to apply random deformation to the sample image to obtain the sample reference image.
In one embodiment of the present disclosure, training the image coloring model based on the sample line draft and the sample reference image includes:
extracting sample line draft features of the sample line draft;
extracting sample reference features of the sample reference image;
determining the semantic correlation between sample feature points in the sample line draft features and a plurality of sample reference feature points in the sample reference features;
generating a second reference image corresponding to the sample line draft based on the semantic correlation;
generating a target loss function based on the sample image, the sample reference image, and the second reference image; and
training the image coloring model based on the target loss function.
In one embodiment of the present disclosure, generating the target loss function based on the sample image, the sample reference image, and the second reference image includes:
calculating the mean absolute error between each pixel in the second reference image and the corresponding pixel in the sample image to generate a reconstruction loss function; and/or
calculating the mean squared error between each pixel in the second reference image and the corresponding pixel in the sample image to generate a style loss function; and/or
classifying the second reference image and the sample image to generate an adversarial loss function.
In one embodiment of the present disclosure, there is provided an image processing apparatus including:
the acquisition module is used for acquiring a line manuscript graph and a reference graph;
the extraction module is used for extracting a line manuscript feature corresponding to the line manuscript graph and a reference map feature corresponding to the reference graph;
the determining module is used for determining semantic correlation between a line manuscript feature point in the line manuscript feature and a plurality of reference feature points in the reference map feature;
and the generation module is used for generating a colored image of the line manuscript graph based on the semantic correlation, wherein the color and texture details of each part of the colored image are consistent with those of the reference graph.
In one embodiment of the present disclosure, the determining module is configured to:
and calculating a similarity between the line manuscript feature point and each reference feature point in the plurality of reference feature points.
In one embodiment of the present disclosure, the determining module is configured to:
and performing dot multiplication processing on the line manuscript feature point and each reference feature point in the plurality of reference feature points to obtain an inner product between the line manuscript feature point and each reference feature point.
In one embodiment of the present disclosure, a generation module is configured to:
acquiring a target line manuscript feature point value based on an inner product between the line manuscript feature point and each of the plurality of reference feature points;
and replacing the value of the line manuscript feature point with the target line manuscript feature point value to generate a colored image of the line manuscript graph.
In one embodiment of the present disclosure, the generation module is configured to: acquiring a weight corresponding to each reference feature point based on the inner product between the line manuscript feature point and that reference feature point;
and performing a weighted summation over the reference feature points based on the weights to obtain the target line manuscript feature point value.
In one embodiment of the present disclosure, in the global mode, the plurality of reference feature points are all of the reference feature points in the reference map feature.
In one embodiment of the present disclosure, in the local mode, the plurality of reference feature points are a subset of the reference feature points in the reference map feature and are associated with one or more of the parts.
In one embodiment of the present disclosure, the parts include: hair, eyes, mouth, face, neck, and clothing.
In one embodiment of the present disclosure, the apparatus further comprises a training module configured to:
acquiring a sample image;
determining a sample line manuscript corresponding to the sample image;
deforming the sample image to obtain a sample reference map;
and training to obtain an image coloring model based on the sample line manuscript graph and the sample reference map.
In one embodiment of the present disclosure, a training module is configured to:
and invoking a thin plate spline (TPS) function to perform random deformation processing on the sample image to obtain the sample reference map.
In one embodiment of the present disclosure, the training module is configured to: extracting sample line manuscript features of the sample line manuscript graph;
extracting sample reference map features of the sample reference map;
determining semantic correlation between sample feature points in the sample line manuscript features and a plurality of sample reference feature points in the sample reference map features;
generating a second reference map corresponding to the sample line manuscript graph based on the semantic correlation;
generating an objective loss function based on the sample image, the sample reference map, and the second reference map; and
training the image coloring model based on the objective loss function.
In one embodiment of the present disclosure, a training module is configured to:
calculating a mean absolute error between each pixel in the second reference map and the corresponding pixel in the sample image to generate a reconstruction loss function; and/or
calculating a mean squared error between each pixel in the second reference map and the corresponding pixel in the sample image to generate a style loss function; and/or
classifying the second reference map and the sample image to generate an adversarial loss function.
In one embodiment of the present disclosure, there is provided an electronic device including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to read the executable instructions from the memory and execute the instructions to implement the image processing method described above.
In one embodiment of the present disclosure, a computer-readable storage medium storing a computer program for executing the above-described image processing method is provided.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the disclosure is not limited to the specific combinations of features described above, but also covers other embodiments formed by any combination of the features described above, or their equivalents, without departing from the spirit of the disclosure; for example, embodiments in which the features described above are interchanged with technical features having similar functions disclosed in the present disclosure (but not limited thereto).
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (14)

1. An image processing method, comprising:
acquiring a line manuscript graph and a reference graph;
extracting a line manuscript feature corresponding to the line manuscript graph and a reference map feature corresponding to the reference graph, wherein feature dimensions of the line manuscript feature and the reference map feature comprise color and texture;
determining semantic correlation between a line manuscript feature point in the line manuscript feature and a plurality of reference feature points in the reference map feature;
generating a colored image of the line manuscript graph based on the semantic correlation, wherein the color and texture details of each part of the colored image are consistent with those of the reference graph;
wherein the generating the colored image of the line manuscript graph based on the semantic correlation comprises:
acquiring a target line manuscript feature point value based on an inner product between the line manuscript feature point and each of the plurality of reference feature points;
and replacing the value of the line manuscript feature point with the target line manuscript feature point value to generate the colored image of the line manuscript graph.
2. The method of claim 1, wherein the determining semantic correlation between a line manuscript feature point in the line manuscript feature and a plurality of reference feature points in the reference map feature comprises:
calculating a similarity between the line manuscript feature point and each reference feature point in the plurality of reference feature points.
3. The method according to claim 2, wherein the calculating the similarity between the line manuscript feature point and each of the plurality of reference feature points comprises:
performing dot multiplication processing on the line manuscript feature point and each reference feature point in the plurality of reference feature points to obtain the inner product between the line manuscript feature point and each reference feature point.
4. The method of claim 1, wherein the acquiring the target line manuscript feature point value based on the inner product between the line manuscript feature point and each of the plurality of reference feature points comprises:
acquiring a weight corresponding to each reference feature point based on the inner product between the line manuscript feature point and that reference feature point;
and performing a weighted summation over the reference feature points based on the weights to acquire the target line manuscript feature point value.
5. The method according to any one of claims 1 to 4, wherein,
in global mode, the plurality of reference feature points are all of the reference feature points in the reference map feature.
6. The method according to any one of claims 1 to 4, wherein,
in a local mode, the plurality of reference feature points are a subset of the reference feature points in the reference map feature and are associated with one or more of the parts.
7. The method of any one of claims 1-4, wherein the parts comprise: hair, eyes, mouth, face, neck, and clothing.
8. The method of claim 1, wherein the extracting, the determining, and the generating are performed by an image coloring model, the image coloring model being trained by:
acquiring a sample image;
determining a sample line manuscript corresponding to the sample image;
deforming the sample image to obtain a sample reference map;
and training based on the sample line manuscript graph and the sample reference map to obtain the image coloring model.
9. The method of claim 8, wherein the deforming the sample image to obtain a sample reference map comprises:
and invoking a thin plate spline (TPS) function to perform random deformation processing on the sample image to obtain the sample reference map.
10. The method according to claim 8 or 9, wherein the training to obtain the image coloring model based on the sample line manuscript graph and the sample reference map comprises:
extracting sample line manuscript features of the sample line manuscript graph;
extracting sample reference map features of the sample reference map;
determining semantic correlation between sample feature points in the sample line manuscript features and a plurality of sample reference feature points in the sample reference map features;
generating a second reference map corresponding to the sample line manuscript graph based on the semantic correlation;
generating an objective loss function based on the sample image, the sample reference map, and the second reference map; and
training the image coloring model based on the objective loss function.
11. The method of claim 10, wherein the generating an objective loss function based on the sample image, the sample reference map, and the second reference map comprises:
calculating a mean absolute error between each pixel in the second reference map and a corresponding pixel in the sample image to generate a reconstruction loss function; and/or
calculating a mean squared error between each pixel in the second reference map and a corresponding pixel in the sample image to generate a style loss function; and/or
classifying the second reference map and the sample image to generate an adversarial loss function.
12. An image processing apparatus, comprising:
the acquisition module is used for acquiring a line manuscript graph and a reference graph;
the extraction module is used for extracting a line manuscript feature corresponding to the line manuscript graph and a reference map feature corresponding to the reference graph, wherein feature dimensions of the line manuscript feature and the reference map feature comprise color and texture;
the determining module is used for determining semantic correlation between a line manuscript feature point in the line manuscript feature and a plurality of reference feature points in the reference map feature;
the generation module is used for generating a colored image of the line manuscript graph based on the semantic correlation, wherein the color and texture details of each part of the colored image are consistent with those of the reference graph;
the generation module is specifically configured to acquire a target line manuscript feature point value based on an inner product between the line manuscript feature point and each of the plurality of reference feature points, and to replace the value of the line manuscript feature point with the target line manuscript feature point value to generate the colored image of the line manuscript graph.
13. An electronic device, the electronic device comprising:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the instructions to implement the image processing method according to any one of claims 1-11.
14. A computer-readable storage medium, characterized in that the storage medium stores a computer program for executing the image processing method according to any one of claims 1-11.
CN202210443508.3A 2022-04-25 2022-04-25 Image processing method, device, equipment and medium Active CN115937338B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210443508.3A CN115937338B (en) 2022-04-25 2022-04-25 Image processing method, device, equipment and medium


Publications (2)

Publication Number Publication Date
CN115937338A CN115937338A (en) 2023-04-07
CN115937338B (en) 2024-01-30

Family

ID=86651400

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210443508.3A Active CN115937338B (en) 2022-04-25 2022-04-25 Image processing method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN115937338B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108615252A (en) * 2018-05-03 2018-10-02 苏州大学 The training method and device of color model on line original text based on reference picture
CN109147003A (en) * 2018-08-01 2019-01-04 北京东方畅享科技有限公司 Method, equipment and the storage medium painted to line manuscript base picture

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111967467B (en) * 2020-07-24 2022-10-04 北京航空航天大学 Image target detection method and device, electronic equipment and computer readable medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant