CN107025457B - Image processing method and device

Info

Publication number: CN107025457B
Application number: CN201710199165.XA
Authority: CN (China)
Prior art keywords: image, preset, map, color, segmentation
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN107025457A (en)
Inventors: 朱晓龙, 郑永森, 王浩, 黄凯宁, 罗文寒, 高雨, 杨之华, 华园, 曾毅榕, 吴发强, 黄祥瑞
Original Assignee: Tencent Technology Shenzhen Co Ltd
Current Assignee: Tencent Technology Shenzhen Co Ltd

Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201710199165.XA
Publication of CN107025457A
Priority to PCT/CN2018/080446 (WO2018177237A1)
Application granted
Publication of CN107025457B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the invention disclose an image processing method and device. After receiving an image processing request, an embodiment obtains the semantic segmentation model corresponding to the element type to be replaced, as indicated by the request; predicts, with that model, the probability that each pixel in the image belongs to the element type, yielding an initial probability map; optimizes the initial probability map based on a conditional random field; and uses the resulting segmentation effect map to fuse the image with a preset element material, thereby replacing the parts of the image of that element type with the preset material. The scheme reduces false detections and missed detections, substantially improves segmentation accuracy, and improves the image fusion effect.

Description

Image processing method and device
Technical Field
The invention relates to the technical field of computers, in particular to an image processing method and device.
Background
With the popularization of intelligent mobile terminals, taking photos and videos anytime and anywhere has gradually become part of everyday life, and image processing, such as beautification or special-effect processing of images, has grown increasingly popular.
In special-effect processing, element replacement is one of the most common techniques. Taking the replacement of sky elements as an example, the prior art generally performs a threshold determination based on information such as the color and position of the sky in an image, segments the sky accordingly, and replaces the segmented sky area with other elements, such as fireworks, reindeer, or an animated (two-dimensional) scene, so that the processed image achieves a special effect.
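For reference, the threshold-based approach just described can be sketched in a few lines. The following is an illustrative reconstruction only, not code from any cited system; it assumes OpenCV and a hand-picked HSV range for "sky-like" blue:

    import cv2
    import numpy as np

    def naive_sky_mask(image_bgr):
        """Prior-art style segmentation: threshold on color information alone.
        The HSV bounds below are illustrative assumptions."""
        hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
        lower = np.array([95, 40, 120], dtype=np.uint8)    # assumed lower bound
        upper = np.array([130, 255, 255], dtype=np.uint8)  # assumed upper bound
        return cv2.inRange(hsv, lower, upper)              # 255 where "sky", 0 elsewhere

A blue lake or a blue wall passes such a test while an overcast sky fails it, which is exactly the weakness described next.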
In researching and practicing the prior art, the inventors of the present invention found that, because region segmentation in the prior art relies mainly on threshold determinations over information such as color and position, false detections and missed detections occur easily, which greatly degrades the segmentation accuracy and the fusion effect of the image, for example by producing distortion or insufficiently smooth boundaries.
Disclosure of Invention
Embodiments of the invention provide an image processing method and device that can improve segmentation accuracy and the fusion effect.
The embodiment of the invention provides an image processing method, which comprises the following steps:
receiving an image processing request indicating an image that needs to be processed and an element type that needs to be replaced;
obtaining a semantic segmentation model corresponding to the element type, wherein the semantic segmentation model is formed by training a deep neural network;
predicting the probability of each pixel in the image belonging to the element type according to the semantic segmentation model to obtain an initial probability map;
optimizing the initial probability map based on the conditional random field to obtain a segmentation effect map;
and fusing the image and preset element materials according to the segmentation effect graph to obtain a processed image.
Correspondingly, an embodiment of the present invention further provides an image processing apparatus, including:
a receiving unit configured to receive an image processing request indicating an image that needs to be processed and an element type that needs to be replaced;
the acquisition unit is used for acquiring a semantic segmentation model corresponding to the element type, and the semantic segmentation model is formed by training a deep neural network;
the prediction unit is used for predicting the probability that each pixel in the image belongs to the element type according to the semantic segmentation model to obtain an initial probability map;
the optimization unit is used for optimizing the initial probability map based on the conditional random field to obtain a segmentation effect map;
and the fusion unit is used for fusing the image and preset element materials according to the segmentation effect graph to obtain a processed image.
After receiving an image processing request, an embodiment of the invention can obtain the semantic segmentation model corresponding to the element type to be replaced as indicated by the request, predict with that model the probability that each pixel in the image belongs to the element type to obtain an initial probability map, optimize the initial probability map based on a conditional random field, and use the optimized segmentation effect map to fuse the image with a preset element material, thereby replacing the parts of the image of that element type with the preset material. Because the semantic segmentation model in this scheme is trained by a deep neural network, the model does not predict the probability that each pixel belongs to the element type based only on information such as color and position, so the probability of false detection and missed detection can be greatly reduced compared with existing schemes. In addition, the scheme optimizes the segmented initial probability map with a conditional random field, yielding a finer segmentation result, which greatly improves segmentation accuracy, helps avoid image distortion, and improves the image fusion effect.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1a is a schematic view of a scene of an image processing method according to an embodiment of the present invention;
FIG. 1b is a flowchart of an image processing method provided by an embodiment of the invention;
FIG. 2a is another flowchart of an image processing method according to an embodiment of the present invention;
FIG. 2b is a diagram illustrating an example of an interface of an image processing request in the image processing method according to the embodiment of the present invention;
FIG. 2c is a diagram illustrating an example of sky segmentation in an image processing method according to an embodiment of the present invention;
FIG. 2d is a process flow diagram of an image processing method according to an embodiment of the present invention;
FIG. 3a is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
FIG. 3b is a schematic diagram of another structure of an image processing apparatus according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a server according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention provides an image processing method and an image processing device, wherein the image processing device can be specifically integrated in equipment such as a server.
For example, referring to FIG. 1a, when a user needs to process an image, the terminal may send the server an image processing request indicating information such as the image to be processed and the element type to be replaced. After receiving the request, the server may obtain a semantic segmentation model corresponding to the element type (trained by a deep neural network) and predict, according to that model, the probability that each pixel in the image belongs to the element type, obtaining an initial probability map. The server may then further optimize the initial probability map using a conditional random field or the like to obtain a finer segmentation result (i.e., a segmentation effect map) and fuse the image with preset element materials according to that result: for example, a fusion algorithm may combine the first color portion (e.g., the white portion) of the segmentation effect map with a replaceable element material and combine the second color portion (e.g., the black portion) with the image; the two combination results are then synthesized, and the processed image is provided to the terminal, and so on.
Detailed descriptions follow. The numbering of the embodiments below does not imply any order of preference.
First Embodiment
The present embodiment will be described from the viewpoint of an image processing apparatus which can be specifically integrated in a server or the like.
An image processing method includes: receiving an image processing request, the request indicating the image to be processed and the element type to be replaced; obtaining a semantic segmentation model corresponding to the element type, the model being trained by a deep neural network; predicting, according to the semantic segmentation model, the probability that each pixel in the image belongs to the element type to obtain an initial probability map; optimizing the initial probability map based on a conditional random field to obtain a segmentation effect map; and fusing the image with preset element materials according to the segmentation effect map to obtain a processed image.
As shown in fig. 1b, the specific flow of the image processing method may be as follows:
101. An image processing request is received.
For example, an image processing request sent by a terminal or other network-side device may be specifically received, and so on. The image processing request may indicate information such as an image to be processed and an element type to be replaced.
The element type refers to a category of elements, and an element refers to a basic element that can carry visual information, for example, if the image processing request indicates that the type of the element that needs to be replaced is "sky", it indicates that all sky parts in the image need to be replaced; for another example, if the image processing request indicates that the element type that needs to be replaced is "portrait," this indicates that all portrait portions in the image need to be replaced, and so on.
102. Acquire a semantic segmentation model corresponding to the element type, where the semantic segmentation model is trained by a deep neural network.
For example, if the image processing request received in step 101 indicates that the element type requiring replacement is "sky", a semantic segmentation model corresponding to "sky" may be acquired; if it indicates that the element type requiring replacement is "portrait", a semantic segmentation model corresponding to "portrait" may be acquired, and so on.
Optionally, the semantic segmentation model may be pre-stored in the image processing apparatus or other storage devices, and acquired by the image processing apparatus when needed, or the semantic segmentation model may be built by the image processing apparatus, that is, before the step "acquiring the semantic segmentation model corresponding to the element type", the image processing method may further include:
establishing a semantic segmentation model corresponding to the element type; for example, the establishment may specifically be as follows:
training a preset semantic segmentation initial model with a deep neural network on training data to obtain the semantic segmentation model corresponding to the element type.
For example, to establish a semantic segmentation model corresponding to "sky", a certain number (for example, 8000) of pictures containing sky may be collected, and a preset semantic segmentation initial model may then be fine-tuned with a deep neural network on those pictures; the resulting model is the semantic segmentation model corresponding to "sky".
It should be noted that the preset semantic segmentation initial model may be preset according to the requirements of practical applications, for example, a pre-trained semantic segmentation model for 20 categories of a general scene may be adopted.
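As a concrete illustration of this fine-tuning, the sketch below assumes PyTorch with torchvision's pretrained DeepLabV3 standing in for the "preset semantic segmentation initial model"; the patent names neither a specific network nor a framework, so the library choices here are assumptions:

    import torch
    import torch.nn as nn
    from torchvision.models.segmentation import deeplabv3_resnet50

    # Assumption: a general-scene pretrained network plays the role of the
    # preset initial model; its head is replaced with a 2-class
    # (non-sky / sky) classifier and the network is fine-tuned.
    model = deeplabv3_resnet50(weights="DEFAULT")           # torchvision >= 0.13
    model.classifier[4] = nn.Conv2d(256, 2, kernel_size=1)  # 2 classes

    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    criterion = nn.CrossEntropyLoss()

    def fine_tune_step(images, masks):
        """One step: images is (N, 3, H, W) float; masks is (N, H, W) int64 in {0, 1}."""
        model.train()
        optimizer.zero_grad()
        logits = model(images)["out"]   # (N, 2, H, W) per-pixel class scores
        loss = criterion(logits, masks)
        loss.backward()
        optimizer.step()
        return loss.item()

The collected sky pictures, annotated with per-pixel sky masks, would be fed through fine_tune_step batch by batch until the model converges.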
103. Predicting the probability of each pixel in the image belonging to the element type according to the semantic segmentation model to obtain an initial probability map; for example, the following may be specifically mentioned:
(1) Import the image into the semantic segmentation model to predict the probability that each pixel in the image belongs to the element type.
For example, if the element type is "sky", then at this time, the image may be imported into a semantic segmentation model corresponding to "sky" to predict the probability that each pixel in the image belongs to "sky".
For another example, if the element type is "portrait", then at this time, the image may be imported into a semantic segmentation model corresponding to the "portrait", so as to predict the probability that each pixel in the image belongs to the "portrait", and so on.
(2) Set the color of the corresponding pixel on a preset mask according to the probability to obtain an initial probability map.
For example, it may be determined whether the probability is greater than a preset threshold: if so, the color of the corresponding pixel on the preset mask is set to a first color; if not, it is set to a second color. After the colors of all pixels of the image have been set on the preset mask, the colored mask is output as the initial probability map.
That is, a mask including a first color and a second color may be obtained, where the first color in the mask indicates that the probability that the corresponding pixel belongs to the element type is relatively high, and the second color indicates that the probability that the corresponding pixel belongs to the element type is relatively low.
For example, if the probability that a certain pixel a belongs to the "sky" is greater than 80%, the color of the pixel a on the preset mask may be set to be a first color, otherwise, if the probability that the pixel a belongs to the "sky" is less than or equal to 80%, the color of the pixel a on the preset mask may be set to be a second color, and so on.
The first color and the second color may also be determined according to the requirements of practical applications, for example, the first color may be set to white, and the second color may be set to black, or the first color may also be set to pink, and the second color may also be set to green, and so on. For convenience of description, in the embodiments of the present invention, the first color is white, and the second color is black.
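In code, the thresholding described above reduces to a single comparison. The NumPy sketch below uses the 80% threshold and the white/black colors of the running example; all three are configurable parameters, not fixed by the patent:

    import numpy as np

    def probability_to_initial_map(prob, threshold=0.8,
                                   first_color=255, second_color=0):
        """Turn a (H, W) probability array in [0, 1] into the initial probability
        map: first_color (white) where the pixel likely belongs to the element
        type, second_color (black) otherwise."""
        return np.where(prob > threshold, first_color, second_color).astype(np.uint8)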
104. Optimize the initial probability map based on a conditional random field (CRF) to obtain a segmentation effect map.
For example, the pixels in the initial probability map may be mapped to nodes in the conditional random field, the similarity of edge constraints between the nodes is determined, and the segmentation result of the pixels in the initial probability map is adjusted according to the similarity of the edge constraints, so as to obtain a segmentation effect map.
A conditional random field is a discriminative probability model and a kind of random field. Like a Markov random field, it is an undirected graphical model: nodes (i.e., vertices) represent random variables, and the edges between nodes represent the dependencies between those variables. A conditional random field can express long-distance dependencies and overlapping features, mitigates the label-bias problem, and can globally normalize all features to obtain a globally optimal solution; it is therefore well suited to optimizing the initial probability map and thereby refining the segmentation result.
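The patent does not prescribe a particular CRF implementation. The sketch below assumes the open-source pydensecrf library (a fully connected CRF), one common way to realize this refinement step; the kernel widths and compatibility weights are illustrative assumptions:

    import numpy as np
    import pydensecrf.densecrf as dcrf
    from pydensecrf.utils import unary_from_softmax

    def crf_refine(prob, image_rgb, iters=5):
        """Refine a (H, W) element-probability map against the (H, W, 3) uint8 image.
        Each pixel maps to a CRF node; the Gaussian term encodes position-based
        smoothness and the bilateral term encodes color-plus-position constraints."""
        h, w = prob.shape
        prob = np.clip(prob, 1e-6, 1.0 - 1e-6)    # avoid log(0) in the unary term
        softmax = np.stack([1.0 - prob, prob])    # (2, H, W): background, element
        crf = dcrf.DenseCRF2D(w, h, 2)
        crf.setUnaryEnergy(unary_from_softmax(softmax))
        crf.addPairwiseGaussian(sxy=3, compat=3)  # smoothness (position only)
        crf.addPairwiseBilateral(sxy=60, srgb=13,
                                 rgbim=np.ascontiguousarray(image_rgb),
                                 compat=10)       # appearance (color + position)
        q = np.array(crf.inference(iters)).reshape(2, h, w)
        return np.argmax(q, axis=0).astype(np.uint8)  # 1 where the element is predicted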
It should be noted that, since the segmentation effect map is optimized from the initial probability map, the segmentation effect map is also a mask including the first color and the second color.
105. Fuse the image with preset element materials according to the segmentation effect map to obtain a processed image; for example, the details may be as follows:
(1) Acquire a replaceable element material according to a preset strategy.
The preset strategy may be set according to the requirements of the actual application; for example, a material selection instruction triggered by the user may be received, and the corresponding material may then be obtained from a material library according to that instruction and used as the replaceable element material.
Optionally, in order to increase the diversity of the element material, the material may be obtained by random interception (random cropping); that is, the step of obtaining the replaceable element material according to the preset strategy may also include:
acquiring a candidate image, randomly intercepting a region of the candidate image, and using the intercepted image as the replaceable element material, and so on.
The candidate image may be obtained over the network, or may be uploaded by the user, or may even be directly captured on a terminal screen or a web page by the user and then provided to the image processing apparatus, and so on, which are not described herein again.
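The random interception itself can be very small; the sketch below (the size handling is an assumption, the patent only requires that a random sub-image be cut out) illustrates it:

    import numpy as np

    def random_intercept(candidate, out_h, out_w, rng=None):
        """Randomly cut an (out_h, out_w) region out of an (H, W, 3) candidate
        image; the candidate must be at least out_h x out_w."""
        rng = rng or np.random.default_rng()
        h, w = candidate.shape[:2]
        top = int(rng.integers(0, h - out_h + 1))
        left = int(rng.integers(0, w - out_w + 1))
        return candidate[top:top + out_h, left:left + out_w]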
(2) Combine the first color portion of the segmentation effect map with the acquired element material through a fusion algorithm to obtain a first combination map.
Because the probability that pixels of the first color portion belong to the element type to be replaced is high, this portion can be combined with the acquired element material through a fusion algorithm; that is, its pixels can be replaced by the acquired element material.
(3) Combine the second color portion of the segmentation effect map with the image through a fusion algorithm to obtain a second combination map.
Because the probability that pixels of the second color portion belong to the element type to be replaced is low, this portion can be combined with the original image through a fusion algorithm; that is, its pixels are retained.
Optionally, in order to improve the fusion effect or implement other special effects, the image may first be preprocessed before the second color portion is combined with it, for example by color transformation, contrast adjustment, brightness adjustment, saturation adjustment, and/or adding other special-effect masks; the second color portion is then combined with the preprocessed image through the fusion algorithm to obtain the second combination map.
(4) Synthesize the first combination map and the second combination map to obtain the processed image.
Therefore, the element that needs to be replaced in the image can be replaced by the element material, for example, the "sky" in the image is replaced by the "space", and the like, and details are not repeated here.
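Putting sub-steps (2) through (4) together, the fusion can be sketched as minimal alpha-style compositing. This sketch assumes a white/black segmentation effect map as in the running example and omits the optional preprocessing of the original image described below:

    import numpy as np

    def fuse(image, effect_map, material):
        """Fuse per sub-steps (2)-(4): effect_map is (H, W) uint8, 255 (first
        color) where the element is replaced and 0 (second color) where the
        original image is kept; image and material are (H, W, 3), same size."""
        alpha = (effect_map.astype(np.float32) / 255.0)[..., None]  # (H, W, 1)
        first_combination = alpha * material           # replaced region <- material
        second_combination = (1.0 - alpha) * image     # kept region <- original image
        return (first_combination + second_combination).astype(np.uint8)

With a strictly binary mask this reduces to a hard cut; the optional smoothing described next turns the same computation into a gradual transition at the boundary.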
Optionally, to make the fusion result more realistic and to avoid noise or loss caused by inaccurate probability prediction, the segmentation effect map may be further processed before fusion so that its segmentation boundary is smoother and the color transition at the seam of the replaced region is more natural. That is, before the step of fusing the image with the preset element material according to the segmentation effect map to obtain the processed image, the image processing method may further include:
and carrying out Appearance Model (Appearance Model) algorithm and/or image morphological operation processing on the segmentation effect map to obtain a processed segmentation effect map.
Then, the step "fusing the image with preset element materials according to the segmentation effect map to obtain a processed image" may include: and according to the processed segmentation effect graph, fusing the image with preset element materials, such as transparency (Alpha) fusion, to obtain a processed image.
The appearance model algorithm is a feature point extraction method widely applied in the field of pattern recognition; it statistically models texture and then fuses the two statistical models of shape and texture into an appearance model. Image morphological operations may include noise reduction and/or connected-domain analysis. After the segmentation effect map is processed by the appearance model algorithm or by morphological operations, the segmentation boundary is smoother and the color transition at the seam of the replaced region is more natural.
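As one possible realization of the morphological clean-up (the patent names the operations but no library; OpenCV and the size thresholds below are assumptions):

    import cv2
    import numpy as np

    def clean_effect_map(mask, kernel_size=5, min_area=500):
        """Denoise a binary 0/255 mask and drop small connected components."""
        kernel = np.ones((kernel_size, kernel_size), np.uint8)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove speckle noise
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes
        # Connected-domain analysis: keep only sufficiently large regions.
        n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
        cleaned = np.zeros_like(mask)
        for i in range(1, n):                                   # label 0 is background
            if stats[i, cv2.CC_STAT_AREA] >= min_area:
                cleaned[labels == i] = 255
        return cleaned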
It should be noted that "Alpha fusion" in the embodiments of the present invention refers to fusion based on Alpha values, where Alpha is mainly used to specify the transparency level of a pixel. In general, 8 bits may be reserved for the alpha portion of each pixel, with the effective value of alpha in the range of [0, 255], with [0, 255] representing opacity [ 0%, 100% ]. Therefore, a pixel alpha of 0 indicates complete transparency, a pixel alpha of 128 indicates 50% transparency, and a pixel alpha of 255 indicates complete opacity.
As can be seen from the above, after receiving an image processing request, this embodiment can obtain, as indicated by the request, the semantic segmentation model corresponding to the element type to be replaced; predict, according to that model, the probability that each pixel in the image belongs to the element type to obtain an initial probability map; optimize the initial probability map based on a conditional random field; and fuse the image with a preset element material using the segmentation effect map obtained after optimization, thereby replacing the parts of the image of that element type with the preset material. Because the semantic segmentation model in this scheme is trained by a deep neural network, the model does not predict the probability that each pixel belongs to the element type based only on information such as color and position, so the probability of false detection and missed detection can be greatly reduced compared with existing schemes. In addition, the scheme optimizes the segmented initial probability map with a conditional random field, yielding a finer segmentation result, which greatly improves segmentation accuracy, helps avoid image distortion, and improves the image fusion effect.
Second Embodiment
The method described in the first embodiment is further illustrated by way of example.
In this embodiment, the image processing apparatus is specifically integrated in a server, and the element to be replaced is "sky" as an example.
As shown in FIG. 2a and FIG. 2d, a specific flow of the image processing method may be as follows:
201. The terminal sends an image processing request to the server, where the image processing request may indicate information such as the image that needs to be processed and the element type that needs to be replaced.
The image processing request may be triggered in various ways, for example, by clicking or sliding a trigger key on a web page or a client interface, or by inputting a preset instruction.
For example, taking triggering by clicking a trigger key, and referring to FIG. 2b, when a user wants to replace the sky part of picture A with another element, such as a "space" element, or to add "clouds", the user may upload picture A and click the trigger key "play once" to generate an image processing request and send it to the server, where the request indicates that the image to be processed is picture A and the element type to be replaced is "sky".
It should be noted that, in this embodiment, the element to be replaced is taken as "sky" for example, and it should be understood that the type of the element to be replaced may also be other types, such as "portrait", "eye", or "plant", and the like, and the implementation thereof is similar to this, and is not described herein again.
202. After receiving the image processing request, the server acquires a semantic segmentation model corresponding to the sky, wherein the semantic segmentation model is formed by training a deep neural network.
Optionally, the semantic segmentation model may be pre-stored in the image processing apparatus or another storage device and acquired by the image processing apparatus when needed, or it may be built by the image processing apparatus. For example, training data containing the element type may be acquired, such as a certain number of pictures containing sky; a preset semantic segmentation initial model is then trained with a deep neural network on that training data (i.e., the pictures containing sky) to obtain the semantic segmentation model corresponding to "sky".
It should be noted that the preset semantic segmentation initial model may be preset according to the requirements of practical applications, for example, a pre-trained semantic segmentation model for 20 categories of a general scene may be adopted.
203. The server imports the image into the semantic segmentation model to predict the probability that each pixel in the image belongs to the "sky".
For example, if the image processing request received in step 202 indicates that the image to be processed is picture A, picture A may be imported into the semantic segmentation model corresponding to "sky" as a three-channel color image to predict the probability that each pixel in picture A belongs to "sky"; step 204 is then executed.
204. And the server sets the color of the corresponding pixel on the preset mask according to the probability to obtain an initial probability map.
For example, it may be determined whether the probability is greater than a preset threshold: if so, the color of the corresponding pixel on the preset mask is set to a first color; if not, it is set to a second color. After the colors of all pixels of the image have been set on the preset mask, the colored mask is output as the initial probability map.
For example, if the probability that a certain pixel K belongs to the "sky" is greater than 80%, the color of the pixel K on the preset mask may be set to be a first color, otherwise, if the probability that a certain pixel K belongs to the "sky" is less than or equal to 80%, the color of the pixel K on the preset mask may be set to be a second color, and so on.
The first color and the second color may also be determined according to the requirements of practical applications, for example, the first color may be set to white, and the second color may be set to black, or the first color may also be set to pink, and the second color may also be set to green, and so on.
For example, if the first color is set to white and the second color to black, the initial probability map shown in FIG. 2c can be obtained after picture A is imported into the semantic segmentation model.
205. And the server optimizes the initial probability map based on the conditional random field to obtain a segmentation effect map.
For example, the server may map pixels in the initial probability map to nodes in the conditional random field, determine similarity of edge constraints between the nodes, and adjust a segmentation result of the pixels in the initial probability map according to the similarity of the edge constraints to obtain a segmentation effect map.
Because the conditional random field is an undirected graph model, each pixel in the image can correspond to a node in it, and prior information including parameters such as color, texture, and position is preset so that nodes with similar edge constraints receive similar segmentation results. The segmentation results of pixels in the initial probability map can therefore be adjusted according to the similarity of the edge constraints, making the sky segmentation finer; for example, referring to FIG. 2c, optimizing the initial probability map based on the conditional random field yields a segmentation effect map with a finer segmentation result.
206. The server applies an appearance model algorithm and/or image morphological operations to the segmentation effect map to obtain a processed segmentation effect map, and then executes step 207.
The image morphology operation processing may include processing such as noise reduction processing and/or connected component analysis. By the segmentation effect graph after processing such as an appearance model algorithm or image morphology operation, the segmentation boundary can be smoother, and the color transition at the joint of the replacement region can be more natural.
It should be noted that step 206 is optional, and if step 206 is not executed, step 207 may be directly executed after step 205 is executed, and in step 208, the segmentation effect map, the image, and the element material are fused by a fusion algorithm to obtain a processed image.
207. The server acquires a replaceable element material according to a preset strategy.
The preset policy may be set according to requirements of actual applications, for example, a material selection instruction triggered by a user may be received, and then, corresponding materials are obtained from a material library according to the material selection instruction, and the corresponding materials are used as replaceable element materials.
Optionally, in order to increase the diversity of the element material, the element material may also be obtained by a random interception method, for example, the server may obtain a candidate image, then perform random interception on the candidate image, and use the intercepted image as a replaceable element material, and so on.
The candidate image may be obtained over the network, or may be uploaded by the user, or may even be directly captured on a terminal screen or a web page by the user and then provided to the image processing apparatus, and so on, which are not described herein again.
208. The server fuses the processed segmentation effect map, the image, and the element material through a fusion algorithm to obtain the processed image.
For example, suppose the first color is white and the second color is black. The server may combine the white portion of the segmentation effect map with the acquired element material through a fusion algorithm to obtain a first combination map, combine the black portion of the segmentation effect map with picture A through a fusion algorithm to obtain a second combination map, and then synthesize the first combination map and the second combination map to obtain the processed image.
Because the probability that pixels of the white portion belong to the "sky" is high, those pixels may be replaced with the acquired element material through the fusion algorithm; because the probability that pixels of the black portion belong to the "sky" is low, the black portion may be combined with the original picture A through the fusion algorithm, that is, its pixels are retained. After the first combination map and the second combination map are synthesized, the "sky" in the original picture A is replaced with the corresponding element material, for example with "night sky in Christmas"; see FIG. 2d, and details are not repeated here.
It should be noted that, optionally, as shown in FIG. 2d, in order to improve the fusion effect or implement other special effects, picture A may be preprocessed before the black portion (i.e., the second color portion) is combined with it, for example by color transformation, contrast adjustment, brightness adjustment, saturation adjustment, and/or adding other special-effect masks; the black portion is then combined with the preprocessed picture A through the fusion algorithm to obtain the second combination map, which is not described here again.
209. The server sends the processed image to the terminal.
For example, the processed image may be displayed on the interface of the corresponding client. Optionally, the server may further provide a corresponding saving path and/or sharing interface so that the user can save and/or share the image; for example, the processed image may be saved to the cloud or locally (i.e., on the terminal), shared to a microblog or friend circle, and/or inserted into the chat conversation interface of an instant chat tool, and so on, which are not described here again.
As can be seen from the above, after an image processing request is received, the semantic segmentation model corresponding to "sky" can be obtained as indicated by the request, the probability that each pixel in the image belongs to "sky" can be predicted according to that model to obtain an initial probability map, the initial probability map can then be optimized based on a conditional random field, and the image can be fused with a preset element material using the segmentation effect map obtained after optimization, thereby replacing the "sky" part of the image with the preset element material. Because the semantic segmentation model in this scheme is trained by a deep neural network, the model does not predict the probability that each pixel belongs to the element type based only on information such as color and position, so the probability of false detection and missed detection can be greatly reduced compared with existing schemes. In addition, the scheme optimizes the segmented initial probability map with a conditional random field, yielding a finer segmentation result, which greatly improves segmentation accuracy, helps avoid image distortion, and improves the image fusion effect.
Third Embodiment
In order to better implement the above method, an embodiment of the present invention further provides an image processing apparatus, which may be specifically integrated in a server or the like.
As shown in fig. 3a, the image processing apparatus includes a receiving unit 301, an obtaining unit 302, a predicting unit 303, an optimizing unit 304, and a fusing unit 305, as follows:
(1) a receiving unit 301;
a receiving unit 301 configured to receive an image processing request indicating information such as an image that needs to be processed and an element type that needs to be replaced.
(2) An acquisition unit 302;
an obtaining unit 302, configured to obtain a semantic segmentation model corresponding to the element type, where the semantic segmentation model is trained by a deep neural network.
For example, if the image processing request received by the receiving unit 301 indicates that the element type requiring replacement is "sky", the obtaining unit 302 may obtain the semantic segmentation model corresponding to "sky"; if it indicates that the element type requiring replacement is "portrait", the obtaining unit 302 may obtain the semantic segmentation model corresponding to "portrait"; and so on, which are not listed here one by one.
Optionally, the semantic segmentation model may be pre-stored in the image processing apparatus or other storage devices, and acquired by the image processing apparatus when needed, or the semantic segmentation model may be built by the image processing apparatus, that is, as shown in fig. 3b, the image processing apparatus may further include a model building unit 306, as follows:
the model establishing unit 306 may be configured to establish a semantic segmentation model corresponding to the element type, for example, specifically, the following model is established:
and training a preset semantic segmentation initial model by using a deep neural network according to the training data to obtain a semantic segmentation model corresponding to the element type.
The preset semantic segmentation initial model may be preset according to the requirements of practical applications, for example, a pre-trained semantic segmentation model for 20 categories of a general scene may be adopted.
(3) A prediction unit 303;
the predicting unit 303 is configured to predict, according to the semantic segmentation model, a probability that each pixel in the image belongs to the element type, so as to obtain an initial probability map.
For example, the prediction unit 303 may include a prediction subunit and a setting subunit, as follows:
and a prediction subunit, configured to import the image into the semantic segmentation model to predict a probability that each pixel in the image belongs to the element type.
For example, if the element type is "sky", then at this time, the prediction subunit may introduce the image into a semantic segmentation model corresponding to "sky" to predict the probability that each pixel in the image belongs to "sky".
And the setting subunit is used for setting the color of the corresponding pixel on the preset mask according to the probability to obtain an initial probability map.
For example, the setting subunit may be specifically configured to determine whether the probability is greater than a preset threshold, and if so, set a color of the corresponding pixel on a preset mask as a first color; if not, setting the color of the corresponding pixel on the preset mask as a second color; and after the colors of all pixels in the image on the preset mask are determined to be set, outputting the preset mask with the set colors to obtain an initial probability map.
The preset threshold may be set according to the requirement of the actual application, and the first color and the second color may also be determined according to the requirement of the actual application, for example, the first color may be set to white, the second color may be set to black, and so on.
(4) An optimization unit 304;
and the optimizing unit 304 is configured to optimize the initial probability map based on the conditional random field to obtain a segmentation effect map.
For example, the optimization unit 304 may be specifically configured to map pixels in the initial probability map to nodes in the conditional random field, determine similarity of edge constraints between the nodes, and adjust a segmentation result of the pixels in the initial probability map according to the similarity of the edge constraints to obtain a segmentation effect map.
(5) A fusion unit 305;
and a fusion unit 305, configured to fuse the image with a preset element material according to the segmentation effect map, so as to obtain a processed image.
For example, the fusion unit 305 may include a material acquisition subunit, a first fusion subunit, a second fusion subunit, and a composition subunit, as follows:
the material obtaining subunit is configured to obtain a replaceable element material according to a preset policy.
The preset policy may be set according to requirements of actual applications, for example, the material obtaining subunit may be specifically configured to receive a material selection instruction triggered by a user, obtain a corresponding material from a material library according to the material selection instruction, and use the material as a replaceable element material.
Optionally, in order to increase the diversity of the element material, the element material may also be obtained in a random interception manner, that is:
the material obtaining subunit is specifically configured to obtain a candidate image, randomly intercept the candidate image, and use the intercepted image as a replaceable element material.
The candidate image may be obtained over the network, or may be uploaded by the user, or may even be directly captured on a terminal screen or a web page by the user and then provided to the image processing apparatus, and so on, which are not described herein again.
The first blending subunit may be configured to combine, by using a blending algorithm, the first color part in the segmentation effect map with the acquired element material to obtain a first combined map.
The second fusion subunit may be configured to combine the second color part in the segmentation effect map with the image through a fusion algorithm to obtain a second combination map.
The combining subunit may be configured to combine the first combination map and the second combination map to obtain a processed image.
Optionally, to make the fusion result more realistic and to avoid noise or loss caused by inaccurate probability prediction, the segmentation effect map may be further processed before fusion so that its segmentation boundary is smoother and the color transition at the seam of the replaced region is more natural; that is, as shown in FIG. 3b, the image processing apparatus may further include a preprocessing unit 307, as follows:
the preprocessing unit 307 may be configured to perform an appearance model algorithm and/or an image morphology operation on the segmentation effect map to obtain a processed segmentation effect map.
Then, the fusion unit 305 may be specifically configured to fuse the image with the preset element material according to the processed segmentation effect map to obtain a processed image.
The image morphological operation processing may include processing such as noise reduction processing and/or connected domain analysis, which is not described herein again.
In a specific implementation, the above units may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and the specific implementation of the above units may refer to the foregoing method embodiments, which are not described herein again.
As can be seen from the above, in this embodiment, after the receiving unit 301 receives an image processing request, the obtaining unit 302 obtains the semantic segmentation model corresponding to the element type to be replaced as indicated by the request; the prediction unit 303 predicts, according to that model, the probability that each pixel in the image belongs to the element type to obtain an initial probability map; the optimization unit 304 then optimizes the initial probability map based on a conditional random field; and the fusion unit 305 fuses the image with a preset element material using the segmentation effect map obtained after optimization, thereby replacing the parts of the image of that element type with the preset material. Because the semantic segmentation model in this scheme is trained by a deep neural network, the model does not predict the probability that each pixel belongs to the element type based only on information such as color and position, so the probability of false detection and missed detection can be greatly reduced compared with existing schemes. In addition, the scheme optimizes the segmented initial probability map with a conditional random field, yielding a finer segmentation result, which greatly improves segmentation accuracy, helps avoid image distortion, and improves the image fusion effect.
Fourth Embodiment
An embodiment of the present invention further provides a server, as shown in fig. 4, which shows a schematic structural diagram of the server according to the embodiment of the present invention, specifically:
the server may include components such as a processor 401 of one or more processing cores, memory 402 of one or more computer-readable storage media, a power supply 403, and an input unit 404. Those skilled in the art will appreciate that the server architecture shown in FIG. 4 is not meant to be limiting, and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
the processor 401 is a control center of the server, connects various parts of the entire server using various interfaces and lines, and performs various functions of the server and processes data by running or executing software programs and/or modules stored in the memory 402 and calling data stored in the memory 402, thereby performing overall monitoring of the server. Optionally, processor 401 may include one or more processing cores; preferably, the processor 401 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 401.
The memory 402 may be used to store software programs and modules, and the processor 401 executes various functional applications and data processing by operating the software programs and modules stored in the memory 402. The memory 402 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data created according to the use of the server, and the like. Further, the memory 402 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 401 access to the memory 402.
The server further includes a power supply 403 for supplying power to each component, and preferably, the power supply 403 may be logically connected to the processor 401 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The power supply 403 may also include any component of one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
The server may also include an input unit 404, the input unit 404 being operable to receive input numeric or character information and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
Although not shown, the server may further include a display unit and the like, which will not be described in detail herein. Specifically, in this embodiment, the processor 401 in the server loads the executable file corresponding to the process of one or more application programs into the memory 402 according to the following instructions, and the processor 401 runs the application program stored in the memory 402, thereby implementing various functions as follows:
the method comprises the steps of receiving an image processing request, wherein the image processing request indicates an image to be processed and an element type to be replaced, obtaining a semantic segmentation model corresponding to the element type, the semantic segmentation model is formed by training a deep neural network, predicting the probability that each pixel in the image belongs to the element type according to the semantic segmentation model to obtain an initial probability map, optimizing the initial probability map based on a conditional random field to obtain a segmentation effect map, and fusing the image and preset element materials according to the segmentation effect map to obtain a processed image.
For example, a replaceable element material may be obtained according to a preset policy, then a first color portion in the segmentation effect map is combined with the obtained element material through a fusion algorithm to obtain a first combination map, a second color portion in the segmentation effect map is combined with the image through the fusion algorithm to obtain a second combination map, and then the first combination map and the second combination map are combined to obtain a processed image.
Optionally, the semantic segmentation model may be pre-stored in the image processing apparatus or other storage devices, and acquired by the image processing apparatus when needed, or the semantic segmentation model may be built by the image processing apparatus, that is, the processor 401 may further run an application program stored in the memory 402, so as to implement the following functions:
and training a preset semantic segmentation initial model by using a deep neural network according to the training data to obtain a semantic segmentation model corresponding to the element type.
The preset semantic segmentation initial model may be preset according to the requirements of practical applications, for example, a pre-trained semantic segmentation model for 20 categories of a general scene may be adopted.
Optionally, to make the fusion result more realistic and to avoid noise or loss caused by inaccurate probability prediction, the segmentation effect map may be further processed before fusion so that its segmentation boundary is smoother and the color transition at the seam of the replaced region is more natural; that is, the processor 401 may also run the application programs stored in the memory 402 to implement the following functions:
the segmentation effect graph is subjected to an appearance model algorithm and/or image morphological operation processing to obtain a processed segmentation effect graph, so that during subsequent fusion, the image and preset element materials can be fused according to the processed segmentation effect graph to obtain a processed image, which is detailed in the foregoing embodiment and is not repeated herein.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
As can be seen from the above, after receiving an image processing request, the server in this embodiment can obtain, as indicated by the request, the semantic segmentation model corresponding to the element type to be replaced; predict, according to that model, the probability that each pixel in the image belongs to the element type to obtain an initial probability map; optimize the initial probability map based on a conditional random field; and fuse the image with a preset element material using the segmentation effect map obtained after optimization, thereby replacing the parts of the image of that element type with the preset material. Because the semantic segmentation model in this scheme is trained by a deep neural network, the model does not predict the probability that each pixel belongs to the element type based only on information such as color and position, so the probability of false detection and missed detection can be greatly reduced compared with existing schemes. In addition, the scheme optimizes the segmented initial probability map with a conditional random field, yielding a finer segmentation result, which greatly improves segmentation accuracy, helps avoid image distortion, and improves the image fusion effect.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable storage medium, and the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
The foregoing has described in detail an image processing method and apparatus according to the embodiments of the present invention. The principles and implementations of the present invention are explained herein through specific examples, which are intended only to help understand the method and its core idea. Meanwhile, those skilled in the art may, following the idea of the present invention, make changes to the specific implementations and the application scope. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (16)

1. An image processing method, comprising:
receiving an image processing request indicating an image that needs to be processed and an element type that needs to be replaced;
obtaining a semantic segmentation model corresponding to the element type, wherein the semantic segmentation model is formed by training a deep neural network;
predicting the probability of each pixel in the image belonging to the element type according to the semantic segmentation model to obtain an initial probability map;
optimizing the initial probability map based on the conditional random field to obtain a segmentation effect map;
fusing the image with preset element materials according to the segmentation effect map to obtain a processed image;
the step of fusing the image with preset element materials according to the segmentation effect map to obtain a processed image comprises the following steps:
acquiring replaceable element materials according to a preset strategy;
combining a first color part in the segmentation effect map with the obtained element materials through a fusion algorithm to obtain a first combination map;
combining a second color part in the segmentation effect map with the image through a fusion algorithm to obtain a second combination map;
and synthesizing the first combination map and the second combination map to obtain a processed image.
2. The method according to claim 1, wherein the predicting the probability that each pixel in the image belongs to the element type according to the semantic segmentation model to obtain an initial probability map comprises:
importing the image into the semantic segmentation model to predict the probability that each pixel in the image belongs to the element type;
and setting the color of the corresponding pixel on a preset mask according to the probability to obtain an initial probability map.
3. The method of claim 2, wherein the setting the color of the corresponding pixel on the preset mask according to the probability to obtain an initial probability map comprises:
determining whether the probability is greater than a preset threshold;
if so, setting the color of the corresponding pixel on the preset mask as a first color;
if not, setting the color of the corresponding pixel on the preset mask as a second color;
and after the colors of all pixels in the image on the preset mask are determined to be set, outputting the preset mask with the set colors to obtain an initial probability map.
4. The method of claim 1, wherein the optimizing the initial probability map based on the conditional random field to obtain a segmentation effect map comprises:
mapping pixels in the initial probability map to nodes in a conditional random field;
determining similarity of edge constraints between nodes;
and adjusting the segmentation result of the pixels in the initial probability map according to the similarity of the edge constraint to obtain a segmentation effect map.
5. The method of claim 1, wherein the obtaining replaceable element material according to the preset strategy comprises:
acquiring a candidate image, randomly intercepting the candidate image, and taking the intercepted image as a replaceable element material; or,
and receiving a material selection instruction triggered by a user, and acquiring corresponding materials from a material library according to the material selection instruction to serve as replaceable element materials.
6. The method according to any one of claims 1 to 4, wherein before the fusing the image with preset element materials according to the segmentation effect map to obtain the processed image, the method further comprises:
performing appearance model algorithm and/or image morphological operation processing on the segmentation effect map to obtain a processed segmentation effect map;
the fusing the image with preset element materials according to the segmentation effect map to obtain a processed image comprises the following steps: and according to the processed segmentation effect map, fusing the image with preset element materials to obtain a processed image.
7. The method according to any one of claims 1 to 4, wherein before the obtaining the semantic segmentation model corresponding to the element type, the method further comprises:
acquiring training data containing the element types;
and training a preset semantic segmentation initial model by using a deep neural network according to the training data to obtain a semantic segmentation model corresponding to the element type.
8. An image processing apparatus characterized by comprising:
a receiving unit configured to receive an image processing request indicating an image that needs to be processed and an element type that needs to be replaced;
the acquisition unit is used for acquiring a semantic segmentation model corresponding to the element type, and the semantic segmentation model is formed by training a deep neural network;
the prediction unit is used for predicting the probability that each pixel in the image belongs to the element type according to the semantic segmentation model to obtain an initial probability map;
the optimization unit is used for optimizing the initial probability map based on the conditional random field to obtain a segmentation effect map;
the fusion unit is used for fusing the image and preset element materials according to the segmentation effect map to obtain a processed image;
the fusion unit comprises a material acquisition subunit, a first fusion subunit, a second fusion subunit and a synthesis subunit;
the material acquisition subunit is used for acquiring replaceable element materials according to a preset strategy;
the first fusion subunit is configured to combine, by using a fusion algorithm, the first color part in the segmentation effect map with the acquired element material to obtain a first combination map;
the second fusion subunit is configured to combine, by using a fusion algorithm, the second color part in the segmentation effect map with the image to obtain a second combination map;
and the synthesis subunit is used for synthesizing the first combination map and the second combination map to obtain a processed image.
9. The apparatus of claim 8, wherein the prediction unit comprises a prediction subunit and a setting subunit;
the prediction subunit is configured to import the image into the semantic segmentation model to predict a probability that each pixel in the image belongs to the element type;
and the setting subunit is used for setting the color of the corresponding pixel on the preset mask according to the probability to obtain an initial probability map.
10. The apparatus according to claim 9, wherein the setting subunit is specifically configured to:
determining whether the probability is greater than a preset threshold;
if so, setting the color of the corresponding pixel on the preset mask as a first color;
if not, setting the color of the corresponding pixel on the preset mask as a second color;
and after the colors of all pixels in the image on the preset mask are determined to be set, outputting the preset mask with the set colors to obtain an initial probability map.
11. The apparatus of claim 8,
the optimization unit is specifically configured to map pixels in the initial probability map to nodes in the conditional random field, determine similarity of edge constraints between the nodes, and adjust a segmentation result of the pixels in the initial probability map according to the similarity of the edge constraints to obtain a segmentation effect map.
12. The apparatus of claim 8,
the material acquisition subunit is specifically configured to acquire a candidate image, randomly intercept the candidate image, and use the intercepted image as a replaceable element material; or,
the material obtaining subunit is specifically configured to receive a material selection instruction triggered by a user, and obtain a corresponding material from a material library according to the material selection instruction, where the material is used as a replaceable element material.
13. The apparatus of any one of claims 8 to 11, further comprising a pre-processing unit;
the preprocessing unit is used for performing appearance model algorithm and/or image morphological operation processing on the segmentation effect map to obtain a processed segmentation effect map;
and the fusion unit is specifically used for fusing the image and the preset element material according to the processed segmentation effect map to obtain a processed image.
14. The apparatus according to any one of claims 8 to 11, further comprising a model building unit;
the model establishing unit is used for acquiring training data containing the element types, and training a preset semantic segmentation initial model by using a deep neural network according to the training data to obtain a semantic segmentation model corresponding to the element types.
15. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the steps of the image processing method according to any one of claims 1 to 7 are implemented when the program is executed by the processor.
16. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when being executed by a processor, carries out the steps of the image processing method according to any one of claims 1 to 7.
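As an illustration of the mask-coloring step recited in claims 3 and 10, the following sketch thresholds the per-pixel probabilities into a two-color initial probability map; the threshold value and the white/black color choice are assumptions made for the example:

```python
import numpy as np

def build_initial_probability_map(prob, threshold=0.5,
                                  first_color=255, second_color=0):
    # prob: (H, W) array of probabilities that each pixel belongs to the element type.
    mask = np.where(prob > threshold, first_color, second_color)
    return mask.astype(np.uint8)
```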
CN201710199165.XA 2017-03-29 2017-03-29 Image processing method and device Active CN107025457B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201710199165.XA CN107025457B (en) 2017-03-29 2017-03-29 Image processing method and device
PCT/CN2018/080446 WO2018177237A1 (en) 2017-03-29 2018-03-26 Image processing method and device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710199165.XA CN107025457B (en) 2017-03-29 2017-03-29 Image processing method and device

Publications (2)

Publication Number Publication Date
CN107025457A (en) 2017-08-08
CN107025457B (en) 2022-03-08

Family

ID=59525827

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710199165.XA Active CN107025457B (en) 2017-03-29 2017-03-29 Image processing method and device

Country Status (2)

Country Link
CN (1) CN107025457B (en)
WO (1) WO2018177237A1 (en)

Families Citing this family (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107025457B (en) * 2017-03-29 2022-03-08 腾讯科技(深圳)有限公司 Image processing method and device
CN107705334B (en) * 2017-08-25 2020-08-25 北京图森智途科技有限公司 Camera abnormity detection method and device
CN107507201A (en) * 2017-09-22 2017-12-22 深圳天琴医疗科技有限公司 A kind of medical image cutting method and device
CN107993191B (en) * 2017-11-30 2023-03-21 腾讯科技(深圳)有限公司 Image processing method and device
CN108009506A (en) * 2017-12-07 2018-05-08 平安科技(深圳)有限公司 Intrusion detection method, application server and computer-readable recording medium
CN111052028B (en) * 2018-01-23 2022-04-05 深圳市大疆创新科技有限公司 System and method for automatic surface and sky detection
CN108305260B (en) * 2018-03-02 2022-04-12 苏州大学 Method, device and equipment for detecting angular points in image
CN108764143B (en) * 2018-05-29 2020-11-24 北京字节跳动网络技术有限公司 Image processing method, image processing device, computer equipment and storage medium
CN110610495B (en) * 2018-06-15 2022-06-07 北京京东尚科信息技术有限公司 Image processing method and system and electronic equipment
CN110910334B (en) * 2018-09-15 2023-03-21 北京市商汤科技开发有限公司 Instance segmentation method, image processing device and computer readable storage medium
CN110163862B (en) * 2018-10-22 2023-08-25 腾讯科技(深圳)有限公司 Image semantic segmentation method and device and computer equipment
CN109598678B (en) * 2018-12-25 2023-12-12 维沃移动通信有限公司 Image processing method and device and terminal equipment
CN109741347B (en) * 2018-12-30 2021-03-16 北京工业大学 Iterative learning image segmentation method based on convolutional neural network
CN111489359B (en) * 2019-01-25 2023-05-30 银河水滴科技(北京)有限公司 Image segmentation method and device
CN111832587B (en) * 2019-04-18 2023-11-14 北京四维图新科技股份有限公司 Image semantic annotation method, device and storage medium
CN110310222A (en) * 2019-06-20 2019-10-08 北京奇艺世纪科技有限公司 A kind of image Style Transfer method, apparatus, electronic equipment and storage medium
CN110544218B (en) * 2019-09-03 2024-02-13 腾讯科技(深圳)有限公司 Image processing method, device and storage medium
CN110992371B (en) * 2019-11-20 2023-10-27 北京奇艺世纪科技有限公司 Portrait segmentation method and device based on priori information and electronic equipment
CN110930296B (en) * 2019-11-20 2023-08-08 Oppo广东移动通信有限公司 Image processing method, device, equipment and storage medium
CN110956221A (en) * 2019-12-17 2020-04-03 北京化工大学 Small sample polarization synthetic aperture radar image classification method based on deep recursive network
CN111210434A (en) * 2019-12-19 2020-05-29 上海艾麒信息科技有限公司 Image replacement method and system based on sky identification
CN111354059B (en) * 2020-02-26 2023-04-28 北京三快在线科技有限公司 Image processing method and device
CN111461996B (en) * 2020-03-06 2023-08-29 合肥师范学院 Quick intelligent color matching method for image
CN111445486B (en) * 2020-03-25 2023-10-03 腾讯科技(深圳)有限公司 Image processing method, device, equipment and computer readable storage medium
CN111507946A (en) * 2020-04-02 2020-08-07 浙江工业大学之江学院 Element data driven flower type pattern rapid generation method based on similarity sample
CN113554658B (en) * 2020-04-23 2024-06-14 北京达佳互联信息技术有限公司 Image processing method, device, electronic equipment and storage medium
CN111598902B (en) * 2020-05-20 2023-05-30 抖音视界有限公司 Image segmentation method, device, electronic equipment and computer readable medium
CN111832745B (en) * 2020-06-12 2023-08-01 北京百度网讯科技有限公司 Data augmentation method and device and electronic equipment
CN111915636B (en) * 2020-07-03 2023-10-24 闽江学院 Method and device for positioning and dividing waste targets
CN111862045B (en) * 2020-07-21 2021-09-07 上海杏脉信息科技有限公司 Method and device for generating blood vessel model
CN112037142B (en) * 2020-08-24 2024-02-13 腾讯科技(深圳)有限公司 Image denoising method, device, computer and readable storage medium
CN112508964B (en) * 2020-11-30 2024-02-20 北京百度网讯科技有限公司 Image segmentation method, device, electronic equipment and storage medium
CN112800499B (en) * 2020-12-02 2023-12-26 杭州群核信息技术有限公司 Diatom ooze pattern high-order design method based on image processing and real-time material generation capability
CN112633142A (en) * 2020-12-21 2021-04-09 广东电网有限责任公司电力科学研究院 Power transmission line violation building identification method and related device
CN112866573B (en) * 2021-01-13 2022-11-04 京东方科技集团股份有限公司 Display, fusion display system and image processing method
CN112819741B (en) * 2021-02-03 2024-03-08 四川大学 Image fusion method and device, electronic equipment and storage medium
CN112861885B (en) * 2021-03-25 2023-09-22 北京百度网讯科技有限公司 Image recognition method, device, electronic equipment and storage medium
CN113129319B (en) * 2021-04-29 2023-06-23 北京市商汤科技开发有限公司 Image processing method, device, computer equipment and storage medium
CN113657401B (en) * 2021-08-24 2024-02-06 凌云光技术股份有限公司 Probability map visualization method and device for defect detection
CN117437338A (en) * 2023-10-08 2024-01-23 书行科技(北京)有限公司 Special effect generation method, device and computer readable storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8014590B2 (en) * 2005-12-07 2011-09-06 Drvision Technologies Llc Method of directed pattern enhancement for flexible recognition
CN102486827B (en) * 2010-12-03 2014-11-05 中兴通讯股份有限公司 Extraction method of foreground object in complex background environment and apparatus thereof
CN103116754B (en) * 2013-01-24 2016-05-18 浙江大学 Batch images dividing method and system based on model of cognition
CN107025457B (en) * 2017-03-29 2022-03-08 腾讯科技(深圳)有限公司 Image processing method and device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101777180A (en) * 2009-12-23 2010-07-14 中国科学院自动化研究所 Complex background real-time alternating method based on background modeling and energy minimization
CN104133956A (en) * 2014-07-25 2014-11-05 小米科技有限责任公司 Method and device for processing pictures
EP2996085A1 (en) * 2014-09-09 2016-03-16 icoMetrix NV Method and system for analyzing image data
CN104463843A (en) * 2014-10-31 2015-03-25 南京邮电大学 Interactive image segmentation method of android system
CN104636761A (en) * 2015-03-12 2015-05-20 华东理工大学 Image semantic annotation method based on hierarchical segmentation
CN105574513A (en) * 2015-12-22 2016-05-11 北京旷视科技有限公司 Character detection method and device

Also Published As

Publication number Publication date
CN107025457A (en) 2017-08-08
WO2018177237A1 (en) 2018-10-04

Similar Documents

Publication Publication Date Title
CN107025457B (en) Image processing method and device
KR102469295B1 (en) Remove video background using depth
CN110555896B (en) Image generation method and device and storage medium
CN111598818A (en) Face fusion model training method and device and electronic equipment
CN111832745A (en) Data augmentation method and device and electronic equipment
CN109710255B (en) Special effect processing method, special effect processing device, electronic device and storage medium
TW201909028A (en) Image processing method, non-transitory computer readable medium and image processing system
CN111080746A (en) Image processing method, image processing device, electronic equipment and storage medium
CN112950640A (en) Video portrait segmentation method and device, electronic equipment and storage medium
CN114973349A (en) Face image processing method and training method of face image processing model
CN110910400A (en) Image processing method, image processing device, storage medium and electronic equipment
WO2024088111A1 (en) Image processing method and apparatus, device, medium, and program product
WO2021106855A1 (en) Data generation method, data generation device, model generation method, model generation device, and program
CN117557708A (en) Image generation method, device, storage medium and computer equipment
CN113610720A (en) Video denoising method and device, computer readable medium and electronic device
CN115170390B (en) File stylization method, device, equipment and storage medium
CN110163049B (en) Face attribute prediction method, device and storage medium
CN116229188A (en) Image processing display method, classification model generation method and equipment thereof
US20230131418A1 (en) Two-dimensional (2d) feature database generation
CN114005066B (en) HDR-based video frame image processing method and device, computer equipment and medium
CN115272057A (en) Training of cartoon sketch image reconstruction network and reconstruction method and equipment thereof
CN114764821A (en) Moving object detection method, moving object detection device, electronic apparatus, and storage medium
CN113706399A (en) Face image beautifying method and device, electronic equipment and storage medium
CN113408452A (en) Expression redirection training method and device, electronic equipment and readable storage medium
KR102554442B1 (en) Face synthesis method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant