CN111915702A - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN111915702A
CN111915702A
Authority
CN
China
Prior art keywords
texture
determining
filling area
shape
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910389984.XA
Other languages
Chinese (zh)
Inventor
陈鹏
陈培
高暐玥
刘宸寰
刘奎龙
唐浩超
向为
杨昌源
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201910389984.XA
Publication of CN111915702A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/40 Filling a planar surface by adding surface attributes, e.g. colour or texture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to an image processing method and apparatus. The method comprises the following steps: determining the shape of the texture filling area; determining a target texture sample image matched with the shape of the texture filling area in a texture material library at least according to the shape of the texture filling area, wherein the texture material library comprises a plurality of texture sample images consistent with semantic labels of the texture filling area; and according to the target texture sample map, carrying out texture synthesis in the texture filling area. The method and the device can accurately determine the target texture sample map by comprehensively considering the shape of the texture filling area, thereby effectively improving the accuracy of performing texture filling on the texture filling area based on the target texture sample map.

Description

Image processing method and device
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method and apparatus.
Background
Texture synthesis is an important application for computer aided painters to draw. When a painter draws a piece of work, the painter often draws a simple outline or a general layout and then refines the drawing process to a local drawing process. Since the local rendering process is mostly a repetitive task, such as rendering of grass, it generally has a similar texture. Therefore, it is very important to automatically fill the texture for the region with the same semantic tag by using the texture synthesis technology.
Currently, a commonly used texture synthesis method includes texture synthesis based on a texture sample map, and texture filling is performed on a texture filling area by using the texture sample map with the same semantic tag as that of the texture filling area. However, images with the same semantic tags do not necessarily have the same texture, for example, the semantic tags are rock, but far view rock and near view rock do not have the same texture. Therefore, the existing texture synthesis algorithm has low accuracy of texture filling on the texture filling area.
Disclosure of Invention
In view of this, the present disclosure provides an image processing method and apparatus, which can effectively improve the accuracy of texture filling on a texture filling area.
According to a first aspect of the present disclosure, there is provided an image processing method including: determining the shape of the texture filling area; determining a target texture sample image matched with the shape of the texture filling area in a texture material library at least according to the shape of the texture filling area, wherein the texture material library comprises a plurality of texture sample images consistent with semantic labels of the texture filling area; and according to the target texture sample map, carrying out texture synthesis in the texture filling area.
In one possible implementation, determining the shape of the texture filling area includes: determining the contour edge of the texture filling area through an edge detection algorithm; and determining the shape of the texture filling area through a shape context algorithm according to the contour edge of the texture filling area.
In a possible implementation manner, determining, in a texture material library, a target texture sample map matching the shape of the texture filling area according to at least the shape of the texture filling area includes: determining the contour edge of each texture sample map in the texture material library through an edge detection algorithm; determining the shape of each texture sample map through a shape context algorithm according to the contour edge of each texture sample map; determining the shape similarity between the texture filling area and each texture sample map according to the shape of the texture filling area and the shape of each texture sample map; and determining the target texture sample map according to the shape similarity between the texture filling area and each texture sample map.
In one possible implementation manner, determining the target texture sample according to the shape similarity between the texture filling area and each texture sample includes: and determining the texture sample map with the highest shape similarity with the texture filling area as the target texture sample map.
In one possible implementation manner, determining the target texture sample according to the shape similarity between the texture filling area and each texture sample includes: determining a plurality of texture sample maps with shape similarity greater than or equal to a threshold value with the texture filling area as a candidate texture sample map set; determining an area difference between the texture filling area and each candidate texture sample in the candidate texture sample set; and determining the candidate texture sample map with the minimum area difference value with the texture filling area as the target texture sample map.
In one possible implementation, determining the shape by a shape context algorithm based on the contour edge includes: carrying out uniform sampling on the contour edge, and determining a plurality of feature points; for any feature point, determining a feature vector corresponding to the feature point according to the relative position between the feature point and other feature points; and determining a feature vector matrix formed by the feature vectors corresponding to each feature point as the shape.
In one possible implementation manner, performing texture synthesis in the texture filling area according to the target texture sample map includes: and according to the target texture sample map, performing texture synthesis in the texture filling area through a block-based texture synthesis algorithm.
In one possible implementation manner, performing texture synthesis in the texture filling area according to the target texture sample map includes: and according to the target texture sample map, performing texture synthesis in the texture filling area through a point-based texture synthesis algorithm.
According to a second aspect of the present disclosure, there is provided an image processing apparatus comprising: a first determining module for determining the shape of the texture filling area; a second determining module, configured to determine, in a texture material library, a target texture sample image matched with the shape of the texture filling area according to at least the shape of the texture filling area, where the texture material library includes a plurality of texture sample images consistent with semantic tags of the texture filling area; and the texture synthesis module is used for carrying out texture synthesis in the texture filling area according to the target texture sample map.
In one possible implementation manner, the first determining module includes: the edge detection submodule is used for determining the contour edge of the texture filling area through an edge detection algorithm; and the first determining submodule is used for determining the shape of the texture filling area through a shape context algorithm according to the contour edge of the texture filling area.
In one possible implementation manner, the second determining module includes: the edge detection submodule is used for determining the contour edge of each texture sample map in the texture material library through an edge detection algorithm; the second determining submodule is used for determining the shape of each texture sample map through a shape context algorithm according to the contour edge of each texture sample map; a third determining submodule, configured to determine, according to the shape of the texture filling area and the shape of each texture sample, a shape similarity between the texture filling area and each texture sample; and the fourth determining submodule is used for determining the target texture sample map according to the shape similarity between the texture filling area and each texture sample map.
In a possible implementation manner, the fourth determining submodule is specifically configured to: and determining the texture sample map with the highest shape similarity with the texture filling area as the target texture sample map.
In one possible implementation, the fourth determining sub-module includes: a first determining unit, configured to determine, as a candidate texture sample set, a plurality of texture sample maps having a shape similarity with the texture filling area that is greater than or equal to a threshold; a second determining unit, configured to determine an area difference between the texture filling area and each candidate texture sample in the candidate texture sample set; and a third determining unit, configured to determine, as the target texture sample, a candidate texture sample with a smallest area difference with the texture filling region.
In one possible implementation, the edge detection sub-module includes: the fourth determining unit is used for performing uniform sampling on the contour edge and determining a plurality of feature points; a fifth determining unit, configured to determine, for any feature point, a feature vector corresponding to the feature point according to a relative position between the feature point and another feature point; and a sixth determining unit, configured to determine, as the shape, a feature vector matrix formed by the feature vectors corresponding to each feature point.
In one possible implementation, the texture synthesis module is specifically configured to: and according to the target texture sample map, performing texture synthesis in the texture filling area through a block-based texture synthesis algorithm.
In one possible implementation, the texture synthesis module is specifically configured to: and according to the target texture sample map, performing texture synthesis in the texture filling area through a point-based texture synthesis algorithm.
Determining a target texture sample image matched with the shape of the texture filling area in a texture material library at least according to the shape of the texture filling area, wherein the texture material library comprises a plurality of texture sample images consistent with semantic labels of the texture filling area, and further carrying out texture synthesis in the texture filling area according to the target texture sample image. By comprehensively considering the shape of the texture filling area, the target texture sample image can be accurately determined, so that the accuracy of performing texture filling on the texture filling area based on the target texture sample image can be effectively improved.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features, and aspects of the disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a schematic flow diagram of an image processing method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of determining the feature vector corresponding to a feature point P_i according to an embodiment of the present disclosure;
FIG. 3 is a comparison graph of a histogram of statistical distributions between different feature points in two similar shapes according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating texture synthesis in a texture fill area by a block-based texture synthesis algorithm according to an embodiment of the present disclosure;
fig. 5 shows a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure;
fig. 6 shows a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure. As shown in fig. 1, the method may include:
in step S11, the shape of the texture filling area is determined.
Step S12, determining, at least according to the shape of the texture filling area, a target texture sample map matched with the shape of the texture filling area in a texture material library, wherein the texture material library comprises a plurality of texture sample maps consistent with the semantic labels of the texture filling area.
Step S13, texture synthesis is performed in the texture filling area according to the target texture sample map.
The semantic information of the texture filling area is not only related to the semantic label of the texture filling area, but also related to the shape of the texture filling area, i.e. the shape of the texture filling area may reflect part of the semantic information of the texture filling area. When the texture filling area is subjected to texture filling, the semantic labels and the shapes of the texture filling area are comprehensively considered, and then the target texture sample can be accurately determined to fill the texture filling area.
In one possible implementation, the texture material library is determined based on semantic tags of the texture fill area.
According to the semantic label of the texture filling area, a plurality of texture sample maps whose semantic labels are consistent with the semantic label of the texture filling area are determined to form the texture material library. For example, if the semantic label of the texture filling area is mountain, a region with the semantic label mountain is selected from a certain picture and determined as a texture sample map.
The texture sample maps and the texture filling area have consistent semantic labels, that is, the texture sample maps and the texture filling area represent the same semantic information; for example, both represent the semantic information "mountain".
In one example, the library of texture material includes a plurality of texture samples having different shapes. Since the plurality of texture samples included in the texture material library are derived from different pictures or different regions of the same picture, the shapes of the different texture samples may be the same or different.
When texture filling is required for the texture filling area, in order to comprehensively consider the shape of the texture filling area, the shape of the texture filling area is first determined.
In one possible implementation, determining the shape of the texture filling area includes: determining the contour edge of the texture filling area through an edge detection algorithm; and determining the shape of the texture filling area through a shape context algorithm according to the contour edge of the texture filling area.
When a user draws an outline map or a layout map of a work, a texture filling area needing texture filling is extracted from the outline map or the layout map by using an image processing technology. And performing edge detection on the texture filling area through an edge detection algorithm to determine the outline edge of the texture filling area, and further determining the shape of the texture filling area through a shape context algorithm according to the outline edge of the texture filling area.
In one possible implementation, determining the shape by a shape context algorithm based on the contour edge includes: carrying out uniform sampling on the edge of the contour, and determining a plurality of characteristic points; for any feature point, determining a feature vector corresponding to the feature point according to the relative position between the feature point and other feature points; and determining a characteristic vector matrix formed by the characteristic vectors corresponding to each characteristic point as a shape.
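The uniform-sampling step named above can be sketched as follows. This is a minimal Python/NumPy illustration, not part of the disclosure: it assumes the contour edge is available as an ordered polyline of (x, y) points, and the function name `sample_contour` is invented for the example.

```python
import numpy as np

def sample_contour(contour, n):
    """Uniformly sample n feature points along a closed contour polyline.

    contour: ordered list of (x, y) points on the contour edge.
    Returns an (n, 2) array of points spaced at equal arc length.
    """
    pts = np.asarray(contour, dtype=float)
    closed = np.vstack([pts, pts[:1]])                     # close the contour
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)  # segment lengths
    cum = np.concatenate([[0.0], np.cumsum(seg)])          # cumulative arc length
    targets = np.linspace(0.0, cum[-1], n, endpoint=False)
    out = []
    for t in targets:
        k = np.searchsorted(cum, t, side="right") - 1      # segment containing t
        frac = (t - cum[k]) / seg[k] if seg[k] > 0 else 0.0
        p0, p1 = pts[k], pts[(k + 1) % len(pts)]
        out.append(p0 + frac * (p1 - p0))                  # interpolate on segment k
    return np.array(out)
```

Sampling a square contour with n = 4 returns its four corners; larger n spaces the feature points evenly along the perimeter.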
In one example, uniform sampling is performed on the contour edge of the texture filling area to obtain N feature points (P_1, P_2, ..., P_N). For any feature point P_i, the feature vector V_{P_i} corresponding to P_i is determined according to the relative positions between P_i and the other feature points P_j.
FIG. 2 illustrates determining the feature vector corresponding to a feature point P_i according to an embodiment of the disclosure. As shown in FIG. 2, a plurality of concentric circles with different radii are established with the feature point P_i as the center, the circles are equally divided into sectors, and the whole area is thereby divided into M grid regions. The statistical distribution histogram of the other N-1 feature points P_j relative to P_i is calculated by the following formula:

$$h_i(k) = \#\{\, P_j \neq P_i : (P_j - P_i) \in \mathrm{bin}(k) \,\}, \qquad k = 1, 2, \dots, M$$

wherein the grid region in which a feature point P_j falls is determined according to the relative angle \theta and the radial distance r between P_j and P_i; \#\{\cdot\} denotes the number of feature points P_j, other than P_i, falling into the k-th grid region; (P_j - P_i) is the relative position of P_j with respect to P_i; and \mathrm{bin}(k) is the k-th grid region.
According to the relative position relationship between the feature point P_i and the other N-1 feature points P_j, the feature vector V_{P_i} corresponding to P_i is determined as:

$$V_{P_i} = \big( h_i(1), h_i(2), \dots, h_i(M) \big)$$

The feature vector matrix formed by the feature vectors corresponding to the N feature points (P_1, P_2, ..., P_N) is determined as the shape F of the texture filling area:

$$F = \big[ V_{P_1}, V_{P_2}, \dots, V_{P_N} \big]^{\mathrm{T}}$$
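The histogram and feature-vector computation can be sketched compactly as follows. This Python/NumPy sketch makes assumptions not fixed by the disclosure: log-polar binning with illustrative bin-edge constants, scale normalization by the mean pairwise distance, and the default M = n_r * n_theta = 60 bins.

```python
import numpy as np

def shape_context(points, n_r=5, n_theta=12):
    """Compute a shape-context descriptor for N sampled contour points.

    points: (N, 2) array of feature points P_1..P_N.
    Returns an (N, n_r * n_theta) matrix F whose i-th row is the feature
    vector V_{P_i}: a histogram of the other points over M = n_r * n_theta
    log-polar grid regions centred at P_i.
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    diff = pts[None, :, :] - pts[:, None, :]        # diff[i, j] = P_j - P_i
    r = np.linalg.norm(diff, axis=2)                # radial distances
    theta = np.arctan2(diff[..., 1], diff[..., 0])  # relative angles

    # Log-spaced radial bin edges, normalised by the mean pairwise distance
    # so the descriptor is scale-invariant (edge constants are illustrative).
    mean_r = r[r > 0].mean()
    r_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_r) * mean_r
    r_bin = np.searchsorted(r_edges, r)             # n_r means "outside all bins"
    t_bin = np.floor((theta + np.pi) / (2 * np.pi / n_theta)).astype(int) % n_theta

    F = np.zeros((n, n_r * n_theta))
    for i in range(n):
        for j in range(n):
            if i != j and r_bin[i, j] < n_r:        # count P_j in its bin(k)
                F[i, r_bin[i, j] * n_theta + t_bin[i, j]] += 1
    return F
```

Each row of the returned matrix sums to at most N-1, since it counts the other feature points once each.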
FIG. 3 shows a comparison of statistical distribution histograms between different feature points in two similar shapes according to an embodiment of the present disclosure. As shown in FIG. 3, (a) and (b) are two similar shapes, in which the feature point P_1 in (a) corresponds in position to the feature point P_2 in (b), while the feature point P_3 in (a) does not correspond to P_2. As can be seen from (d), (e) and (f) in FIG. 3, the statistical distribution histograms of P_1 and P_2 are similar, while the statistical distribution histograms of P_3 and P_2 are not. Therefore, the shape F determined from the statistical distribution histograms of the feature points can be used subsequently to determine a target texture sample map that matches (is similar to) the shape of the texture filling area.
After the shape of the texture filling area is determined, a target texture sample image matched with the shape of the texture filling area is determined in the texture material base at least according to the shape of the texture filling area.
In one possible implementation, determining a target texture sample map in a texture material library, which matches the shape of the texture filling region, according to at least the shape of the texture filling region includes: determining the contour edge of each texture sample image in the texture material library through an edge detection algorithm; determining the shape of each texture sample map through a shape context algorithm according to the contour edge of each texture sample map; determining the shape similarity between the texture filling area and each texture sample map according to the shape of the texture filling area and the shape of each texture sample map; and determining the target texture sample map according to the shape similarity between the texture filling area and each texture sample map.
The contour edge of each texture sample map in the texture material library is determined through an edge detection algorithm. For each texture sample map, uniform sampling is performed on its contour edge to obtain N feature points (Q_1, Q_2, ..., Q_N). For any feature point Q_i, the feature vector V_{Q_i} corresponding to Q_i is determined according to the relative positions between Q_i and the other feature points Q_j. The process of determining V_{Q_i} is similar to the process of determining the feature vector V_{P_i} described above:

$$V_{Q_i} = \big( h'_i(1), h'_i(2), \dots, h'_i(M) \big)$$

The feature vector matrix formed by the feature vectors corresponding to the N feature points (Q_1, Q_2, ..., Q_N) is determined as the shape F' of the texture sample map:

$$F' = \big[ V_{Q_1}, V_{Q_2}, \dots, V_{Q_N} \big]^{\mathrm{T}}$$
The shape similarity between the texture filling area and the texture sample map is determined by the following formula:

$$f = \sum_{i=1}^{N} \big\| V_{P_i} - V_{Q_i} \big\|$$

wherein the smaller the value of f is, the higher the shape similarity between the texture filling area and the texture sample map is.
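The distance f can be computed as follows, assuming the rows of the two feature vector matrices are in corresponding order. The disclosure does not spell out the exact distance, so the row-wise norm sum used here is an assumption, as is the function name.

```python
import numpy as np

def shape_similarity(F, F_prime):
    """Distance between two shapes: f = sum_i ||V_{P_i} - V_{Q_i}||.

    Rows of F and F_prime are assumed to correspond; a smaller f means
    a higher shape similarity.
    """
    F = np.asarray(F, dtype=float)
    F_prime = np.asarray(F_prime, dtype=float)
    return float(np.linalg.norm(F - F_prime, axis=1).sum())
```

Identical shapes give f = 0; any difference between corresponding feature vectors increases f.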
After determining the shape similarity between the texture filling area and each texture sample, determining a target texture sample according to the shape similarity between the texture filling area and each texture sample, including at least two ways as follows.
The first method comprises the following steps:
in one possible implementation, determining the target texture sample map according to the shape similarity between the texture filling area and each texture sample map includes: and determining the texture sample map with the highest shape similarity with the texture filling area as a target texture sample map.
And the second method comprises the following steps:
in one possible implementation, determining the target texture sample map according to the shape similarity between the texture filling area and each texture sample map includes: determining a plurality of texture sample maps with shape similarity greater than or equal to a threshold value with the texture filling area as a candidate texture sample map set; determining an area difference value between the texture filling area and each candidate texture sample in the candidate texture sample set; and determining the candidate texture sample map with the minimum area difference value with the texture filling area as the target texture sample map.
The specific value of the threshold can be determined according to actual conditions, and the present disclosure does not specifically limit this value.
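The second, two-stage selection (shape-similarity threshold, then minimum area difference) can be sketched as follows. Since f above is a distance, "shape similarity greater than or equal to a threshold" is expressed here as f <= f_max; the function name, tuple layout, and field names are illustrative.

```python
import numpy as np

def pick_target_sample(fill_shape, fill_area, samples, f_max):
    """Two-stage selection of the target texture sample map.

    samples: list of (name, shape_matrix, area) tuples (illustrative layout).
    Stage 1: candidate set = samples whose shape distance f to the texture
             filling area is at most f_max (similarity passes the threshold).
    Stage 2: among the candidates, return the one whose area difference with
             the texture filling area is smallest.
    """
    def f(a, b):  # smaller f = higher shape similarity
        return float(np.linalg.norm(np.asarray(a, float) - np.asarray(b, float)))

    candidates = [(name, area) for name, shape, area in samples
                  if f(fill_shape, shape) <= f_max]
    if not candidates:
        return None
    return min(candidates, key=lambda c: abs(c[1] - fill_area))[0]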
The target texture sample graph determined by comprehensively considering the semantic labels and the shapes of the texture filling areas can better accord with the semantic information represented by the texture filling areas.
After the target texture sample map is determined, texture synthesis can be performed on the texture filling area according to the target texture sample map, including at least two texture synthesis methods described below.
The first method comprises the following steps:
in one possible implementation, performing texture synthesis in the texture filling area according to the target texture sample map includes: and according to the target texture sample map, performing texture synthesis in the texture filling area through a block-based texture synthesis algorithm.
In one example, the target texture sample map is divided into a plurality of texture blocks as a candidate set, and texture filling is performed on the texture filling area by a block-based texture synthesis algorithm.
Fig. 4 is a schematic diagram illustrating texture synthesis in a texture filling area by a block-based texture synthesis algorithm according to an embodiment of the present disclosure. As shown in fig. 4 (a), the target texture sample map is divided into a plurality of texture blocks as a candidate set, and texture blocks are sequentially filled into the texture filling area for texture synthesis until the entire texture filling area is filled (the first block may be selected at random). As shown in fig. 4 (b) and (c), when texture synthesis is performed on the current region B, the overlapping portion in region B is determined (the portion where region B overlaps region A of the texture filling area, which has already been texture-synthesized), the texture block with the smallest sum of pixel differences with the overlapping portion, that is, the most similar texture block, is selected from the candidate set, and texture splicing is performed with the region that has already been texture-synthesized.
During the splicing process, a minimum error path that minimizes the visual error of the splicing effect can be found. The minimum error path is divided into a horizontal dividing path and a vertical dividing path, and the search methods for the two are similar. Taking the vertical dividing path as an example, assume that region A and region B are two regions that need texture splicing, and that their overlapping portions are OV_A and OV_B respectively; the corresponding pixel error matrix is e = OV_A - OV_B. Starting from the first row of the overlapping portion, each pixel is traversed and the minimum accumulated error sum over all paths is calculated by the following formula:

$$E_{i,j} = e_{i,j} + \min\big( E_{i-1,j-1},\ E_{i-1,j},\ E_{i-1,j+1} \big)$$

wherein P_{i,j} is the pixel in the i-th row and j-th column of the overlapping portion, and E_{i,j} is the minimum accumulated error sum over all paths from the initial row to P_{i,j}. Since the previous pixel on a path lies in the row above, at a horizontal offset of -1, 0 or +1, E_{i,j} is the sum of the minimum of the three corresponding accumulated errors and the error e_{i,j} of the current pixel P_{i,j}. Finally, the minimum value in the last row of the overlapping portion is found, and the vertical dividing path in the minimum error path is traced back upward from it. The horizontal dividing path in the minimum error path can be obtained similarly. Combining the vertical dividing path and the horizontal dividing path yields the cutting path corresponding to the minimum error path.
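The dynamic-programming search for the vertical dividing path can be sketched as follows. As an assumption not fixed above, this sketch uses the squared pixel difference as the error matrix (a common choice); the function name is illustrative.

```python
import numpy as np

def vertical_min_cut(ov_a, ov_b):
    """Find the vertical minimum-error dividing path through the overlap of
    two texture regions by dynamic programming.

    ov_a, ov_b: 2-D arrays of the overlapping portions OV_A and OV_B.
    Returns a list where path[i] is the column of the cut in row i.
    """
    e = (np.asarray(ov_a, dtype=float) - np.asarray(ov_b, dtype=float)) ** 2
    rows, cols = e.shape
    # E[i, j] = e[i, j] + min(E[i-1, j-1], E[i-1, j], E[i-1, j+1])
    E = e.copy()
    for i in range(1, rows):
        for j in range(cols):
            lo, hi = max(j - 1, 0), min(j + 1, cols - 1)
            E[i, j] += E[i - 1, lo:hi + 1].min()
    # Trace the path back upward from the minimum of the last row.
    path = [int(np.argmin(E[-1]))]
    for i in range(rows - 2, -1, -1):
        j = path[-1]
        lo, hi = max(j - 1, 0), min(j + 1, cols - 1)
        path.append(lo + int(np.argmin(E[i, lo:hi + 1])))
    path.reverse()
    return path
```

When one column of the overlap agrees exactly between the two regions, the cut runs straight down that column, since its accumulated error stays zero.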
In one example, the size of the texture block divided in the target texture sample is reduced, and the texture block synthesis algorithm is iterated for multiple times, so that the texture synthesis effect is smoother.
And the second method comprises the following steps:
in one possible implementation, performing texture synthesis in the texture filling area according to the target texture sample map includes: and according to the target texture sample diagram, performing texture synthesis in the texture filling area through a point-based texture synthesis algorithm.
In one example, a prior art point-based texture synthesis algorithm may be used to perform texture synthesis in the texture filling region according to the target texture sample. Unlike block-based texture synthesis algorithms, point-based texture synthesis algorithms synthesize one pixel point at a time.
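The per-pixel idea can be illustrated with a toy sketch in the spirit of prior-art point-based algorithms such as Efros-Leung. The neighbourhood size, scanline order, and all names here are illustrative simplifications, not the disclosure's algorithm.

```python
import numpy as np

def point_synthesis(sample, size, win=1, rng=None):
    """Toy point-based synthesis: pixels are generated one at a time in
    scanline order. Each output pixel copies the sample pixel whose causal
    neighbourhood (already-synthesised pixels within `win`) best matches the
    neighbourhood of the output position (sum of squared differences)."""
    if rng is None:
        rng = np.random.default_rng(0)
    s = np.asarray(sample, dtype=float)
    H, W = size
    out = np.zeros((H, W))
    # Seed the first pixel with a random sample pixel.
    out[0, 0] = s[rng.integers(s.shape[0]), rng.integers(s.shape[1])]
    for y in range(H):
        for x in range(W):
            if y == 0 and x == 0:
                continue
            best_val, best_d = 0.0, np.inf
            for sy in range(win, s.shape[0]):          # candidate sample centres
                for sx in range(win, s.shape[1] - win):
                    d = 0.0
                    for dy in range(-win, 1):
                        for dx in range(-win, win + 1):
                            if dy == 0 and dx >= 0:
                                continue               # not yet synthesised
                            oy, ox = y + dy, x + dx
                            if 0 <= oy < H and 0 <= ox < W:
                                d += (out[oy, ox] - s[sy + dy, sx + dx]) ** 2
                    if d < best_d:
                        best_d, best_val = d, s[sy, sx]
            out[y, x] = best_val                       # synthesise one pixel
    return out
```

Unlike the block-based sketch, this generates a single pixel per step, which is slower but avoids block seams entirely.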
Determining a target texture sample image matched with the shape of the texture filling area in a texture material library at least according to the shape of the texture filling area, wherein the texture material library comprises a plurality of texture sample images consistent with semantic labels of the texture filling area, and further carrying out texture synthesis in the texture filling area according to the target texture sample image. By comprehensively considering the shape of the texture filling area, the target texture sample image can be accurately determined, so that the accuracy of performing texture filling on the texture filling area based on the target texture sample image can be effectively improved.
Fig. 5 shows a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure. The apparatus 50 shown in fig. 5 may be used to perform the steps of the method embodiment shown in fig. 1. The apparatus 50 includes:
a first determining module 51 for determining the shape of the texture filling area;
a second determining module 52, configured to determine, according to at least the shape of the texture filling area, a target texture sample map that matches the shape of the texture filling area in a texture material library, where the texture material library includes a plurality of texture sample maps that are consistent with semantic tags of the texture filling area;
and a texture synthesis module 53, configured to perform texture synthesis in the texture filling area according to the target texture sample map.
In one possible implementation, the first determining module 51 includes:
the edge detection submodule is used for determining the contour edge of the texture filling area through an edge detection algorithm;
and the first determining submodule is used for determining the shape of the texture filling area through a shape context algorithm according to the contour edge of the texture filling area.
In one possible implementation, the second determining module 52 includes:
the edge detection submodule is used for determining the contour edge of each texture sample image in the texture material library through an edge detection algorithm;
the second determining submodule is used for determining the shape of each texture sample map through a shape context algorithm according to the contour edge of each texture sample map;
the third determining submodule is used for determining the shape similarity between the texture filling area and each texture sample map according to the shape of the texture filling area and the shape of each texture sample map;
and the fourth determining submodule is used for determining the target texture sample map according to the shape similarity between the texture filling area and each texture sample map.
In one possible implementation, the fourth determining submodule is specifically configured to:
and determining the texture sample map with the highest shape similarity with the texture filling area as a target texture sample map.
In one possible implementation, the fourth determining sub-module includes:
a first determining unit, configured to determine, as a candidate texture sample set, a plurality of texture sample maps having a shape similarity with the texture filling area that is greater than or equal to a threshold;
a second determining unit, configured to determine an area difference between the texture filling area and each candidate texture sample in the candidate texture sample set;
and the third determining unit is used for determining the candidate texture sample map with the minimum area difference value with the texture filling area as the target texture sample map.
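The two-stage selection performed by the first to third determining units can be sketched as follows; `similarity` and `area` are assumed helper callables (not named in the disclosure), and the fallback for an empty candidate set is an illustrative choice:

```python
def pick_target_sample(fill_area, samples, similarity, area, threshold):
    """Keep the texture sample maps whose shape similarity to the fill
    area meets the threshold, then return the candidate whose area is
    closest to the fill area's area."""
    candidates = [s for s in samples if similarity(fill_area, s) >= threshold]
    if not candidates:
        return None  # illustrative fallback; the disclosure does not cover this case
    return min(candidates, key=lambda s: abs(area(s) - area(fill_area)))
```

Thresholding first keeps only shape-plausible samples; the area tie-break then favors the sample needing the least scaling when filling the region.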
In one possible implementation, the edge detection sub-module includes:
the fourth determining unit is used for carrying out uniform sampling on the contour edge and determining a plurality of feature points;
a fifth determining unit, configured to determine, for any feature point, a feature vector corresponding to the feature point according to a relative position between the feature point and another feature point;
and a sixth determining unit, configured to determine, as the shape, a feature vector matrix formed by the feature vectors corresponding to each feature point.
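The sampling-and-descriptor steps performed by the fourth to sixth determining units can be sketched as a shape-context-style descriptor. The log-polar binning and the bin counts below are conventional shape-context choices and are assumptions, not parameters stated in the disclosure:

```python
import numpy as np

def shape_descriptor(contour, n_points=32, n_r=3, n_theta=8):
    """Uniformly sample the contour, then for each sampled point build a
    log-polar histogram of the relative positions of the other points.
    The stacked histograms form the feature-vector matrix ('shape').
    contour: (N, 2) sequence of ordered edge coordinates."""
    idx = np.linspace(0, len(contour) - 1, n_points).astype(int)
    pts = np.asarray(contour, dtype=float)[idx]
    desc = np.zeros((n_points, n_r * n_theta))
    for k, p in enumerate(pts):
        rel = np.delete(pts, k, axis=0) - p          # relative positions
        r = np.log1p(np.hypot(rel[:, 0], rel[:, 1]))  # log-radial coordinate
        t = np.arctan2(rel[:, 1], rel[:, 0]) % (2 * np.pi)
        r_bin = np.minimum((r / (r.max() + 1e-9) * n_r).astype(int), n_r - 1)
        t_bin = np.minimum((t / (2 * np.pi) * n_theta).astype(int), n_theta - 1)
        for rb, tb in zip(r_bin, t_bin):
            desc[k, rb * n_theta + tb] += 1
    return desc
```

Two shapes can then be compared, for example, by matching the rows of their descriptor matrices and summing per-row histogram distances.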
In one possible implementation, the texture synthesis module is specifically configured to:
and according to the target texture sample map, performing texture synthesis in the texture filling area through a block-based texture synthesis algorithm.
In one possible implementation, the texture synthesis module is specifically configured to:
and according to the target texture sample diagram, performing texture synthesis in the texture filling area through a point-based texture synthesis algorithm.
The apparatus 50 provided in the present disclosure can implement each step of the method embodiment shown in fig. 1 and achieve the same technical effects; to avoid repetition, details are not described here again.
Fig. 6 shows a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 6, at the hardware level, the electronic device includes a processor and optionally further includes an internal bus, a network interface, and a memory. The memory may include volatile memory, such as random-access memory (RAM), and may further include non-volatile memory, such as at least one disk storage. Of course, the electronic device may also include hardware required by other services.
The processor, the network interface, and the memory may be interconnected by an internal bus, which may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one double-headed arrow is shown in fig. 6, but this does not mean that there is only one bus or one type of bus.
The memory is used to store a program. Specifically, the program may include program code comprising computer operating instructions. The memory may include both volatile memory and non-volatile storage, and provides instructions and data to the processor.
The processor reads the corresponding computer program from the non-volatile storage into memory and runs it, forming the image processing apparatus at the logical level. The processor executes the program stored in the memory and is specifically configured to: determine the shape of the texture filling area; determine, in a texture material library and according to at least the shape of the texture filling area, a target texture sample map that matches the shape of the texture filling area, where the texture material library includes a plurality of texture sample maps consistent with the semantic label of the texture filling area; and perform texture synthesis in the texture filling area according to the target texture sample map.
In one possible implementation, the processor is specifically configured to perform: determining the contour edge of the texture filling area through an edge detection algorithm; and determining the shape of the texture filling area through a shape context algorithm according to the contour edge of the texture filling area.
In one possible implementation, the processor is specifically configured to perform: determining the contour edge of each texture sample image in the texture material library through an edge detection algorithm; determining the shape of each texture sample map through a shape context algorithm according to the contour edge of each texture sample map; determining the shape similarity between the texture filling area and each texture sample map according to the shape of the texture filling area and the shape of each texture sample map; and determining the target texture sample map according to the shape similarity between the texture filling area and each texture sample map.
In one possible implementation, the processor is specifically configured to perform: and determining the texture sample map with the highest shape similarity with the texture filling area as a target texture sample map.
In one possible implementation, the processor is specifically configured to perform: determining a plurality of texture sample maps with shape similarity greater than or equal to a threshold value with the texture filling area as a candidate texture sample map set; determining an area difference value between the texture filling area and each candidate texture sample in the candidate texture sample set; and determining the candidate texture sample map with the minimum area difference value with the texture filling area as the target texture sample map.
In one possible implementation, the processor is specifically configured to perform: carrying out uniform sampling on the edge of the contour, and determining a plurality of characteristic points; for any feature point, determining a feature vector corresponding to the feature point according to the relative position between the feature point and other feature points; and determining a characteristic vector matrix formed by the characteristic vectors corresponding to each characteristic point as a shape.
In one possible implementation, the processor is specifically configured to perform: and according to the target texture sample map, performing texture synthesis in the texture filling area through a block-based texture synthesis algorithm.
In one possible implementation, the processor is specifically configured to perform: and according to the target texture sample diagram, performing texture synthesis in the texture filling area through a point-based texture synthesis algorithm.
The processor may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The methods, steps, and logic blocks disclosed in the embodiments of this specification may be implemented or executed by such a processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of this specification may be embodied directly as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM or EPROM, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
The electronic device may execute the method executed in the method embodiment shown in fig. 1, and implement the functions of the method embodiment shown in fig. 1, which are not described herein again in this specification.
The present specification also proposes a computer-readable storage medium storing one or more programs, where the one or more programs include instructions, which when executed by an electronic device including a plurality of application programs, enable the electronic device to execute the image processing method in the embodiment shown in fig. 1, and specifically to execute the steps of the embodiment of the method shown in fig. 1.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, it may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), can be personalized by utilizing state information of the computer-readable program instructions; this electronic circuitry can execute the computer-readable program instructions, thereby implementing various aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terms used herein were chosen in order to best explain the principles of the embodiments, the practical application, or technical improvements to the techniques in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (16)

1. An image processing method, comprising:
determining the shape of the texture filling area;
determining a target texture sample image matched with the shape of the texture filling area in a texture material library at least according to the shape of the texture filling area, wherein the texture material library comprises texture sample images consistent with semantic labels of the texture filling area;
and according to the target texture sample map, carrying out texture synthesis in the texture filling area.
2. The method of claim 1, wherein determining the shape of the texture fill area comprises:
determining the contour edge of the texture filling area through an edge detection algorithm;
and determining the shape of the texture filling area through a shape context algorithm according to the contour edge of the texture filling area.
3. The method of claim 1, wherein determining a target texture sample map in a texture material library that matches the shape of the texture fill region based at least on the shape of the texture fill region comprises:
determining the contour edge of each texture sample map in the texture material library through an edge detection algorithm;
determining the shape of each texture sample map through a shape context algorithm according to the contour edge of each texture sample map;
determining the shape similarity between the texture filling area and each texture sample map according to the shape of the texture filling area and the shape of each texture sample map;
and determining the target texture sample map according to the shape similarity between the texture filling area and each texture sample map.
4. The method according to claim 3, wherein determining the target texture sample according to the shape similarity between the texture filling area and each texture sample comprises:
and determining the texture sample map with the highest shape similarity with the texture filling area as the target texture sample map.
5. The method according to claim 3, wherein determining the target texture sample according to the shape similarity between the texture filling area and each texture sample comprises:
determining a plurality of texture sample maps with shape similarity greater than or equal to a threshold value with the texture filling area as a candidate texture sample map set;
determining an area difference between the texture filling area and each candidate texture sample in the candidate texture sample set;
and determining the candidate texture sample map with the minimum area difference value with the texture filling area as the target texture sample map.
6. The method of claim 2 or 3, wherein determining the shape from the contour edge by a shape context algorithm comprises:
carrying out uniform sampling on the contour edge, and determining a plurality of feature points;
for any feature point, determining a feature vector corresponding to the feature point according to the relative position between the feature point and other feature points;
and determining a feature vector matrix formed by the feature vectors corresponding to each feature point as the shape.
7. The method of claim 1, wherein performing texture synthesis in the texture filling area according to the target texture sample map comprises:
and according to the target texture sample map, performing texture synthesis in the texture filling area through a block-based texture synthesis algorithm.
8. The method of claim 1, wherein performing texture synthesis in the texture filling area according to the target texture sample map comprises:
and according to the target texture sample map, performing texture synthesis in the texture filling area through a point-based texture synthesis algorithm.
9. An image processing apparatus characterized by comprising:
a first determining module for determining the shape of the texture filling area;
a second determining module, configured to determine, in a texture material library, a target texture sample image matched with the shape of the texture filling area according to at least the shape of the texture filling area, where the texture material library includes a plurality of texture sample images consistent with semantic tags of the texture filling area;
and the texture synthesis module is used for carrying out texture synthesis in the texture filling area according to the target texture sample map.
10. The apparatus of claim 9, wherein the first determining module comprises:
the edge detection submodule is used for determining the contour edge of the texture filling area through an edge detection algorithm;
and the first determining submodule is used for determining the shape of the texture filling area through a shape context algorithm according to the contour edge of the texture filling area.
11. The apparatus of claim 9, wherein the second determining module comprises:
the edge detection submodule is used for determining the contour edge of each texture sample map in the texture material library through an edge detection algorithm;
the second determining submodule is used for determining the shape of each texture sample map through a shape context algorithm according to the contour edge of each texture sample map;
a third determining submodule, configured to determine, according to the shape of the texture filling area and the shape of each texture sample, a shape similarity between the texture filling area and each texture sample;
and the fourth determining submodule is used for determining the target texture sample map according to the shape similarity between the texture filling area and each texture sample map.
12. The apparatus of claim 11, wherein the fourth determination submodule is specifically configured to:
and determining the texture sample map with the highest shape similarity with the texture filling area as the target texture sample map.
13. The apparatus of claim 11, wherein the fourth determination submodule comprises:
a first determining unit, configured to determine, as a candidate texture sample set, a plurality of texture sample maps having a shape similarity with the texture filling area that is greater than or equal to a threshold;
a second determining unit, configured to determine an area difference between the texture filling area and each candidate texture sample in the candidate texture sample set;
and a third determining unit, configured to determine, as the target texture sample, a candidate texture sample with a smallest area difference with the texture filling region.
14. The apparatus of claim 10 or 11, wherein the edge detection sub-module comprises:
the fourth determining unit is used for performing uniform sampling on the contour edge and determining a plurality of feature points;
a fifth determining unit, configured to determine, for any feature point, a feature vector corresponding to the feature point according to a relative position between the feature point and another feature point;
and a sixth determining unit, configured to determine, as the shape, a feature vector matrix formed by the feature vectors corresponding to each feature point.
15. The apparatus of claim 9, wherein the texture synthesis module is specifically configured to:
and according to the target texture sample map, performing texture synthesis in the texture filling area through a block-based texture synthesis algorithm.
16. The apparatus of claim 9, wherein the texture synthesis module is specifically configured to:
and according to the target texture sample map, performing texture synthesis in the texture filling area through a point-based texture synthesis algorithm.
CN201910389984.XA 2019-05-10 2019-05-10 Image processing method and device Pending CN111915702A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910389984.XA CN111915702A (en) 2019-05-10 2019-05-10 Image processing method and device

Publications (1)

Publication Number Publication Date
CN111915702A true CN111915702A (en) 2020-11-10

Family

ID=73242229

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910389984.XA Pending CN111915702A (en) 2019-05-10 2019-05-10 Image processing method and device

Country Status (1)

Country Link
CN (1) CN111915702A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102385757A (en) * 2011-10-25 2012-03-21 北京航空航天大学 Semantic restriction texture synthesis method based on geometric space
CN103440618A (en) * 2013-09-25 2013-12-11 云南大学 Block-based texture synthesis method and device
CN103714561A (en) * 2013-12-27 2014-04-09 浙江工业大学 Structure preserving texture synthesis method based on Chamfer distance
US20150332117A1 (en) * 2014-05-13 2015-11-19 The Penn State Research Foundation Composition modeling for photo retrieval through geometric image segmentation
CN106600552A (en) * 2016-12-14 2017-04-26 中国科学院地质与地球物理研究所兰州油气资源研究中心 Texture image completion method and device
CN107767411A (en) * 2017-11-24 2018-03-06 河南理工大学 A kind of strain-based design method
CN108364276A (en) * 2018-03-13 2018-08-03 重庆大学 Texture image synthetic method based on tag database
CN109255807A (en) * 2017-07-13 2019-01-22 腾讯科技(深圳)有限公司 A kind of image information processing method and server, computer storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
GU Yuanting; WU Enhua: "A Method for Texture Feature Analysis and Synthesis", Journal of Computer-Aided Design & Computer Graphics, vol. 19, no. 12, pages 1535-1539 *
LIU Yang: "Digital Image Object Recognition: Theory and Practice", 31 January 2018, Beijing: Beijing University of Posts and Telecommunications Press, page 53 *
HAO Ming et al.: "Remote Sensing Change Detection with Enhanced Spatial Information Accuracy", 31 May 2017, Beijing: Surveying and Mapping Press, pages 33-35 *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination