WO2011131029A1 - Method for detecting similar units based on outline belt graph - Google Patents

Method for detecting similar units based on outline belt graph

Info

Publication number
WO2011131029A1
Authority
WO
WIPO (PCT)
Prior art keywords
contour map
image
similar
contour
similar units
Application number
PCT/CN2011/000691
Other languages
French (fr)
Chinese (zh)
Inventor
Shi-Min Hu (胡事民)
Ming-Ming Cheng (程明明)
Fang-Lue Zhang (张方略)
Original Assignee
Tsinghua University (清华大学)
Application filed by Tsinghua University
Publication of WO2011131029A1


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/42: Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G06V 10/422: Global feature extraction for representing the structure of the pattern or shape of an object
    • G06V 10/426: Graphical representations

Definitions

  • The invention belongs to the technical field of image processing and relates to a target-detection method, and more particularly to a method for detecting similar units based on a contour-band map.

Background Art
  • The technical problem to be solved by the present invention is how to find similar units and their exact positions in an image containing similar elements. Detecting these similar units makes a series of high-level image-editing applications possible.
  • To that end, the present invention proposes a method for detecting similar units in an image: simple user interaction provides the initial information, and the similar units are then detected from the image. The detection results can be used for a range of scene-object-level image-editing applications.
  • The method for detecting similar units based on a contour-band map provided by the present invention comprises the following steps:
  • S1: Use a stroke to mark one of the multiple similar units in the detected image, and mark the background region of the detected image.
  • S2: Segment the marked image, extract a sample of the marked similar unit, remove the background region of the detected image, and keep the foreground region composed of the similar units.
  • S3: Obtain a contour map from the foreground region, giving the potential object-contour boundaries in the detected image.
  • S4: Construct the contour-band map from the regions near the contour points of the contour map; the region near the contour points is the contour band.
  • S5: Match the sample of the marked similar unit against the contour-band map to determine the positions of the multiple similar units in the detected image.
  • In step S3, the contour map is obtained by applying hierarchical mean-shift segmentation to the foreground region.
  • In step S4, the contour-band map is constructed by introducing the local geometric information of the contour map.
  • The contour-band map constructed in step S4 is an array of two-dimensional vectors, M = {m_p}_{H×W}, where m_p is the two-dimensional vector corresponding to each pixel p of the detected image and H × W is the size of the detected image.
  • When the sample of the marked similar unit is matched against the contour-band map in step S5, the contour of the sample is likewise encoded as an array of two-dimensional vectors, T = {t_p}_{h×w}, where t_p is the two-dimensional vector corresponding to each pixel p of the sample and h × w is the size of the sample.
  • The degree of matching in step S5 is computed as
  • D_(u,v)(T, M) = Σ_i Σ_j ( t_(i,j) · m_(i+u, j+v) )²
  • where (u, v) are the coordinates of the matched point in the contour-band map; the positions of the multiple similar units in the detected image are determined from the matching degrees.
  • The positions of the multiple similar units in the detected image are determined by applying non-maximum suppression to the matching-degree values: the maxima of the matching-degree values give the positions of the similar units.
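As an illustration of that last step, a minimal non-maximum-suppression pass over a matching-degree map can be sketched as follows; the window radius and the relative threshold are illustrative choices, not values taken from the patent.

```python
import numpy as np

def non_max_suppression(score, radius=10, rel_threshold=0.5):
    """Return (row, col) positions of local maxima of a matching-degree map.

    `radius` and `rel_threshold` are illustrative parameters, not values
    taken from the patent.
    """
    h, w = score.shape
    thresh = rel_threshold * score.max()
    peaks = []
    for y in range(h):
        for x in range(w):
            v = score[y, x]
            if v <= thresh:
                continue  # suppress weak responses outright
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            if v >= score[y0:y1, x0:x1].max():
                peaks.append((y, x))  # v is the maximum in its window
    return peaks
```

Each surviving peak is then reported as the position of one detected similar unit.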
  • The above technical solution has the following advantages:
  • The method provided by the present invention uses simple user interaction to supply initial information, then detects and localizes the similar units in the image; the detection is fast and accurate.
  • The detection results can also be applied to scene-object-level editing operations in the image, such as image rearrangement, edit propagation, and synchronized deformation.
  • FIG. 1 is a processing flowchart of an embodiment of the contour-band-map-based similar-unit detection method of the present invention.
  • FIG. 2 is the input image of an embodiment of the contour-band-map-based similar-unit detection method of the present invention.
  • FIG. 3 is the detection result of an embodiment of the contour-band-map-based similar-unit detection method of the present invention.

Detailed Description
  • The invention discloses a method for detecting similar units in an image through simple user interaction. After the similar units in the image are detected, a series of object-level image-editing operations can be performed, including image rearrangement, synchronized deformation, and edit propagation.
  • FIG. 1 is a processing flowchart of an embodiment of the contour-band-map-based similar-unit detection method of the present invention; the steps shown in the figure are:
  • a: The user draws a simple stroke to roughly mark one of the similar units in the detected image and at the same time marks the background region of the image. As shown in FIG. 2, the simple stroke input by the user is a curve drawn with a brush. The similar units in the input image may exhibit some degree of occlusion, shape difference, color difference, and so on; a rough marking by the user is sufficient for processing.
  • b: Obtain the contour map and the sample. First, a segmentation method is used to obtain a sample of the similar units and the foreground region composed of the similar units; the contour map is then obtained from the foreground region.
  • The contour map here is computed from the image and encodes the potential object-contour boundaries.
  • One possible concrete way to obtain the contour map is to apply hierarchical mean-shift segmentation to the foreground region.
  • The hierarchical mean-shift segmentation referred to here is described in detail in Paris and Durand's 2007 work "A topological approach to hierarchical segmentation using mean shift". Other methods of obtaining a contour map can also be used in this step.
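For intuition only, the flat (non-hierarchical) building block of mean-shift filtering on a single-channel image can be sketched as below. This is not the hierarchical algorithm of Paris and Durand; the neighbourhood radius, range bandwidth, and iteration count are all illustrative assumptions.

```python
import numpy as np

def mean_shift_filter(img, spatial_r=2, range_r=0.2, iters=5):
    """Flat mean-shift-style filtering of a grayscale image in [0, 1]:
    each pixel repeatedly moves toward the mean of nearby pixels whose
    values lie within range_r of its own. All parameters illustrative."""
    H, W = img.shape
    out = img.astype(float).copy()
    for _ in range(iters):
        new = out.copy()
        for y in range(H):
            for x in range(W):
                y0, y1 = max(0, y - spatial_r), min(H, y + spatial_r + 1)
                x0, x1 = max(0, x - spatial_r), min(W, x + spatial_r + 1)
                patch = out[y0:y1, x0:x1]
                close = np.abs(patch - out[y, x]) <= range_r
                new[y, x] = patch[close].mean()  # shift toward local mode
        out = new
    return out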
  • The brightness value of each pixel in the contour map corresponds to the probability of an object edge at that point: the darker a point, the higher the probability that it is an edge point.
  • The sample boundary selected by the user is important intermediate data for the further detection.
  • c: Construct the contour-band map. The contour-band map is an array of two-dimensional vectors, M = {m_p}_{H×W}; the array size H × W matches the input detected image, and each pixel p of the input image corresponds to a two-dimensional vector m_p.
  • Outside the contour band the vectors are zero; inside the band, the magnitude of the vector is the average of the confidence values of the contour points in a nearby region (in the step-c image of FIG. 1, for printing clarity, darker points represent greater edge confidence).
  • The direction of the vector is the edge-gradient direction at that point. The gradient direction can be obtained with common edge-detection methods, all of which include a gradient-estimation step.
  • In the template, the two-dimensional vectors at contour points have magnitude 1 and point along the gradient direction; the two-dimensional vectors at all other points are zero.
  • The probability that a similar unit to be detected is present at each point of the image can then be computed as follows.
  • The contour-band map is used to compute the degree of matching between the template map T and the contour-band map M at a point (u, v), expressed through the dot products of the two-dimensional vectors at corresponding points:
  • D_(u,v)(T, M) = Σ_i Σ_j ( t_(i,j) · m_(i+u, j+v) )²
  • where u and v are the abscissa and ordinate of the point, and t_(i,j) and m_(i+u, j+v) are the two-dimensional vectors at the corresponding positions, point (i, j) of the template map T and point (i+u, j+v) of the contour-band map M. The template map T and the contour-band map M are matrices of two-dimensional vectors, and t_(i,j) and m_(i+u, j+v) are single elements of T and M respectively.
  • This gives the degree of matching at every point. Since the computation essentially evaluates a set of convolution values, it can be accelerated with the fast Fourier transform; applying non-maximum suppression to the matching values, i.e. selecting their maxima, then yields the positions of the similar units.
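Because each squared dot product expands into products of vector components, the full matching-degree map can indeed be obtained from a few FFT cross-correlations. A minimal sketch, assuming the matching degree is the sum over the template of squared dot products as in the claims; the (height, width, 2) array layout is an assumption of this sketch:

```python
import numpy as np

def match_scores(T, M):
    """D_(u,v)(T, M) = sum_{i,j} (t_(i,j) . m_(i+u, j+v))**2 for every
    valid offset (u, v), via FFT cross-correlation.

    T: (h, w, 2) template map; M: (H, W, 2) contour-band map.
    (t . m)^2 = tx^2*mx^2 + 2*tx*ty*mx*my + ty^2*my^2, so the score map
    is the sum of three cross-correlations, each computable by FFT.
    """
    H, W, _ = M.shape
    h, w, _ = T.shape
    pairs = [(T[..., 0] ** 2,            M[..., 0] ** 2),
             (2 * T[..., 0] * T[..., 1], M[..., 0] * M[..., 1]),
             (T[..., 1] ** 2,            M[..., 1] ** 2)]
    size = (H + h - 1, W + w - 1)          # pad to avoid circular wrap
    D = np.zeros((H - h + 1, W - w + 1))
    for t, m in pairs:
        Ft = np.fft.rfft2(t[::-1, ::-1], s=size)  # flipping turns the
        Fm = np.fft.rfft2(m, s=size)              # convolution into a
        full = np.fft.irfft2(Ft * Fm, s=size)     # cross-correlation
        D += full[h - 1:H, w - 1:W]               # keep 'valid' offsets
    return D
```

One forward FFT per channel pair plus one inverse FFT replaces the explicit sliding-window sum, which is what makes the matching step fast.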
  • FIG. 2 shows the input image of an embodiment of the contour-band-map-based similar-unit detection method of the present invention.
  • The user marks one fish in the input image as the sample and simply marks out the background region.
  • The marking of the fish is shown in a dark color, and the marking of the background in a bright color.
  • FIG. 3 shows the detection result of an embodiment of the contour-band-map-based similar-unit detection method of the present invention.
  • The brightness inside each center circle represents the confidence of the detection result, i.e. the degree of matching at that position.
  • The template can be rotated and scaled to detect similar units at multiple scales and rotation angles.

Industrial Applicability
  • The invention provides a contour-band-map-based method for detecting similar units that can find and localize the similar units in an image from a simple prompt input by the user. The detection results can be applied to scene-object-level editing operations in the image, such as image rearrangement, edit propagation, and synchronized deformation, so the method has industrial applicability.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

A method for detecting similar units based on an outline belt graph involves the following steps: S1, demarcating one of the similar units of an image to be detected using a stroke, and demarcating a background area of the image; S2, segmenting the demarcated image, extracting a sample of the similar units, removing the background area, and keeping a foreground area consisting of the similar units; S3, acquiring an outline graph from the foreground area and obtaining the potential object outline boundaries in the image; S4, constructing an outline belt graph from the areas adjacent to the outline points of the outline graph; S5, matching the sample of the similar units with the outline belt graph and determining the positions of the similar units in the detected image. The method enables searching for and positioning the similar units in an image from a simple prompt input by the user, and the detection results can be applied to scene-object-level editing operations in the image, such as image rearrangement, edit propagation, and synchronous deformation.

Description

Method for detecting similar units based on a contour-band map

Technical Field
The invention belongs to the technical field of image processing and relates to a target-detection method, and more particularly to a method for detecting similar units based on a contour-band map.

Background Art
Similar units exist in large numbers in both natural and artificial scenes. Owing to occlusion, partial absence, shape differences between similar units, illumination changes, and other factors, it is very difficult to keep repeated image elements consistent with one another while editing them. Most existing image-editing methods operate directly on low-level image elements; the editing targets are too low-level and inconvenient for the user. There is a strong desire to edit images efficiently through simpler, more intuitive operations and to manipulate the semantic units in an image directly.
Repeated-object detection has attracted the attention of many researchers for a long time. Leung and Malik proposed "Detecting, localizing and grouping repeated scene elements from an image" in 1996; the method creates a node for each independent image unit, takes the affine transformations between the units as edges, and searches for repeated elements in the image by way of the resulting graph. Liu et al. proposed "A computational model for periodic pattern perception based on frieze and wallpaper groups" in 2003 for discovering periodic textures. Although both methods can find the repeated units in an image, the units they handle are still limited to small regions, the processed objects themselves carry no semantics, and complex situations with obvious occlusion or shape variation cannot be handled. Ahuja and Todorovic proposed "Extracting texels in 2.1D natural textures" in 2007; although that method can detect somewhat semantic similar units in an image and handle occlusion, it requires tens of seconds of computation.
Although existing computer-vision methods for detecting similar units in images are still far from practical application, some recent interactive graphics techniques provide the technical support for simplifying this problem.

Summary of the Invention
The technical problem to be solved by the present invention is how to find similar units and their exact positions in an image containing such units. Detecting these similar units makes a series of high-level image-editing applications possible.
To solve the above technical problem, the present invention proposes a method for detecting similar units in an image: simple user interaction provides the initial information, and the similar units are then detected from the image. The detection results can be used for a range of scene-object-level image-editing applications.
The method for detecting similar units based on a contour-band map provided by the present invention comprises the following steps:
S1: Use a stroke to mark one of the multiple similar units in the detected image, and mark the background region of the detected image.

S2: Segment the marked image, extract a sample of the marked similar unit, remove the background region of the detected image, and keep the foreground region composed of the multiple similar units.

S3: Obtain a contour map from the foreground region, giving the potential object-contour boundaries in the detected image.

S4: Construct the contour-band map from the regions near the contour points of the contour map; the region near the contour points of the contour map is the contour band.

S5: Match the sample of the marked similar unit against the contour-band map to determine the positions of the multiple similar units in the detected image.
In step S3, the contour map is obtained by applying hierarchical mean-shift segmentation to the foreground region.
In step S4, the contour-band map is constructed by introducing the local geometric information of the contour map.
The contour-band map constructed in step S4 is an array of two-dimensional vectors, M = {m_p}_{H×W}, where m_p is the two-dimensional vector corresponding to each pixel p of the detected image and H × W is the size of the detected image.
When the sample of the marked similar unit is matched against the contour-band map in step S5, the contour of the sample is encoded as an array of two-dimensional vectors, T = {t_p}_{h×w}, where t_p is the two-dimensional vector corresponding to each pixel p of the sample and h × w is the size of the sample.
The degree of matching used in step S5 when matching the sample against the contour-band map is computed as

D_(u,v)(T, M) = Σ_i Σ_j ( t_(i,j) · m_(i+u, j+v) )²

where (u, v) are the coordinates of the matched point in the contour-band map; the positions of the multiple similar units in the detected image are determined from the matching degrees.
The positions of the multiple similar units in the detected image are determined by applying non-maximum suppression to the matching-degree values and selecting the maxima of the matching-degree values, which give the positions of the similar units.
The above technical solution has the following advantage: the method proposed by the present invention uses simple user interaction to supply initial information, then detects and localizes the similar units in the image. The detection is fast and accurate, and the results can further be applied to scene-object-level editing operations such as image rearrangement, edit propagation, and synchronized deformation.
Brief Description of the Drawings
FIG. 1 is a processing flowchart of an embodiment of the contour-band-map-based similar-unit detection method of the present invention;

FIG. 2 is the input image of an embodiment of the contour-band-map-based similar-unit detection method of the present invention;

FIG. 3 is the detection result of an embodiment of the contour-band-map-based similar-unit detection method of the present invention.

Detailed Description
The following embodiments illustrate the invention but are not intended to limit its scope.
The invention discloses a method for detecting similar units in an image through simple user interaction. After the similar units in the image have been detected, a series of object-level image-editing operations can be performed, including image rearrangement, synchronized deformation, and edit propagation.
As shown in FIG. 1, the processing flow of an embodiment of the contour-band-map-based similar-unit detection method comprises the following steps:
a: The user draws a simple stroke to roughly mark one of the similar units in the detected image and at the same time marks the background region of the image. As shown in FIG. 2, the simple stroke input by the user is a curve drawn with a brush. The similar units in the input image may exhibit some degree of occlusion, shape difference, color difference, and so on; a rough marking by the user is sufficient for processing.
b: Obtain the contour map and the sample. First, a segmentation method is used to obtain a sample of the similar units and the foreground region composed of the similar units; the contour map is then obtained from the foreground region. The contour map here is computed from the image and encodes the potential object-contour boundaries. One possible concrete way to obtain it is to apply hierarchical mean-shift segmentation to the foreground region; the hierarchical mean-shift segmentation referred to here is described in detail in Paris and Durand's 2007 work "A topological approach to hierarchical segmentation using mean shift", and other methods of obtaining a contour map can also be used in this step. The brightness value of each pixel in the contour map (shown on the left of step b in FIG. 1) corresponds to the probability of an object edge at that point: the darker a point, the higher the probability that it is an edge point. The sample boundary selected by the user (shown on the right of step b in FIG. 1) is important intermediate data for the further detection.
c: Construct the contour-band map. The region within a certain distance of the contour points of the contour map is the contour band. In this embodiment the contour-band map is an array of two-dimensional vectors, written M = {m_p}_{H×W}. The array size H × W matches the input detected image, and each pixel p of the input image corresponds to a two-dimensional vector m_p. Outside the contour band the vectors are zero; inside the band, the magnitude of the vector is the average of the confidence values of the contour points in a nearby region (in the step-c image of FIG. 1, for printing clarity, darker points represent greater edge confidence). The direction of the vector is the edge-gradient direction at that point; the gradient direction can be obtained with common edge-detection methods, all of which include a gradient-estimation step.
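A hedged sketch of step c: given an edge-confidence map (for example from the segmentation above), build the array of two-dimensional vectors. Using the gradient of the confidence map itself as a stand-in for the edge gradient, and the particular band radius and confidence threshold, are assumptions of this sketch, not values from the patent.

```python
import numpy as np

def contour_band_map(edge_conf, band_radius=3, edge_thresh=0.3):
    """Build the contour-band map M = {m_p}: one 2-vector per pixel.

    edge_conf: (H, W) contour confidence in [0, 1]. band_radius and
    edge_thresh are illustrative; the gradient of edge_conf stands in
    for the edge gradient of a real edge detector.
    """
    H, W = edge_conf.shape
    gy, gx = np.gradient(edge_conf.astype(float))  # axis0 then axis1
    mag = np.hypot(gx, gy)
    M = np.zeros((H, W, 2))
    contour = edge_conf >= edge_thresh
    for y in range(H):
        for x in range(W):
            y0, y1 = max(0, y - band_radius), min(H, y + band_radius + 1)
            x0, x1 = max(0, x - band_radius), min(W, x + band_radius + 1)
            patch = contour[y0:y1, x0:x1]
            if not patch.any():
                continue  # outside the contour band: zero vector
            # magnitude = mean confidence of nearby contour points
            strength = edge_conf[y0:y1, x0:x1][patch].mean()
            if mag[y, x] > 0:  # unit gradient direction at this pixel
                M[y, x] = strength * np.array([gx[y, x], gy[y, x]]) / mag[y, x]
    return M
```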
d: Match with the contour-band map to find the similar units. Each point on the contour of the sample is also assigned a two-dimensional vector, in the same way as the contour-band map, yielding the template map T = {t_p}_{h×w} (the array size h × w is usually far smaller than the image size H × W). The two-dimensional vectors at the contour points have magnitude 1 and point along the gradient direction; the two-dimensional vectors at all other points are zero. The probability that a similar unit to be detected is present at each point of the image can then be computed as follows. The contour-band map is used to compute the degree of matching between the template map T and the contour-band map M at a point (u, v), expressed through the dot products of the two-dimensional vectors at corresponding points:

D_(u,v)(T, M) = Σ_i Σ_j ( t_(i,j) · m_(i+u, j+v) )²

In the formula, u and v are the abscissa and ordinate of the point, and t_(i,j) and m_(i+u, j+v) are the two-dimensional vectors at the corresponding positions, point (i, j) of the template map T and point (i+u, j+v) of the contour-band map M. T and M are matrices of two-dimensional vectors, and t_(i,j) and m_(i+u, j+v) are single elements of T and M respectively. This gives the degree of matching at every point. Since the computation essentially evaluates a set of convolution values, it can be accelerated with the fast Fourier transform; applying non-maximum suppression to the matching values, i.e. selecting their maxima, then yields the positions of the similar units.
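The embodiment notes further on that the template can be rotated and scaled to handle multiple orientations. Rotating a vector-field template means rotating both the sampling grid and each two-dimensional vector; a nearest-neighbour sketch (entirely illustrative, not the patent's implementation) follows.

```python
import numpy as np

def rotate_template(T, theta):
    """Rotate an (h, w, 2) vector-field template by theta radians:
    both the sample grid and each 2-vector are rotated."""
    h, w, _ = T.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    c, s = np.cos(theta), np.sin(theta)
    out = np.zeros_like(T)
    for y in range(h):
        for x in range(w):
            # inverse-rotate the target pixel back into the source grid
            sx = c * (x - cx) + s * (y - cy) + cx
            sy = -s * (x - cx) + c * (y - cy) + cy
            iy, ix = int(round(sy)), int(round(sx))
            if 0 <= iy < h and 0 <= ix < w:
                vx, vy = T[iy, ix]
                # rotate the vector itself by the same angle
                out[y, x] = (c * vx - s * vy, s * vx + c * vy)
    return out
```

Matching can then be repeated for each rotated (and similarly, each rescaled) version of the template to cover several scales and rotation angles.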
As shown in FIG. 2, the input image of an embodiment: the user marks one fish in the input picture as the sample and simply marks out the background region. The marking of the fish is shown in a dark color, and the marking of the background in a bright color.
As shown in FIG. 3, the detection result of an embodiment: the brightness inside each center circle represents the confidence of the detection result, i.e. the degree of matching at that position. The template can be rotated and scaled to detect similar units at multiple scales and rotation angles.

Industrial Applicability
The invention provides a contour-band-map-based method for detecting similar units that can find and localize the similar units in an image from a simple prompt input by the user. The detection results can be applied to scene-object-level editing operations in the image, such as image rearrangement, edit propagation, and synchronized deformation, so the method has industrial applicability.

Claims

Claims:
1. A method for detecting similar units based on a contour-band map, characterized by comprising the following steps:

S1: using a stroke to mark one of a plurality of similar units in a detected image, and marking the background region of the detected image;

S2: segmenting the marked image, extracting a sample of the marked similar unit, removing the background region of the detected image, and keeping the foreground region composed of the plurality of similar units;

S3: obtaining a contour map from the foreground region to obtain the potential object-contour boundaries in the detected image;

S4: constructing a contour-band map from the regions near the contour points of the contour map, the region near the contour points of the contour map being the contour band;

S5: matching the sample of the marked similar unit against the contour-band map to determine the positions of the plurality of similar units in the detected image.
2. The method for detecting similar units based on a contour-band map according to claim 1, characterized in that the contour map in step S3 is obtained by applying hierarchical mean-shift segmentation to the foreground region.
3. The method for detecting similar units based on a contour-band map according to claim 1, characterized in that the contour-band map in step S4 is constructed by introducing the local geometric information of the contour map.
4. The method for detecting similar units based on a contour-band map according to claim 1, characterized in that the contour-band map constructed in step S4 is an array of two-dimensional vectors, M = {m_p}_{H×W}, where m_p is the two-dimensional vector corresponding to each pixel p of the detected image and H × W is the size of the detected image.
5. The method for detecting similar units based on a contour-band map according to claim 4, characterized in that when the sample of the marked similar unit is matched against the contour-band map in step S5, the contour of the sample is encoded as an array of two-dimensional vectors, T = {t_p}_{h×w}, where t_p is the two-dimensional vector corresponding to each pixel p of the sample and h × w is the size of the sample.
6. The method for detecting similar units based on a contour-band map according to claim 5, characterized in that the degree of matching used in step S5 when matching the sample of the marked similar unit against the contour-band map is computed as

D_(u,v)(T, M) = Σ_i Σ_j ( t_(i,j) · m_(i+u, j+v) )²

where (u, v) are the coordinates of the matched point in the contour-band map, and t_(i,j) and m_(i+u, j+v) are the two-dimensional vectors at the corresponding point (i, j) of the template map T and point (i+u, j+v) of the contour-band map M; the positions of the plurality of similar units in the detected image are determined from the matching degrees.
7. The method for detecting similar units based on a contour-band map according to claim 6, characterized in that the positions of the plurality of similar units in the detected image are determined by applying non-maximum suppression to the matching-degree values and selecting the maxima of the matching-degree values to obtain the positions of the similar units.
PCT/CN2011/000691 2010-04-23 2011-04-20 Method for detecting similar units based on outline belt graph WO2011131029A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2010101599318A CN101833668B (en) 2010-04-23 2010-04-23 Detection method for similar units based on profile zone image
CN201010159931.8 2010-04-23

Publications (1)

Publication Number Publication Date
WO2011131029A1 true WO2011131029A1 (en) 2011-10-27

Family

ID=42717731

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2011/000691 WO2011131029A1 (en) 2010-04-23 2011-04-20 Method for detecting similar units based on outline belt graph

Country Status (2)

Country Link
CN (1) CN101833668B (en)
WO (1) WO2011131029A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116596990A (en) * 2023-07-13 2023-08-15 杭州菲数科技有限公司 Target detection method, device, equipment and storage medium

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101833668B (en) * 2010-04-23 2011-12-28 清华大学 Detection method for similar units based on profile zone image
CN102831239B (en) * 2012-09-04 2016-01-20 清华大学 A kind of method and system building image data base
CN105678347A (en) * 2014-11-17 2016-06-15 中兴通讯股份有限公司 Pedestrian detection method and device
CN105513107B (en) * 2015-12-09 2019-02-22 深圳市未来媒体技术研究院 A kind of picture editting's transmission method
US10152213B2 (en) * 2016-09-01 2018-12-11 Adobe Systems Incorporated Techniques for selecting objects in images
CN106895794B (en) * 2017-02-08 2019-05-03 凌云光技术集团有限责任公司 A kind of method and device obtaining laser beam scan path
JP7438702B2 (en) * 2019-09-25 2024-02-27 株式会社東芝 Similar region detection device, similar region detection method and program
CN110705427A (en) * 2019-09-25 2020-01-17 中国人民解放军61646部队 Extraction processing method and device for remote sensing image target area
CN113031793B (en) * 2021-04-06 2024-05-31 维沃移动通信有限公司 Contour acquisition method and device and intelligent pen

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1758284A (en) * 2005-10-17 2006-04-12 浙江大学 Method for quickly rebuilding-up three-D jaw model from tomographic sequence
US20080024628A1 (en) * 2006-06-07 2008-01-31 Samsung Electronics Co., Ltd. Image composing apparatus and method of portable terminal
CN101833668A (en) * 2010-04-23 2010-09-15 清华大学 Detection method for similar units based on profile zone image

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002117409A (en) * 2000-10-10 2002-04-19 Canon Inc Image processing method and device thereof
JP2004361987A (en) * 2003-05-30 2004-12-24 Seiko Epson Corp Image retrieval system, image classification system, image retrieval program, image classification program, image retrieval method, and image classification method
WO2006092961A1 (en) * 2005-03-02 2006-09-08 Matsushita Electric Industrial Co., Ltd. Image generation method, object detection method, object detection device, and image generation program
JP2007233871A (en) * 2006-03-02 2007-09-13 Fuji Xerox Co Ltd Image processor, control method for computer, and program
CN101425182B (en) * 2008-11-28 2011-07-20 华中科技大学 Image object segmentation method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116596990A (en) * 2023-07-13 2023-08-15 杭州菲数科技有限公司 Target detection method, device, equipment and storage medium
CN116596990B (en) * 2023-07-13 2023-09-29 杭州菲数科技有限公司 Target detection method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN101833668A (en) 2010-09-15
CN101833668B (en) 2011-12-28

Similar Documents

Publication Publication Date Title
WO2011131029A1 (en) Method for detecting similar units based on outline belt graph
US10885659B2 (en) Object pose estimating method and apparatus
CN108257139B (en) RGB-D three-dimensional object detection method based on deep learning
US9715761B2 (en) Real-time 3D computer vision processing engine for object recognition, reconstruction, and analysis
Bujnak et al. A general solution to the P4P problem for camera with unknown focal length
US10225473B2 (en) Threshold determination in a RANSAC algorithm
CN109479098A (en) Multiple view scene cut and propagation
CN108010123B (en) Three-dimensional point cloud obtaining method capable of retaining topology information
US20130251260A1 (en) Method and system for segmenting an image
US20230085468A1 (en) Advanced Automatic Rig Creation Processes
CN110021000B (en) Hairline repairing method and device based on layer deformation
CN109472828A (en) A kind of localization method, device, electronic equipment and computer readable storage medium
CN108322724A (en) Image solid matching method and binocular vision equipment
CN111882546A (en) Weak supervised learning-based three-branch convolutional network fabric defect detection method
JP2015171143A (en) Camera calibration method and apparatus using color-coded structure, and computer readable storage medium
CN110443228B (en) Pedestrian matching method and device, electronic equipment and storage medium
CN113436251B (en) Pose estimation system and method based on improved YOLO6D algorithm
CN105574844B (en) Rdaiation response Function Estimation method and apparatus
CN106408654B (en) A kind of creation method and system of three-dimensional map
CN110111341B (en) Image foreground obtaining method, device and equipment
Shen et al. Re-texturing by intrinsic video
US20220198707A1 (en) Method and apparatus with object pose estimation
Bajramovic et al. Global Uncertainty-based Selection of Relative Poses for Multi Camera Calibration.
CN111435448B (en) Image saliency object detection method, device, equipment and medium
CN110400349B (en) Robot navigation tracking recovery method in small scene based on random forest

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11771482

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11771482

Country of ref document: EP

Kind code of ref document: A1