CN106709870B - Close-range image straight-line segment matching method - Google Patents


Info

Publication number
CN106709870B
CN106709870B · CN201710019244.8A · CN201710019244A
Authority
CN
China
Prior art keywords
line segment
straight
straight line
target
candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710019244.8A
Other languages
Chinese (zh)
Other versions
CN106709870A (en)
Inventor
王竞雪
张雪
熊俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Liaoning Technical University
Original Assignee
Liaoning Technical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Liaoning Technical University
Priority to CN201710019244.8A
Publication of CN106709870A
Application granted
Publication of CN106709870B
Legal status: Active
Anticipated expiration

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/14 Transformations for image registration, e.g. adjusting or mapping for alignment of images
    • G06T3/147 Transformations for image registration using affine transformations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

A close-range image straight-line segment matching method belongs to the technical field of close-range photogrammetry and computer vision. The method comprises the following steps: extracting straight-line segments; determining the candidate straight-line segment set of a target straight-line segment; establishing the parallel support domain of the target straight-line segment, and establishing the parallel support domain of each candidate straight-line segment on the basis of it; calculating the Euclidean distance between the target and candidate straight-line segment feature descriptors; and determining the homonymous straight-line segment of the target straight-line segment from the Euclidean distance, obtaining the set of homonymous straight-line pairs on the reference image and the search image. The method determines the parallel support domain of the candidate straight-line segment using the epipolar-line constraint and constructs the descriptor on that basis, which enhances the effectiveness of the descriptor. Regional affine transformation is applied to the parallel support domains of the target and candidate straight-line segments, achieving reliable matching of straight-line segments across images of different scales and overcoming the limitations of existing descriptors; the method is suitable for straight-line segment matching on close-range images with different types of change such as translation, rotation, blurring and illumination.

Description

Close-range image straight-line segment matching method
Technical Field
The invention belongs to the technical field of close-range photogrammetry and computer vision, and particularly relates to a close-range image straight-line segment matching method.
Background
Image matching searches, with a given matching algorithm, for homonymous features on two or more images covering the same area or the same ground object, and establishes the correspondence of these features across the images. In man-made scenes, straight-line features carry rich semantic and structural information, so using the line information on images to reconstruct three-dimensional models of man-made objects is of significant research value in close-range photogrammetry and computer vision, and straight-line matching is the key core technology.
At present, scholars at home and abroad have proposed many straight-line matching methods. Existing research focuses mainly on two aspects. The first is search-range constraint; commonly used constraints include the epipolar-line constraint, trifocal-tensor constraint, homography constraint, homonymous-point constraint and triangulation constraint, all of which require prior knowledge or strictly known conditions. The second is the construction of straight-line segment descriptors. Traditional descriptors mostly combine the geometric attributes of the segment, such as direction, length, distance and repeatability, with the existing constraints; such simple geometric attributes are limited by the quality of the line extraction result, and matching accuracy is low when the extracted segments are broken or occluded. In addition, the grey-level information of the line neighbourhood window is often used to describe and match line features, for example grey-level correlation of the line neighbourhood window or of an adaptive moving window; direct window correlation suits image sequences or short-baseline images but can hardly cope with large changes of viewing angle and scale. To improve reliability, current line descriptors therefore mostly use information such as the pixel grey-level histogram, gradient, mean and variance of the line neighbourhood. For the unconstrained case, Wang Zhiheng et al. proposed the mean-standard deviation line descriptor (MSLD), which builds a descriptor matrix by accumulating the gradient vectors of four directions in each sub-region of the pixel support domain, giving invariance to translation, rotation and illumination. During matching, however, the parallel support domains used to build the descriptors of the target straight-line segment on the reference image and of the candidate straight-line segment on the search image are each established directly around the respective segment; for images with scale or viewing-angle changes the information contained in the two support domains is only weakly consistent, so the descriptor can hardly yield reliable matching results on such images.
Disclosure of Invention
In view of the above deficiencies of the prior art, the present invention provides a close-range image straight-line segment matching method.
The technical scheme of the invention is as follows:
a close-range image straight-line segment matching method comprises the following steps:
step 1: inputting two close-range images, a reference image and a search image, and the coordinates of their homonymous points;
step 2: respectively extracting straight line segments of the reference image and the search image;
step 3: matching the straight line segments extracted from the reference image and the search image to obtain a set of homonymous straight line pairs:
step 3.1: determining a candidate straight-line segment set of the target straight-line segment on the search image by using the same-name points in the neighborhood of the target straight-line segment on the reference image:
step 3.1.1: taking any straight-line segment l_i extracted from the reference image as the target straight-line segment, the two end points of l_i being a and b, and constructing through a and b the two straight lines l_a⊥ and l_b⊥ perpendicular to l_i;
step 3.1.2: extracting from the input homonymous points the two points u and v that lie between the straight lines l_a⊥ and l_b⊥ on the reference image, u and v lying on opposite sides of l_i and being, on their respective sides, the points closest to l_i;
step 3.1.3: connecting the homonymous points u' and v' of u and v on the search image to obtain the straight-line segment l_u'v'; the straight-line segments on the search image that intersect l_u'v', with the intersection point lying on l_u'v', form the candidate straight-line segment set of the target straight-line segment l_i.
Step 3.2: using a rectangular region established with the target straight-line segment l_i as its center as the parallel support domain of l_i, the end points of the target straight-line segment and the four corners of the support domain being marked clockwise as points a, 1, 2, b, 3 and 4;
step 3.3: according to the parallel support domain of the target straight-line segment, constructing a region affine transformation-based mean-standard deviation feature descriptor for the target straight-line segment:
step 3.3.1: determining four-corner point coordinates of the parallel support domain of the target straight-line segment after affine transformation according to the length and the width of the parallel support domain of the target straight-line segment;
step 3.3.2: carrying out affine transformation on the parallel support domain of the target straight-line segment according to a 6-parameter affine transformation formula to obtain the parallel support domain of the target straight-line segment after affine transformation;
step 3.3.3: constructing a mean-standard deviation feature descriptor of the target straight-line segment according to the parallel support domain of the target straight-line segment after affine transformation:
step 3.3.3.1: respectively taking each pixel within a certain range on the horizontal central axis of the affine-transformed parallel support domain of the straight-line segment as the center and establishing a rectangular region along the horizontal and vertical directions as that pixel's support domain, the pixel support domains so defined being G_1, G_2, ..., G_i, ..., G_N, where N is the number of pixels within the range on the central axis and G_i is the support domain of the i-th pixel in the range, i = 1, 2, ..., N; the straight-line segment l is the target straight-line segment or a candidate straight-line segment;
step 3.3.3.2: dividing each pixel support domain along the vertical direction into M equally sized, mutually non-overlapping sub-regions, G_i = G_i1 ∪ G_i2 ∪ ... ∪ G_iM;
step 3.3.3.3: respectively accumulating, for each sub-region of each pixel support domain, the gradient vectors in the four directions 0°, 90°, 180° and 270° to obtain the four-dimensional feature vector of each sub-region, and forming the gradient description matrix of the straight-line segment:
GDM(l_i) = [V_1, V_2, ..., V_N] ∈ R^(4M×N), with V_i = (v_i1^T, v_i2^T, ..., v_iM^T)^T,
where v_ij is the four-dimensional feature vector of the j-th sub-region of pixel support domain G_i, j = 1, 2, ..., M;
step 3.3.3.4: respectively calculating the mean and standard deviation of the row vectors of the gradient description matrix to obtain the mean column vector M(l_i) = Mean{GDM(l_i)} ∈ R^4M and the standard deviation column vector S(l_i) = Std{GDM(l_i)} ∈ R^4M;
step 3.3.3.5: respectively normalizing the mean column vector and the standard deviation column vector to form the mean-standard deviation feature descriptor of the target straight-line segment:
RAT_MSLD(l_i) = ( M(l_i)^T / ||M(l_i)|| , S(l_i)^T / ||S(l_i)|| )^T;
step 3.4: determining the overlapping straight-line region between the candidate straight-line segment and the target straight-line segment from the epipolar lines, on the search image, of the two end points of the target straight-line segment on the reference image:
step 3.4.1: respectively calculating the epipolar lines H_a and H_b, on the search image, of the two end points a and b of the target straight-line segment l_i;
step 3.4.2: respectively calculating the intersection points of the epipolar lines H_a and H_b with the candidate straight-line segment, marked a' and b'; the line connecting the two intersection points is the overlapping straight-line region of the candidate straight-line segment corresponding to the target straight-line segment.
Step 3.5: taking an overlapped straight line area on the candidate straight line segment as a center, and establishing a parallel support area of the candidate straight line segment according to the parallel support area of the target straight line segment:
step 3.5.1: respectively calculating the epipolar lines, on the search image, of the four corner points 1, 2, 3 and 4 of the parallel support domain of the target straight-line segment, marked H_1, H_2, H_3 and H_4;
step 3.5.2: on the search image, constructing the straight lines l_a'⊥ and l_b'⊥ that respectively pass through the points a' and b' and are perpendicular to the candidate straight-line segment;
step 3.5.3: respectively calculating the intersection points of l_a'⊥ with the epipolar lines H_1 and H_4, marked 1' and 4', and the intersection points of l_b'⊥ with the epipolar lines H_2 and H_3, marked 2' and 3'; the rectangular region constructed from the points 1', 2', 3' and 4' is the parallel support domain of the candidate straight-line segment.
Step 3.6: according to a straight line parallel support domain of the candidate straight line segment, constructing a mean-standard deviation feature descriptor based on region affine transformation for the candidate straight line segment:
step 3.6.1: determining coordinates of four corner points of the candidate straight-line parallel support domain after affine transformation according to the length and the width of the target straight-line parallel support domain;
step 3.6.2: carrying out affine transformation on the parallel support domain of the candidate straight line segment according to a 6-parameter affine transformation formula to obtain the parallel support domain of the candidate straight line segment after affine transformation;
step 3.6.3: constructing a mean-standard deviation feature descriptor of the candidate straight-line segment according to the parallel support domain of the candidate straight-line segment after affine transformation:
step 3.6.3.1: respectively taking each pixel within a certain range on the horizontal central axis of the affine-transformed parallel support domain of the straight-line segment as the center and establishing a rectangular region along the horizontal and vertical directions as that pixel's support domain, the pixel support domains so defined being G_1, G_2, ..., G_i, ..., G_N, where N is the number of pixels within the range on the central axis and G_i is the support domain of the i-th pixel in the range, i = 1, 2, ..., N; the straight-line segment l is the target straight-line segment or a candidate straight-line segment;
step 3.6.3.2: dividing each pixel support domain along the vertical direction into M equally sized, mutually non-overlapping sub-regions, G_i = G_i1 ∪ G_i2 ∪ ... ∪ G_iM;
step 3.6.3.3: respectively accumulating, for each sub-region of each pixel support domain, the gradient vectors in the four directions 0°, 90°, 180° and 270° to obtain the four-dimensional feature vector of each sub-region, and forming the gradient description matrix of the straight-line segment:
GDM(r_i) = [V_1, V_2, ..., V_N] ∈ R^(4M×N), with V_i = (v_i1^T, v_i2^T, ..., v_iM^T)^T,
where v_ij is the four-dimensional feature vector of the j-th sub-region of pixel support domain G_i, j = 1, 2, ..., M;
step 3.6.3.4: respectively calculating the mean and standard deviation of the row vectors of the gradient description matrix to obtain the mean column vector M(r_i) = Mean{GDM(r_i)} ∈ R^4M and the standard deviation column vector S(r_i) = Std{GDM(r_i)} ∈ R^4M;
step 3.6.3.5: respectively normalizing the mean column vector and the standard deviation column vector to form the mean-standard deviation feature descriptor of the candidate straight-line segment:
RAT_MSLD(r_i) = ( M(r_i)^T / ||M(r_i)|| , S(r_i)^T / ||S(r_i)|| )^T;
step 3.7: calculating the Euclidean distance between the target straight-line segment descriptor and the candidate straight-line segment descriptor;
step 3.8: performing steps 3.4 to 3.7 for each candidate straight-line segment in the candidate straight-line segment set to respectively obtain the Euclidean distance between the target straight-line segment descriptor and each candidate straight-line segment descriptor in the set;
step 3.9: determining the same straight line segment of the target straight line segment according to the Euclidean distance, namely the matching straight line segment of the target straight line segment:
step 3.9.1: determining the number S of candidate straight-line segments, performing steps 3.9.2 to 3.9.4 when S ≥ 2, and performing step 3.9.5 when S = 1;
step 3.9.2: recording the minimum Euclidean distance as D_T1 and the next smallest Euclidean distance as D_T2;
step 3.9.3: calculating the ratio of the minimum Euclidean distance D_T1 to the next smallest Euclidean distance D_T2, i.e. τ = D_T1/D_T2;
step 3.9.4: when τ is less than its threshold and D_T1 is less than its threshold, taking the candidate straight-line segment corresponding to the descriptor with the minimum Euclidean distance to the target straight-line segment descriptor as the homonymous straight-line segment of the target straight-line segment, and establishing the matching relation between the candidate and the target straight-line segment to obtain a pair of homonymous straight lines;
step 3.9.5: when the Euclidean distance between the target straight-line segment and the candidate straight-line segment is less than the threshold T_D, determining the candidate straight-line segment as the homonymous straight-line segment of the target straight-line segment, and establishing the matching relation between them to obtain a pair of homonymous straight lines.
Step 3.10: sequentially taking each straight-line segment extracted from the reference image as the target straight-line segment and performing the matching operations of steps 3.1 to 3.9 to obtain the set of homonymous straight-line pairs on the reference image and the search image.
Beneficial effects: compared with the prior art, the close-range image straight-line segment matching method of the invention has the following advantages:
(1) in constructing the candidate straight-line segment descriptor, the overlapping straight-line region on the candidate segment corresponding to the target straight-line segment, together with its parallel support domain, is determined from the target straight-line segment and its parallel support domain using the epipolar-line constraint, and the descriptor is built on this basis, which enhances the effectiveness of the descriptor;
(2) building on the existing MSLD descriptor, the method applies regional affine transformation to the parallel support domains of the target and candidate straight-line segments, unifying their positions and sizes; this achieves reliable matching of straight-line segments on images of different scales and overcomes the limitations of the existing descriptor. The method is also suitable for straight-line segment matching on close-range images with different types of change such as translation, rotation, blurring and illumination, providing reliable line matching results for three-dimensional reconstruction from digital close-range images.
Drawings
FIG. 1 is a flowchart illustrating a method for matching a line segment of a close-up image according to an embodiment of the present invention;
FIG. 2 is a flow chart of a process for matching straight line segments in accordance with an embodiment of the present invention;
FIG. 3 is a diagram illustrating the use of homonym points to determine candidate straight-line segments according to an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating a process of constructing a linear parallel support domain of a target linear segment and a candidate linear segment and a process of performing regional affine transformation according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating a process of constructing a mean-standard deviation feature descriptor based on regional affine transformation in accordance with an embodiment of the present invention;
FIG. 6 is a schematic diagram of the experimental image pairs, in which (a) is a pair of scale-varying images, (b) is a pair of illumination-varying images, and (c) is a pair of rotation-varying images;
fig. 7 is a diagram of a result of performing line segment matching on a scale-changed image by using the method of the present invention according to an embodiment of the present invention, where (a) is a homonymous line graph obtained by matching on a reference image, and (b) is a homonymous line graph obtained by matching on a search image;
fig. 8 is a graph of the result of performing line segment matching on an illumination variation image by applying the method of the present invention in an embodiment of the present invention, wherein (a) is a homonymous line graph obtained by matching on a reference image, and (b) is a homonymous line graph obtained by matching on a search image;
fig. 9 is a graph of the result of performing line segment matching on a rotation variation image by applying the method of the present invention in an embodiment of the present invention, wherein (a) is a homonymous line graph obtained by matching on a reference image, and (b) is a homonymous line graph obtained by matching on a search image.
Detailed Description
An embodiment of the present invention will be described in detail below with reference to the accompanying drawings.
As shown in fig. 1, a close-up image straight-line segment matching method of the present invention includes the following steps:
step 1: inputting two close-range images, a reference image and a search image, and the coordinates of their homonymous points;
the same name point set isWherein,
Figure BDA0001206999650000062
andcoordinates of the same-name points of the reference image and the search image are respectively, K is an index number of the same-name points, K is 1,2, …, and K is the number of the same-name points;
step 2: respectively extracting straight line segments of the reference image and the search image;
respectively extracting straight-line segments of the reference image and the search image with a straight-line extraction algorithm, and removing straight-line segments whose length is less than a threshold T_L; in this embodiment T_L = 10. The resulting straight-line sets are L = {l_1, ..., l_i, ..., l_I} and R = {r_1, ..., r_j, ..., r_J}, where l_i and r_j are straight-line segments on the reference image and the search image respectively, i = 1, 2, ..., I, j = 1, 2, ..., J, and I and J are the numbers of straight-line segments on the reference image and on the search image;
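To make the length filtering of step 2 concrete, the following is a minimal sketch, assuming the extracted segments are rows (x1, y1, x2, y2) of a NumPy array (for example from any line segment detector); the function name and array layout are illustrative, not part of the patent.

import numpy as np

def filter_short_segments(segments, t_l=10.0):
    # Keep only segments whose Euclidean length is >= t_l (T_L = 10 here).
    p1, p2 = segments[:, :2], segments[:, 2:4]
    lengths = np.linalg.norm(p2 - p1, axis=1)
    return segments[lengths >= t_l]

Applied to both images, this yields the sets L and R used in step 3.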
step 3: matching the straight line segments extracted from the reference image and the search image to obtain a set of homonymous straight line pairs; as shown in fig. 2, the specific steps are as follows:
step 3.1: determining a candidate straight-line segment set of the target straight-line segment on the search image by using the same-name points in the neighborhood of the target straight-line segment on the reference image;
step 3.1.1: as shown in FIG. 3(a), taking any straight-line segment l_i extracted from the reference image as the target straight-line segment, the two end points of l_i being a and b, and constructing through a and b the two straight lines l_a⊥ and l_b⊥ perpendicular to l_i;
step 3.1.2: extracting from the input homonymous points the two points u and v that lie between the straight lines l_a⊥ and l_b⊥ on the reference image, u and v lying on opposite sides of l_i and being, on their respective sides, the points closest to l_i;
step 3.1.3: connecting the homonymous points u' and v' of u and v on the search image to obtain the straight-line segment l_u'v'; the straight-line segments on the search image that intersect l_u'v', with the intersection point lying on l_u'v', form the candidate straight-line segment set of the target straight-line segment l_i; as shown in FIG. 3(b), the straight-line segments satisfying the condition are r_1, r_2 and r_3.
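A sketch of step 3.1 under stated assumptions: points and end points are 2D NumPy arrays and all helper names are illustrative. The side test uses the sign of a cross product, and the candidate test an orientation-based segment intersection.

import numpy as np

def between_perpendiculars(p, a, b):
    # True if p lies between the perpendicular lines through the end points a
    # and b of l_i, i.e. its projection falls onto the segment ab (step 3.1.2).
    ab = b - a
    t = np.dot(p - a, ab) / np.dot(ab, ab)
    return 0.0 <= t <= 1.0

def side_and_distance(p, a, b):
    # Signed side of p with respect to the line ab (+1 or -1) and its distance.
    ab, ap = b - a, p - a
    cross = ab[0] * ap[1] - ab[1] * ap[0]
    return np.sign(cross), abs(cross) / np.linalg.norm(ab)

def pick_u_v(ref_points, a, b):
    # Nearest homonymous point to l_i on each of its two sides, restricted to
    # the strip between the two perpendiculars; returns (u, v) or None.
    best = {1.0: None, -1.0: None}
    for p in ref_points:
        if not between_perpendiculars(p, a, b):
            continue
        s, d = side_and_distance(p, a, b)
        if s != 0 and (best[s] is None or d < best[s][0]):
            best[s] = (d, p)
    if best[1.0] is None or best[-1.0] is None:
        return None
    return best[1.0][1], best[-1.0][1]

def segments_intersect(p1, p2, q1, q2):
    # Proper intersection of segments p1p2 and q1q2 via orientation signs; the
    # candidates are the search-image segments crossing the segment u'v'.
    def orient(o, e1, e2):
        return np.sign((e1[0] - o[0]) * (e2[1] - o[1]) - (e1[1] - o[1]) * (e2[0] - o[0]))
    return (orient(p1, p2, q1) != orient(p1, p2, q2)
            and orient(q1, q2, p1) != orient(q1, q2, p2))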
Step 3.2: establishing a parallel support domain of a target straight-line segment;
A rectangular region established with the target straight-line segment l_i as its center is taken as the parallel support domain of l_i. As shown in FIG. 4, with the target straight-line segment of length len as the center, the rectangular window is formed by extending r pixels perpendicular to the segment on each side of it, giving a size of (2r+1) × len; in this embodiment r = 22. The end points of the target straight-line segment and the four corners of the support domain are marked as points a, 1, 2, b, 3 and 4;
Step 3.3: constructing a mean-standard deviation feature descriptor based on regional affine transformation for the target straight-line segment according to the parallel support domain of the target straight-line segment;
step 3.3.1: determining the coordinates of the four corner points of the affine-transformed parallel support domain from the length and width of the parallel support domain of the target straight-line segment; as shown in FIG. 4, the coordinate system after affine transformation coincides with the pixel coordinate system of the original image, the upper-left corner of the straight-line-segment parallel support domain coincides with the upper-left corner of the original image, the four sides of the parallel support domain are parallel to the image coordinate axes, and its length and width equal those of the target parallel support domain, len and W = 2r+1 respectively; in this embodiment W = 45. The pixel coordinates of the four corner points of the parallel support domain after affine transformation are, in order, 1 = (1, 1), 2 = (1, len), 3 = (W, 1) and 4 = (W, len);
step 3.3.2: carrying out affine transformation on the parallel support domain of the target straight-line segment according to the 6-parameter affine transformation formula to obtain the affine-transformed parallel support domain of the target straight-line segment. The 6-parameter affine transformation formula is:
x = a_0 + a_1·x' + a_2·y'
y = b_0 + b_1·x' + b_2·y'
where x' and y' are pixel coordinates in the affine-transformed grey-level region, x and y are pixel coordinates on the original image, and a_0, a_1, a_2, b_0, b_1, b_2 are the transformation parameters. The affine parameters are calculated from the known original-image pixel coordinates of the four corner points of the target parallel support domain and their pixel coordinates after affine transformation; from these parameters, the coordinates of the original support-domain pixel corresponding to each point of the transformed rectangular region are computed, the grey value at each transformed point is interpolated from the neighbouring pixels, and the affine-transformed parallel support domain of the target straight-line segment is finally obtained for constructing the mean-standard deviation feature descriptor;
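The four corner correspondences over-determine the six parameters, so they can be solved linearly; below is a minimal sketch, assuming the image is a 2D grey-level array indexed as image[x, y] with x the first coordinate of the corner convention above, and assuming all resampled points fall inside the image. Names are illustrative.

import numpy as np

def affine_params(src_pts, dst_pts):
    # Solve x = a0 + a1*x' + a2*y', y = b0 + b1*x' + b2*y' by least squares
    # from the four corner correspondences (three would suffice).
    src = np.asarray(src_pts, float)   # (x, y) on the original image
    dst = np.asarray(dst_pts, float)   # (x', y') after the transformation
    A = np.column_stack([np.ones(len(dst)), dst[:, 0], dst[:, 1]])
    a = np.linalg.lstsq(A, src[:, 0], rcond=None)[0]
    b = np.linalg.lstsq(A, src[:, 1], rcond=None)[0]
    return a, b                        # (a0, a1, a2) and (b0, b1, b2)

def warp_support_domain(image, a, b, length, width):
    # Resample the affine-transformed support domain (width x length) by
    # bilinear interpolation of the original grey values.
    out = np.zeros((width, length))
    for xp in range(1, width + 1):          # corner convention starts at (1, 1)
        for yp in range(1, length + 1):
            x = a[0] + a[1] * xp + a[2] * yp
            y = b[0] + b[1] * xp + b[2] * yp
            x0, y0 = int(x), int(y)
            dx, dy = x - x0, y - y0
            out[xp - 1, yp - 1] = ((1 - dx) * (1 - dy) * image[x0, y0]
                                   + dx * (1 - dy) * image[x0 + 1, y0]
                                   + (1 - dx) * dy * image[x0, y0 + 1]
                                   + dx * dy * image[x0 + 1, y0 + 1])
    return out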
step 3.3.3: constructing a mean-standard deviation feature descriptor of the target straight-line segment according to the parallel support domain of the target straight-line segment after affine transformation:
step 3.3.3.1: as shown in fig. 4, considering the (D+1)-th to (len-D)-th pixels on row r+1, the horizontal central axis of the affine-transformed parallel support domain, and taking each such pixel in turn as the center, establishing a rectangular region of size W × H along the vertical and horizontal directions as its pixel support domain; as shown in fig. 5, the pixel support domains so defined are denoted G_1, G_2, ..., G_i, ..., G_N, where D is half the width of the pixel support domain window rounded towards the smaller value, i.e. D = INT(H/2), N = len - 2D is the number of pixels from the (D+1)-th to the (len-D)-th pixel on row r+1, and G_i is the support domain of the i-th pixel counted from the (D+1)-th pixel; in this embodiment W is an integer multiple of H, H = 5 and D = 2;
step 3.3.3.2: as shown in fig. 4, dividing each pixel support domain along the vertical direction into M equally sized, mutually non-overlapping sub-regions, G_i = G_i1 ∪ G_i2 ∪ ... ∪ G_iM; in this embodiment the sub-region size is 5 × 5 and M = W/H = 9;
step 3.3.3.3: as shown in fig. 4, respectively accumulating, for each sub-region of each pixel support domain, the gradient vectors in the four directions 0°, 90°, 180° and 270° to obtain the four-dimensional feature vector of each sub-region, and forming the gradient description matrix of the straight-line segment:
GDM(l_i) = [V_1, V_2, ..., V_N] ∈ R^(4M×N), with V_i = (v_i1^T, v_i2^T, ..., v_iM^T)^T,
a matrix of size 4M × N, where v_ij is the four-dimensional feature vector of the j-th sub-region of pixel support domain G_i, j = 1, 2, ..., M;
step 3.3.3.4: respectively calculating the mean and standard deviation of the row vectors of the gradient description matrix to obtain the mean column vector M(l_i) = Mean{GDM(l_i)} ∈ R^4M and the standard deviation column vector S(l_i) = Std{GDM(l_i)} ∈ R^4M, column vectors of size 4M × 1;
step 3.3.3.5: respectively normalizing the mean column vector and the standard deviation column vector to form the mean-standard deviation feature descriptor of the straight-line segment, a column vector of size 8M × 1:
RAT_MSLD(l_i) = ( M(l_i)^T / ||M(l_i)|| , S(l_i)^T / ||S(l_i)|| )^T = (a_1, a_2, ..., a_8M)^T;
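The whole of step 3.3.3 can be condensed as follows; a simplified sketch, assuming the rectified support domain is a (W, len) grey-level array and taking the four "directions" as the positive and negative parts of the horizontal and vertical gradients, which is natural once the region has been rectified so that the segment is horizontal. Function and variable names are illustrative.

import numpy as np

def rat_msld(region, H=5, D=2):
    # Mean-standard deviation descriptor of an affine-transformed parallel
    # support domain `region` of shape (W, len), W an integer multiple of H.
    W, length = region.shape
    M = W // H
    g_v, g_h = np.gradient(region.astype(float))     # vertical / horizontal gradients
    cols = []
    for c in range(D, length - D):                   # pixels D+1 .. len-D
        win_h = g_h[:, c - D:c + D + 1]              # W x H pixel support domain
        win_v = g_v[:, c - D:c + D + 1]
        v = []
        for j in range(M):                           # M non-overlapping sub-regions
            sh = win_h[j * H:(j + 1) * H]
            sv = win_v[j * H:(j + 1) * H]
            v += [sh[sh > 0].sum(), sv[sv > 0].sum(),    # 0 and 90 degree bins
                  -sh[sh < 0].sum(), -sv[sv < 0].sum()]  # 180 and 270 degree bins
        cols.append(v)
    gdm = np.array(cols).T                           # 4M x N gradient description matrix
    m, s = gdm.mean(axis=1), gdm.std(axis=1)         # mean / std of each row
    return np.concatenate([m / np.linalg.norm(m), s / np.linalg.norm(s)])  # 8M vector

The same function serves the candidate straight-line segment in step 3.6.3, since its support domain is rectified to the same size.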
Step 3.4: determining an overlapped straight line area corresponding to the candidate straight line segment and the target straight line segment according to the epipolar lines of two end points of the target straight line segment on the reference image on the search image;
step 3.4.1: as shown in FIG. 4, calculating the epipolar lines H_a and H_b, on the search image, of the two end points a and b of the target straight-line segment l_i;
step 3.4.2: respectively calculating the intersection points of the epipolar lines H_a and H_b with the candidate straight-line segment, marked a' and b'; the line connecting the two intersection points is the overlapping straight-line region of the candidate straight-line segment corresponding to the target straight-line segment.
Step 3.5: taking an overlapped straight line region on the candidate straight line segment as a center, and establishing a straight line parallel support domain of the candidate straight line segment according to a parallel support domain of the target straight line segment;
step 3.5.1: as shown in fig. 4, respectively calculating the epipolar lines, on the search image, of the four corner points 1, 2, 3 and 4 of the parallel support domain of the target straight-line segment, marked H_1, H_2, H_3 and H_4;
step 3.5.2: on the search image, constructing the straight lines l_a'⊥ and l_b'⊥ that respectively pass through the points a' and b' and are perpendicular to the candidate straight-line segment;
step 3.5.3: as shown in fig. 4, respectively calculating the intersection points of l_a'⊥ with the epipolar lines H_1 and H_4, marked 1' and 4', and the intersection points of l_b'⊥ with the epipolar lines H_2 and H_3, marked 2' and 3'; the rectangular region constructed from the points 1', 2', 3' and 4' is the parallel support domain of the candidate straight-line segment.
Step 3.6: constructing a mean-standard deviation feature descriptor based on regional affine transformation for the candidate straight-line segment according to the straight-line parallel support domain of the candidate straight-line segment;
step 3.6.1: determining the coordinates of the four corner points of the affine-transformed candidate parallel support domain from the length and width of the target parallel support domain, as in step 3.3.1; the pixel coordinates of the four corner points after affine transformation are, in order, 1' = (1, 1), 2' = (1, len), 3' = (W, 1) and 4' = (W, len);
step 3.6.2: carrying out affine transformation on the parallel support domain of the candidate straight-line segment according to the 6-parameter affine transformation formula, as in step 3.3.2, to obtain the affine-transformed parallel support domain of the candidate straight-line segment;
step 3.6.3: constructing a mean-standard deviation feature descriptor of the candidate straight-line segment according to the parallel support domain of the candidate straight-line segment after affine transformation:
step 3.6.3.1: as shown in fig. 4, considering the (D+1)-th to (len-D)-th pixels on row r+1, the horizontal central axis of the affine-transformed parallel support domain, and taking each such pixel in turn as the center, establishing a rectangular region of size W × H along the vertical and horizontal directions as its pixel support domain; as shown in fig. 5, the pixel support domains so defined are denoted G_1, G_2, ..., G_i, ..., G_N, where D is half the width of the pixel support domain window rounded towards the smaller value, i.e. D = INT(H/2), N = len - 2D is the number of pixels from the (D+1)-th to the (len-D)-th pixel on row r+1, and G_i is the support domain of the i-th pixel counted from the (D+1)-th pixel; in this embodiment W is an integer multiple of H, H = 5 and D = 2;
step 3.6.3.2: as shown in fig. 4, dividing each pixel support domain along the vertical direction into M equally sized, mutually non-overlapping sub-regions, G_i = G_i1 ∪ G_i2 ∪ ... ∪ G_iM; in this embodiment the sub-region size is 5 × 5 and M = W/H = 9;
step 3.6.3.3: as shown in fig. 4, respectively accumulating, for each sub-region of each pixel support domain, the gradient vectors in the four directions 0°, 90°, 180° and 270° to obtain the four-dimensional feature vector of each sub-region, and forming the gradient description matrix of the straight-line segment:
GDM(r_i) = [V_1, V_2, ..., V_N] ∈ R^(4M×N), with V_i = (v_i1^T, v_i2^T, ..., v_iM^T)^T,
a matrix of size 4M × N, where v_ij is the four-dimensional feature vector of the j-th sub-region of pixel support domain G_i, j = 1, 2, ..., M;
step 3.6.3.4: respectively calculating the mean and standard deviation of the row vectors of the gradient description matrix to obtain the mean column vector M(r_i) = Mean{GDM(r_i)} ∈ R^4M and the standard deviation column vector S(r_i) = Std{GDM(r_i)} ∈ R^4M, column vectors of size 4M × 1;
step 3.6.3.5: respectively normalizing the mean column vector and the standard deviation column vector to form the mean-standard deviation feature descriptor of the straight-line segment, a column vector of size 8M × 1:
RAT_MSLD(r_i) = ( M(r_i)^T / ||M(r_i)|| , S(r_i)^T / ||S(r_i)|| )^T = (b_1, b_2, ..., b_8M)^T;
Step 3.7: according to the formula
Figure BDA0001206999650000104
Calculating the Euclidean distance between the target straight-line segment descriptor and the candidate straight-line segment descriptor;
step 3.8: performing steps 3.4 to 3.7 for each candidate straight-line segment in the candidate straight-line segment set to respectively obtain the Euclidean distances {D_1, D_2, ..., D_S} between the target straight-line segment descriptor and each candidate straight-line segment descriptor, where S is the number of candidate straight-line segments;
step 3.9: determining the same straight line segment of the target straight line segment according to the Euclidean distance, namely determining the matching straight line segment of the target straight line segment:
step 3.9.1: determining the number S of candidate straight-line segments, performing steps 3.9.2 to 3.9.4 when S ≥ 2, and performing step 3.9.5 when S = 1;
step 3.9.2: recording the minimum Euclidean distance as D_T1 and the next smallest Euclidean distance as D_T2;
step 3.9.3: calculating the ratio of the minimum Euclidean distance D_T1 to the next smallest Euclidean distance D_T2, i.e. τ = D_T1/D_T2;
step 3.9.4: when τ < T_τ and D_T1 < T_D, determining the candidate straight-line segment corresponding to the descriptor with the minimum Euclidean distance to the target straight-line segment descriptor as the homonymous straight-line segment of the target straight-line segment, and establishing the matching relation between the candidate and the target straight-line segment to obtain a pair of homonymous straight lines; in this embodiment T_τ = 0.8 and T_D = 1.2;
step 3.9.5: when the Euclidean distance D_1 between the target straight-line segment and the single candidate straight-line segment is less than the threshold T_D', determining the candidate straight-line segment as the homonymous straight line of the target straight-line segment and establishing the matching relation between them to obtain a pair of homonymous straight lines; in this embodiment T_D' = 0.8.
Step 3.10: sequentially taking each straight-line segment extracted from the reference image as the target straight-line segment and repeating the matching operations of steps 3.1 to 3.9 to obtain the set of homonymous straight-line pairs on the reference image and the search image.
The performance of the method provided by the invention is verified through experiments, the method is compared with the conventional classical mean-standard deviation descriptor method, typical public image data is selected as experimental data, and the reliability of the matching results of the two algorithms is compared through qualitative and quantitative analysis means.
FIG. 6 shows the experimental images, wherein (a) is a pair of scale-varying images: the scale between the two images is about 1:1.5, both are 600 × 800 pixels, and the numbers of straight-line segments extracted from them are 667 and 861 respectively; (b) is a pair of illumination-varying images: both are 900 × 600 pixels, and the numbers of straight-line segments extracted are 729 and 446 respectively; (c) is a pair of rotation-varying images: the rotation angle between the two images is about 30 degrees, both are 640 × 480 pixels, and the numbers of straight-line segments extracted are 471 and 477 respectively.
Fig. 7 is a graph of the result of performing line segment matching on a scale-changed image according to the present invention, wherein (a) is a graph of a straight line with the same name obtained by matching on a reference image, and (b) is a graph of a straight line with the same name obtained by matching on a search image; the circular area contains an error match, which is labeled 252.
Fig. 8 is a graph of the result of performing straight-line segment matching on an illumination variation image by applying the method of the present invention in an embodiment of the present invention, where (a) is a homonymous straight-line graph obtained by matching on a reference image, and (b) is a homonymous straight-line graph obtained by matching on a search image, and a circular area includes an error match, and the error match marks are 58 and 253.
Fig. 9 is a graph of the result of performing line segment matching on a rotation variation image by applying the method of the present invention in an embodiment of the present invention, in which (a) is a graph of a straight line with the same name obtained by matching on a reference image, and (b) is a graph of a straight line with the same name obtained by matching on a search image, a circular area contains an error match, and the error match is marked with 153.
Table 1 gives the statistics of matching the three groups of experimental images with the two methods. With the method of the invention, the numbers of homonymous straight lines obtained for the three groups are 307, 253 and 256, the numbers of wrong matches are 1, 2 and 1, and the matching accuracies are 99.7%, 99.2% and 99.6% respectively; with the classical mean-standard deviation descriptor method, the numbers of homonymous straight lines are 100, 252 and 240, the numbers of wrong matches are 1, 3 and 7, and the matching accuracies are 99.0%, 98.8% and 97.0% respectively. The experimental results show that the method of the invention obtains a large number of matched straight lines while maintaining the highest accuracy, and that it yields reliable and stable matching results on different types of close-range images.
TABLE 1 Straight-line matching results of the different methods

Image pair           Method of the invention          Classical mean-standard deviation descriptor
                     matches / errors / accuracy      matches / errors / accuracy
Scale change         307 / 1 / 99.7%                  100 / 1 / 99.0%
Illumination change  253 / 2 / 99.2%                  252 / 3 / 98.8%
Rotation change      256 / 1 / 99.6%                  240 / 7 / 97.0%
The specific embodiments described above are merely illustrative of the spirit of the invention. Various modifications or additions may be made to the described embodiments or alternatives may be employed by those skilled in the art without departing from the spirit or ambit of the invention as defined in the appended claims.

Claims (8)

1. A close-range image straight-line segment matching method is characterized by comprising the following steps:
step 1: inputting two close-range images, a reference image and a search image, and the coordinates of their homonymous points;
step 2: respectively extracting straight line segments of the reference image and the search image;
and step 3: matching the straight line segments extracted from the reference image and the search image to obtain a set of homonymous straight line pairs:
step 3.1: determining a candidate straight-line segment set of the target straight-line segment on the search image by using the same-name points in the neighborhood of the target straight-line segment on the reference image;
step 3.2: establishing a parallel support domain of a target straight-line segment;
step 3.3: constructing a mean-standard deviation feature descriptor based on regional affine transformation for the target straight-line segment according to the parallel support domain of the target straight-line segment;
respectively taking each pixel within a certain range on the horizontal central axis of the affine-transformed parallel support domain of the straight-line segment as the center and establishing a rectangular region along the horizontal and vertical directions as that pixel's support domain, the pixel support domains so defined being G_1, G_2, ..., G_i, ..., G_N, where N is the number of pixels within the range on the central axis and G_i is the support domain of the i-th pixel in the range, i = 1, 2, ..., N; the straight-line segment l is the target straight-line segment or a candidate straight-line segment;
dividing each pixel support domain along the vertical direction into M equally sized, mutually non-overlapping sub-regions, G_i = G_i1 ∪ G_i2 ∪ ... ∪ G_iM;
respectively accumulating, for each sub-region of each pixel support domain, the gradient vectors in the four directions 0°, 90°, 180° and 270° to obtain the four-dimensional feature vector of each sub-region, and forming the gradient description matrix of the straight-line segment:
GDM(l) = [V_1, V_2, ..., V_N] ∈ R^(4M×N), with V_i = (v_i1^T, v_i2^T, ..., v_iM^T)^T,
where v_ij is the four-dimensional feature vector of the j-th sub-region of pixel support domain G_i, j = 1, 2, ..., M;
respectively calculating the mean and standard deviation of the row vectors of the gradient description matrix to obtain the mean column vector M(l) = Mean{GDM(l)} ∈ R^4M and the standard deviation column vector S(l) = Std{GDM(l)} ∈ R^4M;
respectively normalizing the mean column vector and the standard deviation column vector to form the mean-standard deviation feature descriptor of the straight-line segment l:
RAT_MSLD(l) = ( M(l)^T / ||M(l)|| , S(l)^T / ||S(l)|| )^T;
step 3.4: determining an overlapped straight line area of the candidate straight line segment and the target straight line segment according to epipolar lines of two end points of the target straight line segment on the search image;
step 3.5: establishing a parallel support domain of the candidate straight-line segment according to the parallel support domain of the target straight-line segment by taking the area of the overlapped straight line on the candidate straight-line segment as the center;
step 3.6: performing regional affine transformation on the parallel support domain of the candidate straight-line segment, and constructing a mean-standard deviation feature descriptor on the basis, wherein the construction process is the same as the step 3.3;
step 3.7: calculating the Euclidean distance between the target straight-line segment descriptor and the candidate straight-line segment descriptor;
step 3.8: performing steps 3.4 to 3.7 for each candidate straight-line segment in the candidate straight-line segment set to respectively obtain the Euclidean distance between the target straight-line segment descriptor and each candidate straight-line segment descriptor in the set;
step 3.9: determining the same-name straight line segment of the target straight line segment according to the Euclidean distance, namely the matching straight line segment of the target straight line segment;
step 3.10: and (3) sequentially taking each straight line segment extracted from the reference image as a target straight line segment, and executing the matching operation of the steps 3.1-3.9 to obtain a set of the same-name straight line pairs on the reference image and the search image.
2. The method for matching straight-line segments of close-range images as claimed in claim 1, wherein said step 3.1 comprises:
step 3.1.1: taking any straight-line segment l_i extracted from the reference image as the target straight-line segment, the two end points of l_i being a and b, and constructing through a and b the two straight lines l_a⊥ and l_b⊥ perpendicular to l_i;
step 3.1.2: extracting from the input homonymous points the two points u and v that lie between the straight lines l_a⊥ and l_b⊥ on the reference image, u and v lying on opposite sides of l_i and being, on their respective sides, the points closest to l_i;
step 3.1.3: connecting the homonymous points u' and v' of u and v on the search image to obtain the straight-line segment l_u'v'; the straight-line segments on the search image that intersect l_u'v', with the intersection point lying on l_u'v', form the candidate straight-line segment set of the target straight-line segment l_i.
3. The method for matching the straight-line segment of the close-range image as claimed in claim 1, wherein the step 3.2 comprises the following steps: using a rectangular region established with the target straight-line segment l_i as its center as the parallel support domain of l_i, the end points of the target straight-line segment and the four corners of the support domain being marked clockwise as points a, 1, 2, b, 3 and 4.
4. The close-up image straight-line segment matching method according to claim 1, wherein the process of performing area affine transformation on the parallel support domain of the target straight-line segment in the step 3.3 is as follows:
step 3.3.1: determining four-corner point coordinates of the parallel support domain of the target straight-line segment after affine transformation according to the length and the width of the parallel support domain of the target straight-line segment;
step 3.3.2: and carrying out affine transformation on the parallel support domain of the target straight line segment according to a 6-parameter affine transformation formula to obtain the parallel support domain of the target straight line segment after affine transformation.
5. The method for matching straight-line segments of close-range images as claimed in claim 1, wherein said step 3.4 comprises:
step 3.4.1: respectively calculating the epipolar lines H_a and H_b, on the search image, of the two end points a and b of the target straight-line segment l_i;
step 3.4.2: respectively calculating the intersection points of the epipolar lines H_a and H_b with the candidate straight-line segment, marked a' and b'; the line connecting the two intersection points is the overlapping straight-line region of the candidate straight-line segment corresponding to the target straight-line segment.
6. The method for matching the straight-line segment of the close-up image as claimed in claim 1, wherein the step 3.5 comprises:
step 3.5.1: respectively calculating the epipolar lines, on the search image, of the four corner points 1, 2, 3 and 4 of the parallel support domain of the target straight-line segment, marked H_1, H_2, H_3 and H_4;
step 3.5.2: on the search image, constructing the straight lines l_a'⊥ and l_b'⊥ that respectively pass through the points a' and b' and are perpendicular to the candidate straight-line segment;
step 3.5.3: respectively calculating the intersection points of l_a'⊥ with the epipolar lines H_1 and H_4, marked 1' and 4', and the intersection points of l_b'⊥ with the epipolar lines H_2 and H_3, marked 2' and 3'; the rectangular region constructed from the points 1', 2', 3' and 4' is the parallel support domain of the candidate straight-line segment.
7. The method for matching straight-line segments in close-range images as claimed in claim 1, wherein the process of performing affine transformation on the parallel support domain of the candidate straight-line segment in step 3.6 is as follows:
step 3.6.1: determining coordinates of four corner points of the candidate straight-line parallel support domain after affine transformation according to the length and the width of the target straight-line parallel support domain;
step 3.6.2: and carrying out affine transformation on the parallel support domain of the candidate straight line segment according to a 6-parameter affine transformation formula to obtain the parallel support domain of the candidate straight line segment after affine transformation.
8. The method for matching straight-line segments of close-up images as claimed in claim 1, wherein said step 3.9 comprises:
step 3.9.1: determining the number S of candidate straight-line segments, performing steps 3.9.2 to 3.9.4 when S ≥ 2, and performing step 3.9.5 when S = 1;
step 3.9.2: recording the minimum Euclidean distance as D_T1 and the next smallest Euclidean distance as D_T2;
step 3.9.3: calculating the ratio of the minimum Euclidean distance D_T1 to the next smallest Euclidean distance D_T2, τ = D_T1/D_T2;
step 3.9.4: when τ is less than its threshold and D_T1 is less than its threshold, taking the candidate straight-line segment corresponding to the descriptor with the minimum Euclidean distance to the target straight-line segment descriptor as the homonymous straight-line segment of the target straight-line segment, and establishing the matching relation between the candidate and the target straight-line segment to obtain a pair of homonymous straight lines;
step 3.9.5: when the Euclidean distance between the target straight-line segment and the candidate straight-line segment is less than the threshold T_D, determining the candidate straight-line segment as the homonymous straight-line segment of the target straight-line segment, and establishing the matching relation between them to obtain a pair of homonymous straight lines.
CN201710019244.8A 2017-01-11 2017-01-11 Close-range image straight-line segment matching method Active CN106709870B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710019244.8A CN106709870B (en) 2017-01-11 2017-01-11 Close-range image straight-line segment matching method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710019244.8A CN106709870B (en) 2017-01-11 2017-01-11 Close-range image straight-line segment matching method

Publications (2)

Publication Number Publication Date
CN106709870A CN106709870A (en) 2017-05-24
CN106709870B true CN106709870B (en) 2020-02-14

Family

ID=58907242

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710019244.8A Active CN106709870B (en) 2017-01-11 2017-01-11 Close-range image straight-line segment matching method

Country Status (1)

Country Link
CN (1) CN106709870B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108009556B (en) * 2017-12-23 2021-08-24 浙江大学 River floating object detection method based on fixed-point image analysis
CN108305277B (en) * 2017-12-26 2020-12-04 中国航天电子技术研究院 Heterogeneous image matching method based on straight line segments
CN110490913B (en) * 2019-07-22 2022-11-22 华中师范大学 Image matching method based on feature description operator of corner and single line segment grouping
CN111461140B (en) * 2020-03-30 2022-07-08 北京航空航天大学 Linear descriptor construction and matching method suitable for SLAM system
CN115346058B (en) * 2022-10-19 2022-12-20 深圳市规划和自然资源数据管理中心(深圳市空间地理信息中心) Linear feature matching method, system, electronic device and medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102521597A (en) * 2011-12-14 2012-06-27 武汉大学 Hierarchical strategy-based linear feature matching method for images
CN103886306A (en) * 2014-04-08 2014-06-25 山东大学 Tooth X-ray image matching method based on SURF point matching and RANSAC model estimation
CN104574347A (en) * 2013-10-24 2015-04-29 南京理工大学 On-orbit satellite image geometric positioning accuracy evaluation method on basis of multi-source remote sensing data
WO2016129612A1 (en) * 2015-02-10 2016-08-18 Mitsubishi Electric Corporation Method for reconstructing a three-dimensional (3d) scene

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102521597A (en) * 2011-12-14 2012-06-27 武汉大学 Hierarchical strategy-based linear feature matching method for images
CN104574347A (en) * 2013-10-24 2015-04-29 南京理工大学 On-orbit satellite image geometric positioning accuracy evaluation method on basis of multi-source remote sensing data
CN103886306A (en) * 2014-04-08 2014-06-25 山东大学 Tooth X-ray image matching method based on SURF point matching and RANSAC model estimation
WO2016129612A1 (en) * 2015-02-10 2016-08-18 Mitsubishi Electric Corporation Method for reconstructing a three-dimensional (3d) scene

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Line Matching Algorithm for Aerial Image Combining image and object space similarity constraints; Jingxue Wang et al.; The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences; 2016-12-31; pp. 783-788 *
Aerial image line matching algorithm constrained by homonymous points and elevation planes (同名点及高程平面约束的航空影像直线匹配算法); Wang Jingxue et al.; Acta Geodaetica et Cartographica Sinica (测绘学报); 2016-01-31; pp. 87-95 *
Feature line matching constrained by principal-component similarity of line-segment support regions (线段元支撑区主成分相似性约束特征线匹配); Song Weidong et al.; Journal of Signal Processing (信号处理); 2016-08-31; pp. 904-910 *
Line matching algorithm for close-range images constrained by overlap degree (重合度约束的近景影像直线匹配算法); Zhu Hong et al.; Journal of Signal Processing (信号处理); 2015-08-31; pp. 912-917 *

Also Published As

Publication number Publication date
CN106709870A (en) 2017-05-24

Similar Documents

Publication Publication Date Title
CN106709870B (en) Close-range image straight-line segment matching method
Chen et al. Improved saliency detection in RGB-D images using two-phase depth estimation and selective deep fusion
Huang et al. A systematic approach for cross-source point cloud registration by preserving macro and micro structures
WO2022002150A1 (en) Method and device for constructing visual point cloud map
CN105844669B (en) A kind of video object method for real time tracking based on local Hash feature
US20160328601A1 (en) Three-dimensional facial recognition method and system
Tau et al. Dense correspondences across scenes and scales
CN110866953A (en) Map construction method and device, and positioning method and device
CN107369183A (en) Towards the MAR Tracing Registration method and system based on figure optimization SLAM
CN112967341B (en) Indoor visual positioning method, system, equipment and storage medium based on live-action image
CN104867137A (en) Improved RANSAC algorithm-based image registration method
CN111028292A (en) Sub-pixel level image matching navigation positioning method
Ni et al. Pats: Patch area transportation with subdivision for local feature matching
CN108961385A (en) A kind of SLAM patterning process and device
Chalom et al. Measuring image similarity: an overview of some useful applications
CN113393524A (en) Target pose estimation method combining deep learning and contour point cloud reconstruction
CN115147599A (en) Object six-degree-of-freedom pose estimation method for multi-geometric feature learning of occlusion and truncation scenes
CN112907569A (en) Head image area segmentation method and device, electronic equipment and storage medium
CN113838058A (en) Automatic medical image labeling method and system based on small sample segmentation
CN113177592A (en) Image segmentation method and device, computer equipment and storage medium
CN115035089A (en) Brain anatomy structure positioning method suitable for two-dimensional brain image data
CN109087344B (en) Image selection method and device in three-dimensional reconstruction
CN106780577B (en) A kind of matching line segments method based on group feature
CN112767457A (en) Principal component analysis-based plane point cloud matching method and device
CN107146215A (en) A kind of conspicuousness detection method based on color histogram and convex closure

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant