CN104867183A - Three-dimensional point cloud reconstruction method based on region growing - Google Patents

Three-dimensional point cloud reconstruction method based on region growing

Info

Publication number
CN104867183A
Authority
CN
China
Prior art keywords: image, matched, point, matching, pair
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510317432.XA
Other languages
Chinese (zh)
Inventor
张伟
秦嘉卓
尤新革
张彩平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN201510317432.XA priority Critical patent/CN104867183A/en
Publication of CN104867183A publication Critical patent/CN104867183A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a three-dimensional point cloud reconstruction method based on region growing, which uses known seed points to search for new matching point pairs and thereby propagates the matching relation to other point pairs. The specific steps of the method are as follows: filter the noise of a first image to be matched and a second image to be matched with a filtering algorithm; perform initial feature matching between the first and second images to be matched with a nearest-neighbour matching algorithm to obtain initial matching point pairs; remove the wrong matching point pairs and use the remaining pairs as seed point pairs; carry out dense propagation based on the seed point pairs; compute a fundamental matrix from the matching point pairs and enforce a global geometric constraint through it. By the principle of continuity, if a point is adjacent to a point with a known match, the match of that point lies near the known match; exploiting this continuity constraint greatly improves the efficiency of dense matching.

Description

A three-dimensional point cloud reconstruction method based on region growing
Technical field
The present invention relates to the field of three-dimensional reconstruction, and in particular to a method for obtaining a dense reconstruction from a small number of matching point pairs.
Background technology
In recent years, three-dimensional reconstruction has found important applications in archaeology, surveying and mapping, education, entertainment and many other areas. In general, three-dimensional reconstruction is carried out with structure-from-motion methods, but the models reconstructed from images in this way usually contain relatively few three-dimensional points, and such a sparse set of points cannot reflect the texture of the reconstructed object well. It is therefore necessary to reconstruct a dense, textured model.
Dense matching is the core of every dense reconstruction algorithm. The density of the matching point pairs determines the density of the reconstructed object surface. For each pixel in the two images, if its corresponding match can be found, a three-dimensional cloud point can be computed from that pair of matched points.
Gray-scale matching is a traditional matching method that is still widely used. The key factor affecting matching quality in gray-scale matching is the size of the match window. On the one hand, the window should be as large as possible so that it contains enough gray-scale variation for reliable matching; if the window is too small and does not cover enough variation, the estimated disparity becomes inaccurate because the signal-to-noise ratio is too low. On the other hand, the window should be as small as possible to avoid the effects of projective distortion; if the window is too large and spans a large change in disparity, the projective distortion differs between the left and right images and the matched position becomes incorrect. It is therefore natural to perform gray-scale matching with an adaptive window. In adaptive-window algorithms, the window size and shape change adaptively according to the local variation of gray levels and the current depth estimate, which improves matching quality, but the computational complexity is very high. A method that reduces complexity during matching while still guaranteeing a dense reconstructed object surface has thus become a real need.
Summary of the invention
In view of the above problems, the present invention aims to provide a dense matching method based on region growing. Compared with traditional gray-scale matching algorithms, the region-growing dense matching method greatly improves the efficiency of dense matching by exploiting continuity and the epipolar geometry constraint.
Technical scheme provided by the invention is as follows:
A dense matching method based on region growing, wherein the method uses known seed point pairs to search for new matching point pairs, so that the matching relation is propagated to other point pairs. The specific steps comprise:
S1: filter the noise of the first image to be matched and the second image to be matched, respectively, with a filtering algorithm;
S2: perform initial feature matching between the first image to be matched and the second image to be matched with a nearest-neighbour matching algorithm, obtaining initial matching point pairs;
S3: reject the wrong matching point pairs among the matching point pairs, and use the remaining matching point pairs as seed point pairs;
S4: carry out dense propagation based on the seed point pairs;
S5: compute a fundamental matrix from the matching point pairs and enforce a global geometric constraint through the fundamental matrix.
Preferably, in step S1, the filtering algorithm is a mean filtering algorithm or a median filtering algorithm.
Preferably, in step S2, the nearest-neighbour matching algorithm performs the initial feature matching and obtains the initial matching points with the following concrete steps:
S21: given the feature vector Va of a first point in the first image to be matched, find, among all feature vectors in the second image to be matched, the feature vector Vb of the second point nearest to Va and the feature vector Vc of the third point second-nearest to Va;
S22: compute the first distance dist(Va, Vb) between the second feature vector Vb and the first feature vector Va, and at the same time the second distance dist(Va, Vc) between the third feature vector Vc and Va;
S23: compare the first distance dist(Va, Vb) with the second distance dist(Va, Vc); if the ratio dist(Va, Vb)/dist(Va, Vc) is smaller than a predetermined threshold, the second point in the second image to be matched matches the first point in the first image to be matched, giving an initial matching point pair.
Preferably, in step S3, the RANSAC algorithm is used to reject the wrong matching point pairs among the matching point pairs.
Preferably, in step S4, dense propagation based on the seed point pairs comprises the following concrete steps:
S41: select a similarity function based on the first image to be matched and the second image to be matched;
S42: choose at least one pair of points among the seed point pairs and, with the similarity function as the objective function, carry out dense propagation from the chosen point pairs by region growing.
Preferably, in step S42, dense propagation by region growing from the chosen point pairs proceeds as follows:
the fourth point aL in the first image to be matched and the fifth point aR in the second image to be matched form a matched pair, and the sixth point bL and the seventh point cL in the first image to be matched are both adjacent to aL; then, when matching bL and cL against the second image to be matched, dense propagation around aR in the second image to be matched yields the corresponding match points.
Preferably, in step S42, dense propagation by region growing from the chosen point pairs proceeds as follows:
the eighth point dL in the first image to be matched and the ninth point dR in the second image to be matched form a matched pair, the tenth point eL in the first image to be matched and the eleventh point eR in the second image to be matched form another matched pair, and a point P in the first image to be matched is adjacent to both dL and eL; then, when matching the point P against the second image to be matched, dense propagation around dR and eR in the second image to be matched yields the corresponding match point.
Traditional dense matching methods all propagate directly once the initial matching points are obtained. However, the initial matching point pairs contain wrong matches, and direct propagation accumulates these errors, so that many wrong matches remain among the final matching points. This seriously disturbs the subsequent processing and can even severely distort the final reconstruction, so that the desired result cannot be obtained. The present invention therefore provides a dense matching method based on region growing that propagates the matching relation to other point pairs only while certain conditions are met and stops propagating as soon as they are not; moreover, before the dense matching, the wrong matches among the initial matching point pairs are rejected, which guarantees the matching accuracy of the dense matching method of the invention.
Compared with traditional methods that compute each point independently, the method of the invention exploits the principle of continuity: if a point is known to lie near a point with a known match, the match of that point also lies near the known match. Using this continuity constraint greatly improves the efficiency of dense matching.
Accompanying drawing explanation
The above characteristics, technical features, advantages and implementations of the dense matching method based on region growing are further described below, in a clear and understandable way, with reference to the accompanying drawings of preferred embodiments.
Fig. 1 is the flow chart of the dense matching method based on region growing in the present invention;
Fig. 2(a) is a schematic diagram of the first image to be matched in the first embodiment of the invention;
Fig. 2(b) is a schematic diagram of the second image to be matched in the first embodiment of the invention;
Fig. 3(a) is a schematic diagram of the first image to be matched in the second embodiment of the invention;
Fig. 3(b) is a schematic diagram of the second image to be matched in the second embodiment of the invention.
Embodiment
In order to explain the embodiments of the present invention and the technical schemes of the prior art more clearly, the specific embodiments of the invention are described below with reference to the accompanying drawings. Obviously, the drawings described below show only some embodiments of the invention; a person of ordinary skill in the art can obtain other drawings and other embodiments from them without inventive effort.
Fig. 1 shows the flow chart of the dense matching method based on region growing provided by the invention. The method uses known seed point pairs to search for new matching point pairs, so that the matching relation is propagated to other point pairs. The specific steps comprise:
S1: filter the noise of the first image to be matched and the second image to be matched, respectively, with a filtering algorithm;
S2: perform initial feature matching between the first image to be matched and the second image to be matched with a nearest-neighbour matching algorithm, obtaining initial matching point pairs;
S3: reject the wrong matching point pairs among the matching point pairs, and use the remaining matching point pairs as seed point pairs;
S4: carry out dense propagation based on the seed point pairs;
S5: compute a fundamental matrix from the matching point pairs and enforce a global geometric constraint through the fundamental matrix.
Specifically, in step S1, in a particular embodiment, a mean filter or a median filter can be used to remove the noise in the first and second images to be matched, improving the quality of the two images and reducing the interference of noise with the subsequent matching. The invention does not restrict the filtering algorithm: any other filtering algorithm that achieves the object of the invention falls within the content of the invention.
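As an illustration of the filtering in step S1, the following is a minimal 3x3 median-filter sketch using only NumPy; the window size and the reflection padding at the borders are assumptions of this sketch, not requirements of the method:

```python
import numpy as np

def median_filter_3x3(img):
    """3x3 median filter; borders are handled by reflection padding."""
    padded = np.pad(img, 1, mode="reflect")
    h, w = img.shape
    # Stack the 9 shifted views of the padded image, then take the
    # per-pixel median over the stack.
    stack = np.stack([padded[i:i + h, j:j + w]
                      for i in range(3) for j in range(3)])
    return np.median(stack, axis=0)

# A flat patch corrupted by one salt-noise pixel: the median filter
# suppresses the outlier while leaving the rest of the patch untouched.
img = np.full((5, 5), 10.0)
img[2, 2] = 255.0
out = median_filter_3x3(img)
print(out[2, 2])  # 10.0
```

A mean filter is obtained by replacing `np.median` with `np.mean`; it also smooths noise, but blurs edges more than the median filter does.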
In step S2, the nearest-neighbour matching algorithm performs the initial feature matching and obtains the initial matching points with the following concrete steps:
S21: given the feature vector Va of a first point in the first image to be matched, find, among all feature vectors in the second image to be matched, the feature vector Vb of the second point nearest to Va and the feature vector Vc of the third point second-nearest to Va. In simple terms, the nearest point is found here by computing the distance between the feature vectors of two points. In the initial feature matching, for the first point in the first image to be matched we look for the nearest point in the second image to be matched: we compute the distance between every feature vector in the second image and the feature vector Va of the first point, and then extract the two nearest points (the second point and the third point above) as candidate matches.
S22: compute the first distance dist(Va, Vb) between the second feature vector Vb and the first feature vector Va, and at the same time the second distance dist(Va, Vc) between the third feature vector Vc and Va. Specifically, after obtaining the second and the third point, their two distances to the first point are used to find the point that matches the first point.
S23: compare the first distance dist(Va, Vb) with the second distance dist(Va, Vc); if the ratio dist(Va, Vb)/dist(Va, Vc) is smaller than a predetermined threshold T (here T = 0.36), the second point in the second image to be matched matches the first point in the first image to be matched, giving an initial matching point pair. Specifically, if dist(Va, Vb)/dist(Va, Vc) < T, the second point with vector Vb is the match of the first point with vector Va; likewise, if dist(Va, Vc)/dist(Va, Vb) < T, the third point with vector Vc is the match of the first point. The predetermined threshold T is set for the initial matching and can be chosen according to the circumstances; in theory, the smaller T is chosen, the higher the accuracy of the matched points.
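Steps S21 to S23 amount to a nearest-neighbour search followed by a distance-ratio test. The sketch below assumes the descriptors are rows of a NumPy array; the toy 2-D vectors are made up, and only the threshold T = 0.36 comes from the text:

```python
import numpy as np

def ratio_test_match(va, candidates, threshold=0.36):
    """Return the index of the nearest candidate descriptor if it passes
    the ratio test dist(Va,Vb)/dist(Va,Vc) < threshold, else None."""
    dists = np.linalg.norm(candidates - va, axis=1)
    order = np.argsort(dists)
    nearest, second = order[0], order[1]
    if dists[nearest] / dists[second] < threshold:
        return int(nearest)
    return None

va = np.array([1.0, 0.0])
# Unambiguous case: the nearest candidate is far closer than the runner-up.
accepted = ratio_test_match(va, np.array([[1.1, 0.0], [5.0, 0.0], [9.0, 0.0]]))
print(accepted)  # 0
# Ambiguous case: two candidates are almost equally near, so no match is kept.
rejected = ratio_test_match(va, np.array([[1.1, 0.0], [1.2, 0.0]]))
print(rejected)  # None
```

The smaller the threshold, the stricter the test and the fewer but more reliable the accepted matches, in line with the remark on T above.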
In step S3, the RANSAC algorithm is used to reject the wrong matching point pairs. No matching algorithm brings the accuracy of the matched points to one hundred percent; wrong matches are usually present among the matching points. Since the accuracy of the matching point pairs is crucial for the subsequent reconstruction steps, the wrong pairs must be eliminated as far as possible. The RANSAC algorithm can be adopted to reject the wrong matches, but other algorithms may be used as well, as long as they achieve the object of the invention.
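The RANSAC rejection of step S3 can be sketched as follows. To keep the example self-contained, the motion model here is a pure 2-D translation rather than the fundamental matrix a real implementation would fit; the point sets, iteration count and tolerance are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def ransac_translation(pts1, pts2, iters=200, tol=1.0):
    """Toy RANSAC: hypothesise a 2-D translation from one random pair
    and keep the largest consensus set of matching point pairs."""
    best_inliers = np.zeros(len(pts1), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(pts1))       # minimal sample: one pair
        t = pts2[i] - pts1[i]             # hypothesised translation
        err = np.linalg.norm(pts2 - (pts1 + t), axis=1)
        inliers = err < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# 8 correct pairs shifted by (5, 3), plus 2 gross mismatches.
pts1 = np.array([[x, x] for x in range(8)] + [[0, 9], [9, 0]], dtype=float)
pts2 = pts1 + np.array([5.0, 3.0])
pts2[8] = [50.0, 50.0]
pts2[9] = [-40.0, 7.0]
mask = ransac_translation(pts1, pts2)
print(mask.sum())  # 8: the seed pairs survive, the 2 mismatches are rejected
```

The surviving pairs play the role of the seed point pairs used in step S4.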
In step S4, dense propagation based on the seed point pairs proceeds as follows:
for the dense matching part, each known seed point serves as a core: the match of a candidate point is searched within a 3*3 or 4*4 range around the seed point. The ZNCC value between each point in the candidate region and the seed point is computed, all ZNCC values are sorted, and the point with the largest ZNCC value is the match.
S41: select a similarity function based on the first image to be matched and the second image to be matched. In general, the functions used to judge the similarity between two images (the first image to be matched and the second image to be matched) are the zero-mean normalized cross-correlation (ZNCC) and the sum of squared differences (SSD, sum of squared differences).
In the present invention, the ZNCC is used as the objective function:
ZNCC(x, x′) = Σ_i [I(x+i) − Ī(x)]·[I′(x′+i) − Ī′(x′)] / sqrt( Σ_i [I(x+i) − Ī(x)]² · Σ_i [I′(x′+i) − Ī′(x′)]² )
where: ZNCC denotes the normalized cross-correlation between the template windows of the two images; x = (x, y) denotes a pixel; the sum runs over the offsets i inside the template window; I(x+i) denotes the gray value of the image at pixel coordinate x+i; Ī(x) denotes the mean gray value of the image in the window around pixel x; and I′, Ī′ denote the corresponding quantities in the second image.
Note that the larger the ZNCC value between two pixels (a pixel in the first image to be matched and a pixel in the second image to be matched), the more similar the two pixels are.
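A direct transcription of the ZNCC objective for two equal-size template windows (the window contents below are made up for illustration):

```python
import numpy as np

def zncc(patch1, patch2):
    """Zero-mean normalized cross-correlation of two equal-size windows.
    Ranges over [-1, 1]; 1 means identical up to an affine gray change."""
    a = patch1 - patch1.mean()
    b = patch2 - patch2.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

# ZNCC is invariant to gain and offset: a window compared with a
# brightness-scaled, brightness-shifted copy of itself still scores 1.
w = np.arange(9, dtype=float).reshape(3, 3)
print(zncc(w, 2.0 * w + 7.0))  # 1.0
```

This gain/offset invariance is one reason ZNCC is preferred over SSD when the two views differ in exposure.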
Here is a concrete example. Va, Vb and Vc are 128-dimensional vectors. We look for the vectors Vb and Vc in the second image to be matched that are nearest to the vector Va of the first point in the first image to be matched, with threshold T = 0.36. The concrete values of Va, Vb and Vc are listed below.
Va=[49 49 25 0 0 2 10 21 121 33 8 19 6425 12 53 1 1 10 69 120 26 14 3 0 9 2716 8 28 17 2 27 17 25 4 0 8 67 54 14069 31 32 12 11 34 59 31 36 60 140 89 5 03 6 66 130 53 10 0 0 0 15 10 69 22 71 11 7 140 26 12 17 17 6 5 68 71 17 1079 123 26 5 24 15 34 21 42 25 19 22 13 132 3 4 10 7 18 22 140 7 1 4 19 11 252 76 3 4 31 67 40 6 18 5 2 0 18 3758 31 1]。
Vb=[40 34 22 3 3 7 11 12 124 43 7 17 6316 7 37 1 18 59 110 27 22 1 0 0 1416 5 33 35 0 22 17 24 5 1 23 67 32 14865 30 33 12 16 25 60 32 28 59 148 83 4 03 3 53 131 69 14 1 1 0 19 6 55 27 137 14 5 148 26 8 13 16 6 4 70 74 11 1193 118 24 4 24 16 48 35 49 20 14 10 9 112 2 4 169 18 12 148 8 1 3 18 5 14285 5 12 36 52 31 2 16 0 0 3 35 3356 27 1]
Vc=[28 1 2 4 8 334869 57 0 1 15 662815 99 0 0 1 28 59 29 38 7 6 2 21 0 19 45 7 46 9 0 0 2 5 17 28 16122 18 45 21 1 1 58 21 22 23 154 142 4 40 8 4549 9 8 3 4 2 39 9 0 0 22 4 21 161 18 2 15 38 0 0 24 80 20 457 161 3 2 8 13 27 18 4 15 13 31 16 2918 3 1 0 0 1 6 161 16 0 2 29 4 663 65 2 0 8 161 29 12 25 2 0 0 0 2318 42 23]
The Euclidean distance between Va and Vb is 5924 and the Euclidean distance between Va and Vc is 73806; 5924/73806 = 0.08, and since 0.08 < T (the set threshold), Vb and Va are matched.
S42: choose at least one pair of points among the seed point pairs and, with the similarity function as the objective function, carry out dense propagation from the chosen point pairs by region growing. Two embodiments can be adopted here for the dense propagation.
First embodiment, as shown in Fig. 2(a) and Fig. 2(b):
The fourth point aL in the first image to be matched and the fifth point aR in the second image to be matched form a matched pair, and the sixth point bL and the seventh point cL in the first image to be matched are both adjacent to aL; then, when matching bL and cL against the second image to be matched, dense propagation around aR in the second image yields the corresponding match points. In simple terms, if it is known that aL in Fig. 2(a) matches aR in Fig. 2(b), and bL and cL are adjacent to aL, then the matches of bL and cL are located near aR in Fig. 2(b). This reduces the search range for the matches of bL and cL in the second image to be matched (the match bR of bL and the match cR of cL). In Fig. 2(b), the thick black frame is the search range near aR, and the matches of bL and cL in the second image can be found inside it.
Second embodiment, as shown in Fig. 3(a) and Fig. 3(b):
The eighth point dL in the first image to be matched and the ninth point dR in the second image to be matched form a matched pair, the tenth point eL in the first image to be matched and the eleventh point eR in the second image to be matched form another matched pair, and a point P in the first image to be matched is adjacent to both dL and eL; then, when matching P against the second image to be matched, dense propagation around dR and eR in the second image yields the corresponding match point. In simple terms, if a pixel (the point P in the first image to be matched in Fig. 3(a)) is adjacent to two known matched points (dL and eL in Fig. 3(a)), the position of its match is determined jointly by the two known adjacent matches (dR and eR in Fig. 3(b)). As can be seen from the figure, P is adjacent to dL and eL while dR and eR are not adjacent to each other; the match of P is nevertheless determined jointly by dR and eR. As in the previous embodiment, the thick black frames in Fig. 3(b) are the search ranges near dR and eR, and the match of P in the second image can be found inside them.
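Both embodiments reduce to the same loop: for a pixel adjacent to a seed, search a small neighbourhood of the seed's match in the second image and keep the candidate with the highest ZNCC. A minimal sketch follows; the 3x3 search radius matches the text, while the toy images and the 3x3 comparison window are assumptions:

```python
import numpy as np

def zncc(a, b):
    a = a - a.mean(); b = b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / d) if d else 0.0

def window(img, p, r=1):
    y, x = p
    return img[y - r:y + r + 1, x - r:x + r + 1]

def propagate(img1, img2, p1, seed2, radius=1):
    """For a pixel p1 adjacent to a seed in image 1, search the 3x3
    (radius=1) neighbourhood of the seed's match seed2 in image 2 and
    return the candidate with the highest ZNCC score."""
    ref = window(img1, p1)
    best, best_score = None, -2.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            q = (seed2[0] + dy, seed2[1] + dx)
            s = zncc(ref, window(img2, q))
            if s > best_score:
                best, best_score = q, s
    return best

# img2 is img1 shifted right by one pixel, so a point next to the seed
# should be matched one pixel to the right of the seed's match.
rng = np.random.default_rng(1)
img1 = rng.random((10, 10))
img2 = np.roll(img1, 1, axis=1)
# seed match: (5, 4) in img1 <-> (5, 5) in img2; its neighbour (5, 5) in img1
match = propagate(img1, img2, (5, 5), (5, 5))
print(match)  # (5, 6)
```

Propagation continues with each newly accepted pair acting as a fresh seed, which is what makes the region grow.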
In step S5, if the subsequent matching point pairs were determined from the seed points alone, the final result would fall into a local optimum, and the result obtained would not be the best; the problem of global optimality therefore has to be considered. Two methods can incorporate the global geometric constraint. The first method computes the fundamental matrix from the initial matching point pairs and then uses it as a global constraint on the subsequent matches. The second method lets the matching point pairs propagate freely. Constraining the propagation of matching point pairs has the benefit of rejecting bad matches, but it greatly reduces the range over which matches can propagate. The global constraint over the two views requires that all matching point pairs satisfy the fundamental matrix between the two images; sometimes, however, the computed fundamental matrix does not satisfy this condition, so the fundamental matrix must be computed with higher precision, and after it is computed the RANSAC method is used again to optimize it. In the present invention, either method can be used, chosen according to the circumstances.
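The first global-constraint method checks every propagated pair against the epipolar constraint x2ᵀ F x1 ≈ 0. The sketch below hand-builds F for a pure horizontal camera translation, where the epipolar lines are the image rows; this choice of F is an assumption made so the example is verifiable without estimating F from data:

```python
import numpy as np

def epipolar_residual(F, p1, p2):
    """Residual of the epipolar constraint p2^T F p1 for a pair of
    image points given in pixel coordinates; a correct match should
    give a residual close to zero."""
    x1 = np.array([p1[0], p1[1], 1.0])
    x2 = np.array([p2[0], p2[1], 1.0])
    return float(x2 @ F @ x1)

# Fundamental matrix of a pure horizontal translation: the constraint
# reduces to y2 == y1 (matched points lie on the same image row).
F = np.array([[0.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])

good = epipolar_residual(F, (12.0, 30.0), (20.0, 30.0))  # same row: kept
bad = epipolar_residual(F, (12.0, 30.0), (20.0, 44.0))   # off the epipolar line
print(good, bad)  # 0.0 -14.0
```

In practice F would be estimated from the seed pairs (e.g. with the eight-point algorithm inside a RANSAC loop), and pairs whose residual exceeds a tolerance would be rejected as violating the global geometry.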
Finally, note that whenever a matching point pair is mentioned in the description, it refers to a pair of matched points in the first image to be matched and the second image to be matched; whenever a match point is mentioned, it refers to a matched point in the first or the second image to be matched, that is, one of the two points of a matching point pair.
It should be noted that the embodiments above can be freely combined as required. The above is only the preferred embodiment of the present invention; it should be pointed out that, for those skilled in the art, several improvements and modifications can be made without departing from the principles of the invention, and these improvements and modifications should also be considered within the scope of protection of the invention.

Claims (8)

1. A three-dimensional point cloud reconstruction method based on region growing, characterized in that the dense matching method uses known seed point pairs to search for new matching point pairs, so that the matching relation is propagated to other point pairs, the specific steps comprising:
S1: filter the noise of the first image to be matched and the second image to be matched, respectively, with a filtering algorithm;
S2: perform initial feature matching between the first image to be matched and the second image to be matched with a nearest-neighbour matching algorithm, obtaining initial matching point pairs;
S3: reject the wrong matching point pairs among the matching point pairs, and use the remaining matching point pairs as seed point pairs;
S4: carry out dense propagation based on the seed point pairs;
S5: compute a fundamental matrix from the matching point pairs and enforce a global geometric constraint through the fundamental matrix.
2. The three-dimensional point cloud reconstruction method based on region growing according to claim 1, characterized in that: in step S1, the filtering algorithm is a mean filtering algorithm or a median filtering algorithm.
3. The three-dimensional point cloud reconstruction method based on region growing according to claim 1 or 2, characterized in that, in step S2, the nearest-neighbour matching algorithm performs the initial feature matching and obtains the initial matching points with the following concrete steps:
S21: given the feature vector Va of a first point in the first image to be matched, find, among all feature vectors in the second image to be matched, the feature vector Vb of the second point nearest to Va and the feature vector Vc of the third point second-nearest to Va;
S22: compute the first distance dist(Va, Vb) between the second feature vector Vb and the first feature vector Va, and at the same time the second distance dist(Va, Vc) between the third feature vector Vc and Va;
S23: compare the first distance dist(Va, Vb) with the second distance dist(Va, Vc); if the ratio dist(Va, Vb)/dist(Va, Vc) is smaller than a predetermined threshold, the second point in the second image to be matched matches the first point in the first image to be matched, giving an initial matching point pair.
4. The three-dimensional point cloud reconstruction method based on region growing according to claim 3, characterized in that: in step S3, the RANSAC algorithm is used to reject the wrong matching point pairs among the matching point pairs.
5. The three-dimensional point cloud reconstruction method based on region growing according to claim 4, characterized in that: in step S4, dense propagation based on the seed point pairs comprises the following concrete steps:
S41: select a similarity function based on the first image to be matched and the second image to be matched;
S42: choose at least one pair of points among the seed point pairs and, with the similarity function as the objective function, carry out dense propagation from the chosen point pairs by region growing.
6. The three-dimensional point cloud reconstruction method based on region growing according to claim 5, characterized in that: in step S41, the ZNCC is adopted as the objective function:
ZNCC(x, x′) = Σ_i [I(x+i) − Ī(x)]·[I′(x′+i) − Ī′(x′)] / sqrt( Σ_i [I(x+i) − Ī(x)]² · Σ_i [I′(x′+i) − Ī′(x′)]² )
where: ZNCC denotes the normalized cross-correlation between the template windows of the two images;
x = (x, y) denotes a pixel;
I(x+i) denotes the gray value of the image at pixel coordinate x+i;
Ī(x) denotes the mean gray value of the image in the window around pixel x; and I′, Ī′ denote the corresponding quantities in the second image.
7. The three-dimensional point cloud reconstruction method based on region growing according to claim 5, characterized in that: in step S42, dense propagation by region growing from the chosen point pairs proceeds as follows:
the fourth point aL in the first image to be matched and the fifth point aR in the second image to be matched form a matched pair, and the sixth point bL and the seventh point cL in the first image to be matched are both adjacent to aL; then, when matching bL and cL against the second image to be matched, dense propagation around aR in the second image to be matched yields the corresponding match points.
8. The three-dimensional point cloud reconstruction method based on region growing according to claim 5, characterized in that: in step S42, dense propagation by region growing from the chosen point pairs proceeds as follows:
the eighth point dL in the first image to be matched and the ninth point dR in the second image to be matched form a matched pair, the tenth point eL in the first image to be matched and the eleventh point eR in the second image to be matched form another matched pair, and a point P in the first image to be matched is adjacent to both dL and eL; then, when matching the point P against the second image to be matched, dense propagation around dR and eR in the second image to be matched yields the corresponding match point.
CN201510317432.XA 2015-06-11 2015-06-11 Three-dimensional point cloud reconstruction method based on region growing Pending CN104867183A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510317432.XA CN104867183A (en) 2015-06-11 2015-06-11 Three-dimensional point cloud reconstruction method based on region growing

Publications (1)

Publication Number Publication Date
CN104867183A true CN104867183A (en) 2015-08-26

Family

ID=53912997

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510317432.XA Pending CN104867183A (en) 2015-06-11 2015-06-11 Three-dimensional point cloud reconstruction method based on region growing

Country Status (1)

Country Link
CN (1) CN104867183A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070057403A1 (en) * 2005-09-09 2007-03-15 Nielson Scott L Methods involving a molded impression of a natural nail surface in the creation of an artificial nail
CN102156985A (en) * 2011-04-11 2011-08-17 上海交通大学 Method for counting pedestrians and vehicles based on virtual gate
CN102542593A (en) * 2011-09-30 2012-07-04 中山大学 Interactive video stylized rendering method based on video interpretation
WO2015040119A1 (en) * 2013-09-20 2015-03-26 Eth Zurich 3d reconstruction

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
KANADE T. et al.: "A stereo matching algorithm with an adaptive window: Theory and experiment", IEEE Transactions on Pattern Analysis and Machine Intelligence *
TANG Li et al.: "A dense matching algorithm for stereo image pairs based on region growing", Chinese Journal of Computers *
WANG Guomei et al.: "Research on the SIFT feature matching algorithm", Journal of Yancheng Institute of Technology (Natural Science Edition) *
JIN Ling: "Research on dense stereo matching methods", China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106683173A (en) * 2016-12-22 2017-05-17 西安电子科技大学 Method of improving density of three-dimensional reconstructed point cloud based on neighborhood block matching
CN106683173B (en) * Method of improving the density of a three-dimensional reconstructed point cloud based on neighborhood block matching
CN107194334A (en) * 2017-05-10 2017-09-22 武汉大学 Video satellite image dense Stereo Matching method and system based on optical flow estimation
CN107194334B (en) * 2017-05-10 2019-09-10 武汉大学 Video satellite image dense Stereo Matching method and system based on optical flow estimation
CN108269300A (en) * 2017-10-31 2018-07-10 杭州先临三维科技股份有限公司 Tooth three-dimensional data re-establishing method, device and system
CN108269300B (en) * 2017-10-31 2019-07-09 先临三维科技股份有限公司 Tooth three-dimensional data re-establishing method, device and system
CN108154552A (en) * 2017-12-26 2018-06-12 中国科学院深圳先进技术研究院 A kind of stereo laparoscope method for reconstructing three-dimensional model and device
CN112767426A (en) * 2021-01-07 2021-05-07 珠海格力电器股份有限公司 Target matching method and device and robot
CN112767426B (en) * 2021-01-07 2023-11-17 珠海格力电器股份有限公司 Target matching method and device and robot

Similar Documents

Publication Publication Date Title
CN104867183A (en) Three-dimensional point cloud reconstruction method based on region growing
Kim et al. Adaptive smoothness constraints for efficient stereo matching using texture and edge information
Zhan et al. Accurate image-guided stereo matching with efficient matching cost and disparity refinement
Kim et al. A dense stereo matching using two-pass dynamic programming with generalized ground control points
Jiao et al. Local stereo matching with improved matching cost and disparity refinement
Ma et al. A modified census transform based on the neighborhood information for stereo matching algorithm
Xu et al. Multi-scale geometric consistency guided and planar prior assisted multi-view stereo
Kuhn et al. Deepc-mvs: Deep confidence prediction for multi-view stereo reconstruction
CN104680510A (en) RADAR parallax image optimization method and stereo matching parallax image optimization method and system
CN107845073B (en) Local self-adaptive three-dimensional point cloud denoising method based on depth map
Hua et al. Extended guided filtering for depth map upsampling
CN108010075B (en) Local stereo matching method based on multi-feature combination
CN109146922B (en) Forward-looking sonar underwater target tracking method based on adaptive particle swarm optimization
CN103824305A (en) Improved Meanshift target tracking method
CN115601406A (en) Local stereo matching method based on fusion cost calculation and weighted guide filtering
Mordohai The self-aware matching measure for stereo
Hirner et al. FC-DCNN: A densely connected neural network for stereo estimation
CN106097336B Foreground-background stereo matching method based on belief propagation and self-similarity divergence measure
Emlek et al. Variable window size for stereo image matching based on edge information
CN110246169A Gradient-based window-adaptive stereo matching method and system
CN115841618A (en) Remote sensing image coastline extraction method based on clustering and edge detection
Wei et al. Dense and occlusion-robust multi-view stereo for unstructured videos
Vellanki et al. Enhanced stereo matching technique using image gradient for improved search time
Huang et al. Adaptive Local Stereo Matching Based on Improved Census Transform
Xu et al. Dense stereo matching optimization algorithm based on image segmentation and ORB gravitational field

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by SIPO to initiate substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20150826)