CN103996201A - Stereo matching method based on improved gradient and adaptive window - Google Patents

Stereo matching method based on improved gradient and adaptive window

Info

Publication number
CN103996201A
CN103996201A (application CN201410258432.2A)
Authority
CN
China
Prior art keywords
parallax
cost
gradient
value
window
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410258432.2A
Other languages
Chinese (zh)
Inventor
祝世平
李政
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN201410258432.2A priority Critical patent/CN103996201A/en
Publication of CN103996201A publication Critical patent/CN103996201A/en
Pending legal-status Critical Current

Landscapes

  • Image Processing (AREA)

Abstract

The invention provides a stereo matching method based on an improved gradient and an adaptive window, aiming to solve the problems that existing local stereo matching algorithms have low accuracy and are easily affected by amplitude distortion. Firstly, on the basis of the traditional gradient vector, which contains only magnitude information, phase information is introduced, the raw matching cost is transformed, and outliers are further suppressed. Then, an adaptive window is built by a criss-cross adaptive window generation method according to the color and spatial relations between neighboring pixels: a large window is provided in low-texture regions to improve matching accuracy, while a small window is generated in highly textured regions to preserve details such as object edges. The aggregated costs are evaluated with the Winner-Takes-All strategy, so that the disparity value corresponding to the minimum total cost serves as the initial matching result. Finally, a disparity refinement method based on a local disparity histogram is proposed, and a high-precision disparity map is obtained. Experimental results show that the stereo matching method achieves high matching accuracy and remains robust under amplitude distortion.

Description

A stereo matching method based on an improved gradient and an adaptive window
Technical field:
The present invention relates to a method for obtaining depth maps by binocular stereo matching, and in particular to a stereo matching method based on an improved gradient and an adaptive window. The depth maps obtained by the present invention can further be applied to fields such as vision measurement, three-dimensional reconstruction, and virtual view synthesis.
Background technology:
Stereo matching is a classical problem in computer vision and has long been a focus of research. Over the years researchers have proposed a large number of algorithms to address it, but because the problem itself is ill-posed, no fully satisfactory solution exists yet. Scharstein et al. (see Daniel Scharstein, Richard Szeliski. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms [J]. International Journal of Computer Vision, 2002, 47(1): 7-42.) studied several typical stereo matching algorithms in depth and gave a comprehensive survey of the main approaches. They summarized the stereo matching process into four steps: matching cost computation, cost aggregation, initial disparity computation, and disparity refinement, and, according to the cost aggregation strategy, divided stereo matching algorithms into local and global methods. Global methods generally achieve higher matching accuracy but are less efficient; local methods run fast and are easy to implement, but choosing a suitable support window and a suitable per-pixel matching cost remains difficult (see YANG Qing-xiong. A non-local cost aggregation method for stereo matching [C]. IEEE Conference on Computer Vision and Pattern Recognition, 2012: 1402-1409.).
The similarity measures of most stereo matching algorithms are based on pixel gray values: under ideal illumination, the same feature point in the two images should have identical gray values (see Wang Junzheng, Zhu Huajian, Li Jing. A variable-weight stereo matching algorithm based on the Census transform [J]. Transactions of Beijing Institute of Technology, 2013, 33(7): 704-710.). Measures such as the absolute difference of gray values (AD), the squared difference of gray values (SD), Adapt Weight (see Yoon K, Kweon S. Locally adaptive support weight approach for visual correspondence search [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006, 28(4): 924-931.), and Segment Support (see Tombari F, Mattoccia S. Segmentation-based adaptive support for accurate stereo correspondence [C]. IEEE Pacific-Rim Symposium on Image and Video Technology, 2007: 427-438.) can obtain fairly accurate matching results on ideal images, but they are very sensitive to amplitude distortion caused by illumination changes, exposure differences, camera vignetting, and similar factors, and are therefore hard to use for matching real-scene images. Matching costs that are insensitive to amplitude distortion mainly include normalized cross-correlation (NCC), gradient-based costs (see Daniel Scharstein. View synthesis using stereo vision. PhD thesis, 1997.), and the Rank and Census transforms (see Ramin Zabih, John Woodfill. Non-parametric local transforms for computing visual correspondence [C]. Proceedings of the European Conference on Computer Vision, 1994: 151-158.).
Traditional fixed windows suffer from a window-size selection problem: if the window is too small, the matching cost has too little discriminative power and mismatches easily occur in low-texture regions; if the window is too large, mismatches occur near depth discontinuities (see Zhou Long, Xu Guili, Li Kaiyu, et al. Stereo matching algorithm based on the Census transform and an improved adaptive window [J]. Acta Aeronautica et Astronautica Sinica, 2012, 33(5): 886-892.). Fusiello and Roberto (see Fusiello A, Roberto V. Efficient stereo with multiple windowing [C]. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1997: 858-863.) proposed selecting the best window among several predefined windows as the support window; Veksler (see Veksler O. Fast variable window for stereo correspondence using integral images [C]. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2003: 556-561.) proposed adaptively choosing the shape and size of the support window pixel by pixel; Zhang (see Zhang K. Cross-based local stereo matching using orthogonal integral images [J]. IEEE Transactions on Circuits and Systems for Video Technology, 2009, 19(7): 1073-1079.) adaptively selected support windows of arbitrary shape and size according to the color relations of neighboring pixels and obtained good disparity results.
Summary of the invention:
To address the above matching-cost and window-selection problems, the present invention proposes a matching method based on an improved gradient matching cost and an adaptive window. First, on the basis of the traditional gradient matching cost, which contains only magnitude information, gradient phase information is introduced, and the raw matching cost is transformed to further suppress outliers. Then, image structure and color information are used to build an adaptive window for cost aggregation, and the Winner-Takes-All (WTA) strategy is used for disparity selection. Finally, a disparity refinement method based on a local disparity histogram is proposed to obtain a high-precision disparity map.
The technical problems to be solved by the present invention are:
1. Traditional local matching methods are not robust to image amplitude distortion; their matching accuracy drops rapidly under illumination distortion.
2. Cost aggregation with a fixed window has difficulty achieving high matching accuracy simultaneously in low-texture regions and at depth discontinuities.
The technical solution adopted by the present invention is a matching method based on an improved gradient matching cost and an adaptive window, characterized by comprising the following steps:
Step 1: matching cost computation. The matching cost measures the similarity of corresponding points between the left and right images. Using the two components of the gradient vector along the x and y directions of the image, the modulus m and the phase angle of the gradient vector are defined, and a linear combination of modulus and phase is adopted as the matching cost function so as to exploit the gradient information as fully as possible. Finally, the Geman-McClure function is used to transform the raw matching cost so as to suppress abnormal cost values.
Step 2: adaptive window construction. An aggregation window of adaptive size is constructed for each pixel of the image to be matched; the window size directly determines how many neighboring pixels participate in the aggregation. The present invention adopts an improved criss-cross adaptive window generation method, which builds the adaptive window according to the color and spatial relations of neighboring pixels: a larger window is provided in low-texture regions to improve matching accuracy, and a smaller window is generated in highly textured regions to preserve details such as object edges.
Step 3: cost aggregation. After the adaptive window of each pixel is determined, the raw matching costs of the individual pixels inside the window are aggregated to obtain the total cost, and the disparity value corresponding to the minimum total cost is selected as the initial matching result.
Step 4: disparity refinement. The initial disparity obtained by the above steps still contains some mismatched points and unreliable values compared with the true disparity, so a disparity refinement step is needed. The present invention proposes a disparity refinement method based on a local disparity histogram to further process the initial disparity map. Then, a left-right consistency check is applied to detect the remaining mismatched points, and each mismatched point is assigned the smaller of the disparities of its adjacent valid points.
Brief description of the drawings:
Fig. 1 is the flow chart of the method of the present invention.
Fig. 2 is a schematic diagram of the adaptive window construction process.
Fig. 3(a) shows the experimental result on the Tsukuba image pair.
Fig. 3(b) shows the experimental result on the Venus image pair.
Fig. 3(c) shows the experimental result on the Teddy image pair.
Fig. 3(d) shows the experimental result on the Cones image pair.
Fig. 4(a) shows the experimental result under identical illumination and exposure conditions.
Fig. 4(b) shows the experimental result under different illumination conditions.
Fig. 4(c) shows the experimental result under different exposure conditions.
Fig. 4(d) shows the experimental result under different illumination and exposure conditions.
Embodiment:
The present invention is described in further detail below with reference to the drawings and specific embodiments.
The present invention proposes a matching method based on an improved gradient matching cost and an adaptive window, which mainly consists of four steps.
Step 1: matching cost computation. The concrete implementation is as follows:
The image gradient is defined as the first-order partial derivatives of the image along the x and y directions:
$G = \begin{pmatrix} G_x \\ G_y \end{pmatrix} = \begin{pmatrix} \partial I / \partial x \\ \partial I / \partial y \end{pmatrix}$ (1)
where I is the image gray level; in practice the gradient vector can be computed with horizontal and vertical templates. In this way we obtain the gradient maps of the left and right images, $G_L = (G_{Lx}, G_{Ly})^T$ and $G_R = (G_{Rx}, G_{Ry})^T$. Considering rectified images, let p(x, y) be a point in the left image; its matching point in the right image at disparity d is pd(x - d, y).
Using the two components of the gradient vector along the x and y directions, the modulus and phase angle of the gradient vector are defined as:
$m = \sqrt{G_x^2 + G_y^2}$ (2)
$\theta = \arctan(G_y / G_x)$ (3)
The gradient modulus m characterizes the rate of gray-level change, and the phase angle characterizes the direction of maximum gray-level change; they provide different kinds of neighborhood information and have different invariance properties under illumination distortion. Gain distortion of the input images affects the gradient modulus but not the phase angle, and neither is affected by bias distortion. Considering the modulus and phase angle separately therefore makes it easier to control the method's sensitivity to noise. The present invention adopts a linear combination of modulus and phase angle as the matching cost function e(p, d), weighted by a coefficient α, so as to exploit the gradient information as fully as possible.
In the cost function, $m_c$ and $\theta_c$ denote the gradient modulus and phase angle of the R, G and B channels of the color image, respectively, and α is the weighting coefficient. Since the phase angle is periodic with period π, the phase difference must be normalized into a single period, so the function f is defined as:
$f(x) = \begin{cases} x, & 0 \le x \le \pi \\ 2\pi - x, & \pi < x < 2\pi \end{cases}$ (5)
Because the weighting coefficient α has been introduced, the robustness of the method to illumination distortion and noise can be changed by adjusting α: the smaller α is, the larger the influence of the phase; the larger α is, the larger the influence of the modulus. Since different images suffer from different degrees of illumination distortion, a reasonable value range of α needs to be determined experimentally.
Since e(p, d) is the raw matching cost of a single pixel, some excessively large outliers may still occur in practice and must be suppressed to improve matching accuracy. The conventional approach is a truncation function that compares e(p, d) with a constant and takes the minimum as the matching cost, but this brings little improvement. The present invention instead uses the Geman-McClure function to handle outliers:
$\rho(x) = \dfrac{x^2}{x^2 + \sigma^2}$ (6)
Once the input x exceeds a certain value, its influence on the output decreases smoothly toward 0, and the transformed matching cost converges to 1, at a rate controlled by the parameter σ. Thus, no matter how large the raw matching cost is, its output after the Geman-McClure transform never exceeds 1.
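For illustration, the Step-1 cost computation can be sketched in Python/NumPy as follows. Since the combined cost formula (equation (4)) is not reproduced in this text, the per-channel combination `alpha * |Δm| + (1 - alpha) * f(|Δθ|)` used below, as well as the helper names and default parameter values, are assumptions rather than the exact patented expression.

```python
import numpy as np

def gradient_modulus_phase(channel):
    """Central-difference gradients; modulus m (eq. 2) and phase angle in [0, 2*pi)."""
    gy, gx = np.gradient(channel.astype(np.float64))
    m = np.sqrt(gx ** 2 + gy ** 2)
    theta = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    return m, theta

def f_norm(x):
    """Normalize a phase difference into a single period, eq. (5)."""
    return np.where(x <= np.pi, x, 2 * np.pi - x)

def geman_mcclure(x, sigma):
    """Robust transform of the raw cost, eq. (6): large inputs saturate towards 1."""
    return x ** 2 / (x ** 2 + sigma ** 2)

def raw_cost(left_rgb, right_rgb, d, alpha=0.5, sigma=2.0):
    """Per-pixel raw cost e(p, d) between left pixel (x, y) and right pixel (x - d, y)."""
    h, w, _ = left_rgb.shape
    cost = np.zeros((h, w))
    for c in range(3):  # R, G, B channels
        ml, tl = gradient_modulus_phase(left_rgb[:, :, c])
        mr, tr = gradient_modulus_phase(right_rgb[:, :, c])
        mr_s = np.roll(mr, d, axis=1)  # sample the right image at x - d (border wrap ignored)
        tr_s = np.roll(tr, d, axis=1)
        # assumed linear combination of modulus and phase differences, weighted by alpha
        cost += alpha * np.abs(ml - mr_s) + (1 - alpha) * f_norm(np.abs(tl - tr_s))
    return geman_mcclure(cost, sigma)
```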
Step 2: adaptive window construction. The concrete implementation is as follows:
The core of the method used in the present invention is to build an adaptive window according to the color and spatial relations of neighboring pixels; the construction process is shown in Fig. 2. First, a criss-cross region of the current pixel p to be matched is determined from the image structure and color information. This criss-cross region consists of a horizontal and a vertical segment, denoted H(p) and V(p) respectively; its size is determined by the arm lengths in the four directions, which adapt to the structure and color information of the image. Taking the left arm as an example, the arm-length criteria are as follows:
1. $D_c(p_i, p) < \tau_1$ and $D_c(p_i, p_i + (1, 0)) < \tau_1$;
2. $D_s(p_i, p) < L_1$;
3. $D_c(p_i, p) < \tau_2$, when $L_2 < D_s(p_i, p) < L_1$.
Here $D_s(p_i, p)$ is the spatial distance between pixel $p_i$ and p, $D_c(p_i, p)$ is their color difference, and $\tau_1 > \tau_2$ and $L_1 > L_2$ are preset color and distance thresholds. Criterion 1 constrains not only the color difference between $p_i$ and p but also requires the color difference between $p_i$ and its right neighbor $p_i + (1, 0)$ to be less than $\tau_1$, preventing the arm from crossing a boundary region. Criteria 2 and 3 relax the arm-length range: in low-texture regions the larger distance threshold $L_1$ allows a larger window, while once the arm length exceeds the preset value $L_2$, the stricter threshold $\tau_2$ ensures that the arm extends only into low-texture regions of very similar color, so that the window does not become too large in highly textured regions and at depth discontinuities. (A sketch of this arm-length rule is given after equation (8) below.)
Using the above rule, the four arm lengths can be determined, giving the two orthogonal segments of the criss-cross region, H(p) and V(p):
$H(p) = \{(x, y) \mid x \in [x_p - h_p^-,\ x_p + h_p^+],\ y = y_p\}$, $V(p) = \{(x, y) \mid x = x_p,\ y \in [y_p - v_p^-,\ y_p + v_p^+]\}$ (7)
Finally, the same procedure is repeated for every pixel q in V(p) along the vertical direction, and the adaptive region of any pixel p in the image is obtained as:
$U(p) = \bigcup_{q \in V(p)} H(q)$ (8)
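A minimal sketch of the arm-length rule for one arm (the left arm) is given below; the other three arms follow symmetrically, H(p) and V(p) are the segments spanned by the four arms, and U(p) is the union of H(q) over q in V(p) as in equation (8). The color-difference definition (maximum absolute channel difference) and the threshold values are assumptions, not values taken from the patent text.

```python
import numpy as np

def color_diff(img, y, x1, x2):
    """Assumed color difference D_c: maximum absolute difference over the R, G, B channels."""
    return np.max(np.abs(img[y, x1].astype(int) - img[y, x2].astype(int)))

def left_arm(img, y, x, tau1=20, tau2=6, L1=34, L2=17):
    """Largest leftward extension h_p^- of pixel p = (x, y) satisfying criteria 1-3."""
    arm = 0
    for i in range(1, L1):                     # criterion 2: spatial distance below L1
        xi = x - i
        if xi < 0:
            break
        # criterion 1: color difference to p and to the previous arm pixel below tau1
        if color_diff(img, y, xi, x) >= tau1 or color_diff(img, y, xi, xi + 1) >= tau1:
            break
        # criterion 3: beyond L2, extend only where the color stays very close (tau2)
        if i > L2 and color_diff(img, y, xi, x) >= tau2:
            break
        arm = i
    return arm
```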
Step 3: cost aggregation. The concrete implementation is as follows:
The present invention symmetrically considers the local support regions of the left and right images. For two corresponding matching points p(x, y) and pd(x - d, y) in the left and right images, the above method generates the adaptive regions U(p) and U'(pd) respectively, and their joint region is defined as the final support region:
$U_d(p) = \{(x, y) \mid (x, y) \in U(p),\ (x - d, y) \in U'(pd)\}$ (9)
Then the raw single-pixel matching costs are aggregated over the joint support region to obtain the total cost of the region:
$E_d(p) = \dfrac{1}{N} \sum_{q \in U_d(p)} e(q, d)$ (10)
where N is the total number of pixels in the aggregation region $U_d(p)$. Finally, the Winner-Takes-All (WTA) strategy is adopted: within the disparity range, the disparity with the minimum aggregated cost is selected for point p, giving the initial disparity:
$d_p^0 = \arg\min_{0 \le d \le d_{\max}} E_d(p)$ (11)
where d is a candidate disparity in the disparity space, generally an integer between 0 and the maximum disparity $d_{\max}$.
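A sketch of the aggregation and WTA selection of equations (9)-(11) is shown below. It assumes the raw cost volume and the per-pixel support regions (dictionaries mapping pixel coordinates to coordinate sets) have already been computed; the container names and data layout are illustrative only.

```python
import numpy as np

def aggregate_cost(cost_volume, regions_left, regions_right, y, x, d):
    """Mean raw cost E_d(p) over the joint support region U_d(p), eqs. (9)-(10)."""
    # shift the right-image region of pd = (x - d, y) by +d into left-image coordinates
    shifted_right = {(qy, qx + d) for (qy, qx) in regions_right[(y, x - d)]}
    joint = regions_left[(y, x)] & shifted_right
    if not joint:
        return np.inf
    return float(np.mean([cost_volume[d, qy, qx] for (qy, qx) in joint]))

def wta_disparity(cost_volume, regions_left, regions_right, y, x, d_max):
    """Initial disparity d_p^0: the d in [0, d_max] minimizing E_d(p), eq. (11)."""
    costs = [aggregate_cost(cost_volume, regions_left, regions_right, y, x, d)
             if x - d >= 0 else np.inf
             for d in range(d_max + 1)]
    return int(np.argmin(costs))
```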
Step 4: disparity refinement. The concrete implementation is as follows:
The present invention proposes a disparity refinement method based on a local disparity histogram to further process the initial disparity map. For a pixel p in the disparity map, a local disparity histogram is built over the neighborhood centered on p, counting how many times each disparity value occurs in the neighborhood. The histogram has a peak representing the disparity that occurs most often; the disparity value at this peak is the optimal disparity in a statistical sense, and the present invention uses this optimal value to replace the initial disparity of pixel p.
This process uses the adaptive window generated in the previous step as the pixel's neighborhood and therefore adds no extra computational burden. In addition, the present invention iterates this process three times to make the optimal disparity values more accurate.
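The histogram refinement can be sketched as follows; here it is applied to every pixel using its adaptive region as the neighborhood and iterated three times as described above. The data layout (a dictionary of coordinate sets per pixel) and function names are assumptions.

```python
import numpy as np

def histogram_refine(disparity, regions, d_max, iterations=3):
    """Replace each pixel's disparity by the peak of its local disparity histogram."""
    refined = disparity.copy()
    h, w = refined.shape
    for _ in range(iterations):                 # iterated three times, as in the text
        out = refined.copy()
        for y in range(h):
            for x in range(w):
                neighbourhood = [refined[qy, qx] for (qy, qx) in regions[(y, x)]]
                hist = np.bincount(neighbourhood, minlength=d_max + 1)
                out[y, x] = int(np.argmax(hist))  # histogram peak = statistically optimal disparity
        refined = out
    return refined
```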
Then a left-right consistency check is applied to detect the remaining mismatched points: a point that passes the consistency check is marked as valid, otherwise it is invalid. For each detected mismatched point, the first valid points to its left and right along the horizontal scan line are found, and the smaller of their two disparities is assigned to the mismatched point.
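A sketch of the left-right consistency check and the subsequent filling is given below. It assumes that a disparity map of the right image is available (computed symmetrically with the same method) and that a tolerance of one pixel is used in the check; both are assumptions not stated in the text above.

```python
import numpy as np

def lr_check_and_fill(d_left, d_right, tol=1):
    """Mark pixels failing the left-right check, then fill them from the scan line."""
    h, w = d_left.shape
    valid = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            xr = x - int(d_left[y, x])
            valid[y, x] = xr >= 0 and abs(int(d_left[y, x]) - int(d_right[y, xr])) <= tol
    filled = d_left.copy()
    for y in range(h):
        for x in range(w):
            if valid[y, x]:
                continue
            # first valid points to the left and right along the horizontal scan line
            left_vals = [d_left[y, i] for i in range(x - 1, -1, -1) if valid[y, i]][:1]
            right_vals = [d_left[y, i] for i in range(x + 1, w) if valid[y, i]][:1]
            candidates = left_vals + right_vals
            if candidates:
                filled[y, x] = min(candidates)  # assign the smaller (farther) disparity
    return filled, valid
```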
To verify the effectiveness of the method, the method of the present invention was implemented on the VS2008 platform and tested on the standard stereo image pairs published by the Middlebury website, which are widely recognized by the academic community. The website provides four groups of reference color image pairs, Tsukuba, Venus, Teddy and Cones, together with the corresponding ground-truth disparity maps. Comparing the experimental results with the ground-truth disparity maps yields quantitative matching errors, allowing an objective evaluation of the method's accuracy. Table 1 gives the percentages of mismatched pixels of the method of the present invention for an error threshold of $\delta_d = 1$. The columns Nonocc, All and Disc are, respectively, the ratio of mismatched pixels in non-occluded regions, the overall ratio of mismatched pixels, and the ratio of mismatched pixels near depth discontinuities. The results are compared with GC+occ (see Kolmogorov V, Zabih R. Computing visual correspondence with occlusions using graph cuts [C]. IEEE International Conference on Computer Vision, 2001.), SemiGlob (see Hirschmüller H. Accurate and efficient stereo processing by semi-global matching and mutual information [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2008, 30(2): 328-341.), AdaptWeight (see Yoon K, Kweon S. Locally adaptive support weight approach for visual correspondence search [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006, 28(4): 924-931.) and Enhanced BP (see Larsen S, Mordohai P, Pollefeys M, et al. Temporally consistent reconstruction from multiple video streams using enhanced belief propagation [C]. IEEE International Conference on Computer Vision, 2007: 1-8.).
Table 1: Objective evaluation of the matching results
Fig. 3 visually shows the accuracy of the results of the method of the present invention. Panels (a), (b), (c) and (d) are the experimental results on the Tsukuba, Venus, Teddy and Cones image pairs, respectively. In each figure, the first column is the original image to be matched; the second column is the corresponding ground-truth disparity map; the third column is the disparity map obtained by the method of the present invention; the fourth column is the mismatched-pixel map of the method, in which the white areas are correctly matched points and the gray and black areas denote mismatched points in occluded and non-occluded regions, respectively.
Fig. 4 shows the results of the present invention under illumination distortion, compared with a method that uses the traditional absolute difference of gray values (AD) as the matching cost, in order to verify the robustness of the method to illumination distortion. As shown in Fig. 4, panels (a), (b), (c) and (d) are the experimental results under identical illumination and exposure, different illumination, different exposure, and different illumination and exposure, respectively. In each group of results, the first column is the original left image, the second column is the original right image, the third column is the matching result of the Variable Cross method using the SAD matching cost, and the fourth column is the matching result of the method of the present invention. The experimental results show that, because the present invention adopts gradient information that is resistant to amplitude distortion as the matching cost, it still obtains high-accuracy matching results in the presence of illumination distortion and exhibits good robustness.

Claims (4)

1. A stereo matching method based on an improved gradient and an adaptive window, characterized in that: on the basis of the traditional gradient matching cost, which contains only magnitude information, the method introduces gradient phase information and transforms the raw matching cost to further suppress outliers; then image structure and color information are used to build an adaptive window for cost aggregation, and the Winner-Takes-All (WTA) strategy is used for disparity selection; finally, a disparity refinement method based on a local disparity histogram is proposed to obtain a high-precision disparity map. The concrete steps of the method are as follows:
Step 1: matching cost computation. The matching cost measures the similarity of corresponding points between the left and right images. Using the two components of the gradient vector along the x and y directions of the image, the modulus m and the phase angle of the gradient vector are defined, and a linear combination of modulus and phase is adopted as the matching cost function so as to exploit the gradient information as fully as possible. Finally, the Geman-McClure function is used to transform the raw matching cost so as to suppress abnormal cost values.
Step 2: adaptive window construction. An aggregation window of adaptive size is constructed for each pixel of the image to be matched; the window size directly determines how many neighboring pixels participate in the aggregation. An improved criss-cross adaptive window generation method is adopted, which builds the adaptive window according to the color and spatial relations of neighboring pixels: a larger window is provided in low-texture regions to improve matching accuracy, and a smaller window is generated in highly textured regions to preserve details such as object edges.
Step 3: cost aggregation. After the adaptive window of each pixel is determined, the raw matching costs of the individual pixels inside the window are aggregated to obtain the total cost, and the disparity value corresponding to the minimum total cost is selected as the initial matching result.
Step 4: disparity refinement. The initial disparity obtained by the above steps still contains some mismatched points and unreliable values compared with the true disparity, so a disparity refinement step is needed. A disparity refinement method based on a local disparity histogram is proposed to further process the initial disparity map. Then a left-right consistency check is applied to detect the remaining mismatched points, and each mismatched point is assigned the smaller of the disparities of its adjacent valid points.
2. The stereo matching method based on an improved gradient and an adaptive window according to claim 1, characterized in that the matching cost of step 1 applies an improved gradient cost, computed as follows:
The image gradient is defined as the first-order partial derivatives of the image along the x and y directions:
$G = \begin{pmatrix} G_x \\ G_y \end{pmatrix} = \begin{pmatrix} \partial I / \partial x \\ \partial I / \partial y \end{pmatrix}$
where I is the image gray level; in practice the gradient vector can be computed with horizontal and vertical templates. In this way the gradient maps of the left and right images, $G_L = (G_{Lx}, G_{Ly})^T$ and $G_R = (G_{Rx}, G_{Ry})^T$, are obtained. Considering rectified images, let p(x, y) be a point in the left image; its matching point in the right image at disparity d is pd(x - d, y).
Using the two components of the gradient vector along the x and y directions, the modulus and phase angle of the gradient vector are defined as:
$m = \sqrt{G_x^2 + G_y^2}$, $\theta = \arctan(G_y / G_x)$
The gradient modulus m characterizes the rate of gray-level change, and the phase angle characterizes the direction of maximum gray-level change; they provide different kinds of neighborhood information and have different invariance properties under illumination distortion. Gain distortion of the input images affects the gradient modulus but not the phase angle, and neither is affected by bias distortion. Considering the modulus and phase angle separately therefore makes it easier to control the method's sensitivity to noise. A linear combination of modulus and phase angle is adopted as the matching cost function so as to exploit the gradient information as fully as possible.
In the cost function, $m_c$ and $\theta_c$ denote the gradient modulus and phase angle of the R, G and B channels of the color image, respectively, and α is a weighting coefficient. Since the phase angle is periodic with period π, the phase difference must be normalized into a single period, so the function f(x) is defined as:
$f(x) = \begin{cases} x, & 0 \le x \le \pi \\ 2\pi - x, & \pi < x < 2\pi \end{cases}$
Because the weighting coefficient α has been introduced, the robustness of the method to illumination distortion and noise can be changed by adjusting α: the smaller α is, the larger the influence of the phase; the larger α is, the larger the influence of the modulus. Since different images suffer from different degrees of illumination distortion, a reasonable value range of α needs to be determined experimentally.
Since e(p, d) is the raw matching cost of a single pixel, some excessively large outliers may still occur in practice and must be suppressed to improve matching accuracy. The conventional approach is a truncation function that compares e(p, d) with a constant and takes the minimum as the matching cost, which brings little improvement; the Geman-McClure function is used instead to handle outliers:
$\rho(x) = \dfrac{x^2}{x^2 + \sigma^2}$
Once the input x exceeds a certain value, its influence on the output decreases smoothly toward 0, and the transformed matching cost converges to 1, at a rate controlled by the parameter σ. Thus, no matter how large the raw matching cost is, its output after the Geman-McClure transform never exceeds 1.
3. The stereo matching method based on an improved gradient and an adaptive window according to claim 1, characterized in that steps 2 and 3 apply an improved criss-cross adaptive window generation method, as follows:
First, a criss-cross region of the current pixel p to be matched is determined from the image structure and color information. This criss-cross region consists of a horizontal and a vertical segment, denoted H(p) and V(p) respectively; its size is determined by the arm lengths in the four directions, which adapt to the structure and color information of the image. Taking the left arm as an example, the arm-length criteria are as follows:
1. $D_c(p_i, p) < \tau_1$ and $D_c(p_i, p_i + (1, 0)) < \tau_1$;
2. $D_s(p_i, p) < L_1$;
3. $D_c(p_i, p) < \tau_2$, when $L_2 < D_s(p_i, p) < L_1$;
where $D_s(p_i, p)$ is the spatial distance between pixel $p_i$ and p, $D_c(p_i, p)$ is their color difference, and $\tau_1 > \tau_2$ and $L_1 > L_2$ are preset color and distance thresholds. Criterion 1 constrains not only the color difference between $p_i$ and p but also requires the color difference between $p_i$ and its right neighbor $p_i + (1, 0)$ to be less than $\tau_1$, preventing the arm from crossing a boundary region. Criteria 2 and 3 relax the arm-length range: in low-texture regions the larger distance threshold $L_1$ allows a larger window, while once the arm length exceeds the preset value $L_2$, the stricter threshold $\tau_2$ ensures that the arm extends only into low-texture regions of very similar color, so that the window does not become too large in highly textured regions and at depth discontinuities.
Using the above rule, the four arm lengths are determined, giving the two orthogonal segments of the criss-cross region, H(p) and V(p):
$H(p) = \{(x, y) \mid x \in [x_p - h_p^-,\ x_p + h_p^+],\ y = y_p\}$, $V(p) = \{(x, y) \mid x = x_p,\ y \in [y_p - v_p^-,\ y_p + v_p^+]\}$
Finally, the same procedure is repeated for every pixel q in V(p) along the vertical direction, and the adaptive region of any pixel p in the image is obtained as:
$U(p) = \bigcup_{q \in V(p)} H(q)$
The local support regions of the left and right images are considered symmetrically. For two corresponding matching points p(x, y) and pd(x - d, y) in the left and right images, the above method generates the adaptive regions U(p) and U'(pd) respectively, and their joint region is defined as the final support region:
$U_d(p) = \{(x, y) \mid (x, y) \in U(p),\ (x - d, y) \in U'(pd)\}$
Then the raw single-pixel matching costs are aggregated over the joint support region to obtain the total cost of the region:
$E_d(p) = \dfrac{1}{N} \sum_{q \in U_d(p)} e(q, d)$
where N is the total number of pixels in the aggregation region $U_d(p)$. Finally, the Winner-Takes-All (WTA) strategy is adopted: within the disparity range, the disparity with the minimum aggregated cost is selected for point p, giving the initial disparity:
$d_p^0 = \arg\min_{0 \le d \le d_{\max}} E_d(p)$
where d is a candidate disparity in the disparity space, generally an integer between 0 and the maximum disparity $d_{\max}$.
4. The stereo matching method based on an improved gradient and an adaptive window according to claim 1, characterized in that step 4 proposes a disparity refinement method based on a local disparity histogram, computed as follows:
For a pixel p in the disparity map, a local disparity histogram is built over the neighborhood centered on p, counting how many times each disparity value occurs in the neighborhood. The histogram has a peak representing the disparity that occurs most often; the disparity value at this peak is the optimal disparity in a statistical sense, and it is used to replace the initial disparity of pixel p.
This process uses the adaptive window generated in the previous step as the pixel's neighborhood and therefore adds no extra computational burden. In addition, the process is iterated three times to make the optimal disparity values more accurate.
Then a left-right consistency check is applied to detect the remaining mismatched points: a point that passes the consistency check is marked as valid, otherwise it is invalid. For each detected mismatched point, the first valid points to its left and right along the horizontal scan line are found, and the smaller of their two disparities is assigned to the mismatched point.
CN201410258432.2A 2014-06-11 2014-06-11 Stereo matching method based on improved gradient and adaptive window Pending CN103996201A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410258432.2A CN103996201A (en) 2014-06-11 2014-06-11 Stereo matching method based on improved gradient and adaptive window

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410258432.2A CN103996201A (en) 2014-06-11 2014-06-11 Stereo matching method based on improved gradient and adaptive window

Publications (1)

Publication Number Publication Date
CN103996201A true CN103996201A (en) 2014-08-20

Family

ID=51310356

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410258432.2A Pending CN103996201A (en) 2014-06-11 2014-06-11 Stereo matching method based on improved gradient and adaptive window

Country Status (1)

Country Link
CN (1) CN103996201A (en)


Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104574391A (en) * 2014-12-29 2015-04-29 西安交通大学 Stereoscopic vision matching method based on adaptive feature window
CN105513069A (en) * 2015-12-08 2016-04-20 天津大学 Stereo matching method based on guide filtering and sensitive area
CN105631887A (en) * 2016-01-18 2016-06-01 武汉理工大学 Two step parallax improvement method based on adaptive support weight matching algorithm and system
CN105631887B (en) * 2016-01-18 2019-10-25 武汉理工大学 Based on the adaptive two step parallax modification methods and system for supporting weight matching algorithm
CN105787932A (en) * 2016-02-07 2016-07-20 哈尔滨师范大学 Stereo matching method based on segmentation cross trees
CN105787932B (en) * 2016-02-07 2019-03-29 哈尔滨师范大学 Solid matching method based on segmentation Cross-Tree
CN109564691B (en) * 2016-06-01 2023-09-01 弗劳恩霍夫应用研究促进协会 Apparatus and method for performing 3D estimation based on locally determined 3D information hypothesis
CN109564691A (en) * 2016-06-01 2019-04-02 弗劳恩霍夫应用研究促进协会 Device and method for executing 3D estimation based on locally determining 3D information hypothesis
CN107316326A (en) * 2017-06-29 2017-11-03 海信集团有限公司 Applied to disparity map computational methods of the binocular stereo vision based on side and device
CN107316326B (en) * 2017-06-29 2020-10-30 海信集团有限公司 Edge-based disparity map calculation method and device applied to binocular stereo vision
CN107481271A (en) * 2017-07-25 2017-12-15 成都通甲优博科技有限责任公司 A kind of solid matching method, system and mobile terminal
CN107481271B (en) * 2017-07-25 2020-08-11 成都通甲优博科技有限责任公司 Stereo matching method, system and mobile terminal
CN107578430B (en) * 2017-07-26 2021-01-05 昆明理工大学 Stereo matching method based on self-adaptive weight and local entropy
CN107578430A (en) * 2017-07-26 2018-01-12 昆明理工大学 A kind of solid matching method based on adaptive weight and local entropy
CN107397658A (en) * 2017-07-26 2017-11-28 成都快眼科技有限公司 A kind of multiple dimensioned full convolutional network and vision blind-guiding method and device
CN107397658B (en) * 2017-07-26 2020-06-19 成都快眼科技有限公司 Multi-scale full-convolution network and visual blind guiding method and device
CN108322724B (en) * 2018-02-06 2019-08-16 上海兴芯微电子科技有限公司 Image solid matching method and binocular vision equipment
CN108322724A (en) * 2018-02-06 2018-07-24 上海兴芯微电子科技有限公司 Image solid matching method and binocular vision equipment
CN108682026B (en) * 2018-03-22 2021-08-06 江大白 Binocular vision stereo matching method based on multi-matching element fusion
CN108682026A (en) * 2018-03-22 2018-10-19 辽宁工业大学 A kind of binocular vision solid matching method based on the fusion of more Matching units
CN109658443A (en) * 2018-11-01 2019-04-19 北京华捷艾米科技有限公司 Stereo vision matching method and system
CN109741389B (en) * 2018-11-27 2023-05-16 华南农业大学 Local stereo matching method based on region base matching
CN109741389A (en) * 2018-11-27 2019-05-10 华南农业大学 One kind being based on the matched sectional perspective matching process of region base
CN110246169A (en) * 2019-05-30 2019-09-17 华中科技大学 A kind of window adaptive three-dimensional matching process and system based on gradient
CN112348871A (en) * 2020-11-16 2021-02-09 长安大学 Local stereo matching method
CN112348871B (en) * 2020-11-16 2023-02-10 长安大学 Local stereo matching method
CN113077504A (en) * 2021-04-12 2021-07-06 中国电子科技集团公司第二十八研究所 Large scene depth map generation method based on multi-granularity feature matching
CN113077504B (en) * 2021-04-12 2021-11-12 中国电子科技集团公司第二十八研究所 Large scene depth map generation method based on multi-granularity feature matching
CN113822915A (en) * 2021-07-30 2021-12-21 济宁安泰矿山设备制造有限公司 Image stereo matching method for intelligent pump cavity endoscope fault diagnosis
CN114332790A (en) * 2021-11-24 2022-04-12 华人运通(上海)自动驾驶科技有限公司 Binocular vision 3D target detection method, storage medium and terminal equipment

Similar Documents

Publication Publication Date Title
CN103996201A (en) Stereo matching method based on improved gradient and adaptive window
CN103996202A (en) Stereo matching method based on hybrid matching cost and adaptive window
Uhrig et al. Sparsity invariant cnns
CN108648161B (en) Binocular vision obstacle detection system and method of asymmetric kernel convolution neural network
US10353271B2 (en) Depth estimation method for monocular image based on multi-scale CNN and continuous CRF
Wang et al. PVStereo: Pyramid voting module for end-to-end self-supervised stereo matching
Pantilie et al. SORT-SGM: Subpixel optimized real-time semiglobal matching for intelligent vehicles
CN102006425A (en) Method for splicing video in real time based on multiple cameras
CN115601406A (en) Local stereo matching method based on fusion cost calculation and weighted guide filtering
Gao et al. Exploiting key points supervision and grouped feature fusion for multiview pedestrian detection
Huang et al. ES-Net: An efficient stereo matching network
Zhang et al. The farther the better: Balanced stereo matching via depth-based sampling and adaptive feature refinement
CN114120012A (en) Stereo matching method based on multi-feature fusion and tree structure cost aggregation
Tao et al. An efficient 3D object detection method based on fast guided anchor stereo RCNN
Jia et al. Multi-scale cost volumes cascade network for stereo matching
Li et al. Graph-based saliency fusion with superpixel-level belief propagation for 3D fixation prediction
Wang et al. Robust obstacle detection based on a novel disparity calculation method and G-disparity
Liu et al. A novel method for stereo matching using Gabor Feature Image and Confidence Mask
Brockers et al. Stereo vision using cost-relaxation with 3D support regions
CN112200853A (en) Progressive real-time stereo matching method based on three-dimensional convolutional neural network
Zhu et al. A Stereo Matching and Depth Map Acquisition Algorithm Based on Deep Learning and Improved Winner Takes All-Dynamic Programming
Zahari et al. Stereo matching algorithm based on combined matching cost computation and edge preserving filters
Wen Convolutional neural network and adaptive guided image filter based stereo matching
KANG et al. Encoder-Decoder network for local structure preserving stereo matching
Zahari et al. Stereo matching algorithm for autonomous vehicle navigation using integrated matching cost and non-local aggregation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20140820