CN108596975B - Stereo matching algorithm for weak texture region

Info

Publication number
CN108596975B
CN108596975B (application CN201810376408.7A)
Authority
CN
China
Prior art keywords
image
matching
cost
point
points
Prior art date
Legal status
Expired - Fee Related
Application number
CN201810376408.7A
Other languages
Chinese (zh)
Other versions
CN108596975A (en)
Inventor
杜娟 (Du Juan)
沈思昀 (Shen Siyun)
冯颖 (Feng Ying)
Current Assignee
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201810376408.7A priority Critical patent/CN108596975B/en
Publication of CN108596975A publication Critical patent/CN108596975A/en
Application granted granted Critical
Publication of CN108596975B publication Critical patent/CN108596975B/en


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/13 - Edge detection


Abstract

The invention discloses a stereo matching algorithm for weak texture regions. The algorithm first applies Gaussian sampling to the input images to construct image pairs to be matched at different scales. It then performs edge detection on the original image with the Canny algorithm to locate object edges and delimit low-texture regions: a smaller cost aggregation window is set for edge regions, while for low-texture regions the consistency constraint between the different scales of the same pixel is strengthened. The matching cost of the image pixel pairs at each scale is computed with an improved sliding-window method, and the disparity search range is bounded by the feature point matching result. A multi-scale cost aggregation function then yields the aggregated matching cost for every disparity value in that range; the disparity value with the minimum cost is selected as the pixel disparity, and traversing the original image pixels produces a dense disparity map. The invention achieves higher matching accuracy in low-texture regions and can effectively reduce mismatching.

Description

Stereo matching algorithm for weak texture region
Technical Field
The invention relates to the field of digital image processing, in particular to a stereo matching algorithm for a weak texture region.
Background
Binocular stereo vision is an important research subject in computer vision. It simulates the structure of the human eyes with cameras: images of the same scene acquired from two different angles are compared, and a computer processes them in analogy to the brain, so that the three-dimensional coordinates of objects are recovered. Binocular stereo vision directly imitates the way humans observe external objects and offers advantages such as low cost and high efficiency.
A binocular stereo vision system mainly comprises image acquisition, camera calibration, image rectification, stereo matching and three-dimensional reconstruction. Stereo matching is the most important link of stereo vision: it determines the quality of the final disparity map and thereby the correctness of the recovered three-dimensional information. Stereo matching mainly consists of establishing the pixel correspondence between the two two-dimensional images, computing the disparity and obtaining a disparity map. The problems that are hard to overcome in stereo matching are chiefly the influence of illumination, and the mismatching caused by occlusion and by the low-texture regions of the scene. In a low-texture region the values of neighboring pixels are very close, so the differences between candidate regions are not obvious during matching and the matching result is prone to error.
Local stereo matching comprises four steps: matching cost computation, cost aggregation, disparity computation and result refinement. Commonly used pixel-based matching cost measures include Absolute Differences (AD), Squared Differences (SD) and Truncated Absolute Differences (TAD); commonly used region-based measures include the Sum of Absolute Differences (SAD), the Sum of Squared Differences (SSD) and the Sum of Absolute Transformed Differences (SATD, based on the Hadamard transform). Commonly used cost aggregation methods include filter-based aggregation (bilateral filter, guided image filter, box filter, etc.) and aggregation based on segmentation trees.
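To make the cost measures above concrete, here is a minimal sketch in plain Python; the scanline values are made-up sample data, not from the patent:

```python
# Toy illustration of pixel-based (AD, TAD) and region-based (SAD)
# matching costs on two 1-D "scanlines" (made-up sample data).

left  = [10, 12, 50, 52, 54, 12, 10]
right = [12, 50, 52, 54, 12, 10, 10]   # left shifted by d = 1

def ad(left, right, x, d):
    """Absolute difference of a single pixel pair at disparity d."""
    return abs(left[x] - right[x - d])

def tad(left, right, x, d, tau=20):
    """Truncated absolute difference: cap the cost to resist outliers."""
    return min(ad(left, right, x, d), tau)

def sad(left, right, x, d, half=1):
    """Sum of absolute differences over a (2*half+1)-pixel window."""
    return sum(ad(left, right, x + k, d) for k in range(-half, half + 1))

# The true shift is 1, so the SAD cost at d = 1 should be the smallest.
costs = {d: sad(left, right, 3, d) for d in (0, 1, 2)}
best = min(costs, key=costs.get)
```

The disparity with the lowest window cost wins, here the true shift d = 1.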
At present most research on improving stereo matching algorithms concentrates on the first two steps, yet no matching algorithm is robust in every scene; in most cases the matching method has to be designed according to the image characteristics and the processing requirements.
Disclosure of Invention
In order to overcome the defects and shortcomings in the prior art, the invention provides a stereo matching algorithm for weak texture regions.
The invention adopts the following technical scheme:
a stereo matching algorithm for weak texture regions comprises the following steps:
s1, correcting the input image, and carrying out scale transformation on the corrected image pair to obtain the image pair under different scales;
s2, detecting the image edge by adopting a Canny algorithm, obtaining a smooth continuous boundary through closed operation and marking the edge position, wherein the region in the edge is a weak texture region;
s3, detecting Harris angular points of the corrected image pair, matching the characteristic points, and determining a parallax search range in the same connected domain;
s4, calculating the matching cost among the pixels of the image in each scale by adopting an improved sliding window method;
s5, calculating aggregation costs corresponding to all parallax values in the parallax search range by adopting a multi-scale cost aggregation function under regularization constraint, and selecting a parallax value corresponding to the minimum aggregation cost as a pixel parallax value;
s6 obtains a dense disparity map by traversing the original image pixels.
In step S1, the image pairs at different scales are obtained with the Gaussian pyramid method.
In S3, the Harris corner points of the rectified image pair are detected, the feature points are matched, and the disparity search range in each connected domain is determined, specifically:
the rectified image pair comprises a left image and a right image. Harris corner points are extracted from both images as feature points, and the left image is set as the reference image. For each feature point of the reference image, its corresponding point in the right image is sought under the epipolar constraint; if more than one feature point of the right image satisfies the constraint, the ratio of the nearest distance to the second-nearest distance decides whether the candidate is accepted as a match. This is repeated until all feature points of the reference image have been traversed, yielding a set of matched points;
the disparity of each matched point pair is then computed, and the disparity search range of a connected domain is determined by judging the positions of the feature points and combining the disparities of all feature points inside that domain.
In S4, the matching cost of the image pixels at each scale is calculated with an improved sliding-window method, specifically:
judge whether pixel i lies on an image edge; if it does, take it as the center pixel, discriminate the regions of the points in its eight-neighborhood and, following the region priority, calculate the matching cost only for the pixels in the highest-priority region to obtain the optimal value;
if it is not an edge pixel, obtain the disparity search range from S3, set the window size for the matching cost calculation, and then solve the matching cost.
The regions comprise singly connected regions, multiply connected regions and other regions, with priority order: singly connected region > multiply connected region > other regions.
S5, specifically:
calculating the matching cost of each pixel pair of the image under each scale;
calculating an aggregation result of the matching cost under a single scale;
solving the fused matching cost of the multi-scale image under the regularization constraint;
selecting the point with the minimum matching cost as a corresponding point;
the final disparity value is obtained.
Matching cost aggregation model:

C̃(i, l) = (1 / Z_i) · Σ_{j ∈ N(i)} K(i, j) · C(j, l),   Z_i = Σ_{j ∈ N(i)} K(i, j)

wherein l is the unknown disparity variable, j ranges over the other pixels in the window N(i) centered on pixel i, Z_i is the normalization constant of the overall cost, and K(i, j) is a kernel function reflecting the degree of similarity between the two points i and j.
A WTA (winner-takes-all) strategy is adopted to select the point with the minimum matching cost as the corresponding point.
The invention has the following beneficial effects:
1. by extracting edge features and discriminating regions, different windows are used in the cost computation according to the characteristics of each marked region, and the parameters of the aggregation function are set accordingly, so the final result is more accurate both in low-texture regions and in regions with abrupt depth changes;
2. through the extraction and matching of corner features, a more reasonable disparity range estimate is obtained for each region of the image, which also reduces the amount of computation during matching to a certain extent.
Drawings
FIG. 1 is a schematic workflow diagram of the present invention;
fig. 2 is a schematic flow chart of S4;
FIG. 3 is a schematic diagram of an improved sliding window approach;
fig. 4 is a flowchart of S5.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited to these examples.
Examples
As shown in fig. 1 to 4, a stereo matching algorithm for weak texture regions includes the following steps:
s1 performs image correction on the input image, performs scale transformation on the corrected image pair to obtain an image pair at different scales, where the corrected image pair includes a left image and a right image, and the left image is selected as a reference image.
For human vision, the size of an image, the degree of blurring, and the main information observed are different when the same image is viewed from far to near. The multi-scale transformation of the image is similar to the property, wherein the image pyramid is one of multi-scale expressions of the image and is an image set which is arranged layer by layer and has gradually reduced resolution and gradually reduced size. The Gaussian pyramid is the most commonly used one in practical application, the main features of the images under different scales are highlighted by constructing the Gaussian pyramid, and then the pixel region features can be more accurately described by fusing the matching cost results of the images with the multiple scales and solving the aggregation cost, so that a better matching effect is obtained.
The step of constructing the Gaussian pyramid of the image comprises the steps of blurring the image by using a Gaussian low-pass filter and sampling, when the method is realized, 5 x 5 Gaussian kernels are adopted to perform convolution operation on an original image Gi to obtain a blurred image, and then downsampling operation is performed to obtain an upper layer image G (i + 1).
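One pyramid level can be sketched as follows in plain Python; the separable 5-tap binomial kernel is a common stand-in for the 5 × 5 Gaussian of the text, and the input grid is made-up:

```python
# Sketch of one Gaussian-pyramid step: blur with a 5-tap binomial
# kernel (approximating the 5x5 Gaussian), then drop every other row
# and column. The input grid is made-up sample data.

KERNEL = [1/16, 4/16, 6/16, 4/16, 1/16]   # binomial weights, sum to 1

def blur_1d(row, k=KERNEL):
    """Convolve one row with the kernel, replicating border pixels."""
    n, r = len(row), len(k) // 2
    out = []
    for x in range(n):
        acc = 0.0
        for t, w in enumerate(k):
            xx = min(max(x + t - r, 0), n - 1)   # clamp at the border
            acc += w * row[xx]
        out.append(acc)
    return out

def pyramid_down(img):
    """One level G(i) -> G(i+1): separable blur, then 2x downsampling."""
    blurred_rows = [blur_1d(row) for row in img]
    cols = list(zip(*blurred_rows))
    blurred = list(zip(*[blur_1d(list(c)) for c in cols]))
    return [list(row[::2]) for row in blurred[::2]]

img = [[float((x + y) % 7) for x in range(8)] for y in range(8)]
level1 = pyramid_down(img)          # 8x8 -> 4x4
level2 = pyramid_down(level1)       # 4x4 -> 2x2
```

Repeating `pyramid_down` yields the image pairs at the successive scales used in S1.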
S2, detecting the image edges with the Canny algorithm, obtaining smooth continuous boundaries through a morphological closing operation and marking the edge positions; the regions inside the edges are weak texture regions.
Because the background of the image is an open area, the method is mainly used to match the observed object regions, whose surfaces are characterized by low texture. The areas inside a connected domain can therefore be regarded as low-texture regions; a larger regularization factor is adopted for them in the subsequent computation of the aggregated matching cost, which strengthens the consistency constraint between the scales of a pixel and yields a better matching result.
The Canny edge detection algorithm can accurately detect as many of the edges in an image as possible. Its implementation mainly comprises three steps: first, the gradient magnitude and direction of the image gray values are computed with finite differences of the first-order partial derivatives; then non-maximum suppression is applied to the gradient magnitude to reduce the possibility of false detections; finally the edges are linked with a double-threshold method.
In this embodiment the horizontal and vertical differences Gx and Gy are generally obtained with the following two convolution kernels (the standard Sobel operators):

K_Gx = [ -1 0 1 ; -2 0 2 ; -1 0 1 ],   K_Gy = [ -1 -2 -1 ; 0 0 0 ; 1 2 1 ]

Gx = K_Gx * G,   Gy = K_Gy * G

The corresponding gradient magnitude M and phase angle θ then follow from the coordinate conversion formula:

M = sqrt(Gx² + Gy²),   θ = arctan(Gy / Gx)
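Assuming the Sobel kernels commonly used for this step, the gradient computation can be sketched as follows; the 3 × 3 patch is made-up sample data:

```python
# Sketch of the gradient step: correlate a tiny made-up patch with the
# Sobel kernels K_Gx, K_Gy, then convert to magnitude M and angle theta.
import math

K_GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
K_GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def apply_kernel(patch, k):
    """Correlate a 3x3 kernel with a 3x3 patch (single output value)."""
    return sum(k[i][j] * patch[i][j] for i in range(3) for j in range(3))

# A patch with a vertical edge: dark on the left, bright on the right.
patch = [[0, 0, 100],
         [0, 0, 100],
         [0, 0, 100]]

gx = apply_kernel(patch, K_GX)     # strong horizontal difference
gy = apply_kernel(patch, K_GY)     # no vertical difference
m = math.hypot(gx, gy)             # M = sqrt(Gx^2 + Gy^2)
theta = math.atan2(gy, gx)         # phase angle of the gradient
```

For this vertical edge the gradient points horizontally (theta = 0) with a large magnitude, which is what the non-maximum suppression and double-threshold steps then act on.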
s3 detecting Harris corners of the corrected image pair and matching the feature points, and determining a disparity search range in the same connected domain, specifically:
the corrected image pair comprises a left image and a right image, Harris angular points in the two images are extracted as characteristic points, the left image is set as a reference image, one characteristic point is taken out, a corresponding point of the point in the right image is found out under the condition that epipolar constraint is met, if more than one characteristic point in the right image meets the condition, whether the characteristic point is a matched characteristic point is determined by calculating the ratio of the nearest distance to the next nearest distance until all the characteristic points in the reference image are traversed, and a matched point set is obtained;
and performing parallax solution on the matching point set, and determining a parallax search range in a certain connected domain by judging the positions of the feature points and integrating the parallaxes of all the feature points in the connected domain.
The principle of the Harris corner feature extraction algorithm is as follows: a local window W is created around a point P(x, y) of the image; if a slight shift of W in any direction causes an obvious change of the image intensity, the point is considered a feature point of the image. The autocorrelation matrix of the image intensity is defined as:

M = w(x, y) ⊗ [ Ix²  Ix·Iy ; Ix·Iy  Iy² ]

where w(x, y) = exp(-(x² + y²) / (2σ²)) / (2πσ²) is a Gaussian function, and Ix and Iy are the derivatives of the image in the x and y directions, respectively.
The eigenvalues of the autocorrelation matrix M are then solved; if the two eigenvalues λ1 and λ2 are both large enough, the point is detected as a feature point of the image. Harris defines the response function of a feature point as:

R = det(M) - k · (trace(M))² > T_R

where det(M) = λ1 · λ2 is the determinant of M, trace(M) = λ1 + λ2 is the sum of its eigenvalues, and k is a given constant, usually taking a value in the range 0.04 to 0.06; in this example k = 0.04. When the R value of a pixel of the image is larger than the given threshold T_R, the point is considered a feature point of the image.
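The response computation can be sketched directly from the formula; the structure-tensor sums Ixx, Iyy, Ixy below are made-up sample values:

```python
# Sketch of the Harris response R = det(M) - k * trace(M)^2 computed
# from the entries of the autocorrelation matrix M = [[Ixx, Ixy],
# [Ixy, Iyy]]; the input values are made-up samples.

def harris_response(ixx, iyy, ixy, k=0.04):
    """R for a window whose summed gradient products are Ixx, Iyy, Ixy."""
    det_m = ixx * iyy - ixy * ixy      # det(M)   = lambda1 * lambda2
    trace_m = ixx + iyy                # trace(M) = lambda1 + lambda2
    return det_m - k * trace_m ** 2

corner_like = harris_response(ixx=100.0, iyy=100.0, ixy=0.0)  # both directions strong
edge_like = harris_response(ixx=100.0, iyy=1.0, ixy=0.0)      # one direction only
flat_like = harris_response(ixx=0.5, iyy=0.5, ixy=0.0)        # weak gradients
```

A large positive R marks a corner, a negative R an edge, and a small R a flat region, which is why thresholding against T_R isolates the corner points.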
Let the Harris feature point sets of the left and right images be C_L = {c_i | i = 1, …, n} and C_R = {c_j | j = 1, …, m}, with corresponding response values R_L(c_i) and R_R(c_j). Two feature points of the two images can then be considered to match when their response values satisfy:

|R_L(c_i) - R_R(c_j)| ≤ δ

where δ is an allowable error that suppresses the influence of noise and other interference on the feature point values.
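The match decision (response tolerance plus the nearest / second-nearest distance ratio from S3) can be sketched as follows; the `ratio` threshold of 0.8, the `delta` value and all distances are assumptions of this sketch, not values from the patent:

```python
# Sketch of the two-stage match decision: candidates on the same
# epipolar line are filtered by the response tolerance
# |R_L - R_R| <= delta, then a nearest / second-nearest distance ratio
# decides acceptance. Distances and responses are made-up samples.

def match_feature(r_left, candidates, delta=5.0, ratio=0.8):
    """candidates: list of (distance, r_right). Return index or None."""
    valid = [(d, i) for i, (d, r) in enumerate(candidates)
             if abs(r_left - r) <= delta]
    if not valid:
        return None
    valid.sort()
    if len(valid) == 1:
        return valid[0][1]
    nearest, second = valid[0][0], valid[1][0]
    # Accept only an unambiguous match: nearest clearly beats second.
    return valid[0][1] if nearest < ratio * second else None

good = match_feature(50.0, [(2.0, 52.0), (9.0, 49.0)])       # clear winner
ambiguous = match_feature(50.0, [(4.0, 52.0), (4.5, 49.0)])  # too close -> rejected
```

Rejecting ambiguous candidates keeps mismatched corners out of the point set that later bounds the disparity search.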
The disparities of the matched point set obtained above are then solved. By judging the positions of the feature points, all the feature points c_i inside a region A are combined to obtain the disparity search range D(A) of that region, which spans the smallest to the largest feature point disparity of the region:

D(A) = [ min_{c_i ∈ A} d(c_i), max_{c_i ∈ A} d(c_i) ]
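Under a min/max reading of the per-region range, the estimate can be sketched as follows; the tolerance `margin` and the corner coordinates are assumptions of this sketch:

```python
# Sketch of the region disparity range: collect the disparities of the
# matched corner points that fall inside connected domain A and span
# them. The small tolerance margin is an assumption of this sketch.

def disparity_range(points_in_region, margin=1):
    """points_in_region: list of (x_left, x_right) matched corner pairs."""
    disps = [xl - xr for xl, xr in points_in_region]
    return (min(disps) - margin, max(disps) + margin)

# Made-up matched corners belonging to one connected domain.
region_a = [(40, 28), (55, 42), (61, 50)]
lo, hi = disparity_range(region_a)   # search only disparities in [lo, hi]
```

Restricting the per-pixel search to [lo, hi] is what reduces the computation in the later cost steps.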
S4, calculating the matching cost between the image pixels at each scale with an improved sliding-window method, specifically:
judge whether pixel i lies on an image edge; if it does, take it as the center pixel, discriminate the regions of the points in its eight-neighborhood and, following the region priority, calculate the matching cost only for the pixels in the highest-priority region to obtain the optimal value. The points in the eight-neighborhood of an edge pixel are divided into singly connected regions, multiply connected regions and other regions, with priority order: singly connected region > multiply connected region > other regions.
If it is not an edge pixel, obtain the disparity search range from S3, set the window size for the matching cost calculation, and then solve the matching cost.
The central idea of the sliding-window method is to take the N × N neighborhood around a pixel of the image pair, take every point of that neighborhood in turn as a window center, and keep the optimal of the resulting N × N window costs as the final matching cost. In this example, as shown in fig. 3, N = 3 for edge pixels: the points of the eight-neighborhood are discriminated, the effective area is selected according to the priority of singly connected, multiply connected and other regions, the matching cost is computed only for the pixels of the highest-priority area, and the optimal value is kept. This effectively reduces the loss of edge matching accuracy caused by abrupt depth of field. For non-edge pixels, the window size is determined from the position of the pixel, and the matching cost is then solved.
As shown in fig. 3, the figure assumes that the region at the upper left of the edge line has the higher priority, so only the information of the six pixels in that region is processed.
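A 1-D sketch of this idea in plain Python; the region-priority filtering and the edge map are omitted for brevity, and the two scanlines are made-up data with a depth discontinuity at the object border:

```python
# 1-D sketch of the improved sliding window: every point of a pixel's
# size-N neighborhood is used in turn as the window center, and the
# best (minimum) of those window costs is kept. Scanlines are made-up:
# a ramp background at disparity 0 and an object (the 80s) at d = 1.

left  = [1, 2, 3, 80, 80, 80, 7, 8]
right = [1, 2, 80, 80, 80, 6, 7, 8]

def window_cost(x, d, half=1):
    """SAD over a window centered at x, disparity d, with clamping."""
    n = len(left)
    total = 0
    for k in range(-half, half + 1):
        xl = min(max(x + k, 0), n - 1)
        xr = min(max(x + k - d, 0), n - 1)
        total += abs(left[xl] - right[xr])
    return total

def sliding_cost(x, d, half=1):
    """Best window cost over all centers in the neighborhood of x."""
    return min(window_cost(x + s, d) for s in range(-half, half + 1))

# At the depth edge x = 3 the plain centered window straddles the
# discontinuity; a shifted center recovers a clean cost at d = 1.
plain = window_cost(3, 1)
improved = sliding_cost(3, 1)
```

The improved cost is never worse than the centered one, which is exactly why the method helps at depth discontinuities.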
As shown in fig. 4, in S5, an aggregation cost corresponding to each disparity value in the disparity search range is calculated by using a multi-scale cost aggregation function under regularization constraint, and a disparity value corresponding to a minimum aggregation cost is selected as a pixel disparity value;
for an image pair at a certain scale, the matching cost aggregation model is as follows:
C̃(i, l) = (1 / Z_i) · Σ_{j ∈ N(i)} K(i, j) · C(j, l),   Z_i = Σ_{j ∈ N(i)} K(i, j)

This model yields the minimum matching cost C̃ of the current pixel i, where l is the unknown disparity variable, j ranges over the other pixels in the window N(i) centered on pixel i, Z_i is the normalization constant of the overall cost, and K(i, j) is a kernel function reflecting the degree of similarity between the two points i and j; the cost aggregation kernel of the classical DoubleBP algorithm is adopted here.
For the multi-scale fusion of the costs over S scales, the cost aggregation function based on the regularization constraint is:

{z^s} = argmin Σ_{s=0}^{S-1} || z^s - C^s(i^s, l^s) ||² + λ · Σ_{s=1}^{S-1} || z^s - z^{s-1} ||²

The added regularization term constrains the cost of the same pixel to stay consistent across the scales. Setting a larger regularization factor λ for the delimited low-texture regions effectively reduces the mismatching rate and yields a more accurate disparity result.
Within the disparity search range, the value that minimizes the fused matching cost is selected as the disparity value of each pixel.
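A small sketch of the fused-cost selection, assuming just two scales so the regularized minimization has a simple closed form (obtained by setting the derivatives of (z0 - c0)² + (z1 - c1)² + λ(z1 - z0)² to zero); the per-scale cost curves are made-up:

```python
# Sketch of two-scale regularized fusion plus the WTA step. A larger
# lam pulls the two scale costs together, i.e. the stronger cross-scale
# consistency used for low-texture regions. Cost curves are made-up.

def fuse_two_scales(c0, c1, lam):
    """Closed-form minimizer of (z0-c0)^2 + (z1-c1)^2 + lam*(z1-z0)^2."""
    z0 = ((1 + lam) * c0 + lam * c1) / (1 + 2 * lam)
    z1 = (lam * c0 + (1 + lam) * c1) / (1 + 2 * lam)
    return z0, z1

def wta(costs_scale0, costs_scale1, lam):
    """costs_scale*: {disparity: cost}. Winner-takes-all on fused cost."""
    fused = {d: fuse_two_scales(costs_scale0[d], costs_scale1[d], lam)[0]
             for d in costs_scale0}
    return min(fused, key=fused.get)

# Fine scale is noisy (prefers d = 2); coarse scale prefers d = 3.
c_fine   = {1: 8.0, 2: 3.0, 3: 3.5}
c_coarse = {1: 9.0, 2: 6.0, 3: 1.0}
weak_reg   = wta(c_fine, c_coarse, lam=0.1)   # follows the noisy fine scale
strong_reg = wta(c_fine, c_coarse, lam=5.0)   # coarse scale pulls it to d = 3
```

With a weak λ the noisy fine scale wins; with a strong λ, as prescribed for low-texture regions, the coarse-scale evidence corrects the disparity.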
S6 obtains a dense disparity map by traversing the original image pixels.
The above embodiment is a preferred embodiment of the present invention, but the present invention is not limited to it; any change, modification, substitution, combination or simplification that does not depart from the spirit and principle of the present invention shall be regarded as an equivalent replacement and falls within the scope of the present invention.

Claims (8)

1. A stereo matching algorithm for weak texture regions is characterized by comprising the following steps:
s1, correcting the input image, and carrying out scale transformation on the corrected image pair to obtain the image pair under different scales;
s2, detecting the image edge by adopting a Canny algorithm, obtaining a smooth continuous boundary through closed operation and marking the edge position, wherein the region in the edge is a weak texture region;
s3, detecting Harris angular points of the corrected image pair, matching the characteristic points, and determining a parallax search range in the same connected domain;
s4, calculating the matching cost among the pixels of the image in each scale by adopting an improved sliding window method;
the method specifically comprises the following steps: selecting the N × N neighborhood around the pixel point with N = 3, discriminating the regions of the points in the eight-neighborhood, selecting the effective region according to the priority of singly connected regions, multiply connected regions and other regions, calculating the matching cost only for the pixel points in the high-priority region, and selecting the optimal value therefrom;
s5, calculating aggregation costs corresponding to all parallax values in the parallax search range by adopting a multi-scale cost aggregation function under regularization constraint, and selecting a parallax value corresponding to the minimum aggregation cost as a pixel parallax value;
s6 obtains a dense disparity map by traversing the original image pixels.
2. The stereo matching algorithm according to claim 1, wherein in S1 the image pairs at different scales are obtained with the Gaussian pyramid method.
3. The stereo matching algorithm according to claim 1, wherein in S3 the Harris corner points of the rectified image pair are detected, the feature points are matched, and the disparity search range in each connected domain is determined, specifically:
the rectified image pair comprises a left image and a right image; Harris corner points are extracted from both images as feature points, and the left image is set as the reference image; for each feature point of the reference image, its corresponding point in the right image is sought under the epipolar constraint; if more than one feature point of the right image satisfies the constraint, the ratio of the nearest distance to the second-nearest distance decides whether the candidate is a matched feature point, until all feature points of the reference image have been traversed and a set of matched points is obtained;
the disparity of each matched point pair is then solved, and the disparity search range of a connected domain is determined by judging the positions of the feature points and combining the disparities of all feature points inside that domain.
4. The stereo matching algorithm according to claim 1, wherein in S4 the matching cost of the image pixels at each scale is calculated with an improved sliding-window method, specifically:
judging whether pixel i lies on an image edge; if it does, taking it as the center pixel, discriminating the regions of the points in its eight-neighborhood, and, following the region priority, calculating the matching cost only for the pixels in the high-priority region to obtain the optimal value;
if it is not an edge pixel, obtaining the disparity search range through S3, setting the window size for the matching cost calculation, and then solving the matching cost.
5. The stereo matching algorithm according to claim 4, wherein the regions comprise singly connected regions, multiply connected regions and other regions, with priority order: singly connected region > multiply connected region > other regions.
6. The stereo matching algorithm according to claim 1, wherein S5 specifically is:
calculating the matching cost of each pixel pair of the image under each scale;
calculating an aggregation result of the matching cost under a single scale;
solving the fused matching cost of the multi-scale image under the regularization constraint;
selecting the point with the minimum matching cost as a corresponding point;
the final disparity value is obtained.
7. The stereo matching algorithm according to claim 6,
matching cost aggregation model:

C̃(i, l) = (1 / Z_i) · Σ_{j ∈ N(i)} K(i, j) · C(j, l),   Z_i = Σ_{j ∈ N(i)} K(i, j)

wherein l is the unknown disparity variable, j ranges over the other pixels in the window N(i) centered on pixel i, Z_i is the normalization constant of the overall cost, and K(i, j) is a kernel function reflecting the degree of similarity between the two points i and j.
8. The stereo matching algorithm according to claim 6, wherein a WTA strategy is adopted to select a point with the minimum matching cost as the corresponding point.
CN201810376408.7A 2018-04-25 2018-04-25 Stereo matching algorithm for weak texture region Expired - Fee Related CN108596975B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810376408.7A CN108596975B (en) 2018-04-25 2018-04-25 Stereo matching algorithm for weak texture region


Publications (2)

Publication Number Publication Date
CN108596975A CN108596975A (en) 2018-09-28
CN108596975B (en) 2022-03-29

Family

ID=63609493

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810376408.7A Expired - Fee Related CN108596975B (en) 2018-04-25 2018-04-25 Stereo matching algorithm for weak texture region

Country Status (1)

Country Link
CN (1) CN108596975B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109741306B (en) * 2018-12-26 2021-07-06 北京石油化工学院 Image processing method applied to dangerous chemical storehouse stacking
CN110246169B (en) * 2019-05-30 2021-03-26 华中科技大学 Gradient-based window adaptive stereo matching method and system
CN110570467B (en) * 2019-07-11 2023-09-19 华南理工大学 Stereo matching parallax calculation method based on parallel queues
CN110570444B (en) * 2019-09-06 2023-03-21 合肥工业大学 Threshold calculation method based on Box Filter algorithm
CN111197976A (en) * 2019-12-25 2020-05-26 山东唐口煤业有限公司 Three-dimensional reconstruction method considering multi-stage matching propagation of weak texture region
CN111415516A (en) * 2020-03-30 2020-07-14 福建工程学院 Vehicle exhaust monitoring method of global road network
CN111462195B (en) * 2020-04-09 2022-06-07 武汉大学 Irregular angle direction cost aggregation path determination method based on dominant line constraint
CN112053394B (en) * 2020-07-14 2024-06-07 北京迈格威科技有限公司 Image processing method, device, electronic equipment and storage medium
CN113486729A (en) * 2021-06-15 2021-10-08 北京道达天际科技有限公司 Unmanned aerial vehicle image feature point extraction method based on GPU
CN113627429A (en) * 2021-08-12 2021-11-09 深圳市爱培科技术股份有限公司 Low-texture region identification method and device of image, storage medium and equipment
CN117911465B (en) * 2024-03-20 2024-05-17 深圳市森歌数据技术有限公司 Natural protection area dense point cloud generation method based on binocular stereo matching

Citations (5)

Publication number Priority date Publication date Assignee Title
CN103512892A (en) * 2013-09-22 2014-01-15 上海理工大学 Method for detecting electromagnetic wire film wrapping
CN105551035A (en) * 2015-12-09 2016-05-04 深圳市华和瑞智科技有限公司 Stereoscopic vision matching method based on weak edge and texture classification
CN106384363A (en) * 2016-09-13 2017-02-08 天津大学 Fast adaptive weight stereo matching algorithm
CN107341822A (en) * 2017-06-06 2017-11-10 东北大学 A kind of solid matching method based on the polymerization of minimum branch cost
CN107392950A (en) * 2017-07-28 2017-11-24 哈尔滨理工大学 A kind of across yardstick cost polymerization solid matching method based on weak skin texture detection


Non-Patent Citations (1)

Title
Cross-Scale Cost Aggregation for Stereo Matching; Zhang, K. et al.; IEEE Conference on Computer Vision & Pattern Recognition; 2014; pp. 1593-1594 *

Also Published As

Publication number Publication date
CN108596975A (en) 2018-09-28

Similar Documents

Publication Publication Date Title
CN108596975B (en) Stereo matching algorithm for weak texture region
CN110569704B (en) Multi-strategy self-adaptive lane line detection method based on stereoscopic vision
CN110473217B (en) Binocular stereo matching method based on Census transformation
CN106780590B (en) Method and system for acquiring depth map
CN106355570B (en) A kind of binocular stereo vision matching method of combination depth characteristic
Premebida et al. High-resolution lidar-based depth mapping using bilateral filter
CN109685732B (en) High-precision depth image restoration method based on boundary capture
WO2022088982A1 (en) Three-dimensional scene constructing method, apparatus and system, and storage medium
US9214013B2 (en) Systems and methods for correcting user identified artifacts in light field images
CN111833393A (en) Binocular stereo matching method based on edge information
CN104867135B (en) A kind of High Precision Stereo matching process guided based on guide image
CN118212141A (en) System and method for hybrid depth regularization
CN108921895B (en) Sensor relative pose estimation method
CN107578430B (en) Stereo matching method based on self-adaptive weight and local entropy
CN109961417B (en) Image processing method, image processing apparatus, and mobile apparatus control method
Lo et al. Joint trilateral filtering for depth map super-resolution
Hua et al. Extended guided filtering for depth map upsampling
AliAkbarpour et al. Fast structure from motion for sequential and wide area motion imagery
CN115601406A (en) Local stereo matching method based on fusion cost calculation and weighted guide filtering
CN109493373A (en) A kind of solid matching method based on binocular stereo vision
CN113763269A (en) Stereo matching method for binocular images
CN112734822A (en) Stereo matching algorithm based on infrared and visible light images
CN113887624A (en) Improved feature stereo matching method based on binocular vision
CN111739071A (en) Rapid iterative registration method, medium, terminal and device based on initial value
CN114120012A (en) Stereo matching method based on multi-feature fusion and tree structure cost aggregation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20220329