CN102136136A - Luminosity insensitivity stereo matching method based on self-adapting Census conversion - Google Patents

Luminosity insensitivity stereo matching method based on self-adapting Census conversion

Info

Publication number
CN102136136A
CN102136136A CN201110065196 CN201110065196A
Authority
CN
China
Prior art keywords
parallax
self
census
pixel
luminosity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 201110065196
Other languages
Chinese (zh)
Other versions
CN102136136B (en)
Inventor
徐贵力
倪炜基
周龙
汪凌燕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN201110065196A priority Critical patent/CN102136136B/en
Publication of CN102136136A publication Critical patent/CN102136136A/en
Application granted granted Critical
Publication of CN102136136B publication Critical patent/CN102136136B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a luminosity-insensitive stereo matching method based on an adaptive Census transform. First, an adaptive region based on a cross skeleton is determined according to the structure and color information of the image, yielding Census transform windows of arbitrary shape and size. Second, the Hamming distance after the Census transform is used as the matching cost, and a local optimization method is adopted to compute the initial disparity. Finally, a two-step refinement method based on a disparity statistics histogram and left-right consistency checking is disclosed, which organically integrates the cross-skeleton adaptive region into the refinement process to obtain a high-precision disparity map. The method obtains high-precision disparity maps from stereo image pairs with differences in illumination intensity and exposure time, balances matching accuracy with robustness to amplitude distortion, and is better suited to the application context of visual navigation for unmanned aerial vehicles.

Description

Luminosity-insensitive stereo matching method based on an adaptive Census transform
Technical field
The present invention relates to a stereo matching method for stereo vision systems and belongs to the field of computer vision. It is used to obtain high-precision dense disparity information from left and right views under conditions of illumination and exposure differences, thereby providing a reliable basis for recovering three-dimensional depth information from stereo vision.
Background technology
Stereo matching is a vital task in computer vision: it obtains a dense disparity map by matching binocular or multi-view images and thereby perceives the three-dimensional depth information of the scene. Many scholars at home and abroad have studied this field in depth. Current dense stereo matching algorithms can be analyzed in four steps: matching cost computation, matching cost aggregation, disparity computation/optimization, and disparity refinement. Among these four steps, matching cost computation is the foundation of stereo matching and its importance is self-evident; only a suitable matching cost yields a high-precision disparity map. Common matching costs fall into two classes:
The first class of matching costs is based on the brightness/color constancy assumption, i.e., that a feature in the scene has the same brightness/color in different images. The absolute difference of intensities, the squared difference of intensities, the truncated absolute difference of intensities, and so on are all based on this assumption.
However, factors such as global brightness changes, local brightness changes, and noise between images cause the brightness/color of corresponding features to differ; this phenomenon is referred to as amplitude distortion. Although matching algorithms based on brightness/color constancy can obtain high-precision disparity maps for images that satisfy this assumption, they are quite sensitive to amplitude distortion.
The second class of matching costs achieves insensitivity to amplitude distortion by relaxing or abandoning the brightness/color constancy assumption, e.g., normalized cross-correlation, the Rank and Census non-parametric transforms, mutual information, the Laplacian of Gaussian, and median filtering. Studies show that the Census non-parametric transform has good robustness under various kinds of amplitude distortion. However, the traditional Census transform based on a fixed window suffers from the window-size selection problem: if the transform window is too small, the signal-to-noise ratio is too low, the matching cost has poor discriminative power, and low-texture regions are easily mismatched; if the window is too large, too many outliers are introduced, which degrades matching accuracy.
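For concreteness, the sketch below (illustrative Python/NumPy code, not part of the patent text; the window size win and the helper names are assumptions of this example) shows the conventional fixed-window Census transform and the Hamming-distance matching cost it induces. The single parameter win is exactly the window size whose choice the preceding paragraph identifies as problematic.

import numpy as np

def census_transform(img, win=7):
    """Fixed-window Census transform: for each pixel, a boolean vector that
    records, for every other pixel of a win x win window, whether that
    neighbour is darker than the centre pixel."""
    h, w = img.shape
    r = win // 2
    padded = np.pad(img, r, mode='edge')
    bits = []
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            neighbour = padded[r + dy:r + dy + h, r + dx:r + dx + w]
            bits.append(neighbour < img)
    return np.stack(bits, axis=-1)          # shape (h, w, win*win - 1)

def census_cost(census_left, census_right, d):
    """Matching cost at disparity d: the Hamming distance between the Census
    vector of each left pixel and that of the right pixel d columns to its left.
    Columns x < d have no valid counterpart and keep the maximum cost."""
    h, w, n = census_left.shape
    cost = np.full((h, w), n, dtype=np.int32)
    diff = census_left[:, d:] != census_right[:, :w - d]
    cost[:, d:] = diff.sum(axis=-1)
    return cost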
In addition, research also shows that optimizing a Census-based matching cost with a global method incurs a high computational cost, while optimizing it with a local method based on a fixed support window makes it difficult to obtain a high-precision disparity map, leaving an obvious foreground-fattening effect at disparity discontinuities.
Summary of the invention
The technical problem to be solved by the present invention is to provide a stereo matching method that obtains a high-precision disparity map when illumination and exposure differences exist between the left and right views.
To solve the above technical problem, the present invention adopts the following technical scheme:
A luminosity-insensitive stereo matching method based on an adaptive Census transform, characterized by comprising the following steps:
(1) determining an adaptive region based on a cross skeleton, and thereby obtaining Census transform windows of arbitrary shape and size;
(2) using the Hamming distance after the Census transform as the matching cost and computing the initial disparity with a local optimization method (Winner-Take-All);
(3) after the initial disparity is obtained, further improving the disparity accuracy with a two-step disparity refinement method.
The aforementioned luminosity-insensitive stereo matching method based on an adaptive Census transform is characterized in that matching cost computation and matching cost aggregation in the stereo matching process are organically merged.
The aforementioned luminosity-insensitive stereo matching method based on an adaptive Census transform is characterized in that the Census transform windows of arbitrary shape and size are obtained as follows:
(1) constructing, according to the structure and color information of the images, a cross-skeleton-based adaptive region for each pixel in the reference image and in the target image;
(2) combining, by a logical AND operation, the adaptive region of the pixel to be matched in the reference image and the adaptive region of its corresponding point in the target image to obtain the Census transform window.
The aforementioned luminosity-insensitive stereo matching method based on an adaptive Census transform is characterized in that the cross-skeleton-based adaptive region is computed as follows:
(1) For any pixel p in the image, a cross-shaped skeleton is determined. The skeleton comprises the horizontal and the vertical direction, denoted H(p) and V(p) respectively, and the lengths of its four arms are denoted (h_p^-, h_p^+, v_p^-, v_p^+). The cross-skeleton-based adaptive region of pixel p can then be expressed as
U(p) = ∪_{q∈V(p)} H(q)    (1)
(2) According to the assumption that similar colors in an image correspond to the same structure, the formula
r* = max_{r∈[1,L]} ( r · Π_{i∈[1,r]} δ(p, p_i) )    (2)
is used to determine the lengths (h_p^-, h_p^+, v_p^-, v_p^+) of the four arms of the cross skeleton of the center pixel p. In formula (2), δ is an indicator function that measures the color difference between two pixels, p_i is a pixel on one of the cross directions of p, the coordinates of p in the image are (x_p, y_p), r* is the arm length in that direction, and L is the search range along that direction of p. For example, when p_i lies to the horizontal left of p, its coordinates are (x_p − i, y_p), L is the search range along the horizontal-left direction of p, and the computed r* is h_p^-; the lengths h_p^+, v_p^- and v_p^+ of the other three directions are determined in the same way.
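As an illustration of formula (2), the following Python sketch (an assumption-laden example, not the patent's implementation; the color threshold tau and the helper names color_diff_ok and arm_length are hypothetical) grows one arm of the cross skeleton until the color-similarity indicator δ fails or the search range L is exhausted.

import numpy as np

def color_diff_ok(img, p, q, tau=20):
    """Indicator delta(p, q) of formula (2): 1 when the largest per-channel
    color difference between pixels p and q is at most tau (assumed value)."""
    return int(np.max(np.abs(img[p].astype(np.int32) - img[q].astype(np.int32))) <= tau)

def arm_length(img, y, x, dy, dx, L=17, tau=20):
    """Arm length r* of formula (2) along one cross direction (dy, dx):
    the largest r <= L such that every pixel p_i on the arm, i = 1..r,
    has a color similar to the centre pixel p = (y, x)."""
    h, w = img.shape[:2]
    r_star = 0
    for i in range(1, L + 1):
        yy, xx = y + i * dy, x + i * dx
        if not (0 <= yy < h and 0 <= xx < w):
            break
        if not color_diff_ok(img, (y, x), (yy, xx), tau):
            break
        r_star = i
    return r_star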
The aforementioned luminosity-insensitive stereo matching method based on an adaptive Census transform is characterized in that the two-step refinement based on the disparity statistics histogram and left-right consistency checking comprises the following steps:
(1) according to the assumption that disparity varies smoothly within regions of similar color, performing a first refinement of the initial disparity based on a statistics histogram computed within the adaptive region, which reflects the structure and color information of the image;
(2) in order to exclude unreliable disparities, performing a left-right consistency check on the disparity maps after the first refinement and building a disparity confidence matrix, and subsequently, within the adaptive region, applying the disparity-histogram optimization only to the reliable disparities, thereby excluding the influence of occluded regions and unreliable disparities on the statistics.
The aforementioned luminosity-insensitive stereo matching method based on an adaptive Census transform is characterized in that the disparity-histogram optimization is implemented as follows: for any pixel to be matched, count the frequency of each disparity value within its cross-skeleton-based adaptive region and select the disparity value with the highest frequency of occurrence as the optimized result.
At this point, the luminosity-insensitive stereo matching method based on an adaptive Census transform is complete.
The beneficial effects of the present invention are: the invention not only retains the robustness of the Census transform against amplitude distortion but also, without resorting to global optimization, solves the problem that existing stereo matching methods have difficulty obtaining high-precision disparity maps when exposure and illumination differences exist between the left and right views, so it can better adapt to real navigation scenarios based on stereo vision.
Description of drawings
Fig. 1 is the algorithm flow chart of the present invention;
Fig. 2 is a schematic diagram of the cross-skeleton-based adaptive region;
Fig. 3 is the flow chart of the second disparity refinement step.
Embodiment
The present invention is further described below with reference to the accompanying drawings and an example.
As shown in Fig. 1, the stereo matching method of the present invention comprises the following steps:
Step 1: determine a cross-skeleton-based adaptive region for each pixel, and thereby obtain Census transform windows of arbitrary shape and size.
1. As shown in Fig. 2, the cross-skeleton-based adaptive region of each pixel in the reference image and the target image is determined first. For any pixel p in the image, a cross-shaped skeleton is determined. The skeleton comprises the horizontal and the vertical direction, denoted H(p) and V(p) respectively, and the cross-skeleton-based adaptive region of pixel p can be expressed as
U(p) = ∪_{q∈V(p)} H(q)    (1)
The lengths of the four arms of the skeleton are denoted (h_p^-, h_p^+, v_p^-, v_p^+). According to the assumption that similar colors in an image correspond to the same structure, the formula
r* = max_{r∈[1,L]} ( r · Π_{i∈[1,r]} δ(p, p_i) )    (2)
is used to determine the lengths (h_p^-, h_p^+, v_p^-, v_p^+) of the four arms of the cross skeleton of the center pixel p. In formula (2), δ is an indicator function that measures the color difference between two pixels, p_i is a pixel on one of the cross directions of p, the coordinates of p in the image are (x_p, y_p), r* is the arm length in that direction, and L is the search range along that direction of p (L = 17 in the experiment). For example, when p_i lies to the horizontal left of p, its coordinates are (x_p − i, y_p), L is the search range along the horizontal-left direction of p, and the computed r* is h_p^-; the lengths h_p^+, v_p^- and v_p^+ of the other three directions are determined in the same way.
2. Finally, the Census transform window of each pixel is determined. Let U_ref(m) and U_tar(n) denote the cross-skeleton-based adaptive regions of the corresponding points m and n under the disparity hypothesis d. The Census transform windows of arbitrary shape and size of m and n in their respective images, U_d(m) and U_d(n), can then be expressed as
U_d(m) = {(x, y) | (x, y) ∈ U_ref(m), (x − d, y) ∈ U_tar(n)}    (3)
U_d(n) = {(x, y) | (x, y) ∈ U_tar(n), (x + d, y) ∈ U_ref(m)}    (4)
Since m = (x, y) and n = (x − d, y), U_d(m) and U_d(n) have the same shape and size; let N(m, n) denote that size.
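Continuing the earlier sketch (illustrative code under the same assumptions; arm_length is the hypothetical helper defined above), the following functions assemble the adaptive region U(p) of formula (1) and intersect the reference and target regions into the common Census window of formulas (3) and (4).

def cross_region(img, y, x, L=17, tau=20):
    """Adaptive region U(p) of formula (1): the union of the horizontal
    segments H(q) of every pixel q on the vertical segment V(p)."""
    v_minus = arm_length(img, y, x, -1, 0, L, tau)   # upward arm
    v_plus = arm_length(img, y, x, +1, 0, L, tau)    # downward arm
    region = set()
    for qy in range(y - v_minus, y + v_plus + 1):
        h_minus = arm_length(img, qy, x, 0, -1, L, tau)   # left arm of q
        h_plus = arm_length(img, qy, x, 0, +1, L, tau)    # right arm of q
        for qx in range(x - h_minus, x + h_plus + 1):
            region.add((qy, qx))
    return region

def combined_window(u_ref_m, u_tar_n, d):
    """Census window U_d(m) of formula (3): pixels of U_ref(m) whose
    counterparts shifted left by the disparity hypothesis d fall inside
    U_tar(n).  U_d(n) of formula (4) is the mirrored set."""
    u_d_m = {(y, x) for (y, x) in u_ref_m if (y, x - d) in u_tar_n}
    u_d_n = {(y, x - d) for (y, x) in u_d_m}
    return u_d_m, u_d_n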
Step 2: use the Hamming distance after the Census transform as the matching cost and compute the initial disparity with a local optimization method (Winner-Take-All). This process organically merges the two steps of matching cost computation and matching cost aggregation: since the previous step has already selected a suitable Census transform window according to the structure and color information of the image, this window adapts well to low-texture regions and disparity discontinuities. Aggregating the Census matching cost over an additional fixed window would not only reintroduce the dilemma between low-texture regions and disparity discontinuities but would also add considerable computation.
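A minimal sketch of this step, assuming grayscale input images and the region helpers sketched above (the dictionaries regions_left / regions_right, mapping each pixel (y, x) to its cross_region set, are assumptions of this example): the Census bit strings of m and its candidate match are computed over the shared adaptive window, their Hamming distance is the matching cost, and Winner-Take-All keeps the lowest-cost disparity.

def census_bits(img, center, window):
    """Census transform over an arbitrary window: one bit per window pixel,
    set when that pixel is darker than the centre pixel."""
    cy, cx = center
    return [int(img[y, x] < img[cy, cx]) for (y, x) in sorted(window)]

def matching_cost(left, right, m, d, regions_left, regions_right):
    """Hamming distance between the Census codes of m (left image) and its
    candidate match n = (x - d, y) in the right image, both computed over
    the common adaptive window U_d."""
    y, x = m
    n = (y, x - d)
    u_d_m, u_d_n = combined_window(regions_left[m], regions_right[n], d)
    bits_m = census_bits(left, m, u_d_m)
    bits_n = census_bits(right, n, u_d_n)
    return sum(a != b for a, b in zip(bits_m, bits_n))

def initial_disparity(left, right, m, d_range, regions_left, regions_right):
    """Winner-Take-All: keep the disparity hypothesis with the lowest cost."""
    return min(d_range, key=lambda d: matching_cost(left, right, m, d,
                                                    regions_left, regions_right))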
Step 3: after the initial disparity is obtained, a two-step disparity refinement method is proposed to further improve the disparity accuracy.
1. In the local optimization process, because the computation of the matching cost and the aggregation of the matching cost are merged, the initial disparity inevitably contains a certain amount of noise, and the left-right consistency check of the second refinement step is rather sensitive to noise. For this reason, according to the assumption that disparity varies smoothly within regions of similar color, a first refinement of the initial disparity is performed over U_ref, which reflects the structure and color information of the image. As shown in Fig. 1, for any pixel m in the reference image, a statistics histogram Φ_m of the initial disparities is built by counting how often each initial disparity occurs in U_ref(m), and the peak of Φ_m is selected as the disparity d_1 after the first refinement:
d_1 = argmax_{d∈[d_min, d_max]} Φ_m(d)    (5)
2. To further detect occluded regions in the disparity map and exclude unreliable disparities, a second refinement is carried out following the flow of Fig. 3, where d_1L and d_1R denote the disparity maps after the first refinement obtained with the left image and the right image as the reference image respectively, and U_left and U_right denote the cross-skeleton-based adaptive regions of each pixel in the left and right images.
First, a left-right consistency check is used to assess the confidence of the disparities in d_1L and d_1R and to build a disparity confidence matrix: if |d_1L(x, y) − d_1R(x + d_1L(x, y), y)| < T (in the experiment, T is 1 pixel), the disparity is regarded as reliable, otherwise as unreliable. Next, either d_1L or d_1R is processed further (d_1L was selected in the experiment); the processing is again based on the disparity statistics histogram, but now only the reliable disparities within U_left are counted, so that occluded regions and unreliable disparities are excluded from the statistics and the disparity accuracy is further improved.
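The two-step refinement can be sketched as follows (illustrative Python/NumPy code, not the patent's implementation; it assumes dense integer disparity arrays, per-pixel adaptive regions as produced by the earlier sketches, and the threshold T = 1 pixel mentioned above):

import numpy as np

def histogram_refine(disp, regions, valid=None):
    """Replace each disparity by the most frequent disparity in its adaptive
    region (formula (5)); if a validity mask is given, count only reliable pixels."""
    refined = disp.copy()
    for (y, x), region in regions.items():
        votes = [disp[py, px] for (py, px) in region
                 if valid is None or valid[py, px]]
        if votes:
            values, counts = np.unique(votes, return_counts=True)
            refined[y, x] = values[np.argmax(counts)]
    return refined

def left_right_check(d1_left, d1_right, T=1):
    """Disparity confidence matrix: a left pixel is reliable if its disparity
    agrees (within T) with the disparity of its match in the right map."""
    h, w = d1_left.shape
    confident = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            xr = x + int(d1_left[y, x])     # corresponding column in the right map
            if 0 <= xr < w and abs(int(d1_left[y, x]) - int(d1_right[y, xr])) < T:
                confident[y, x] = True
    return confident

def two_step_refinement(init_left, init_right, regions_left, regions_right, T=1):
    """Step 1: histogram refinement of both views.  Step 2: left-right check,
    then histogram refinement of the left map restricted to reliable disparities."""
    d1_left = histogram_refine(init_left, regions_left)
    d1_right = histogram_refine(init_right, regions_right)
    confident = left_right_check(d1_left, d1_right, T)
    return histogram_refine(d1_left, regions_left, valid=confident)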
In summary, in view of the difficulty existing stereo matching algorithms have in obtaining high-precision disparities in real visual navigation scenarios subject to amplitude distortion, the present invention proposes a local matching method based on an adaptive Census transform that organically merges matching cost computation and matching cost aggregation. Experimental results show that this algorithm obtains more accurate disparity maps from left and right views with differences in illumination intensity and exposure time, balances matching accuracy with robustness to amplitude distortion, and better suits the application scenario of UAV visual navigation.
The above embodiment does not limit the technical scheme of the present invention in any form; every technical scheme obtained by equivalent substitution or equivalent transformation falls within the protection scope of the present invention.

Claims (5)

1. A luminosity-insensitive stereo matching method based on an adaptive Census transform, characterized by comprising the following steps:
(1) determining an adaptive region based on a cross skeleton, and thereby obtaining Census transform windows of arbitrary shape and size;
(2) using the Hamming distance after the Census transform as the matching cost and computing the initial disparity with a local optimization method;
(3) after the initial disparity is obtained, further improving the disparity accuracy with a two-step disparity refinement method.
2. The luminosity-insensitive stereo matching method based on an adaptive Census transform according to claim 1, characterized in that the Census transform windows of arbitrary shape and size are obtained as follows:
(1) constructing, according to the structure and color information of the images, a cross-skeleton-based adaptive region for each pixel in the reference image and in the target image;
(2) combining, by a logical AND operation, the adaptive region of the pixel to be matched in the reference image and the adaptive region of its corresponding point in the target image to obtain the Census transform window.
3. The luminosity-insensitive stereo matching method based on an adaptive Census transform according to claim 2, characterized in that the cross-skeleton-based adaptive region is computed as follows:
(1) for any pixel p in the image, a cross-shaped skeleton is determined; the skeleton comprises the horizontal and the vertical direction, denoted H(p) and V(p) respectively, the lengths of its four arms are denoted (h_p^-, h_p^+, v_p^-, v_p^+), and the cross-skeleton-based adaptive region of pixel p can be expressed as
U(p) = ∪_{q∈V(p)} H(q)    (1)
(2) according to the assumption that similar colors in an image correspond to the same structure, the formula
r* = max_{r∈[1,L]} ( r · Π_{i∈[1,r]} δ(p, p_i) )    (2)
is used to determine the lengths (h_p^-, h_p^+, v_p^-, v_p^+) of the four arms of the cross skeleton of the center pixel p; in formula (2), δ is an indicator function that measures the color difference between two pixels, p_i is a pixel on one of the cross directions of p, the coordinates of p in the image are (x_p, y_p), r* is the arm length in that direction, and L is the search range along that direction of p; when p_i lies to the horizontal left of p, its coordinates are (x_p − i, y_p), L is the search range along the horizontal-left direction of p, and the computed r* is h_p^-; the lengths h_p^+, v_p^- and v_p^+ of the other three directions are determined in the same way.
4. The luminosity-insensitive stereo matching method based on an adaptive Census transform according to claim 1, characterized in that the two-step refinement based on the disparity statistics histogram and left-right consistency checking comprises the following steps:
(1) according to the assumption that disparity varies smoothly within regions of similar color, performing a first refinement of the initial disparity based on a statistics histogram computed within the adaptive region, which reflects the structure and color information of the image;
(2) in order to exclude unreliable disparities, performing a left-right consistency check on the disparity maps after the first refinement and building a disparity confidence matrix, and subsequently, within the adaptive region, applying the disparity-histogram optimization only to the reliable disparities, thereby excluding the influence of occluded regions and unreliable disparities on the statistics.
5. The luminosity-insensitive stereo matching method based on an adaptive Census transform according to claim 4, characterized in that the disparity-histogram optimization is implemented as follows: for any pixel to be matched, count the frequency of each disparity value within its cross-skeleton-based adaptive region, and select the disparity value with the highest frequency of occurrence as the optimized result.
CN201110065196A 2011-03-17 2011-03-17 Luminosity insensitivity stereo matching method based on self-adapting Census conversion Expired - Fee Related CN102136136B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110065196A CN102136136B (en) 2011-03-17 2011-03-17 Luminosity insensitivity stereo matching method based on self-adapting Census conversion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110065196A CN102136136B (en) 2011-03-17 2011-03-17 Luminosity insensitivity stereo matching method based on self-adapting Census conversion

Publications (2)

Publication Number Publication Date
CN102136136A true CN102136136A (en) 2011-07-27
CN102136136B CN102136136B (en) 2012-10-03

Family

ID=44295911

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110065196A Expired - Fee Related CN102136136B (en) 2011-03-17 2011-03-17 Luminosity insensitivity stereo matching method based on self-adapting Census conversion

Country Status (1)

Country Link
CN (1) CN102136136B (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102368826A (en) * 2011-11-07 2012-03-07 天津大学 Real time adaptive generation method from double-viewpoint video to multi-viewpoint video
CN102447933A (en) * 2011-11-01 2012-05-09 浙江捷尚视觉科技有限公司 Depth information acquisition method based on binocular framework
CN102930530A (en) * 2012-09-26 2013-02-13 苏州工业职业技术学院 Stereo matching method of double-viewpoint image
CN103440653A (en) * 2013-08-27 2013-12-11 北京航空航天大学 Binocular vision stereo matching method
CN104427324A (en) * 2013-09-02 2015-03-18 联咏科技股份有限公司 Parallax error calculation method and three-dimensional matching device thereof
CN104867135A (en) * 2015-05-04 2015-08-26 中国科学院上海微系统与信息技术研究所 High-precision stereo matching method based on guiding image guidance
CN106131448A (en) * 2016-07-22 2016-11-16 石家庄爱赛科技有限公司 The 3 d stereoscopic vision system of brightness of image can be automatically adjusted
CN106355608A (en) * 2016-09-09 2017-01-25 南京信息工程大学 Stereoscopic matching method on basis of variable-weight cost computation and S-census transformation
CN106651975A (en) * 2016-12-01 2017-05-10 大连理工大学 Census adaptive transformation algorithm based on multiple codes
CN106846290A (en) * 2017-01-19 2017-06-13 西安电子科技大学 Stereoscopic parallax optimization method based on anti-texture cross and weights cross
CN107240083A (en) * 2017-06-29 2017-10-10 海信集团有限公司 The method and device of noise in a kind of repairing disparity map
CN107330932A (en) * 2017-06-16 2017-11-07 海信集团有限公司 The method and device of noise in a kind of repairing disparity map
CN109003295A (en) * 2018-04-11 2018-12-14 中冶沈勘工程技术有限公司 A kind of unmanned plane aviation image fast matching method
CN109724537A (en) * 2019-02-11 2019-05-07 吉林大学 A kind of binocular three-dimensional imaging method and system
CN113808185A (en) * 2021-11-19 2021-12-17 北京的卢深视科技有限公司 Image depth recovery method, electronic device and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100007720A1 (en) * 2008-06-27 2010-01-14 Beddhu Murali Method for front matching stereo vision
CN101841730A (en) * 2010-05-28 2010-09-22 浙江大学 Real-time stereoscopic vision implementation method based on FPGA

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100007720A1 (en) * 2008-06-27 2010-01-14 Beddhu Murali Method for front matching stereo vision
CN101841730A (en) * 2010-05-28 2010-09-22 浙江大学 Real-time stereoscopic vision implementation method based on FPGA

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Chris Murphy et al., "Low-Cost Stereo Vision on an FPGA", Field-Programmable Custom Computing Machines, 2007 (FCCM 2007), 15th Annual IEEE Symposium on, 2007-04-25, entire document, relevant to claims 1-5 *
Takeo Kanade et al., "A Stereo Matching Algorithm with an Adaptive Window: Theory and Experiment", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 16, No. 9, 1994-09-30, page 496 paragraph 1 and Fig. 3, relevant to claim 1 *
白明 et al., "双目立体匹配算法的研究与进展" (Research and progress of binocular stereo matching algorithms), 控制与决策 (Control and Decision), Vol. 23, No. 7, 2008-07-15, entire document, relevant to claims 1-5 *
丁菁汀 et al., "基于FPGA的立体视觉匹配的高性能实现" (High-performance implementation of stereo vision matching based on FPGA), 电子与信息学报 (Journal of Electronics & Information Technology), Vol. 33, No. 3, 2011-03-15, page 598, Section 2, relevant to claim 1 *

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102447933A (en) * 2011-11-01 2012-05-09 浙江捷尚视觉科技有限公司 Depth information acquisition method based on binocular framework
CN102368826A (en) * 2011-11-07 2012-03-07 天津大学 Real time adaptive generation method from double-viewpoint video to multi-viewpoint video
CN102930530A (en) * 2012-09-26 2013-02-13 苏州工业职业技术学院 Stereo matching method of double-viewpoint image
CN103440653A (en) * 2013-08-27 2013-12-11 北京航空航天大学 Binocular vision stereo matching method
CN104427324A (en) * 2013-09-02 2015-03-18 联咏科技股份有限公司 Parallax error calculation method and three-dimensional matching device thereof
CN104867135B (en) * 2015-05-04 2017-08-25 中国科学院上海微系统与信息技术研究所 High-precision stereo matching method based on guide image guidance
CN104867135A (en) * 2015-05-04 2015-08-26 中国科学院上海微***与信息技术研究所 High-precision stereo matching method based on guiding image guidance
CN106131448B (en) * 2016-07-22 2019-05-10 石家庄爱赛科技有限公司 The three-dimensional stereoscopic visual system of brightness of image can be automatically adjusted
CN106131448A (en) * 2016-07-22 2016-11-16 石家庄爱赛科技有限公司 The 3 d stereoscopic vision system of brightness of image can be automatically adjusted
CN106355608B (en) * 2016-09-09 2019-03-26 南京信息工程大学 The solid matching method with S-census transformation is calculated based on Changeable weight cost
CN106355608A (en) * 2016-09-09 2017-01-25 南京信息工程大学 Stereoscopic matching method on basis of variable-weight cost computation and S-census transformation
CN106651975A (en) * 2016-12-01 2017-05-10 大连理工大学 Census adaptive transformation algorithm based on multiple codes
CN106651975B (en) * 2016-12-01 2019-08-13 大连理工大学 A kind of Census adaptive transformation method based on odd encoder
CN106846290A (en) * 2017-01-19 2017-06-13 西安电子科技大学 Stereoscopic parallax optimization method based on anti-texture cross and weights cross
CN106846290B (en) * 2017-01-19 2019-10-11 西安电子科技大学 Stereoscopic parallax optimization method based on anti-texture cross and weight cross
CN107330932A (en) * 2017-06-16 2017-11-07 海信集团有限公司 The method and device of noise in a kind of repairing disparity map
CN107240083A (en) * 2017-06-29 2017-10-10 海信集团有限公司 The method and device of noise in a kind of repairing disparity map
CN109003295A (en) * 2018-04-11 2018-12-14 中冶沈勘工程技术有限公司 A kind of unmanned plane aviation image fast matching method
CN109003295B (en) * 2018-04-11 2021-07-23 中冶沈勘工程技术有限公司 Rapid matching method for aerial images of unmanned aerial vehicle
CN109724537A (en) * 2019-02-11 2019-05-07 吉林大学 A kind of binocular three-dimensional imaging method and system
CN113808185A (en) * 2021-11-19 2021-12-17 北京的卢深视科技有限公司 Image depth recovery method, electronic device and storage medium
CN113808185B (en) * 2021-11-19 2022-03-25 北京的卢深视科技有限公司 Image depth recovery method, electronic device and storage medium

Also Published As

Publication number Publication date
CN102136136B (en) 2012-10-03

Similar Documents

Publication Publication Date Title
CN102136136B (en) Luminosity insensitivity stereo matching method based on self-adapting Census conversion
CN102903096B (en) Monocular video based object depth extraction method
CN102074014B (en) Stereo matching method by utilizing graph theory-based image segmentation algorithm
CN102930530B (en) Stereo matching method of double-viewpoint image
CN101933335A (en) Method and system for converting 2d image data to stereoscopic image data
CN101610425B (en) Method for evaluating stereo image quality and device
CN103440653A (en) Binocular vision stereo matching method
CN103996201A (en) Stereo matching method based on improved gradient and adaptive window
CN102665086A (en) Method for obtaining parallax by using region-based local stereo matching
CN110688905A (en) Three-dimensional object detection and tracking method based on key frame
CN102036094B (en) Stereo matching method based on digital fractional delay technology
CN102098526A (en) Depth map calculating method and device
CN103985128A (en) Three-dimensional matching method based on color intercorrelation and self-adaptive supporting weight
CN105277169A (en) Image segmentation-based binocular range finding method
CN103106651A (en) Method for obtaining parallax error plane based on three-dimensional hough
CN104639933A (en) Real-time acquisition method and real-time acquisition system for depth maps of three-dimensional views
CN103269435A (en) Binocular to multi-view virtual viewpoint synthetic method
CN102447917A (en) Three-dimensional image matching method and equipment thereof
CN104200453A (en) Parallax image correcting method based on image segmentation and credibility
CN104318576A (en) Super-pixel-level image global matching method
CN113920183A (en) Monocular vision-based vehicle front obstacle distance measurement method
CN103679739A (en) Virtual view generating method based on shielding region detection
CN105335934A (en) Disparity map calculating method and apparatus
CN103489183A (en) Local stereo matching method based on edge segmentation and seed point
CN101945299A (en) Camera-equipment-array based dynamic scene depth restoring method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20121003

Termination date: 20150317

EXPY Termination of patent right or utility model