CN104616304A - Self-adapting support weight stereo matching method based on field programmable gate array (FPGA) - Google Patents


Info

Publication number: CN104616304A
Application number: CN201510072013.4A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 顾国华, 龚文彪, 吕芳, 任建乐, 钱惟贤, 路东明, 任侃, 于雪莲
Applicant and current assignee: Nanjing University of Science and Technology
Application filed 2015-02-11 by Nanjing University of Science and Technology; priority to CN201510072013.4A
Legal status: Pending

Classifications

    • G06T 2207/10012 — Stereo images (G: Physics; G06: Computing, calculating or counting; G06T: Image data processing or generation, in general; G06T 2207/00: Indexing scheme for image analysis or image enhancement; G06T 2207/10: Image acquisition modality; G06T 2207/10004: Still image, photographic image)
    • G06T 2207/10021 — Stereoscopic video, stereoscopic image sequence (G: Physics; G06: Computing, calculating or counting; G06T: Image data processing or generation, in general; G06T 2207/00: Indexing scheme for image analysis or image enhancement; G06T 2207/10: Image acquisition modality; G06T 2207/10016: Video, image sequence)

Landscapes

  • Image Processing (AREA)

Abstract

The invention provides an adaptive support weight stereo matching method based on a field programmable gate array (FPGA). The method builds a local stereo matching window over the left and right images inside the FPGA; then, from the gray-level similarity and the Manhattan-distance similarity between each matching point in the local window and the window's center point, it computes a gray-level similarity function value and a Manhattan-distance similarity function value, from which it obtains the weight cost value of each matching point in the window. Finally, the cost aggregation value of each matching point is computed and a winner-take-all rule yields the disparity result of each pixel. The method improves the overall matching quality, obtains a dense disparity result in real time, and has strong robustness.

Description

An FPGA-based adaptive support weight stereo matching method
Technical field
The invention belongs to the field of binocular stereo vision, and specifically relates to an adaptive support weight stereo matching method based on an FPGA.
Background technology
Binocular stereo vision directly models the physiological structure of human binocular vision and is a key technique for extracting depth information from three-dimensional scenes, with wide applications in robot navigation, unmanned aerial vehicle localization, and three-dimensional measurement. The most critical and difficult part of stereo vision is the stereo matching of the binocular images. Because matching requires repeated operations over massive amounts of data, stereo matching of binocular images on a CPU costs a large amount of computation time and struggles to meet real-time requirements. For example, on a CPU with a 1 GHz clock, computing a dense disparity map for two medium-sized images with a region-based stereo matching algorithm takes several seconds. Such low throughput severely limits the development of stereo vision, particularly in applications that need real-time disparity results. How to improve the real-time performance of stereo matching while preserving matching accuracy has therefore become a research focus in recent years.
Region-based stereo matching algorithms comprise global and local methods. Global stereo matching achieves high accuracy, but its computation is complex and expensive; a GPU can be used to improve real-time performance, but GPU processing consumes substantial power, which limits practical applications. Local stereo matching, by contrast, obtains the disparity of each point to be matched from the similarity relations of local windows in the left and right images. Its computational load is small, and if a suitable matching cost relation is chosen, a good matching result can still be obtained. An FPGA, as a programmable logic gate array, is highly flexible, and its internal parallel pipelining can realize local stereo matching in real time.
At present, real-time local stereo matching methods implemented on FPGAs include the FPGA-based SAD stereo matching method proposed by Ambrosch K et al. (see the document "Hardware implementation of an SAD based stereo vision algorithm") and the FPGA-based real-time stereo matching method proposed by Chen L et al. (see the document "A parallel reconfigurable architecture for real-time stereo vision"). These methods obtain dense disparity results in real time, but the local matching window is fixed and no suitable weight information is assigned to the matching points inside the window, so matching errors easily occur at depth discontinuities, low-texture areas, and repetitive scene regions, and the matching accuracy is not high.
Summary of the invention
The object of the present invention is to address the deficiencies of existing stereo matching methods in accuracy and real-time performance by proposing an FPGA-based adaptive support weight stereo matching method. The method reduces matching errors at depth discontinuities, low-texture areas, and repetitive scene regions, thereby improving the overall matching quality. At the same time, by exploiting the internal parallel pipelining of the FPGA, local stereo matching is carried out in parallel inside the FPGA, producing a dense disparity result in real time with strong robustness.
To solve the above technical problems, the invention provides an FPGA-based adaptive support weight stereo matching method. A local stereo matching window of size m × n is built for the left and right images inside the FPGA; the input data of the video port are the pre-processed left and right images; the pixel clock is the system synchronization clock with which each pixel of the left and right images enters the FPGA; matching points are row-cached with first-in first-out (FIFO) buffers and column-cached with the FPGA's internal D flip-flops. Then, from the gray-level similarity and the Manhattan-distance similarity between each matching point in the local window and the window's center point, the gray-level similarity function value and the Manhattan-distance similarity function value of the point to be matched are computed, giving the weight cost value w(p, q) of each matching point in the window, as in formula (1):

w(p, q) = w_{d_k} \cdot w_{R_l}    (1)

In formula (1), w_{d_k} is the value of the Manhattan-distance similarity function for pixels p and q obtained from a look-up table inside the FPGA, R_l is the gray-level similarity relation of pixels p and q after rank transformation, and w_{R_l} is the value of the gray-level similarity function for p and q obtained from a look-up table inside the FPGA. Finally, from the matching cost weights of the local matching window, the cost aggregation value of each matching point is computed, and the winner-take-all rule is applied to obtain the disparity result of each pixel. The cost aggregation value of each matching point is given by formula (2):

E(p, \bar{p}_d) = \frac{\sum_{q \in N_p,\ \bar{q}_d \in N_{\bar{p}_d}} w(p, q)\, w(\bar{p}_d, \bar{q}_d)\, e_m(q, \bar{q}_d)}{\sum_{q \in N_p,\ \bar{q}_d \in N_{\bar{p}_d}} w(p, q)\, w(\bar{p}_d, \bar{q}_d)}    (2)

In formula (2), N_p is the local matching window of the left image and N_{\bar{p}_d} the local matching window of the right image; w(p, q) is the weight of matching point q in the left image and w(\bar{p}_d, \bar{q}_d) the weight of matching point \bar{q}_d in the right image; for the local windows of the two images, point p corresponds to \bar{p}_d and point q corresponds to \bar{q}_d; e_m(q, \bar{q}_d) is the comparison value of the matching points after rank transformation, given by formula (3):

e_m(q, \bar{q}_d) = \begin{cases} 0 & R_{pq} = R_{\bar{p}_d \bar{q}_d} \\ 1 & \text{otherwise} \end{cases}    (3)

In formula (3), R_{pq} is the rank transform value of point q and R_{\bar{p}_d \bar{q}_d} the rank transform value of point \bar{q}_d.
Preferably, a Gaussian function is used to pre-process the video images and filter out noise.
Compared with the prior art, the remarkable advantages of the present invention are: (1) local stereo matching of the binocular images is carried out in FPGA hardware, so a dense disparity result is obtained in real time; (2) an adaptive weight relation is built from the gray-level similarity and the Manhattan-distance similarity between the matching points and the window center in the local window, improving matching accuracy; (3) in the local window matching process, rank-transformed values replace the original gray-level similarity function values, so a fixed look-up table can be built inside the FPGA, which favors real-time computation; (4) in the cost aggregation process, the matching points are rank-transformed before the disparity is computed, eliminating the influence of illumination and noise on the matching result of the left and right images.
Accompanying drawing explanation
Fig. 1 is the flow chart of the method of the invention.
Fig. 2 shows the m × n local stereo matching window built in the FPGA for the left and right images.
Fig. 3 is a schematic diagram of the rank transformation of pixels in the FPGA.
Fig. 4 shows the similarity function look-up tables built in the FPGA, where (a) is the Manhattan-distance similarity look-up table and (b) is the gray-level similarity look-up table.
Fig. 5 is a schematic diagram of cost aggregation over corresponding matching points in the local windows of the left and right images, where (a) is the left image window and (b) is the right image window.
Fig. 6 is a schematic diagram of the implementation of the cost aggregation process in the FPGA.
Embodiment
With reference to Fig. 1, the FPGA-based adaptive support weight stereo matching method of the invention proceeds as follows:
Step 1: acquire epipolar-rectified left and right images with the calibrated left and right cameras, and filter the collected images to remove noise; the pre-processed left image is I_l(i, j) and the right image is I_r(i, j).
The invention uses a Gaussian function to filter the images, effectively removing the white Gaussian noise introduced by the sensor. The filter template is a discrete Gaussian convolution kernel of dimension (2k+1) × (2k+1) (where k = 1, 2, 3, …), computed as in formula (1):

I_l(i, j) = I_{l0}(i, j) * G(u, v) = \sum_{u=-k}^{k} \sum_{v=-k}^{k} I_{l0}(i+u, j+v) \cdot G(u, v)
I_r(i, j) = I_{r0}(i, j) * G(u, v) = \sum_{u=-k}^{k} \sum_{v=-k}^{k} I_{r0}(i+u, j+v) \cdot G(u, v)
G(u, v) = \frac{1}{2\pi\sigma^2} e^{-\frac{u^2 + v^2}{2\sigma^2}}    (1)

In formula (1), (i, j) is the coordinate of an image pixel, I_{l0}(i, j) is the original left input image, I_{r0}(i, j) is the original right input image, (u, v) is a discrete Gaussian point coordinate, G(u, v) is the normalized value of the discrete Gaussian kernel at (u, v), and σ is the Gaussian scale.
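The Gaussian pre-filter of formula (1) can be modelled in software as follows. This is an illustrative pure-Python sketch; the kernel size k, the value of σ, and the clamped border handling are assumptions, since the patent does not fix them:

```python
import math

def gaussian_kernel(k=1, sigma=1.0):
    """Discrete (2k+1)x(2k+1) Gaussian kernel G(u, v), normalized to sum to 1."""
    raw = [[math.exp(-(u * u + v * v) / (2 * sigma * sigma))
            for v in range(-k, k + 1)] for u in range(-k, k + 1)]
    total = sum(sum(row) for row in raw)
    return [[g / total for g in row] for row in raw]

def gaussian_filter(img, k=1, sigma=1.0):
    """Convolve a grayscale image (list of lists) with the Gaussian kernel.
    Border pixels are handled by clamping coordinates to the image (an
    assumption; the patent does not specify border behaviour)."""
    h, w = len(img), len(img[0])
    g = gaussian_kernel(k, sigma)
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            acc = 0.0
            for u in range(-k, k + 1):
                for v in range(-k, k + 1):
                    ii = min(max(i + u, 0), h - 1)
                    jj = min(max(j + v, 0), w - 1)
                    acc += img[ii][jj] * g[u + k][v + k]
            out[i][j] = acc
    return out
```

Because the kernel is normalized, a constant image passes through unchanged, which is a quick sanity check on any implementation.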
Step 2: with reference to Fig. 2, build a local stereo matching window of size m × n for the left and right images in the FPGA. The input data of the video port are the pre-processed left and right images; the pixel clock is the system synchronization clock with which each pixel enters the FPGA; FIFO (first-in first-out) buffers row-cache the matching points, and the FPGA's internal D flip-flops column-cache them; win is the constructed local window. A parallel local window is thus obtained for the local stereo matching of the left and right images in the FPGA.
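The FIFO row caches and D flip-flop column caches of step 2 can be modelled in software. The sketch below is an interpretation, not the hardware itself: a chain of m−1 full-row FIFOs taps one pixel per previous row, and m shift registers of depth n (standing in for the flip-flops) expose a complete m × n window every pixel clock; the function name `stream_windows` is invented for the example:

```python
from collections import deque

def stream_windows(img, m=3, n=3):
    """Yield ((i, j), window) for every pixel where the m x n window is
    complete, with (i, j) the window's bottom-right corner. fifos model the
    line buffers; taps model the per-row shift registers."""
    width = len(img[0])
    fifos = [deque() for _ in range(m - 1)]      # FIFO row caches
    taps = [deque(maxlen=n) for _ in range(m)]   # taps[0] holds the oldest row
    for i, row in enumerate(img):
        for j, px in enumerate(row):
            col = [px]                           # column entering the window
            carry = px
            for f in fifos:                      # each FIFO delays by one row
                out = f.popleft() if len(f) == width else None
                f.append(carry)
                col.append(out)
                carry = out
            for r, v in zip(taps, reversed(col)):  # shift column into registers
                r.append(v)
            if i >= m - 1 and j >= n - 1:
                yield (i, j), [list(r) for r in taps]
```

One pixel enters per "clock" and, once the pipeline is primed, one full window leaves per clock, which is what makes the downstream matching fully pipelined.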
Step 3: from the gray-level similarity and the Manhattan-distance similarity between each matching point in the local window and the window's center point, compute the gray-level similarity function value f(Δc_pq) and the Manhattan-distance similarity function value f(Δd_pq) of the point to be matched, giving the weight cost value w(p, q) of each matching point in the window, as in formula (2):

w(p, q) = f(\Delta c_{pq}) \cdot f(\Delta d_{pq})    (2)

In formula (2), p and q are respectively the center pixel of the image matching window and a pixel in the window area, Δc_pq is the gray-level similarity relation of points p and q, and Δd_pq is their Manhattan-distance similarity relation. Δc_pq and Δd_pq are given by formula (3), and the function expressions f(Δc_pq) and f(Δd_pq) by formula (4):

\Delta c_{pq} = |I_p - I_q|, \quad \Delta d_{pq} = |x_p - x_q| + |y_p - y_q|    (3)

f(\Delta c_{pq}) = \exp\!\left(-\frac{\Delta c_{pq}}{\tau_c}\right), \quad f(\Delta d_{pq}) = \exp\!\left(-\frac{\Delta d_{pq}}{\tau_d}\right)    (4)

In formula (3), I_p is the gray value of point p, I_q the gray value of point q, (x_p, y_p) the row-column coordinates of p, and (x_q, y_q) the row-column coordinates of q. In formula (4), exp is the exponential function, τ_c is the weight proportion constant of the color similarity function, and τ_d is the weight proportion constant of the Manhattan-distance similarity function.
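As a concrete illustration of formulas (2)–(4), the adaptive weight of one window pixel can be computed as below. The values of τ_c and τ_d are placeholders chosen for the example, since the patent leaves them as unspecified constants, and `support_weight` is an invented helper name:

```python
import math

def support_weight(p, q, intensity, tau_c=5.0, tau_d=7.5):
    """w(p, q) = f(dc) * f(dd): gray-level similarity times Manhattan-distance
    proximity between the window centre p and a window pixel q. p and q are
    (row, col) tuples; intensity maps coordinates to gray values."""
    dc = abs(intensity[p] - intensity[q])        # gray-level difference
    dd = abs(p[0] - q[0]) + abs(p[1] - q[1])     # Manhattan distance
    return math.exp(-dc / tau_c) * math.exp(-dd / tau_d)
```

Pixels that are both close to the centre and similar in gray level get weights near 1, while distant or dissimilar pixels are suppressed, which is what lets the window adapt its effective shape near depth discontinuities.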
Further, to make the gray-level similarity function easy to compute in the FPGA, the invention first applies a rank transformation to the gray values of the matching points and then replaces the original Δc_pq with the new gray-level similarity relation R_pq. The rank transformation is:

R_{pq} = \begin{cases} -2 & I_p - I_q < -\tau_1 \\ -1 & -\tau_1 \le I_p - I_q \le -\tau_2 \\ 0 & -\tau_2 < I_p - I_q \le \tau_2 \\ 1 & \tau_2 < I_p - I_q \le \tau_1 \\ 2 & I_p - I_q > \tau_1 \end{cases}    (5)

In formula (5), τ_1 and τ_2 are the classification thresholds of the rank transformation, treated as fixed constants in the computation; I_p is the gray value of point p and I_q the gray value of point q.
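Formula (5) maps each gray-level difference onto one of five integer levels; a direct sketch follows. The threshold values τ_1 = 12 and τ_2 = 4 are illustrative only, since the patent treats them simply as fixed constants:

```python
def rank_level(ip, iq, tau1=12, tau2=4):
    """Five-level rank code for the gray-level difference I_p - I_q,
    following formula (5). Requires tau1 > tau2 > 0."""
    d = ip - iq
    if d < -tau1:
        return -2
    if d <= -tau2:        # -tau1 <= d <= -tau2
        return -1
    if d <= tau2:         # -tau2 <  d <= tau2
        return 0
    if d <= tau1:         # tau2  <  d <= tau1
        return 1
    return 2              # d > tau1
```

Because the output takes only five values, downstream similarity weights can be fetched from a tiny fixed look-up table instead of being computed, which is the point of the transformation in hardware.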
Therefore, according to formula (5) and the local window built in Fig. 2, and with reference to Fig. 3, the FPGA's internal subtractors subtract the gray value of each matching point in the window from that of the window center and compare the result against the thresholds, determining the rank transform level of each matching point inside the FPGA. Because every point in the local window is processed in parallel, the rank-transformed pixels also emerge in parallel in local-window form, so the transformation does not affect the real-time performance of the algorithm.
With reference to Fig. 4, look-up tables are built in the FPGA for the new gray-level similarity function value f(R_pq) and the Manhattan-distance similarity function value f(Δd_pq), which determine the matching cost weight w(p, q) of each pixel. The weight w(p, q) of a matching point in the local window computed in the FPGA is therefore given by formula (6):

w(p, q) = w_{d_k} \cdot w_{R_l}    (6)

In Fig. 4, Δd_k is the Manhattan distance between points p and q, w_{d_k} is the value of f(Δd_pq) obtained from the look-up table inside the FPGA, R_l is the gray-level similarity relation of p and q after rank transformation, and w_{R_l} is the value of f(R_pq) obtained from the look-up table inside the FPGA.
After the above transformation, for a local matching window of fixed size, the Manhattan-distance similarity function values and the rank-transformed gray-level similarity function values are all discrete values within a limited range. The look-up tables can therefore be built as data mappings with finite state machines in the FPGA. The mapped values w_{d_k} and w_{R_l} are all integers; after normalization these values do not change the matching point weights, which favors fixed-point integer computation in the FPGA.
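The two look-up tables of Fig. 4 can be precomputed as scaled integers, as sketched below. The fixed-point scale factor, the use of |R| inside f(·), and the function name are all assumptions made for the illustration; the patent only states that the LUT entries are integers:

```python
import math

def build_weight_luts(m=11, n=11, tau_d=7.5, tau_c=5.0, scale=64):
    """Fixed-point LUTs as in Fig. 4: in an m x n window the Manhattan
    distance and the rank level R in {-2..2} each take finitely many values,
    so f(.) can be tabulated once as scaled integers."""
    max_d = (m // 2) + (n // 2)      # largest Manhattan distance to the centre
    wd = {d: round(scale * math.exp(-d / tau_d)) for d in range(max_d + 1)}
    wr = {r: round(scale * math.exp(-abs(r) / tau_c)) for r in range(-2, 3)}
    return wd, wr
```

At run time the datapath then performs only table reads and integer multiplies, with no exponentials.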
Step 4: with reference to Fig. 5, from the matching cost weights of the local matching window, compute the cost aggregation value of each matching point as in formula (7), then apply the winner-take-all (WTA) rule to obtain the disparity result d_q of each pixel as in formula (9).

E(p, \bar{p}_d) = \frac{\sum_{q \in N_p,\ \bar{q}_d \in N_{\bar{p}_d}} w(p, q)\, w(\bar{p}_d, \bar{q}_d)\, e_m(q, \bar{q}_d)}{\sum_{q \in N_p,\ \bar{q}_d \in N_{\bar{p}_d}} w(p, q)\, w(\bar{p}_d, \bar{q}_d)}    (7)

In formula (7), N_p is the local matching window of the left image and N_{\bar{p}_d} that of the right image; w(p, q) is the weight of matching point q in the left image and w(\bar{p}_d, \bar{q}_d) that of matching point \bar{q}_d in the right image; for the local windows of the two images, point p corresponds to \bar{p}_d and point q to \bar{q}_d; e_m(q, \bar{q}_d) is the comparison value after rank transformation, given by formula (8):

e_m(q, \bar{q}_d) = \begin{cases} 0 & R_{pq} = R_{\bar{p}_d \bar{q}_d} \\ 1 & \text{otherwise} \end{cases}    (8)

In formula (8), R_{pq} is the rank transform value of point q and R_{\bar{p}_d \bar{q}_d} that of point \bar{q}_d.

d_q = \arg\min_{d \in S_d} E(p, \bar{p}_d)    (9)

In formula (9), S_d = \{d_{min}, d_{min} + 1, \ldots, d_{max}\}, where d_{min} is the minimum disparity value and d_{max} the maximum disparity value.
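Putting formulas (7)–(9) together, the winner-take-all disparity of one pixel can be modelled as follows. This is an unoptimised software sketch, not the FPGA datapath; the helper names and all constants (τ_c, τ_d, τ_1, τ_2, window size) are illustrative choices:

```python
import math

def wta_disparity(left, right, p, d_min, d_max, m=3, n=3,
                  tau_c=5.0, tau_d=7.5, tau1=12, tau2=4):
    """Aggregate the rank-comparison cost e_m over the m x n window with the
    adaptive weights of both images (formula (7)) and return the disparity
    minimising it (formula (9)) for left-image pixel p = (row, col)."""
    def rank(img, a, b):                         # formula (5)
        d = img[a[0]][a[1]] - img[b[0]][b[1]]
        if d < -tau1: return -2
        if d <= -tau2: return -1
        if d <= tau2: return 0
        if d <= tau1: return 1
        return 2
    def weight(img, a, b):                       # formulas (2)-(4)
        dc = abs(img[a[0]][a[1]] - img[b[0]][b[1]])
        dd = abs(a[0] - b[0]) + abs(a[1] - b[1])
        return math.exp(-dc / tau_c) * math.exp(-dd / tau_d)
    i, j = p
    best_d, best_cost = d_min, float("inf")
    for d in range(d_min, d_max + 1):
        num = den = 0.0
        pd = (i, j - d)                          # corresponding right centre
        for u in range(-(m // 2), m // 2 + 1):
            for v in range(-(n // 2), n // 2 + 1):
                q = (i + u, j + v)
                qd = (i + u, j + v - d)          # corresponding right point
                w = weight(left, p, q) * weight(right, pd, qd)
                e = 0 if rank(left, p, q) == rank(right, pd, qd) else 1
                num += w * e
                den += w
        cost = num / den                         # formula (7)
        if cost < best_cost:                     # formula (9): argmin
            best_cost, best_d = cost, d
    return best_d
```

With a synthetic rectified pair in which the right image is the left shifted by two columns, the function recovers disparity 2 for an interior pixel.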
With reference to Fig. 6, local weight windows and local pixel windows are built for the left and right images in the FPGA, and formula (7) is evaluated. Inside the FPGA, to reduce the use of multiplier resources, the multiplication of the left and right image weights is carried out with shift operations instead of multipliers (the data are split into sums of powers of two). In Fig. 6, w_L is the left image weight w(p, q), w_R is the right image weight w(\bar{p}_d, \bar{q}_d), and w is the product of the two weights computed with shift operations. The value e_m in Fig. 6 is determined by formula (8) and can only be '1' or '0', so in the cost aggregation of formula (7) the FPGA only needs to AND (&) the lowest bit of e_m with every bit of the weight w. The cost aggregation value is then obtained after normalizing the weight values according to formula (7).
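The shift-and-add substitution for the weight multiplier described above (splitting one factor into its powers of two and summing shifted copies of the other) can be sketched as:

```python
def shift_add_multiply(a, b):
    """Multiplier-free product of two non-negative integers:
    a * b = sum of (a << k) over the set bits k of b, i.e. the
    decomposition of b into powers of two used by the FPGA datapath."""
    acc = 0
    k = 0
    while b:
        if b & 1:             # bit k of b is set
            acc += a << k     # add the shifted copy of a
        b >>= 1
        k += 1
    return acc
```

In hardware each `a << k` is just wiring, so the product costs only a few adders, which is why the design prefers it to dedicated multiplier blocks.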
According to formula (9), the (d_max − d_min + 1) matching cost aggregation values of each matched pixel are computed in parallel in the FPGA, and the FPGA's internal comparators find the offset whose cost aggregation value is minimal; that offset is the disparity value of the point.
Therefore, by cycling through steps 1 to 4 above, the invention computes dense disparity maps for a binocular video stream.
To further verify the beneficial effects of the invention through simulation experiments, the method was implemented on an Altera CYCLONE III EP3C120F780C8 FPGA chip. The local matching window size is 11 × 11, the captured binocular video stream runs at 60 fps, and the image resolution is 640 × 480. Table 1 lists the hardware resource usage of the method as reported by the Quartus II compilation software, and Table 2 compares the real-time performance of the method with existing stereo matching algorithms, with frame rates normalized to a 60 MHz clock. The experiments show that the proposed method searches a larger disparity range than Chen's method and achieves a higher frame rate, obtaining a real-time dense disparity result with higher matching accuracy.
Table 1
Table 2

Claims (5)

1. An FPGA-based adaptive support weight stereo matching method, characterized in that a local stereo matching window of size m × n is built for the left and right images in the FPGA; the input data of the video port are the pre-processed left and right images; the pixel clock is the system synchronization clock with which each pixel of the left and right images enters the FPGA; matching points are row-cached with first-in first-out buffers and column-cached with the FPGA's internal D flip-flops.
2. The FPGA-based adaptive support weight stereo matching method according to claim 1, characterized in that, from the gray-level similarity and the Manhattan-distance similarity between each matching point in the local window and the window's center point, the gray-level similarity function value and the Manhattan-distance similarity function value of the point to be matched are computed, giving the weight cost value w(p, q) of each matching point in the window, as in formula (1):

w(p, q) = w_{d_k} \cdot w_{R_l}    (1)

In formula (1), w_{d_k} is the value of the Manhattan-distance similarity function for pixels p and q obtained from a look-up table inside the FPGA, R_l is the gray-level similarity relation of pixels p and q after rank transformation, and w_{R_l} is the value of the gray-level similarity function for p and q obtained from a look-up table inside the FPGA.
3. The FPGA-based adaptive support weight stereo matching method according to claim 2, characterized in that, from the matching cost weights of the local matching window, the cost aggregation value of each matching point is computed, and the winner-take-all rule is then applied to obtain the disparity result of each pixel.
4. The FPGA-based adaptive support weight stereo matching method according to claim 3, characterized in that the cost aggregation value of each matching point is given by formula (2):

E(p, \bar{p}_d) = \frac{\sum_{q \in N_p,\ \bar{q}_d \in N_{\bar{p}_d}} w(p, q)\, w(\bar{p}_d, \bar{q}_d)\, e_m(q, \bar{q}_d)}{\sum_{q \in N_p,\ \bar{q}_d \in N_{\bar{p}_d}} w(p, q)\, w(\bar{p}_d, \bar{q}_d)}    (2)

In formula (2), N_p is the local matching window of the left image and N_{\bar{p}_d} the local matching window of the right image; w(p, q) is the weight of matching point q in the left image and w(\bar{p}_d, \bar{q}_d) the weight of matching point \bar{q}_d in the right image; for the local windows of the two images, point p corresponds to \bar{p}_d and point q corresponds to \bar{q}_d; e_m(q, \bar{q}_d) is the comparison value of the matching points after rank transformation, given by formula (3):

e_m(q, \bar{q}_d) = \begin{cases} 0 & R_{pq} = R_{\bar{p}_d \bar{q}_d} \\ 1 & \text{otherwise} \end{cases}    (3)

In formula (3), R_{pq} is the rank transform value of point q and R_{\bar{p}_d \bar{q}_d} the rank transform value of point \bar{q}_d.
5. The FPGA-based adaptive support weight stereo matching method according to claim 1, characterized in that a Gaussian function is used to pre-process the video images and filter out noise.
CN201510072013.4A 2015-02-11 2015-02-11 Self-adapting support weight stereo matching method based on field programmable gate array (FPGA) Pending CN104616304A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510072013.4A CN104616304A (en) 2015-02-11 2015-02-11 Self-adapting support weight stereo matching method based on field programmable gate array (FPGA)


Publications (1)

Publication Number Publication Date
CN104616304A true CN104616304A (en) 2015-05-13

Family

ID=53150737

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510072013.4A Pending CN104616304A (en) 2015-02-11 2015-02-11 Self-adapting support weight stereo matching method based on field programmable gate array (FPGA)

Country Status (1)

Country Link
CN (1) CN104616304A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105807786A (en) * 2016-03-04 2016-07-27 深圳市道通智能航空技术有限公司 UAV automatic obstacle avoidance method and system
CN108038874A (en) * 2017-12-01 2018-05-15 中国科学院自动化研究所 Towards the real-time registration apparatus of scanning electron microscope image and method of sequence section
WO2018086348A1 (en) * 2016-11-09 2018-05-17 人加智能机器人技术(北京)有限公司 Binocular stereo vision system and depth measurement method
CN111553296A (en) * 2020-04-30 2020-08-18 中山大学 Two-value neural network stereo vision matching method based on FPGA

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101557534A (en) * 2009-05-19 2009-10-14 无锡景象数字技术有限公司 Method for generating disparity map from video close frames
CN103778632A (en) * 2014-01-18 2014-05-07 南京理工大学 Method for stereo matching based on FPGA
US20140177927A1 (en) * 2012-12-26 2014-06-26 Himax Technologies Limited System of image stereo matching
CN103985128A (en) * 2014-05-23 2014-08-13 南京理工大学 Three-dimensional matching method based on color intercorrelation and self-adaptive supporting weight
CN104123727A (en) * 2014-07-26 2014-10-29 福州大学 Stereo matching method based on self-adaptation Gaussian weighting


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
C. TTOFIS ET AL: "Towards accurate hardware stereo correspondence: a real-time FPGA implementation of a segmentation-based adaptive support weight algorithm", PROCEEDINGS OF THE CONFERENCE ON DESIGN, AUTOMATION AND TEST IN EUROPE *
龚文彪等 (Gong Wenbiao et al.): "基于颜色内相关和自适应支撑权重的立体匹配算法" (Stereo matching algorithm based on color intra-correlation and adaptive support weight), 《中国激光》 (Chinese Journal of Lasers) *


Similar Documents

Publication Publication Date Title
US11238602B2 (en) Method for estimating high-quality depth maps based on depth prediction and enhancement subnetworks
KR102319177B1 (en) Method and apparatus, equipment, and storage medium for determining object pose in an image
CN107220997B (en) Stereo matching method and system
Lu et al. A resource-efficient pipelined architecture for real-time semi-global stereo matching
CN105869167A (en) High-resolution depth map acquisition method based on active and passive fusion
CN103136750B (en) The Stereo matching optimization method of binocular vision system
CN110060286B (en) Monocular depth estimation method
CN110120049B (en) Method for jointly estimating scene depth and semantics by single image
Chen et al. StereoEngine: An FPGA-based accelerator for real-time high-quality stereo estimation with binary neural network
CN104616304A (en) Self-adapting support weight stereo matching method based on field programmable gate array (FPGA)
CN108460794B (en) Binocular three-dimensional infrared salient target detection method and system
Ding et al. Real-time stereo vision system using adaptive weight cost aggregation approach
CN111583313A (en) Improved binocular stereo matching method based on PSmNet
CN103985128A (en) Three-dimensional matching method based on color intercorrelation and self-adaptive supporting weight
Min et al. Dadu-eye: A 5.3 TOPS/W, 30 fps/1080p high accuracy stereo vision accelerator
Su et al. Pedestrian detection system with edge computing integration on embedded vehicle
CN111462211A (en) Binocular parallax calculation method based on convolutional neural network
Vázquez‐Delgado et al. Real‐time multi‐window stereo matching algorithm with fuzzy logic
CN214587004U (en) Stereo matching acceleration circuit, image processor and three-dimensional imaging electronic equipment
WO2022120988A1 (en) Stereo matching method based on hybrid 2d convolution and pseudo 3d convolution
Liang et al. Real-time hardware accelerator for single image haze removal using dark channel prior and guided filter
Isakova et al. FPGA design and implementation of a real-time stereo vision system
CN117152580A (en) Binocular stereoscopic vision matching network construction method and binocular stereoscopic vision matching method
CN116703996A (en) Monocular three-dimensional target detection algorithm based on instance-level self-adaptive depth estimation
Li et al. Graph-based saliency fusion with superpixel-level belief propagation for 3D fixation prediction

Legal Events

Code Description
C06 / PB01 — Publication
C10 / SE01 — Entry into substantive examination (entry into force of request for substantive examination)
RJ01 — Rejection of invention patent application after publication (application publication date: 2015-05-13)