CN103955920B - Binocular vision obstacle detection method based on three-dimensional point cloud segmentation - Google Patents


Info

Publication number
CN103955920B
CN103955920B (application CN201410146864.4A)
Authority
CN
China
Prior art keywords
point cloud
segmentation
road surface
dimensional
height
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201410146864.4A
Other languages
Chinese (zh)
Other versions
CN103955920A (en)
Inventor
袁华
曾日金
莫建文
陈利霞
张彤
首照宇
欧阳宁
赵晖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GUILIN YUHUI INFORMATION TECHNOLOGY Co Ltd
Guilin University of Electronic Technology
Original Assignee
GUILIN YUHUI INFORMATION TECHNOLOGY Co Ltd
Guilin University of Electronic Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by GUILIN YUHUI INFORMATION TECHNOLOGY Co Ltd, Guilin University of Electronic Technology
Priority to CN201410146864.4A
Publication of CN103955920A
Application granted
Publication of CN103955920B
Status: Expired - Fee Related

Landscapes

  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a binocular vision obstacle detection method based on three-dimensional point cloud segmentation. The method comprises the steps of: synchronously capturing images from two cameras of the same specification, calibrating and rectifying the binocular camera, and calculating the three-dimensional point cloud segmentation thresholds; obtaining a three-dimensional point cloud through a stereo matching algorithm and three-dimensional reconstruction, and segmenting a reference image into image blocks; automatically detecting the road surface height in the three-dimensional point cloud, and segmenting the cloud with the calculated thresholds into a road surface point cloud, obstacle point clouds at different positions, and an unknown-region point cloud; and fusing the segmented point clouds with the segmented image blocks to verify the obstacles and the road surface and to determine the position ranges of the obstacles, the road surface and the unknown regions. With the method, the height of the road surface relative to the camera can still be detected in complex environments, the three-dimensional segmentation thresholds are estimated automatically, and obstacle, road surface and unknown-region point clouds are obtained by segmentation; color image segmentation is incorporated so that color information is fused to verify the obstacles and the road surface and delimit their position ranges, achieving robust obstacle detection with high reliability and practicability.

Description

Binocular vision obstacle detection method based on three-dimensional point cloud segmentation
Technical field
The present invention relates to the field of automatic obstacle detection based on binocular stereo vision, such as autonomous mobile robots and automatic driving, and in particular to a binocular vision obstacle detection method based on three-dimensional point cloud segmentation.
Background technology
Binocular stereo vision is an important branch of computer vision. It directly imitates the way human eyes perceive a scene, is simple and reliable, and has great application value in many fields, such as robot navigation, aerial surveying, three-dimensional measurement, intelligent transportation and virtual reality. Binocular stereo vision captures the same scene with two cameras at different positions, or with one moving or rotating camera, and obtains the three-dimensional coordinates of a spatial point by computing its disparity between the two images. In research fields such as autonomous robots and automatic driving, a basic prerequisite is that the robot or the self-driving vehicle is equipped with a vision device. This vision system should perform the basic functions of the human eye and provide reliable visual information for path planning. Detecting obstacle targets and obtaining their range information are key components of autonomous mobile robots and automatic driving. The crux of obstacle detection based on binocular stereo vision lies in: (1) stereo matching of the binocular image pair to compute the three-dimensional point cloud; and (2) extraction of obstacle targets, i.e. identifying obstacle positions and sizes. The environments in which autonomous mobile robots and automatic driving operate are usually complex; affected by optical distortion and noise, specular reflection from smooth surfaces, foreshortening, perspective distortion, low texture, repeated texture, transparent objects, occlusion and discontinuity, stereo matching cannot be guaranteed to produce a dense disparity map. In addition, under complex road conditions, road detection that relies only on empirical road-color values or on road-edge detection in a single image is not accurate enough to be applied directly in practice.
With the continued deepening of machine vision research, many efficient stereo matching algorithms that produce relatively dense disparity maps have been applied in real scenes such as three-dimensional reconstruction, intelligent transportation and intelligent detection. Image segmentation is the technique and process of dividing an image into several specific regions with unique properties and extracting targets of interest; it is a key step from image processing to image analysis. Researchers have improved the original color image segmentation methods and introduced new theories and methods from other disciplines, proposing many new segmentation methods adapted to specific occasions.
The content of the invention
The purpose of the invention is to provide a binocular vision obstacle detection method based on three-dimensional point cloud segmentation. The method first segments the road surface point cloud and the obstacle point clouds above the ground from the three-dimensional point cloud, then fuses this result with color image segmentation for joint judgement, and finally determines the obstacles.
The technical scheme for realizing the purpose of the invention is as follows:
An automatic obstacle detection method for binocular stereo vision based on three-dimensional point cloud segmentation, comprising the following steps:
(1) Two color images are acquired by two cameras at different positions. The binocular camera is calibrated with a stereo calibration method, and the intrinsic and extrinsic parameters and the relative pose of the two cameras are computed. Using these parameters, distortion is removed from each camera and the images are row (or column) aligned, so that the imaging origins of the two color images coincide, yielding rectified binocular color views. The pitch angle and height of the cameras relative to the road surface are obtained by a sensor or predefined.
(2) From the binocular color views rectified in step (1), one view is selected as the reference image. A stereo matching algorithm is applied to the reference image to obtain a preliminary disparity map. Color image segmentation is applied to the reference image to obtain blocks of similar color. From the camera-to-road pose data obtained in step (1), the point cloud segmentation thresholds are calculated, including the nearest distance, farthest distance, maximum height and maximum width at which obstacles can be detected under a given precision.
(3) The disparity map computed in step (2) is reconstructed in three dimensions using the relative pose data of the two cameras (the reprojection matrix) obtained in step (1), yielding the three-dimensional point cloud.
(4) According to the camera-to-ground pose data obtained in step (1), the three-dimensional point cloud computed in step (3) is corrected so that the road surface point cloud has a consistent height relative to the camera.
(5) The road surface height (the height of the road surface relative to the camera) is detected on the corrected three-dimensional point cloud of step (4), and the cloud is segmented with the thresholds calculated in step (2) into obstacle, road surface and unknown point clouds.
(6) The point clouds segmented in step (5) are fused with the image blocks obtained by segmenting the reference image in step (2) to determine the obstacles, the road surface and the unknown regions, which are then marked and displayed.
Throughout the above process, the two cameras keep the same specification, i.e. identical or similar pixel count, focal length, aperture and other properties; considering the factors that affect stereo matching, high-definition cameras should be selected where possible. The relative position and the focal lengths of the two cameras are fixed: once calibrated, they are no longer changed. The pitch angle and height of the cameras relative to the road surface may be fixed or variable. In step (1), the resolution of the two cameras is determined by the computing capability of the equipment; resolutions such as 640x480 or 1024x768 are commonly used. The cameras may be calibrated manually, with a planar chessboard, or with circle patterns (plane circles or parallel circles), computing the tangential distortion, radial distortion, focal length, imaging origin, rectification mapping matrices and reprojection matrix. The rectification maps are used to rectify the color views of the two cameras so that the imaging origins of the two images coincide, the two optical axes are parallel, the left and right imaging planes are coplanar, and the epipolar lines are row- or column-aligned. The camera pose relative to the road surface comprises the pitch angle and the height, the height being optional.
The reference image selected in step (2) is the master image against which subsequent computations are referenced. Stereo matching is currently a difficult and hot topic in computer vision; there are many algorithms, the general steps being matching cost computation, cost aggregation, disparity computation and disparity refinement (sub-pixel), with both local and global matching strategies. According to the computing capability and precision requirements of the equipment, a suitable stereo matching algorithm is selected to compute a relatively dense disparity map efficiently. Color image segmentation partitions the color image by color information. From the baseline between the two cameras and the focal length of the reference camera, the nearest distance, farthest distance, maximum height and maximum width of detectable obstacles under acceptable accuracy can be determined.
The three-dimensional point cloud of step (3) is obtained by three-dimensional reconstruction of the disparity map using the reprojection matrix computed in step (1). A disparity map is an image referenced to one image of the stereo pair, of the same size as that reference image, whose element values are disparity values; it contains the range information of the scene.
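The range information carried by a disparity map follows from the standard stereo relation Z = f·T/d. A minimal sketch of that relation is given below; the focal length and baseline values are illustrative assumptions, not values from the patent.

```python
# Sketch of the disparity-to-depth relation implied by the patent (Z = f*T/d).
# The focal length f (pixels) and baseline T (meters) are assumed values.

def disparity_to_depth(d, f=700.0, T=0.12):
    """Depth (m) from disparity d (px), focal length f (px), baseline T (m)."""
    if d <= 0:
        return float("inf")  # zero disparity: unmatched or point at infinity
    return f * T / d

print(disparity_to_depth(42.0))  # 700 * 0.12 / 42 = 2.0
```

Larger disparities map to nearer points, which is why the nearest detectable distance is bounded by the matcher's maximum disparity search range.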
In step (4), the three-dimensional point cloud is rotated by the pitch angle of the camera relative to the ground, so that the road surface point cloud has a consistent height relative to the camera.
In step (5), the three-dimensional point cloud is segmented by the maximum height detected in step (2) or the height obtained in step (1) to obtain the road surface point cloud; it is segmented by the nearest and farthest detectable obstacle distances of step (2) to obtain obstacle point clouds in different distance ranges and the unknown-region point cloud; and the obstacle point clouds in different distance ranges are further split by the maximum width of step (2) to obtain the obstacle point clouds at different positions. This step yields the obstacle point clouds, the road surface point cloud and the unknown-region point cloud; the unknown-region point cloud consists of mismatched points and points of poor precision.
In step (6), the point clouds segmented in step (5) are fused with the image blocks obtained by segmenting the reference image in step (2): the image blocks obtained by color image segmentation are compared with the obstacle point clouds and the road surface point cloud to judge the correctness and the region extent of the obstacles and the road surface.
Binocular camera calibration: calibration comprises camera calibration and stereo calibration. Camera calibration mainly computes the tangential distortion, radial distortion, focal length and optical center. Stereo calibration is the process of computing the geometric relationship between the two cameras in space, relying on finding the rotation matrix and translation vector between them. Binocular rectification: the cameras need distortion correction and stereo rectification; distortion correction removes tangential and radial distortion. Stereo rectification is needed because the imaging planes of two cameras can hardly be exactly coplanar and row-aligned; they are corrected with the rotation matrix and translation vector obtained by stereo calibration.
Binocular ranging and disparity computation: stereo matching finds the correspondences between the image pair and obtains the disparity map according to the triangulation principle; once the disparity information is obtained, the depth and three-dimensional information of the original image can easily be recovered from the projection model. Stereo matching is generally considered the most difficult and most critical problem in stereo vision; it is a difficult and hot topic in current computer vision, with many algorithms whose general steps are matching cost computation, cost aggregation, disparity computation and disparity refinement.
Automatic estimation of the segmentation thresholds: within the allowed minimum depth accuracy, the nearest distance, farthest distance, maximum height and maximum width of detectable obstacles are computed from the camera resolution, the baseline between the two cameras obtained by stereo calibration, and the focal length. The front-to-back and left-to-right obstacle spacing and the obstacle height error range within the allowed accuracy are then computed.
Three-dimensional point cloud road surface height detection: road surface height detection is performed on the reconstructed three-dimensional point cloud; a height satisfying the set error is regarded as the road surface height. The error may be large at the first detection; after several iterations the detection accuracy of the road surface height tends to stabilize, the estimate being updated with the most recent height detections.
Three-dimensional point cloud segmentation: after a relatively dense three-dimensional point cloud is obtained by reconstruction, the cloud is rotated by the pitch angle of the camera relative to the ground so that the road surface point cloud is at a consistent height relative to the camera. The cloud is then segmented by the nearest obstacle distance, farthest distance, maximum height, maximum width, road surface point cloud height and obstacle height error into the road surface point cloud, obstacle point clouds and the unknown-region point cloud; the obstacle point clouds are further split by the front-to-back and left-to-right spacing into obstacle point clouds at each position. The unknown-region point cloud consists of mismatched points and points of poor precision.
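The height- and distance-threshold segmentation described above can be sketched as follows. This is a hedged illustration only: the threshold values and the coordinate layout (y measured downward from the camera, z forward) are assumptions, not the patent's parameters.

```python
import numpy as np

# Sketch of segmenting a point cloud into road / obstacle / unknown by the
# thresholds described above. Axis convention and values are assumed.

def segment_cloud(pts, road_h, h_err, z_min, z_max, max_h):
    """Label each 3-D point: 0 = road, 1 = obstacle, 2 = unknown."""
    y, z = pts[:, 1], pts[:, 2]
    labels = np.full(len(pts), 2)                      # default: unknown
    in_range = (z >= z_min) & (z <= z_max)
    road = in_range & (np.abs(y - road_h) <= h_err)    # near road height
    obstacle = in_range & (y < road_h - h_err) & (y > road_h - max_h)
    labels[road] = 0
    labels[obstacle] = 1
    return labels

pts = np.array([[0.0, 1.5, 4.0],    # at road height -> road
                [0.0, 0.8, 4.0],    # above the road -> obstacle
                [0.0, 1.5, 99.0]])  # beyond detectable range -> unknown
print(segment_cloud(pts, road_h=1.5, h_err=0.05,
                    z_min=0.5, z_max=20.0, max_h=2.5))  # [0 1 2]
```

Obstacle points could then be grouped into clouds at different positions by clustering their z (front-to-back) and x (left-to-right) coordinates with the spacing thresholds.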
Obstacle detection by fusing color information with the three-dimensional point cloud: the obstacle point clouds at different positions, the road surface point cloud and the unknown-region point cloud obtained by segmenting the three-dimensional point cloud are combined with the image blocks obtained by color image segmentation of the color reference image; the segmented point clouds are compared with the segmented blocks to judge the correctness of the obstacles and the road surface and to determine the position ranges of the obstacles, the road surface and the unknown regions, achieving highly reliable obstacle detection.
The advantage of the invention is that, compared with the prior art, the invention reconstructs a relatively dense three-dimensional point cloud in complex environments, can still detect the camera-to-road height under a given pitch angle and automatically estimates the three-dimensional segmentation thresholds, segmenting out the obstacle, road surface and unknown-region point clouds; combined with color image segmentation, color information is fused to judge the correctness of the obstacles and the road surface and to determine their position ranges. Compared with single-image methods that detect road obstacles empirically, the method has higher reliability and practicability.
Description of the drawings
Fig. 1 is a schematic diagram of the binocular vision obstacle detection method based on three-dimensional point cloud segmentation.
Fig. 2 is a schematic diagram of binocular stereo imaging.
Specific embodiment
The automatic obstacle detection method based on three-dimensional point cloud segmentation with fused color information, as shown in Fig. 1, comprises the following steps:
Step 1: Two color images are acquired by two cameras at different positions. The binocular camera is calibrated with a stereo calibration method; the intrinsic and extrinsic parameters and the relative pose of the two cameras are computed; using these parameters, distortion is removed from each camera and the images are row (or column) aligned so that the imaging origins of the two color images coincide, yielding rectified binocular color views. The pitch angle and height of the cameras relative to the road surface are obtained by a sensor or predefined. The relative position and the focal lengths of the two cameras are fixed: once calibrated they are no longer changed. The pitch angle and height of the cameras relative to the road surface may be fixed or variable.
The resolution of the two cameras is determined by the computing capability of the equipment; resolutions such as 640x480 or 1024x768 are commonly used. The cameras may be calibrated manually, with a planar chessboard, or with circle patterns (plane circles or parallel circles), computing the tangential distortion, radial distortion, focal length, imaging origin, rectification mapping matrices and reprojection matrix. The rectification maps are used to rectify the color views of the two cameras so that the imaging origins of the two images coincide, the two optical axes are parallel, the left and right imaging planes are coplanar, and the epipolar lines are row- or column-aligned. The camera pose relative to the road surface comprises the pitch angle and the height, the height being optional.
In the present invention, the two cameras keep the same specification, i.e. identical or similar pixel count, focal length, aperture and other properties; considering the factors that affect stereo matching, high-definition cameras are selected where possible.
The calibration of a binocular camera with horizontally placed left and right cameras using the planar chessboard method is now illustrated. After the binocular camera is adjusted and the two RGB images are collected, it is calibrated with an n-row, m-column black-and-white chessboard (n and m greater than 4). Owing to the characteristics of the optical lens, the imaging of the camera exhibits radial distortion:
x_corrected = x(1 + k1·r² + k2·r⁴ + k3·r⁶)
y_corrected = y(1 + k1·r² + k2·r⁴ + k3·r⁶)     (1)
where (x, y) is a pixel of the captured original image, (x_corrected, y_corrected) is the pixel after radial distortion correction, and k1, k2, k3 are the radial distortion parameters;
Owing to assembly errors between the image sensor and the optical lens in manufacturing, the sensor and the lens are not exactly parallel, so the imaging also exhibits tangential distortion:
x_corrected = x + [2p1·x·y + p2(r² + 2x²)]
y_corrected = y + [p1(r² + 2y²) + 2p2·x·y]     (2)
where (x, y) is a pixel of the captured original image, (x_corrected, y_corrected) is the pixel after tangential distortion correction, and p1, p2 are the tangential distortion parameters. The calibration of a single camera mainly computes its intrinsic and extrinsic parameters. Binocular camera calibration must obtain not only the intrinsic parameters of each camera but also, by stereo calibration, the relative pose between the two cameras (three-dimensional translation vector and rotation matrix).
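The distortion model of equations (1) and (2) can be sketched as below, applied in normalized image coordinates. The coefficient values used in the example are arbitrary assumptions for illustration.

```python
# Sketch of the radial (1) plus tangential (2) distortion model, in
# normalized image coordinates. Coefficient values are arbitrary assumptions.

def distort(x, y, k1, k2, k3, p1, p2):
    """Map an ideal normalized point (x, y) to its distorted position."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return xd, yd

# With all coefficients zero the mapping is the identity.
print(distort(0.3, -0.2, 0, 0, 0, 0, 0))  # (0.3, -0.2)
```

In practice the calibration solves for k1, k2, k3, p1, p2 from chessboard corner observations, and the inverse of this mapping is what undistortion applies to each pixel.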
Binocular rectification takes the monocular intrinsic data obtained by camera calibration (focal length, imaging origin, distortion coefficients) and the binocular relative pose (rotation matrix and translation vector), removes distortion and row-aligns the left and right views, so that the imaging origins of the two views coincide, the two optical axes are parallel, the left and right imaging planes are coplanar and the epipolar lines are row-aligned, which facilitates the stereo matching computation.
Step 2: From the binocular color views rectified in step 1, one view is selected as the reference image. A stereo matching algorithm is applied to the reference image to obtain a preliminary disparity map. Color image segmentation is applied to the reference image to obtain blocks of similar color. From the camera-to-road pose data obtained in step 1, the point cloud segmentation thresholds are calculated, including the detectable nearest distance, farthest distance, maximum height and maximum width under acceptable accuracy.
Here the SGBM stereo matching algorithm proposed by Heiko Hirschmüller is used in optimized form, matching with the left view as the reference image to compute the preliminary disparity map. The algorithm comprises the following steps. First the matching cost is computed with the Birchfield-Tomasi sampling-insensitive measure:
C(p, d) = C(x, y, d) = C_BT(p_L, p_L − d, I_L, I_R)
        = max{0, I_L(p_L) − I'_max, I'_min − I_L(p_L)}     (3)
In equation (3), d is the disparity, p_L is a pixel of the left image, and I'_max, I'_min are the maximum and minimum of the interpolated right-image intensities around the candidate matching position.
The matching costs are then aggregated; SGBM minimizes an energy of the form
E(D) = Σ_p ( C(p, D_p) + Σ_{q∈N_p} P1·T[|D_p − D_q| = 1] + Σ_{q∈N_p} P2·T[|D_p − D_q| > 1] )     (4)
where D is the disparity map in formula (4), the first term on the right is the sum of the matching costs of all pixels, and the penalty terms P1 and P2 discourage small and large disparity differences between a pixel p and its neighbors q.
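As a much-simplified stand-in for the matching step, the sketch below chooses a disparity by minimizing a local sum-of-absolute-differences cost on a synthetic rectified row pair. It is not SGBM itself (no cost aggregation or penalties); it only illustrates how a disparity emerges from cost minimization.

```python
import numpy as np

# Simplified local SAD block matcher on one rectified scanline pair; a
# stand-in illustration, not the SGBM algorithm. Data is synthetic.

def sad_disparity(left, right, x, win=2, max_d=8):
    """Disparity at column x of a single rectified row pair."""
    best_d, best_cost = 0, float("inf")
    patch = left[x - win:x + win + 1]
    for d in range(max_d + 1):
        if x - win - d < 0:
            break  # window would leave the image
        cost = np.abs(patch - right[x - win - d:x + win + 1 - d]).sum()
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

row = np.array([0, 0, 9, 7, 5, 3, 1, 0, 0, 0, 0, 0], dtype=float)
left = row
right = np.roll(row, -3)                # scene shifted left by 3 px
print(sad_disparity(left, right, x=5))  # recovers the true disparity 3
```

SGBM improves on this by aggregating such costs along multiple scanline directions with the smoothness penalties P1 and P2 of equation (4).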
There are many image segmentation methods; here mean shift clustering based on color similarity is used. Mean shift segmentation finds the peaks of the spatial color distribution: given a data set whose dimensions are (x, y, blue, green, red), mean shift scans the space with a window and locates the "clusters" of highest data density. A spatial radius is set, and a color radius is set according to the color variance. As the mean shift window moves, all the data points that converge to the same peak after the window iterations are connected and belong to that peak; this membership, separated from dense surges and noise, forms the segmentation of the image. The segmentation is completed over a scale pyramid: color clusters at the higher pyramid levels have their own boundaries, and these boundaries are defined precisely at the lower pyramid levels.
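The mode-seeking idea behind mean shift can be sketched in one dimension. Real mean-shift segmentation operates on (x, y, b, g, r) vectors with separate spatial and color radii; scalar "color" values are used here purely to keep the illustration short, and the radius is an assumed value.

```python
import numpy as np

# Tiny 1-D illustration of mean shift: each sample is repeatedly moved to
# the mean of its neighbors within a radius until it settles on a density
# peak; samples sharing a peak form one segment.

def mean_shift_mode(v, data, radius=10.0, iters=50):
    """Shift value v to the density peak of `data` it converges to."""
    for _ in range(iters):
        neighbors = data[np.abs(data - v) <= radius]
        new_v = neighbors.mean()
        if abs(new_v - v) < 1e-6:
            break
        v = new_v
    return v

data = np.array([10.0, 12.0, 11.0, 90.0, 92.0, 91.0])  # two color clusters
modes = [round(mean_shift_mode(v, data)) for v in data]
print(modes)  # each sample converges to its cluster mean: 11 or 91
```

The peak a pixel converges to plays the role of the segment label in the segmentation step above.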
The nearest distance, farthest distance, maximum height and maximum width are calculated from the camera resolution, and the obstacle height error range is determined, from the following relation:
Z = f·T / Vd,    ΔZ = Z²·ΔVd / (f·T)     (5)
where ΔZ is the minimum obtainable range accuracy, Z is the distance of the obstacle from the camera, f is the focal length of the camera, T is the distance between the two cameras, and Vd is the disparity value.
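A hedged sketch of relation (5): with a one-pixel disparity step, the depth error grows quadratically with distance, which is what bounds the farthest detectable distance for a given accuracy. The focal length and baseline below are illustrative assumptions.

```python
# Sketch of the range-accuracy relation (5): dZ = Z^2 * dVd / (f * T).
# f (pixels) and T (meters) are assumed values, dVd is the disparity step.

def depth_error(Z, f=700.0, T=0.12, d_step=1.0):
    """Approximate depth uncertainty at distance Z for a disparity step."""
    return Z * Z * d_step / (f * T)

for Z in (2.0, 4.0, 8.0):
    print(Z, depth_error(Z))  # error quadruples each time Z doubles
```

Inverting the relation for a maximum acceptable ΔZ gives the farthest distance threshold used by the point cloud segmentation.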
Step 3: After the disparity map is computed in step 2, the disparity map is reconstructed in three dimensions using the relative pose data of the two cameras (the reprojection matrix) computed in step 1, yielding the three-dimensional point cloud.
According to the binocular stereo imaging principle shown in Fig. 2:
the baseline T is the distance between the projection centers of the two cameras, and the camera focal length is f.
After stereo rectification, the images of the two cameras lie in the same plane. With the imaging center of the reference image at (c_x, c_y), the x coordinate of that point in the right image c'_x, and the distance between the two cameras in the x direction T_x, the reprojection matrix is:
Q = | 1  0   0       −c_x             |
    | 0  1   0       −c_y             |
    | 0  0   0        f               |
    | 0  0  −1/T_x   (c_x − c'_x)/T_x |
Given a two-dimensional homogeneous point and its associated disparity d, the point can be projected into three dimensions:
[X Y Z W]ᵀ = Q · [x y d 1]ᵀ
Therefore (X/W, Y/W, Z/W) is the three-dimensional coordinate corresponding to the point with disparity d.
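The reprojection above can be sketched with numpy. The intrinsics, baseline and test pixel are illustrative assumptions; following the usual convention for rectified pairs, T_x is the signed x-translation of the right camera (negative for a left reference view) and c'_x = c_x, so the bottom-right entry of Q is zero.

```python
import numpy as np

# Sketch of reprojecting a pixel-plus-disparity to 3-D with the matrix Q
# above. cx, cy, f (pixels) and the 0.12 m baseline are assumed values.

cx, cy, f = 320.0, 240.0, 700.0
Tx = -0.12   # signed x-translation of the right camera (baseline 0.12 m)

Q = np.array([[1.0, 0.0, 0.0, -cx],
              [0.0, 1.0, 0.0, -cy],
              [0.0, 0.0, 0.0, f],
              [0.0, 0.0, -1.0 / Tx, 0.0]])  # (cx - c'_x)/Tx = 0 here

def reproject(x, y, d):
    X, Y, Z, W = Q @ np.array([x, y, d, 1.0])
    return X / W, Y / W, Z / W

X, Y, Z = reproject(320.0, 240.0, 42.0)
print(Z)  # depth agrees with Z = f * |Tx| / d = 2.0
```

Applying this to every valid pixel of the disparity map yields the dense three-dimensional point cloud of step 3.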
Step 4: According to the camera-to-ground pose data obtained in step 1, the three-dimensional point cloud computed in step 3 is corrected so that the road surface point cloud in it has a consistent height relative to the camera. Once the pitch angle of the camera relative to the ground is obtained, the three-dimensional point cloud can be rotated so that the road surface height values of the cloud become consistent. Here the binocular camera is set parallel to the road surface.
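The pitch correction can be sketched as a rotation of the cloud about the camera's x axis by the known pitch angle, after which a flat road recovers a constant height coordinate. The axis convention (x right, y down, z forward) and the pitch value are assumptions for illustration.

```python
import numpy as np

# Sketch of the pitch correction of step 4: undo the camera pitch so the
# flat road has a constant height (y) in the cloud. Values are assumed.

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, c, -s],
                     [0.0, s, c]])

def derotate(pts, pitch):
    """Undo a camera pitch of `pitch` radians applied to the point cloud."""
    return pts @ rot_x(-pitch).T

level = np.array([[0.0, 1.5, z] for z in (2.0, 5.0, 9.0)])  # flat road, y = 1.5
seen = level @ rot_x(0.17).T       # the same road as seen by a pitched camera
flat = derotate(seen, 0.17)        # rotate back by the known pitch
print(bool(np.allclose(flat[:, 1], 1.5)))  # True: heights consistent again
```

With the heights made consistent, the simple height thresholds of step 5 suffice to separate road from obstacles.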
Step 5: The road surface height (the height of the road surface relative to the camera) is detected on the three-dimensional point cloud corrected in step 4, and the cloud is segmented with the thresholds calculated in step 2 into obstacle, road surface and unknown point clouds.
It is assumed that the road surface has no depressions and that objects above the road plane are obstacles. The extreme height values of the three-dimensional point cloud corrected in step 4 are found, and the mean of the values lying within the obstacle height error range is taken as the currently detected road surface height; the current final road surface height is obtained by weighting it with the previously detected one. After repeated detection of the road surface height, the accuracy of the road surface height value improves.
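The road-height estimation just described can be sketched as follows: take the extreme height values, average those within the height error band, and blend with the previous estimate. The blending weight and the error band are illustrative assumptions.

```python
import numpy as np

# Sketch of the road-height detection of step 5. The weight and error band
# are assumed values; heights are measured downward from the camera.

def detect_road_height(heights, h_err=0.05, prev=None, weight=0.7):
    base = heights.max()                        # road = farthest-below plane
    road = heights[heights >= base - h_err]     # values within the error band
    current = road.mean()
    return current if prev is None else weight * current + (1 - weight) * prev

h = np.array([1.50, 1.49, 1.51, 0.8, 0.6, 1.50])  # road points plus obstacles
est1 = detect_road_height(h)                  # first detection
est2 = detect_road_height(h, prev=est1)       # refined with the previous value
print(est1, est2)  # both near 1.5; obstacle heights are excluded
```

Iterating this per frame gives the stabilizing behavior the patent describes: early estimates may be noisy, but the weighted update converges.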
Step 6: The point clouds segmented in step 5 are fused with the image blocks obtained by segmenting the reference image in step 2 to determine the obstacles, the road surface and the unknown regions, which are then marked and displayed.
Fusing color information with the three-dimensional point cloud: the obstacle point clouds at different positions, the road surface point cloud and the unknown-region point cloud obtained by point cloud segmentation are combined with the image blocks obtained by color segmentation of the color reference image. The segmented point clouds are compared with the segmented blocks: when the proportion of a point class occurring within a block exceeds a set value, the block is judged to be an obstacle or road surface, thereby determining the position ranges of the obstacles, the road surface and the unknown regions.
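The fusion rule above can be sketched as per-segment voting: a color segment is labelled obstacle (or road) when the proportion of its pixels covered by that point class exceeds a set value. The segment ids, class encoding and the 0.5 threshold are assumptions for illustration.

```python
import numpy as np

# Sketch of the point-cloud / color-segment fusion of step 6. The 0.5
# threshold and the label encoding are assumed values.

def label_segments(seg_ids, point_labels, n_segments, thresh=0.5):
    """point_labels: per-pixel class (0 road, 1 obstacle, 2 unknown/no point)."""
    out = []
    for s in range(n_segments):
        labels = point_labels[seg_ids == s]
        for cls, name in ((1, "obstacle"), (0, "road")):
            if labels.size and (labels == cls).mean() > thresh:
                out.append(name)
                break
        else:
            out.append("unknown")
    return out

seg_ids = np.array([0, 0, 0, 1, 1, 1, 2, 2])       # color segment per pixel
point_labels = np.array([1, 1, 2, 0, 0, 0, 2, 2])  # reprojected point class
print(label_segments(seg_ids, point_labels, 3))  # ['obstacle', 'road', 'unknown']
```

Because whole color segments are labelled, regions where stereo matching failed (unknown points) still receive a coherent decision from their surrounding segment.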
The automatic obstacle detection method based on three-dimensional point cloud segmentation with fused color information designed as above, shown in Fig. 1, comprises binocular camera calibration, binocular rectification, binocular ranging and disparity computation, automatic segmentation threshold estimation, three-dimensional point cloud road surface height detection, three-dimensional point cloud segmentation, and obstacle detection by fusing color information with the three-dimensional point cloud, wherein:
Binocular camera calibration: calibration comprises camera calibration and stereo calibration. Camera calibration mainly computes the tangential distortion, radial distortion, focal length and optical center. Stereo calibration is the process of computing the geometric relationship between the two cameras in space, relying on finding the rotation matrix and translation vector between them.
Binocular rectification: the cameras need distortion correction and stereo rectification; distortion correction removes tangential and radial distortion. Stereo rectification is needed because the imaging planes of two cameras can hardly be exactly coplanar and row-aligned; they are corrected with the rotation matrix and translation vector obtained by stereo calibration.
Binocular ranging and disparity computation: stereo matching finds the correspondences between the image pair and obtains the disparity map according to the triangulation principle; once the disparity information is obtained, the depth and three-dimensional information of the original image can easily be recovered from the projection model. Stereo matching is generally considered the most difficult and most critical problem in stereo vision; it is a difficult and hot topic in current computer vision, with many algorithms whose general steps are matching cost computation, cost aggregation, disparity computation and disparity refinement.
Automatic estimation of the segmentation thresholds: within the allowed minimum depth accuracy, the nearest distance, farthest distance, maximum height and maximum width of detectable obstacles are computed from the camera resolution, the baseline between the two cameras obtained by stereo calibration, and the focal length. The front-to-back and left-to-right obstacle spacing and the obstacle height error range within the allowed accuracy are then computed.
Road-surface height detection from the three-dimensional point cloud: Road-surface height detection is performed on the point cloud obtained by three-dimensional reconstruction, and a height that satisfies the set error is taken as the road-surface height. The error may be large on the first detection; after several iterations the detection accuracy of the road-surface height stabilizes, and the estimate is updated with the most recent height detections.
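The iterative refinement described above can be sketched as a simple inlier-averaging loop. This is an illustrative reconstruction of the idea, not the patent's exact procedure; the tolerance and the synthetic heights are assumptions.

```python
def estimate_road_height(heights, tol=0.05, iters=10):
    """Iteratively refine the road-surface height estimate.

    Start from the median height; on each pass keep only the points within
    `tol` of the current estimate and re-average them, so the estimate
    settles on the dominant ground plane while obstacle points drop out.
    """
    est = sorted(heights)[len(heights) // 2]
    for _ in range(iters):
        inliers = [h for h in heights if abs(h - est) <= tol]
        if not inliers:
            break
        new_est = sum(inliers) / len(inliers)
        if abs(new_est - est) < 1e-6:  # converged
            return new_est
        est = new_est
    return est

# Synthetic heights (m): ground points near 0 plus obstacle points near 0.5.
pts = [0.01, -0.02, 0.0, 0.02, -0.01, 0.50, 0.52, 0.49]
road_h = estimate_road_height(pts)  # converges near 0.0
```

As the text notes, the first estimate (here, the raw median) can be biased by obstacle points; the iterations progressively exclude them.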
Three-dimensional point cloud segmentation: After three-dimensional reconstruction yields a sufficiently dense point cloud, the cloud is rotated by the pitch angle of the camera relative to the ground, so that the road-surface points lie at the same height relative to the camera. Using the minimum distance from the camera, the maximum distance from the camera, the maximum height, the maximum width, the road-surface point-cloud height and the obstacle-height error, the point cloud is segmented into a road-surface point cloud, an obstacle point cloud and an unknown-region point cloud. Using the front-to-back and left-to-right spacing between obstacles, the obstacle point cloud is further segmented into obstacle point clouds at each position. The unknown-region point cloud consists of mismatched points and points of poor accuracy.
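The levelling rotation and the threshold-based split can be sketched as below. The axis convention (y up, z forward) and all threshold values are assumptions for illustration only.

```python
import math

def rotate_pitch(points, pitch_rad):
    """Rotate (x, y, z) points about the x-axis by the camera's pitch angle
    so the road surface becomes a constant-height plane (y up, z forward)."""
    c, s = math.cos(pitch_rad), math.sin(pitch_rad)
    return [(x, c * y - s * z, s * y + c * z) for x, y, z in points]

def segment_cloud(points, road_h, h_err, z_min, z_max, max_h, max_w):
    """Split a levelled point cloud into road / obstacle / unknown sets
    using the threshold values estimated earlier."""
    road, obstacle, unknown = [], [], []
    for p in points:
        x, y, z = p
        in_box = z_min <= z <= z_max and abs(x) <= max_w and y <= road_h + max_h
        if not in_box:
            unknown.append(p)
        elif abs(y - road_h) <= h_err:
            road.append(p)
        elif y > road_h + h_err:
            obstacle.append(p)
        else:
            unknown.append(p)  # below the road plane: likely a mismatch
    return road, obstacle, unknown
```

Per-obstacle clouds would then be obtained by clustering the obstacle set with the front-to-back and left-to-right spacing as gap thresholds; that clustering step is omitted here for brevity.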
Obstacle detection by fusing the three-dimensional point cloud with color information: The obstacle, road-surface and unknown-region point clouds at different positions obtained by point-cloud segmentation are fused with color information. The color reference image is segmented by color to obtain image segments; the segments obtained from point-cloud segmentation are compared with those from image segmentation to judge the correctness of the obstacles and the road surface, determine the position ranges of the obstacles, the road surface and the unknown regions, and achieve highly reliable obstacle detection.
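One simple way to realize the comparison between point-cloud labels and color segments is a per-segment majority vote; the sketch below assumes each 3-D point has already been projected into an image segment. This is an illustrative interpretation, not the patent's specified decision rule.

```python
from collections import Counter, defaultdict

def fuse_segments(point_segments, point_labels):
    """Decide each color-image segment's class by majority vote over the
    labels of the 3-D points that fall inside it, mirroring the
    point-cloud / image-segment comparison described above."""
    votes = defaultdict(Counter)
    for seg, lab in zip(point_segments, point_labels):
        votes[seg][lab] += 1
    return {seg: c.most_common(1)[0][0] for seg, c in votes.items()}

# Points 0-2 project into segment "A", points 3-4 into segment "B".
result = fuse_segments(["A", "A", "A", "B", "B"],
                       ["road", "road", "obstacle", "obstacle", "obstacle"])
# result: {"A": "road", "B": "obstacle"}
```

Voting over whole segments lets the coherent color regions correct isolated mislabelled points, which is the reliability gain the fusion step is after.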

Claims (8)

1. A binocular vision obstacle detection method based on three-dimensional point cloud segmentation, characterized by comprising the following steps:
(1) Two color images are acquired by two cameras at different positions. The binocular camera is calibrated with a stereo calibration method to compute the intrinsic and extrinsic parameters and the relative pose of the two cameras. Using these parameters, distortion elimination and row or column alignment are performed for each camera so that the imaging origins of the two color images coincide, yielding rectified binocular color views. The pitch angle and height of the camera relative to the road surface are obtained by a sensor or predefined;
(2) From the binocular color views rectified in step (1), one view is selected as the reference image, and a stereo matching algorithm performs matching computation on it to obtain a preliminary disparity map. Color image segmentation is applied to the reference image to obtain segments of similar color. From the camera-to-road-surface pose data obtained in step (1), the point-cloud segmentation thresholds are computed, including the minimum distance, maximum distance, maximum height and maximum width at which obstacles can be detected at a given accuracy;
(3) After the disparity map is computed in step (2), the re-projection matrix derived from the relative pose of the two cameras computed in step (1) is used to perform three-dimensional reconstruction of the disparity map and compute the three-dimensional point cloud;
(4) According to the camera-to-ground pose data obtained in step (1), the three-dimensional point cloud computed in step (3) is corrected so that the road-surface points lie at the same height relative to the camera;
(5) Road-surface height detection, i.e. detection of the height of the road surface relative to the camera, is performed on the point cloud corrected in step (4), and the cloud is segmented into an obstacle point cloud, a road-surface point cloud and an unknown-region point cloud by the segmentation thresholds computed in step (2);
(6) The point clouds segmented in step (5) are fused with the segments from the image segmentation of the reference image in step (2) to determine the obstacles, road surface and unknown regions, which are then marked and displayed.
2. The binocular vision obstacle detection method based on three-dimensional point cloud segmentation according to claim 1, characterized in that: the binocular camera is a two-channel video device combining two cameras of the same specification; different image resolutions can be obtained as required, and the data processed are the original RGB three-channel color images.
3. The binocular vision obstacle detection method based on three-dimensional point cloud segmentation according to claim 1, characterized in that: the road-surface height detection using the three-dimensional point cloud improves the accuracy of the detected road-surface height by a loop-iteration method; under the assumption that objects higher than the road surface are obstacles, the obstacle point clouds at different positions, the road-surface point cloud and the unknown-region point cloud are segmented by the road-surface height and the point-cloud segmentation thresholds.
4. The binocular vision obstacle detection method based on three-dimensional point cloud segmentation according to claim 1, characterized in that: the obstacles, road surface and unknown regions are determined by combining the point clouds obtained by the segmentation in step (5) with the image segments after image segmentation, and are marked and displayed.
5. The binocular vision obstacle detection method based on three-dimensional point cloud segmentation according to claim 1, characterized in that: said binocular camera calibration includes camera calibration and stereo calibration;
camera calibration mainly computes the tangential distortion, radial distortion, focal length and optical center;
stereo calibration is the process of computing the geometric relationship between the two cameras in space, and depends on finding the rotation matrix and translation vector between the two cameras;
binocular rectification: the cameras require both distortion correction and stereo rectification; distortion correction eliminates the tangential and radial distortion;
stereo rectification is needed because it is almost impossible for the two cameras to have exactly coplanar, row-aligned imaging planes, and is performed using the rotation matrix and translation vector between the two cameras obtained by stereo calibration.
6. The binocular vision obstacle detection method based on three-dimensional point cloud segmentation according to claim 1, characterized in that: said stereo matching and disparity computation: stereo matching finds the correspondences between each pair of images and, by the triangulation principle, yields the disparity map; once the disparity information is available, the depth information and three-dimensional information of the original image are obtained from the projection model.
7. The binocular vision obstacle detection method based on three-dimensional point cloud segmentation according to claim 1, characterized in that: said road-surface height detection from the three-dimensional point cloud: road-surface height detection is performed on the point cloud obtained by three-dimensional reconstruction, and a height satisfying the set error is taken as the road-surface height; the error may be large on the first detection; after several iterations the detection accuracy of the road-surface height stabilizes, and the estimate is updated with the most recent height detections.
8. The binocular vision obstacle detection method based on three-dimensional point cloud segmentation according to claim 1, characterized in that:
said three-dimensional point cloud segmentation: after three-dimensional reconstruction yields a sufficiently dense point cloud, the cloud is rotated by the pitch angle of the camera relative to the ground, so that the road-surface points lie at the same height relative to the camera;
the point cloud is segmented into a road-surface point cloud, an obstacle point cloud and an unknown-region point cloud by the minimum distance of obstacles from the camera, the maximum distance from the camera, the maximum height, the maximum width, the road-surface point-cloud height and the obstacle-height error; the obstacle point cloud is further segmented into obstacle point clouds at each position by the front-to-back and left-to-right spacing between obstacles; the unknown-region point cloud consists of mismatched points and points of poor accuracy;
obstacle detection by fusing the three-dimensional point cloud with color information: the obstacle point clouds at different positions, the road-surface point cloud and the unknown-region point cloud obtained by point-cloud segmentation are fused with color information; the color reference image is segmented by color to obtain image segments; the segments obtained from point-cloud segmentation are compared with the image segments to judge the correctness of the obstacles and the road surface, determine the position ranges of the obstacles, the road surface and the unknown regions, and achieve highly reliable obstacle detection.
CN201410146864.4A 2014-04-14 2014-04-14 Binocular vision obstacle detection method based on three-dimensional point cloud segmentation Expired - Fee Related CN103955920B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410146864.4A CN103955920B (en) 2014-04-14 2014-04-14 Binocular vision obstacle detection method based on three-dimensional point cloud segmentation


Publications (2)

Publication Number Publication Date
CN103955920A CN103955920A (en) 2014-07-30
CN103955920B true CN103955920B (en) 2017-04-12

Family

ID=51333189

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410146864.4A Expired - Fee Related CN103955920B (en) 2014-04-14 2014-04-14 Binocular vision obstacle detection method based on three-dimensional point cloud segmentation

Country Status (1)

Country Link
CN (1) CN103955920B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107909009A (en) * 2017-10-27 2018-04-13 北京中科慧眼科技有限公司 A kind of obstacle detection method and device based on road surface study

Families Citing this family (75)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105701961A (en) * 2014-11-24 2016-06-22 南京酷派软件技术有限公司 Walking safety prompting method, system and terminal
CN104392486B (en) * 2014-11-25 2017-07-28 西安理工大学 One kind point cloud scene reconstruction method
US9616569B2 (en) * 2015-01-22 2017-04-11 GM Global Technology Operations LLC Method for calibrating an articulated end effector employing a remote digital camera
CN104794713B (en) * 2015-04-15 2017-07-11 同济大学 Chamber crop digitalized image method based on ARM and binocular vision
JP6358160B2 (en) * 2015-04-17 2018-07-18 トヨタ自動車株式会社 Traveling road surface detection device and traveling road surface detection method
JP6545065B2 (en) * 2015-09-30 2019-07-17 株式会社日立ビルシステム 3D point cloud data modeling device, modeling program and elevator guide rail modeling method
KR101756650B1 (en) * 2015-10-27 2017-07-26 광주과학기술원 Installation space exploration method for a substance and readable media having the program therefor
WO2017080451A1 (en) 2015-11-11 2017-05-18 Zhejiang Dahua Technology Co., Ltd. Methods and systems for binocular stereo vision
CN105955275B (en) * 2016-05-26 2021-07-13 华讯方舟科技有限公司 Robot path planning method and system
CN106197452A (en) * 2016-07-21 2016-12-07 触景无限科技(北京)有限公司 A kind of visual pattern processing equipment and system
CN106197429A (en) * 2016-07-21 2016-12-07 触景无限科技(北京)有限公司 A kind of Multi-information acquisition location equipment and system
CN106225723B (en) * 2016-07-25 2019-03-29 浙江零跑科技有限公司 A kind of hinged angle measuring method of multiple row vehicle based on backsight binocular camera
CN107818293A (en) 2016-09-14 2018-03-20 北京百度网讯科技有限公司 Method and apparatus for handling cloud data
US10315866B2 (en) * 2016-10-20 2019-06-11 Intelligrated Headquarters, Llc 3D-2D vision system for robotic carton unloading
DE102016223180A1 (en) * 2016-11-23 2018-05-24 Robert Bosch Gmbh Method and system for detecting a raised object located within a parking lot
CN106650640B (en) * 2016-12-05 2020-03-03 浙江大学 Negative obstacle detection method based on laser radar point cloud local structure characteristics
JP6712545B2 (en) 2016-12-19 2020-06-24 日立オートモティブシステムズ株式会社 Electronic control device, electronic control system, and electronic control method
CN108269281B (en) * 2016-12-30 2023-06-13 上海安维尔信息科技股份有限公司 Obstacle avoidance technical method based on binocular vision
CN106875472A (en) * 2017-01-16 2017-06-20 成都信息工程大学 A kind of 3D dental imagings modeling method
CN106650708B (en) * 2017-01-19 2023-08-11 南京航空航天大学 Automatic driving obstacle vision detection method and system
US11366450B2 (en) * 2017-03-23 2022-06-21 Abb Schweiz Ag Robot localization in a workspace via detection of a datum
CN107169986B (en) * 2017-05-23 2019-09-17 北京理工大学 A kind of obstacle detection method and system
CN107367767A (en) * 2017-06-27 2017-11-21 上海卓思智能科技股份有限公司 A kind of vent cabinet window foreign matter detecting method and system and a kind of controller
CN107909010B (en) * 2017-10-27 2022-03-18 北京中科慧眼科技有限公司 Road obstacle detection method and device
CN108416834A (en) * 2018-01-08 2018-08-17 长春理工大学 Transparent objects surface three dimension reconstructing method, device and system
CN108245385B (en) * 2018-01-16 2019-10-29 曹醒龙 A kind of device helping visually impaired people's trip
CN110197097B (en) * 2018-02-24 2024-04-19 北京图森智途科技有限公司 Harbor district monitoring method and system and central control system
CN108573215B (en) * 2018-03-16 2021-08-03 海信集团有限公司 Road reflective area detection method and device and terminal
CN108489454A (en) * 2018-03-22 2018-09-04 沈阳上博智像科技有限公司 Depth distance measurement method, device, computer readable storage medium and electronic equipment
CN108648219B (en) * 2018-04-08 2022-02-22 浙江大承机器人科技有限公司 Binocular-based obstacle and feasible area detection method
CN110488805A (en) * 2018-05-15 2019-11-22 武汉小狮科技有限公司 A kind of unmanned vehicle obstacle avoidance system and method based on 3D stereoscopic vision
CN108986162B (en) * 2018-06-28 2022-02-22 杭州吉吉知识产权运营有限公司 Dish and background segmentation method based on inertial measurement unit and visual information
CN110667474B (en) * 2018-07-02 2021-02-26 北京四维图新科技股份有限公司 General obstacle detection method and device and automatic driving system
WO2020006765A1 (en) * 2018-07-06 2020-01-09 深圳前海达闼云端智能科技有限公司 Ground detection method, related device, and computer readable storage medium
CN109141364B (en) * 2018-08-01 2020-11-03 北京进化者机器人科技有限公司 Obstacle detection method and system and robot
CN109060839B (en) * 2018-08-08 2021-01-29 广州爱迪绿色印刷科技有限公司 Three-dimensional detection method and device for paper surface morphology quality
CN109143215A (en) * 2018-08-28 2019-01-04 重庆邮电大学 It is a kind of that source of early warning and method are cooperateed with what V2X was communicated based on binocular vision
CN109190704A (en) * 2018-09-06 2019-01-11 中国科学院深圳先进技术研究院 The method and robot of detection of obstacles
CN110893617B (en) * 2018-09-13 2021-09-17 深圳市优必选科技有限公司 Obstacle detection method and device and storage device
CN109598896A (en) * 2018-11-16 2019-04-09 佛山市第人民医院(中山大学附属佛山医院) Sick bed collision early warning method, system and device
CN109522847A (en) * 2018-11-20 2019-03-26 中车株洲电力机车有限公司 A kind of track and road barricade object detecting method based on depth map
CN113227713A (en) * 2018-12-13 2021-08-06 大陆汽车有限责任公司 Method and system for generating environment model for positioning
CN111368607B (en) * 2018-12-26 2023-06-02 北京欣奕华科技有限公司 Robot, obstacle detection method and detection device
CN111374597B (en) * 2018-12-28 2021-08-24 珠海市一微半导体有限公司 Method and device for avoiding line of cleaning robot, storage medium and cleaning robot
CN109700465A (en) * 2019-01-07 2019-05-03 广东体达康医疗科技有限公司 A kind of mobile three-dimensional wound scanning device and its workflow
CN111735433B (en) * 2019-03-25 2022-05-20 杭州海康威视数字技术股份有限公司 Method and device for establishing two-dimensional map
CN109947108B (en) * 2019-04-01 2021-11-26 深圳市祺丰智能机器人科技有限公司 Method for predicting road condition in front of mobile robot
CN110109457A (en) * 2019-04-29 2019-08-09 北方民族大学 A kind of intelligent sound blind-guidance robot control method and control system
CN111742242A (en) * 2019-06-11 2020-10-02 深圳市大疆创新科技有限公司 Point cloud processing method, system, device and storage medium
CN110610501B (en) * 2019-09-18 2022-04-29 中国大恒(集团)有限公司北京图像视觉技术分公司 Point cloud segmentation method and device
CN110703753B (en) * 2019-10-16 2022-11-08 北京京东乾石科技有限公司 Path planning method and device, electronic equipment and storage medium
US10882468B1 (en) 2019-10-29 2021-01-05 Deere & Company Work vehicle composite panoramic vision systems
CN111047636B (en) * 2019-10-29 2024-04-09 轻客小觅机器人科技(成都)有限公司 Obstacle avoidance system and obstacle avoidance method based on active infrared binocular vision
CN110928301B (en) * 2019-11-19 2023-06-30 北京小米智能科技有限公司 Method, device and medium for detecting tiny obstacle
CN111160233B (en) * 2019-12-27 2023-04-18 中国科学院苏州纳米技术与纳米仿生研究所 Human face in-vivo detection method, medium and system based on three-dimensional imaging assistance
CN111260773B (en) * 2020-01-20 2023-10-13 深圳市普渡科技有限公司 Three-dimensional reconstruction method, detection method and detection system for small obstacle
CN111260715B (en) * 2020-01-20 2023-09-08 深圳市普渡科技有限公司 Depth map processing method, small obstacle detection method and system
CN111366917B (en) * 2020-03-13 2022-07-15 北京百度网讯科技有限公司 Method, device and equipment for detecting travelable area and computer readable storage medium
CN111667540B (en) * 2020-06-09 2023-04-18 中国电子科技集团公司第五十四研究所 Multi-camera system calibration method based on pedestrian head recognition
CN111890358B (en) * 2020-07-01 2022-06-14 浙江大华技术股份有限公司 Binocular obstacle avoidance method and device, storage medium and electronic device
CN111797929B (en) * 2020-07-07 2023-08-22 金陵科技学院 Binocular robot obstacle feature detection method based on CNN and PSO
CN111862234B (en) * 2020-07-22 2023-10-20 中国科学院上海微***与信息技术研究所 Binocular camera self-calibration method and system
CN112418103B (en) * 2020-11-24 2022-10-11 中国人民解放***箭军工程大学 Bridge crane hoisting safety anti-collision system and method based on dynamic binocular vision
CN112487916B (en) * 2020-11-25 2022-08-23 上海电力大学 Binocular three-dimensional identification system for electrical equipment
CN112884845B (en) * 2021-01-14 2023-03-31 山东浪潮科学研究院有限公司 Indoor robot obstacle positioning method based on single camera
CN112784799A (en) * 2021-02-01 2021-05-11 三一机器人科技有限公司 AGV (automatic guided vehicle) backward pallet and obstacle identification method and device and AGV
CN112818999B (en) * 2021-02-10 2022-10-28 桂林电子科技大学 Complex scene 3D point cloud semantic segmentation method based on convolutional neural network
CN113409242A (en) * 2021-02-23 2021-09-17 杭州哲域智能科技有限公司 Intelligent monitoring method for point cloud of rail intersection bow net
CN113343745B (en) * 2021-02-26 2024-06-18 北京中科慧眼科技有限公司 Remote target detection method and system based on binocular camera and intelligent terminal
CN113470049B (en) * 2021-07-06 2022-05-20 吉林省田车科技有限公司 Complete target extraction method based on structured color point cloud segmentation
CN113610741A (en) * 2021-08-18 2021-11-05 梅卡曼德(北京)机器人科技有限公司 Point cloud processing method and device based on laser line scanning
CN114418851A (en) * 2022-01-18 2022-04-29 长沙慧联智能科技有限公司 Multi-view 3D panoramic all-around viewing system and splicing method
CN114723830B (en) * 2022-03-21 2023-04-18 深圳市正浩创新科技股份有限公司 Obstacle recognition method, device and storage medium
CN114898072B (en) * 2022-06-14 2024-05-31 中国农业大学 Crop water stress detection method based on 3D temperature characteristics
CN116912403B (en) * 2023-07-03 2024-05-10 玩出梦想(上海)科技有限公司 XR equipment and obstacle information sensing method thereof

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101887589A (en) * 2010-06-13 2010-11-17 东南大学 Stereoscopic vision-based real low-texture image reconstruction method
CN101976455A (en) * 2010-10-08 2011-02-16 东南大学 Color image three-dimensional reconstruction method based on three-dimensional matching

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7587081B2 (en) * 2005-09-28 2009-09-08 Deere & Company Method for processing stereo vision data using image density


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Real-time Obstacle Detection in Complex Scenarios Using Dense"; Cosmin D. Pantilie et al.; 2010 13th International IEEE Annual Conference on Intelligent Transportation Systems, Madeira Island, Portugal, September 19-22, 2010; 2010-09-22; pp. 439-444 *
"A Survey of Obstacle Detection Methods Based on Binocular Stereo Vision"; Zeng Rijin, Yuan Hua; Popular Science & Technology; 2014-03-31; vol. 16, no. 175; pp. 37-38 *


Also Published As

Publication number Publication date
CN103955920A (en) 2014-07-30

Similar Documents

Publication Publication Date Title
CN103955920B (en) Binocular vision obstacle detection method based on three-dimensional point cloud segmentation
WO2021004548A1 (en) Vehicle speed intelligent measurement method based on binocular stereo vision system
WO2021004312A1 (en) Intelligent vehicle trajectory measurement method based on binocular stereo vision system
CN106681353B (en) The unmanned plane barrier-avoiding method and system merged based on binocular vision with light stream
CN111260597B (en) Parallax image fusion method of multiband stereo camera
Broggi et al. The single frame stereo vision system for reliable obstacle detection used during the 2005 DARPA grand challenge on TerraMax
CN106228538B (en) Binocular vision indoor orientation method based on logo
CN106204595B (en) A kind of airdrome scene three-dimensional panorama monitoring method based on binocular camera
CN103971404B (en) 3D real-scene copying device having high cost performance
CN103868460B (en) Binocular stereo vision method for automatic measurement based on parallax optimized algorithm
CN104680496B (en) A kind of Kinect depth map restorative procedures based on color images
US8855442B2 (en) Image registration of multimodal data using 3D-GeoArcs
CN106920276B (en) A kind of three-dimensional rebuilding method and system
CN107945220A (en) A kind of method for reconstructing based on binocular vision
Li et al. Easy calibration of a blind-spot-free fisheye camera system using a scene of a parking space
CN109143215A (en) It is a kind of that source of early warning and method are cooperateed with what V2X was communicated based on binocular vision
CN104036488A (en) Binocular vision-based human body posture and action research method
CN102831601A (en) Three-dimensional matching method based on union similarity measure and self-adaptive support weighting
CN105389543A (en) Mobile robot obstacle avoidance device based on all-dimensional binocular vision depth information fusion
CN106485207A (en) A kind of Fingertip Detection based on binocular vision image and system
CN113050074B (en) Camera and laser radar calibration system and calibration method in unmanned environment perception
CN113643345A (en) Multi-view road intelligent identification method based on double-light fusion
CN104182968A (en) Method for segmenting fuzzy moving targets by wide-baseline multi-array optical detection system
CN108195359A (en) The acquisition method and system of spatial data
CN108090930A (en) Barrier vision detection system and method based on binocular solid camera

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170412

Termination date: 20190414