CN105550665A - Method for detecting pilotless automobile through area based on binocular vision - Google Patents
Method for detecting pilotless automobile through area based on binocular vision
- Publication number
- CN105550665A (application CN201610027922.0A)
- Authority
- CN
- China
- Prior art keywords
- disparity map
- obtains
- dense
- image
- barrier
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
- G01C11/02—Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
- G06T2207/20032—Median filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
- G06T2207/30256—Lane; Road marking
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Remote Sensing (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Traffic Control Systems (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention provides a binocular-vision-based method for detecting the traversable (through) area of a pilotless automobile. The method comprises the following steps: obtaining the left and right images of the scene ahead of the vehicle, captured by a vehicle-mounted binocular camera installed on the pilotless automobile, as the original recognition images; pre-processing the left and right images to obtain a processed dense disparity map; computing the corresponding U-disparity map from the dense disparity map; deriving an obstacle map from the U-disparity map; using the obstacle map to remove obstacles from the dense disparity map, obtaining a disparity map with most obstacles removed; computing a normalized V-disparity map from that disparity map; determining the upper edge of the road area from the V-disparity map; removing the part of the dense disparity map above the upper edge of the road area, obtaining a disparity map with the non-road area removed; performing a further round of obstacle removal on that disparity map; and finally applying gray-level inversion to the resulting obstacle map to obtain the traversable-area map.
Description
Technical field
The invention belongs to the technical field of image processing, and specifically relates to a binocular-vision-based method for detecting the traversable area of a pilotless automobile.
Background technology
With the development of society, the automobile has become an irreplaceable means of transport in daily life. However, it also brings increasingly prominent safety problems. The rapid development of vehicle intelligence technology provides a powerful means of addressing this problem. In recent years, major automobile manufacturers around the world have actively joined the industrial revolution of vehicle intelligence, so that the driverless automobile is no longer a mere concept; many mature intelligent technologies have been applied in the automotive industry and have achieved significant economic and social benefits. Meanwhile, research on unmanned ground vehicles in fields such as defense and security has also made breakthroughs, and devices such as unmanned bomb-disposal robots play a major role in areas such as public security and national safety.
In current driverless-vehicle technology, obstacle detection mainly relies on active sensors such as lidar, millimeter-wave radar or ultrasonic detectors. Such sensors are usually expensive, consume considerable power, and easily interfere with one another. The mature driverless technologies currently used in the automotive industry mainly include adaptive cruise control, lane keeping, and autonomous parking; these systems generally operate by combining radar and image information. The usual approach relies on radar to detect obstacles and cameras to detect lane lines or other road information, and then fuses the two. If all functions could be realized with cameras alone, equipment cost and sensor power consumption would both be reduced, extending the service life of the system. Therefore, relying solely on a binocular camera to perform tasks such as lane detection, obstacle detection and driving recording has broad market prospects.
Obstacle detection is a key technology in driverless systems and driver-assistance systems, and plays a major role in safe passage and comfortable driving. Accurate and efficient obstacle detection and traversable-area extraction have a vital impact on driverless automobiles and driver-assistance systems. Image-based traversable-area detection methods mainly fall into three categories: those based on texture features, on color features, and on depth features. At present, some researchers use a monocular camera to obtain color images and identify the traversable area through texture or color segmentation, but the results are often unsatisfactory and strongly affected by the environment; such methods are mainly applicable to well-regulated urban traffic and are unsuitable for unstructured environments. Depth-based methods mainly obtain an environment disparity map with a binocular or multi-camera rig, derive dense or sparse depth information from it, and estimate a ground model. These methods suit complex road environments, but their computational cost is often large and their real-time performance poor, so they cannot readily be applied to unmanned ground vehicles. Detecting the traversable area efficiently and in real time from image information alone is therefore a considerable challenge, and at the same time of high practical value for the development of unmanned ground vehicles and driver-assistance systems.
Summary of the invention
To solve the above problems, the invention provides a binocular-vision-based traversable-area detection method for pilotless automobiles. The method is highly adaptable, operates stably under complex weather conditions such as rain and snow and in diverse road environments such as cities and open country, and offers good real-time performance, so it can be widely applied in unmanned ground vehicles and driver-assistance systems.
The technical scheme of the present invention is realized as follows:
A binocular-vision-based traversable-area detection method for a pilotless automobile, comprising:
Step 1: obtain the left and right images of the scene ahead of the vehicle, captured by the vehicle-mounted binocular camera installed on the pilotless automobile, as the original recognition images;
Step 2: pre-process the left and right images: first enhance their color using a color-constancy method, then convert the color-enhanced images to grayscale; next perform binocular stereo matching on the grayscale images with the SGM method to obtain a dense disparity map with a disparity range of 0-128; finally apply median filtering, dilation and erosion to the dense disparity map to obtain the processed dense disparity map;
Step 3: for the processed dense disparity map obtained in Step 2, perform gray-level statistics over all pixels of each column to obtain the corresponding U-disparity map, and normalize its value range to 0-255 to obtain the normalized U-disparity map;
Step 4: apply the Canny operator to the normalized U-disparity map obtained in Step 3 to extract its edges, obtaining a binarized border U-disparity map; traverse the border U-disparity map, and whenever the value of pixel (i, j) is non-zero, traverse all pixels in column j of the dense disparity map, find the pixels whose value equals that of pixel (i, j) or differs from it by less than a set threshold δ1, and mark them as obstacle pixels, thereby obtaining a rough obstacle map; then apply median filtering, dilation and erosion to this obstacle map to obtain the processed obstacle map;
Step 5: using the processed obstacle map obtained in Step 4, remove obstacles from the dense disparity map processed in Step 2, obtaining a disparity map with most obstacles removed (possibly still containing a few);
Step 6: perform gray-level statistics (over the disparity range 0-128) on all pixels of each row of the obstacle-removed disparity map obtained in Step 5 to obtain the corresponding V-disparity map, and normalize its value range to 0-255 to obtain the normalized V-disparity map;
Step 7: apply the Canny operator to the normalized V-disparity map obtained in Step 6 to extract its edges, obtaining a binarized border V-disparity map; use the Hough transform to detect line segments in the border V-disparity map, select from them the segment corresponding to the road area in the processed dense disparity map of Step 2, and take the horizontal line through its intersection with the vertical axis of the border V-disparity map as the upper edge of the road area;
Step 8: according to the road-area upper edge obtained in Step 7, remove the part above the upper edge from the processed dense disparity map of Step 2, obtaining a disparity map with the non-road area removed;
Step 9: for the disparity map with the non-road area removed, obtain the final obstacle map in the manner of Steps 3-4, except that when carrying out the procedure of Step 3 only the pixels below the road-area upper edge (those not removed) are counted, and when carrying out the procedure of Step 4 the whole region above the road-area upper edge (the edge obtained in Step 7) in the resulting obstacle map is marked as obstacle;
Step 10: apply gray-level inversion to the obstacle map obtained in Step 9 to obtain the traversable-area map.
The present invention further comprises the following steps after Step 10:
Step 11: perform outermost-contour detection on the traversable-area map obtained in Step 10 to obtain all outermost contours, each contour being a potential traversable area;
Step 12: screen the contours obtained in Step 11 and select the one closest to the vehicle's front wheels and of largest area as the final traversable area.
Beneficial effect
First, by obtaining a dense disparity map and using the statistical information in it directly to recognize obstacles and the traversable area, the present invention improves processing speed while preserving recognition accuracy.
Because the present invention serves an unmanned-ground-vehicle platform, a high recognition rate is required. Typically, when the unmanned platform travels forward at 40 km/h, traversable-area recognition is required to run at a frame rate of about 8 Hz. On an Intel dual-core 2.6 GHz CPU, this method processes 5 to 7 frames per second, a clear speed advantage over other methods of similar function; if ported to an embedded hardware platform such as a DSP, its processing speed is sufficient to meet the requirements of an unmanned vehicle.
The recognition speed is high because this method identifies the traversable area by statistical analysis performed directly on the disparity map, rather than converting the dense disparity information into a 3D point cloud and then fitting a ground model to the 3D information as other methods do, which avoids a large amount of nonlinear computation. Moreover, because the present invention is based on statistical information, it has low sensitivity to noise and yields a stable traversable-area detection result.
Second, the present invention obtains the U-disparity map and V-disparity map by computing statistics over the disparity map, and uses the U-V disparity maps to map obstacle features and road features into linear features.
The U-disparity map accumulates gray-level statistics over each column of the image, while the V-disparity map accumulates them over each row. The disparity signature of an obstacle is that its values are concentrated along a column, and the taller the obstacle the more concentrated they are; the disparity signature of the road is that its values are concentrated along rows, decreasing gradually from the vehicle's hood into the distance while remaining highly concentrated. Consequently, horizontal lines in the U-disparity map correspond one-to-one to obstacles in the original disparity map, and the slanted line in the V-disparity map corresponds to the road area. The U-V disparity maps therefore allow obstacles and the road to be distinguished accurately and quickly.
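To make this mapping concrete, the following toy example (illustrative only; the array sizes and disparity values are invented and not taken from the invention) builds U- and V-disparity histograms for a small synthetic disparity map: the obstacle's constant disparity produces a horizontal run in the U-disparity map, while the road's depth gradient produces a slanted trace in the V-disparity map.

```python
import numpy as np

# Toy 6x6 dense disparity map: a vertical obstacle occupies columns 2-3
# with constant disparity 5; the road fills the rest, its disparity
# decreasing from the bottom row (near, 4) to the top row (far, 1).
disp = np.array([
    [1, 1, 5, 5, 1, 1],
    [2, 2, 5, 5, 2, 2],
    [2, 2, 5, 5, 2, 2],
    [3, 3, 5, 5, 3, 3],
    [3, 3, 5, 5, 3, 3],
    [4, 4, 5, 5, 4, 4],
])

max_d = 8  # assumed maximum disparity for the toy example

# U-disparity: for each column j, a histogram of disparities down that column.
u_disp = np.zeros((max_d, disp.shape[1]), dtype=np.int32)
for j in range(disp.shape[1]):
    for d in disp[:, j]:
        u_disp[d, j] += 1

# V-disparity: for each row i, a histogram of disparities along that row.
v_disp = np.zeros((disp.shape[0], max_d), dtype=np.int32)
for i in range(disp.shape[0]):
    for d in disp[i, :]:
        v_disp[i, d] += 1

# The obstacle appears as a bright horizontal run in row 5 of U-disparity:
print(u_disp[5])   # [0 0 6 6 0 0]
# The road traces a slanted line in V-disparity (dominant disparity per row):
print([int(np.argmax(v_disp[i])) for i in range(6)])   # [1, 2, 2, 3, 3, 4]
```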
Third, the present invention applies the Canny operator to detect the edges of the U-disparity and V-disparity maps and binarizes them, reducing the noise produced by non-obstacles in the U-disparity map and the noise produced by obstacles in the V-disparity map.
The traditional method of binarizing U-V disparity maps uses a fixed threshold set according to common obstacle heights and is not adaptive. The present invention first normalizes the U-V disparity maps to the range 0-255 and then binarizes them with the Canny operator, so that it adapts automatically to different environments and obstacles of different heights.
Fourth, the present invention first uses the U-disparity map to remove the interference of a large number of obstacles from the V-disparity map, improving the detection accuracy of the straight line onto which the road area maps in the V-disparity map; at the same time it detects the approximate extent of the road area from the V-disparity map and uses it to obtain a more accurate U-disparity map, and thus an accurate obstacle map.
Obstacles produce interference in the V-disparity map, making the road-mapping line more difficult to detect there. This method therefore first derives a rough obstacle map from the U-disparity map, uses it to remove the many interfering obstacles from the disparity map, and then computes the V-disparity map from the obstacle-removed disparity map to estimate the road area, thereby reducing the influence of obstacles on the V-disparity map. Conversely, because the invalid obstacles above the road's upper edge interfere in the U-disparity map with the valid obstacles on the road or at its boundary, the method uses the V-disparity map to estimate a rough road area and then accumulates statistics only within that estimated range, obtaining a refined U-disparity map and, in turn, a refined obstacle map.
Fifth, the present invention performs outermost-contour detection on the obtained traversable-area map, which efficiently rejects invalid traversable areas, selects the traversable area valid for the vehicle, and eliminates the influence of small obstacles inside the traversable area, further improving recognition stability.
The method makes full use of the general characteristics of traversable areas, skillfully using outermost-contour detection to eliminate noise such as invalid traversable areas and thereby obtain the traversable area on which the unmanned vehicle can actually drive. The contour processing further reduces edge noise of the traversable area and increases recognition stability. The traversable-area contour can be stored as a point set, which is convenient for further processing such as traversable-area projection (e.g. IPM image projection) and reduces the complexity of subsequent work.
Brief description of the drawings
Fig. 1 is the overall recognition flow of the binocular-vision-based traversable-area detection method for pilotless automobiles of the present invention;
Fig. 2 is the original recognition image used by the method;
Fig. 3 shows the original dense disparity map (bottom) and the first U-disparity map obtained from it (top);
Fig. 4 is the border U-disparity map obtained by the first round of U-disparity processing;
Fig. 5 shows the first rough obstacle map obtained from the original disparity map by means of the border U-disparity map;
Fig. 6 shows the dense disparity map after most obstacles have been removed (left) and the V-disparity map obtained from it (right);
Fig. 7 shows the border V-disparity map obtained from the V-disparity map (middle) and the road line detected in the V-disparity map and taken as the road-area upper edge (left);
Fig. 8 shows the U-disparity map and the border U-disparity map obtained by counting only the part of the dense disparity map below the road-area upper edge;
Fig. 9 shows the traversable-area map obtained from the second U-disparity map (left) and the result of contour detection on it (right);
Fig. 10 is the detection result of the traversable area.
Detailed description of the embodiments
The present invention is described in detail below with reference to the drawings and specific embodiments.
As shown in Fig. 1, the binocular-vision-based traversable-area detection method for pilotless automobiles of the present invention comprises the following steps:
Step 1: obtain the left and right images of the scene ahead of the vehicle, captured by the vehicle-mounted binocular camera installed on the pilotless automobile, as the original recognition images.
Step 2: pre-process the left and right images gathered in Step 1 and perform stereo matching to obtain the dense disparity map:
(201) pre-process the left and right images with a color-constancy method to make their color features more pronounced, then convert them to grayscale images;
(202) perform SGM binocular stereo matching on the images obtained in step (201), with the maximum disparity set to 128, to obtain the dense disparity map;
(203) apply median filtering to the dense disparity map obtained in step (202), followed by one dilation and one erosion, to obtain the processed disparity map.
Step 3: compute the U-disparity map from the processed disparity map obtained in Step 2:
(301) count all pixels of each column of the dense disparity map; since the disparity range was previously set to 0 to 128, the resulting U-disparity map has 128 rows and the same number of columns as the dense disparity map, and the value Val(i, j) of its pixel (i, j) is the number of pixels in column j of the dense disparity map whose disparity is i;
(302) normalize the U-disparity map obtained in step (301): subtract the minimum over all elements of the image, divide by the difference between the maximum and the minimum, and multiply by 255, scaling all element values to 0-255 and obtaining the normalized U-disparity map.
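Steps (301)-(302) can be sketched in NumPy as follows; the function names are illustrative, and the disparity map is assumed to be a non-negative integer array so a per-column histogram can be taken with `bincount`:

```python
import numpy as np

def u_disparity(disp, max_d=128):
    """U-disparity per step (301): Val(i, j) = number of pixels in column j
    of the dense disparity map whose disparity equals i; result has max_d rows."""
    h, w = disp.shape
    u = np.zeros((max_d, w), dtype=np.int32)
    for j in range(w):
        u[:, j] = np.bincount(disp[:, j], minlength=max_d)[:max_d]
    return u

def normalize_0_255(img):
    """Normalization per step (302): subtract the minimum, divide by
    (max - min), multiply by 255."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:                       # flat image: avoid division by zero
        return np.zeros(img.shape, dtype=np.uint8)
    return ((img - lo) / (hi - lo) * 255).astype(np.uint8)
```

The same `normalize_0_255` helper applies unchanged to the V-disparity normalization of step (602).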
Step 4: use the normalized U-disparity map obtained in Step 3 to derive a rough obstacle map, specifically:
(401) apply Canny edge detection to the normalized U-disparity map, obtaining the binarized border U-disparity map;
(402) traverse the border U-disparity map obtained in (401); whenever the value of pixel (i, j) is non-zero, traverse all pixels of column j of the dense disparity map and find those whose value equals that of pixel (i, j) or differs from it by less than a set threshold δ1; mark these pixels as obstacle pixels, thereby obtaining a rough obstacle map; in this embodiment the threshold δ1 is 7;
(403) apply median filtering to the obstacle map obtained in step (402), followed by one dilation and one erosion, to obtain the processed obstacle map.
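The pixel screening of step (402) can be sketched as below (Canny and the morphological post-processing of steps (401) and (403) are left to an image-processing library such as OpenCV); `screen_obstacles` is an illustrative name, and the default δ1 = 7 follows this embodiment:

```python
import numpy as np

def screen_obstacles(disp, border_u, delta=7):
    """Step (402): for every non-zero pixel (i, j) of the border U-disparity
    map, mark the pixels of column j of the dense disparity map whose
    disparity equals i or differs from it by less than delta as obstacles."""
    obst = np.zeros(disp.shape, dtype=np.uint8)
    for d, j in zip(*np.nonzero(border_u)):   # d = disparity row, j = column
        hit = np.abs(disp[:, j].astype(np.int32) - int(d)) < delta
        obst[hit, j] = 255
    return obst
```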
Step 5: use the processed obstacle map obtained in Step 4 to remove obstacles from the processed disparity map obtained in Step 2: traverse the obstacle map, and for every obstacle pixel set the disparity at the same position in the disparity map to 0, thereby obtaining a disparity map with most obstacles removed;
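The rejection in Step 5 amounts to zeroing the disparity wherever the obstacle map is set; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def reject_obstacles(disp, obst):
    """Step 5: set the disparity to 0 at every position the obstacle map flags."""
    out = disp.copy()
    out[obst > 0] = 0
    return out
```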
Step 6: compute the V-disparity map from the obstacle-removed disparity map obtained in Step 5:
(601) count all pixels of each row of this disparity map; since the disparity range was previously set to 0 to 128, the resulting V-disparity map has 128 columns and the same number of rows as the disparity map, and the value Val(i, j) of its pixel (i, j) is the number of pixels in row i of the disparity map whose disparity is j;
(602) normalize the V-disparity map obtained in step (601): subtract the minimum over all elements of the image, divide by the difference between the maximum and the minimum, and multiply by 255, scaling all element values to 0-255 and obtaining the normalized V-disparity map.
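Step (601) can be sketched analogously to the U-disparity computation; `v_disparity` is an illustrative name, and the disparity map is again assumed to be a non-negative integer array:

```python
import numpy as np

def v_disparity(disp, max_d=128):
    """V-disparity per step (601): Val(i, j) = number of pixels in row i of
    the disparity map whose disparity equals j; result has max_d columns."""
    h, w = disp.shape
    v = np.zeros((h, max_d), dtype=np.int32)
    for i in range(h):
        v[i] = np.bincount(disp[i], minlength=max_d)[:max_d]
    return v
```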
Step 7: use the normalized V-disparity map obtained in Step 6 to determine the road-area upper edge, specifically:
(701) apply the Canny operator to the normalized V-disparity map obtained in Step 6 to extract its edges, obtaining the binarized border V-disparity map;
(702) use the Hough transform to detect all line segments in the border V-disparity map; the slanted segments correspond to the road area in the disparity map;
(703) select from all detected segments those satisfying the following four conditions: (1) tilted to the left within a set angle; (2) no farther than a set threshold from the lower-left corner of the V-disparity map; (3) the distance between the segment's lower end and the bottom of the V-disparity map does not exceed a set value; (4) confidence above a given threshold; if several segments qualify, all are retained;
(704) for the line obtained in step (703), take the horizontal line through its intersection with the vertical axis as the upper edge of the road area; if step (703) yields several results, take the horizontal line through the average of their vertical-axis intersections as the upper edge of the road area.
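The intercept averaging of step (704) can be sketched as follows; the segments are assumed to arrive as (x1, y1, x2, y2) tuples, e.g. from an OpenCV `HoughLinesP` call already filtered by the conditions of step (703), with x along the disparity axis and y along the image-row axis:

```python
def road_upper_edge(segments):
    """Extend each qualifying V-disparity segment to the vertical axis
    (x = 0) and average the resulting rows; that row is taken as the
    road area's upper edge."""
    intercepts = []
    for x1, y1, x2, y2 in segments:
        if x1 == x2:               # vertical segment: no axis intercept
            continue
        slope = (y2 - y1) / (x2 - x1)
        intercepts.append(y1 - slope * x1)   # y at x = 0
    return sum(intercepts) / len(intercepts)
```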
Step 8: according to the road-area upper edge obtained in Step 7, remove the part above the upper edge from the processed dense disparity map obtained in Step 2, obtaining the disparity map with the non-road area removed;
Step 9: for the disparity map with the non-road area removed, obtain the final obstacle map following Steps 3-4 with more accurate parameters, specifically:
(901) perform gray-level statistics on all pixels below the road-area upper edge in each column of the disparity map with the non-road area removed, obtaining a U-disparity map of the same form as in step (301) of Step 3;
(902) normalize the U-disparity map obtained in step (901): subtract the minimum over all elements of the image, divide by the difference between the maximum and the minimum, and multiply by 255, scaling all element values to 0-255 and obtaining the normalized U-disparity map;
(903) apply Canny edge detection to the normalized U-disparity map obtained in step (902), obtaining the binarized border U-disparity map;
(904) traverse the border U-disparity map obtained in (903); whenever the value of pixel (i, j) is non-zero, traverse all pixels of column j of the disparity map with the non-road area removed and find those whose value equals that of pixel (i, j) or differs from it by less than a set threshold δ2; note that the threshold here is chosen more strictly than in step (402), δ2 = 3 in this embodiment; mark those pixels as obstacle pixels, thereby obtaining an obstacle map more accurate than that of (402);
(905) apply median filtering to the obstacle map obtained in step (904), followed by one dilation and one erosion, obtaining the processed obstacle map;
(906) in the obstacle map obtained in step (905), set all pixels above the road-area upper edge (the edge obtained in Step 7) to obstacle pixels, thereby obtaining the final obstacle map.
Step 10: apply gray-level inversion to the obstacle map obtained in Step 9 to obtain the traversable-area map;
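The gray-level inversion of Step 10 is a single arithmetic operation on the 8-bit obstacle map; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def traversable_map(obst):
    """Step 10: invert the 8-bit obstacle map so obstacles (255) become 0
    and free road becomes 255."""
    return (255 - obst.astype(np.int32)).astype(np.uint8)
```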
Step 11: perform outermost-contour detection on the traversable-area map obtained in Step 10 to obtain all outermost contours, each contour being a potential traversable area;
Step 12: screen the contours obtained in Step 11 and select the one closest to the vehicle's front wheels and of largest area as the final traversable area.
Embodiment one
The concrete steps of traversable-area detection for a pilotless automobile using this method are as follows:
Step 1: a Point Grey Bumblebee XB3 trinocular stereo camera is mounted at the front of the vehicle. In this embodiment the left and right images obtained by the vehicle-mounted camera are set to 800 × 600 pixels in RGB color mode. The leftmost and rightmost lenses of the camera are used to obtain the binocular image; the camera baseline is 0.239985 m, the focal length is 1002.912048 pixels, and the transmission frame rate is 15 Hz. The captured original image is shown in Fig. 2;
Step 2: pre-process the left and right images and perform SGM stereo matching; after obtaining the dense disparity map, apply median filtering with a 3-pixel window, dilation with a 3-pixel parameter, and erosion with a 3-pixel parameter; the result is shown in the lower part of Fig. 3;
Step 3: compute the first U-disparity map from the processed dense disparity map obtained in Step 2; the result, 800 × 128 pixels, is shown in the upper part of Fig. 3;
Step 4: apply Canny edge extraction to the normalized U-disparity map obtained in Step 3 to obtain the binarized border U-disparity map, shown in the lower part of Fig. 4; then use this binarized border U-disparity map to screen obstacle pixels in the dense disparity map, obtaining the first obstacle map, shown in the right part of Fig. 5;
Step 5: use the obstacle map obtained in Step 4 to remove obstacles from the processed dense disparity map obtained in Step 2, obtaining a disparity map with most obstacles removed (possibly still containing a few); the result is shown in the left part of Fig. 6;
Step 6: compute the V-disparity map from the obstacle-removed disparity map and normalize it; the result is shown in the right part of Fig. 6;
Step 7: apply the Canny operator to the normalized V-disparity map obtained in Step 6 to extract its edges and obtain the binarized border V-disparity map; then use the Hough transform to find its line segments, filter out those satisfying the conditions, compute their intercepts with the vertical axis of the V-disparity map, and average them to obtain the road-area upper edge, as shown in Fig. 7: the middle image shows the Hough line-detection result with the horizontal line marking the road-area upper edge, the right image is the normalized V-disparity map, and the left image shows the edge mapped back into the original disparity map;
Step 8: according to the upper edge obtained in Step 7, remove the part above the upper edge from the dense disparity map, and then process the trimmed disparity map in the manner of Steps 3 and 4; in the present repetition of Step 4 the obstacle-screening threshold is chosen as 2 (it was 6 the first time), i.e. a pixel of the dense disparity map corresponding to a white pixel of the binarized border U-disparity map is treated as an obstacle pixel when its disparity differs from that of the white pixel by no more than 2; the resulting obstacle map is then median-filtered, dilated and eroded with the same parameters as in Step 2, and finally gray-inverted to obtain the traversable-area map. The second U-disparity map obtained in this step is shown in the middle of Fig. 8, the second border U-disparity map in the upper part of Fig. 8, and the traversable-area map finally obtained in this step in the left part of Fig. 9;
Step 9: outermost-contour detection is carried out on the traversable-area map obtained in step 8 to obtain all outermost contours, each contour being one potential traversable area; the contour-detection result is shown in the right-hand view of Fig. 9;
Step 10: the contours obtained in step 9 are screened, and the contour that is nearest the vehicle front wheel and has the largest area is selected as the final traversable area. The final detection result is shown in Figure 10: the left-hand view is the effect after a mask is added over the obstacles, and the right-hand view is the traversable-area effect.
Certainly, the present invention may also have various other embodiments. Without departing from the spirit and essence of the present invention, those of ordinary skill in the art may make various corresponding changes and variations according to the present invention, and all such corresponding changes and variations shall fall within the protection scope of the claims appended to the present invention.
Claims (4)
1. A traversable-area detection method for a pilotless automobile based on binocular vision, characterized in that it comprises:
Step 1, acquiring the left and right images of the scene in front of the vehicle, collected by a vehicle-mounted binocular camera arranged on the driverless car, as the original images for recognition;
Step 2, preprocessing the left and right images to obtain a processed dense disparity map;
Step 3, performing gray-scale statistics on all pixels of each column of said dense disparity map to obtain a corresponding U-disparity map, and normalizing the disparity range of this U-disparity map to between 0 and 255 to obtain a normalized U-disparity map;
Step 4, using a Canny operator to extract boundaries from the normalized U-disparity map obtained in step 3 to obtain a binarized boundary U-disparity map; traversing the boundary U-disparity map, and if the pixel value of pixel (i, j) is not 0, traversing all pixels of column j of said dense disparity map to find the pixels whose disparity equals i or differs from i by less than a set threshold δ1, and marking them as obstacle pixels, thereby obtaining a rough obstacle map; then applying median filtering, dilation and erosion to this obstacle map to obtain a processed obstacle map;
Step 5, using the processed obstacle map obtained in step 4 to carry out obstacle rejection on the dense disparity map processed in step 2, obtaining a disparity map with most obstacles rejected;
Step 6, performing gray-scale statistics on all pixels of each row of the obstacle-rejected disparity map obtained in step 5 to obtain a corresponding V-disparity map, and normalizing the disparity range of this V-disparity map to between 0 and 255 to obtain a normalized V-disparity map;
Step 7, using a Canny operator to extract boundaries from the normalized V-disparity map obtained in step 6 to obtain a binarized boundary V-disparity map; detecting line segments in said boundary V-disparity map by a Hough transform, selecting from the detected segments those corresponding to the road area in the processed dense disparity map of step 2, and taking the horizontal line through their intersection with the vertical axis of the boundary V-disparity map as the upper edge of the road area;
Step 8, according to the road-area upper edge obtained in step 7, rejecting the part above the upper edge from the processed dense disparity map obtained in step 2 to obtain a disparity map with the non-road region rejected;
Step 9, for the disparity map with the non-road region rejected, obtaining the final obstacle map in the manner of steps 3-4, wherein when step 4 is carried out in this pass, the set threshold is δ2, and δ1 > δ2;
Step 10, applying gray-scale inversion to the obstacle map obtained in step 9 to obtain the traversable-area map.
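By way of a sketch of claim 1, step 3, the column-wise gray-scale statistics amount to building a per-column disparity histogram and scaling it to the 0-255 range:

```python
import numpy as np

def u_disparity(disp, d_max=128):
    """U-disparity map: element (d, u) counts how many pixels in column u
    of the dense disparity map have disparity value d; the result is then
    normalized to 0-255 as required by step 3."""
    h, w = disp.shape
    u_disp = np.zeros((d_max, w), dtype=np.int32)
    for u in range(w):
        counts = np.bincount(disp[:, u].astype(np.int64), minlength=d_max)
        u_disp[:, u] = counts[:d_max]
    return (255.0 * u_disp / max(u_disp.max(), 1)).astype(np.uint8)
```

The V-disparity map of step 6 is the row-wise analogue: a per-row histogram of disparity values.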
2. The traversable-area detection method for a pilotless automobile based on binocular vision according to claim 1, characterized in that the process of said step 2 is: first performing color enhancement on the left and right images by a color-constancy method; next converting the color-enhanced images to gray-scale images; then performing binocular stereo matching on the gray-scale images by the SGM method to obtain a dense disparity map with a disparity range of 0-128; and finally applying median filtering, dilation and erosion to said dense disparity map to obtain the processed dense disparity map.
3. The traversable-area detection method for a pilotless automobile based on binocular vision according to claim 1, characterized in that the process of said step 7 is:
(701) using a Canny operator to extract boundaries from the normalized V-disparity map obtained in step 6 to obtain a binarized boundary V-disparity map;
(702) detecting all line segments in said boundary V-disparity map by a Hough transform;
(703) selecting, from all the detected segments, those satisfying the following four conditions: (1) tilted to the left by a set angle; (2) near the lower-left corner of the boundary V-disparity map; (3) the lower edge of the segment close to the bottom of the boundary V-disparity map; and (4) a confidence higher than a set threshold;
(704) for the segment detected in step (703), taking the horizontal line through its intersection with the vertical axis of the V-disparity map as the upper edge of the road area; if a plurality of segments are detected in step (703), taking the horizontal line through the average of their intersections with the vertical axis of the V-disparity map as the upper edge of the road area.
4. The traversable-area detection method for a pilotless automobile based on binocular vision according to claim 1, characterized in that the method further comprises the steps of:
Step 11, performing outermost-contour detection on the traversable-area map obtained in step 10 to obtain all outermost contours, each contour being one potential traversable area;
Step 12, screening the contours obtained in step 11, and selecting the contour that is nearest the vehicle front wheel and has the largest area as the final traversable area.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610027922.0A CN105550665B (en) | 2016-01-15 | 2016-01-15 | A kind of pilotless automobile based on binocular vision can lead to method for detecting area |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105550665A true CN105550665A (en) | 2016-05-04 |
CN105550665B CN105550665B (en) | 2019-01-25 |
Family
ID=55829848
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610027922.0A Active CN105550665B (en) | 2016-01-15 | 2016-01-15 | A kind of pilotless automobile based on binocular vision can lead to method for detecting area |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105550665B (en) |
Cited By (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105955303A (en) * | 2016-07-05 | 2016-09-21 | 北京奇虎科技有限公司 | UAV autonomous obstacle avoidance method and device |
CN106228134A (en) * | 2016-07-21 | 2016-12-14 | 北京奇虎科技有限公司 | Drivable region detection method based on pavement image, Apparatus and system |
CN106228138A (en) * | 2016-07-26 | 2016-12-14 | 国网重庆市电力公司电力科学研究院 | A kind of Road Detection algorithm of integration region and marginal information |
CN106295607A (en) * | 2016-08-19 | 2017-01-04 | 北京奇虎科技有限公司 | Roads recognition method and device |
CN106446785A (en) * | 2016-08-30 | 2017-02-22 | 电子科技大学 | Passable road detection method based on binocular vision |
CN106651836A (en) * | 2016-11-04 | 2017-05-10 | 中国科学院上海微系统与信息技术研究所 | Ground level detection method based on binocular vision |
CN106651785A (en) * | 2016-10-13 | 2017-05-10 | 西北工业大学 | Color constancy method based on color edge moments and anchoring neighborhood regularized regression |
CN107506711A (en) * | 2017-08-15 | 2017-12-22 | 江苏科技大学 | Binocular vision obstacle detection system and method based on convolutional neural networks |
CN107517592A (en) * | 2016-09-28 | 2017-12-26 | 驭势科技(北京)有限公司 | Automobile wheeled region real-time detection method and system |
CN107564053A (en) * | 2017-08-28 | 2018-01-09 | 海信集团有限公司 | A kind of parallax value correction method and device being applied under road scene |
CN107729856A (en) * | 2017-10-26 | 2018-02-23 | 海信集团有限公司 | A kind of obstacle detection method and device |
CN107909010A (en) * | 2017-10-27 | 2018-04-13 | 北京中科慧眼科技有限公司 | A kind of road barricade object detecting method and device |
CN108009511A (en) * | 2017-12-14 | 2018-05-08 | 元橡科技(北京)有限公司 | Method for detecting area and device are exercised based on RGBD |
CN108243623A (en) * | 2016-09-28 | 2018-07-03 | 驭势科技(北京)有限公司 | Vehicle anticollision method for early warning and system based on binocular stereo vision |
CN108573215A (en) * | 2018-03-16 | 2018-09-25 | 海信集团有限公司 | Reflective road method for detecting area, device and terminal |
CN108596899A (en) * | 2018-04-27 | 2018-09-28 | 海信集团有限公司 | Road flatness detection method, device and equipment |
CN108875640A (en) * | 2018-06-20 | 2018-11-23 | 长安大学 | A kind of end-to-end unsupervised scene can traffic areas cognitive ability test method |
CN109508673A (en) * | 2018-11-13 | 2019-03-22 | 大连理工大学 | It is a kind of based on the traffic scene obstacle detection of rodlike pixel and recognition methods |
CN109741306A (en) * | 2018-12-26 | 2019-05-10 | 北京石油化工学院 | Image processing method applied to hazardous chemical storehouse stacking |
CN110197173A (en) * | 2019-06-13 | 2019-09-03 | 重庆邮电大学 | A kind of curb detection method based on binocular vision |
CN110633600A (en) * | 2018-06-21 | 2019-12-31 | 海信集团有限公司 | Obstacle detection method and device |
CN110807347A (en) * | 2018-08-06 | 2020-02-18 | 海信集团有限公司 | Obstacle detection method and device and terminal |
CN111243003A (en) * | 2018-11-12 | 2020-06-05 | 海信集团有限公司 | Vehicle-mounted binocular camera and method and device for detecting road height limiting rod |
CN111951334A (en) * | 2020-08-04 | 2020-11-17 | 郑州轻工业大学 | Identification and positioning method and lifting method for stacking steel billets based on binocular vision technology |
CN112036227A (en) * | 2020-06-10 | 2020-12-04 | 北京中科慧眼科技有限公司 | Method and device for detecting road surface travelable area and automatic driving system |
WO2021159397A1 (en) * | 2020-02-13 | 2021-08-19 | 华为技术有限公司 | Vehicle travelable region detection method and detection device |
CN114119700A (en) * | 2021-11-26 | 2022-03-01 | 山东科技大学 | Obstacle ranging method based on U-V disparity map |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101852609A (en) * | 2010-06-02 | 2010-10-06 | 北京理工大学 | Ground obstacle detection method based on binocular stereo vision of robot |
CN102313536A (en) * | 2011-07-21 | 2012-01-11 | 清华大学 | Method for barrier perception based on airborne binocular vision |
CN103714532A (en) * | 2013-12-09 | 2014-04-09 | 广西科技大学 | Method for automatically detecting obstacles based on binocular vision |
CN104166834A (en) * | 2013-05-20 | 2014-11-26 | 株式会社理光 | Pavement detection method and pavement detection device |
2016
- 2016-01-15 CN CN201610027922.0A patent/CN105550665B/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101852609A (en) * | 2010-06-02 | 2010-10-06 | 北京理工大学 | Ground obstacle detection method based on binocular stereo vision of robot |
CN102313536A (en) * | 2011-07-21 | 2012-01-11 | 清华大学 | Method for barrier perception based on airborne binocular vision |
CN104166834A (en) * | 2013-05-20 | 2014-11-26 | 株式会社理光 | Pavement detection method and pavement detection device |
CN103714532A (en) * | 2013-12-09 | 2014-04-09 | 广西科技大学 | Method for automatically detecting obstacles based on binocular vision |
Non-Patent Citations (4)
Title |
---|
Liu Xiaoli, "Research on Dense Disparity Algorithms in Binocular Stereo Vision", Wanfang Data * |
Liu Xu, "Research on Obstacle Recognition Based on Binocular Stereo Vision", Wanfang Data * |
Zhu Xiaozhou, "Research on Vision-Based Traversable-Region Recognition for Mobile Robots in Outdoor Environments", China Master's Theses Full-text Database, Information Science and Technology * |
Chen Zhibo et al., "Edge Detection Algorithm Based on Binocular Vision", Optics & Optoelectronic Technology * |
Cited By (42)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105955303A (en) * | 2016-07-05 | 2016-09-21 | 北京奇虎科技有限公司 | UAV autonomous obstacle avoidance method and device |
CN106228134A (en) * | 2016-07-21 | 2016-12-14 | 北京奇虎科技有限公司 | Drivable region detection method based on pavement image, Apparatus and system |
CN106228138A (en) * | 2016-07-26 | 2016-12-14 | 国网重庆市电力公司电力科学研究院 | A kind of Road Detection algorithm of integration region and marginal information |
CN106295607A (en) * | 2016-08-19 | 2017-01-04 | 北京奇虎科技有限公司 | Roads recognition method and device |
CN106446785A (en) * | 2016-08-30 | 2017-02-22 | 电子科技大学 | Passable road detection method based on binocular vision |
CN108243623A (en) * | 2016-09-28 | 2018-07-03 | 驭势科技(北京)有限公司 | Vehicle anticollision method for early warning and system based on binocular stereo vision |
CN107517592B (en) * | 2016-09-28 | 2021-07-02 | 驭势科技(北京)有限公司 | Real-time detection method and system for automobile driving area |
CN108243623B (en) * | 2016-09-28 | 2022-06-03 | 驭势科技(北京)有限公司 | Automobile anti-collision early warning method and system based on binocular stereo vision |
CN107517592A (en) * | 2016-09-28 | 2017-12-26 | 驭势科技(北京)有限公司 | Automobile wheeled region real-time detection method and system |
CN106651785A (en) * | 2016-10-13 | 2017-05-10 | 西北工业大学 | Color constancy method based on color edge moments and anchoring neighborhood regularized regression |
CN106651836B (en) * | 2016-11-04 | 2019-10-22 | 中国科学院上海微系统与信息技术研究所 | A kind of ground level detection method based on binocular vision |
CN106651836A (en) * | 2016-11-04 | 2017-05-10 | 中国科学院上海微系统与信息技术研究所 | Ground level detection method based on binocular vision |
CN107506711A (en) * | 2017-08-15 | 2017-12-22 | 江苏科技大学 | Binocular vision obstacle detection system and method based on convolutional neural networks |
CN107564053A (en) * | 2017-08-28 | 2018-01-09 | 海信集团有限公司 | A kind of parallax value correction method and device being applied under road scene |
CN107564053B (en) * | 2017-08-28 | 2020-06-09 | 海信集团有限公司 | Parallax value correction method and device applied to road scene |
CN107729856A (en) * | 2017-10-26 | 2018-02-23 | 海信集团有限公司 | A kind of obstacle detection method and device |
CN107909010A (en) * | 2017-10-27 | 2018-04-13 | 北京中科慧眼科技有限公司 | A kind of road barricade object detecting method and device |
CN107909010B (en) * | 2017-10-27 | 2022-03-18 | 北京中科慧眼科技有限公司 | Road obstacle detection method and device |
CN108009511A (en) * | 2017-12-14 | 2018-05-08 | 元橡科技(北京)有限公司 | Method for detecting area and device are exercised based on RGBD |
CN108573215A (en) * | 2018-03-16 | 2018-09-25 | 海信集团有限公司 | Reflective road method for detecting area, device and terminal |
CN108573215B (en) * | 2018-03-16 | 2021-08-03 | 海信集团有限公司 | Road reflective area detection method and device and terminal |
CN108596899A (en) * | 2018-04-27 | 2018-09-28 | 海信集团有限公司 | Road flatness detection method, device and equipment |
CN108875640B (en) * | 2018-06-20 | 2022-04-05 | 长安大学 | Method for testing cognitive ability of passable area in end-to-end unsupervised scene |
CN108875640A (en) * | 2018-06-20 | 2018-11-23 | 长安大学 | A kind of end-to-end unsupervised scene can traffic areas cognitive ability test method |
CN110633600B (en) * | 2018-06-21 | 2023-04-25 | 海信集团有限公司 | Obstacle detection method and device |
CN110633600A (en) * | 2018-06-21 | 2019-12-31 | 海信集团有限公司 | Obstacle detection method and device |
CN110807347A (en) * | 2018-08-06 | 2020-02-18 | 海信集团有限公司 | Obstacle detection method and device and terminal |
CN110807347B (en) * | 2018-08-06 | 2023-07-25 | 海信集团有限公司 | Obstacle detection method, obstacle detection device and terminal |
CN111243003A (en) * | 2018-11-12 | 2020-06-05 | 海信集团有限公司 | Vehicle-mounted binocular camera and method and device for detecting road height limiting rod |
CN109508673A (en) * | 2018-11-13 | 2019-03-22 | 大连理工大学 | It is a kind of based on the traffic scene obstacle detection of rodlike pixel and recognition methods |
CN109741306B (en) * | 2018-12-26 | 2021-07-06 | 北京石油化工学院 | Image processing method applied to dangerous chemical storehouse stacking |
CN109741306A (en) * | 2018-12-26 | 2019-05-10 | 北京石油化工学院 | Image processing method applied to hazardous chemical storehouse stacking |
CN110197173B (en) * | 2019-06-13 | 2022-09-23 | 重庆邮电大学 | Road edge detection method based on binocular vision |
CN110197173A (en) * | 2019-06-13 | 2019-09-03 | 重庆邮电大学 | A kind of curb detection method based on binocular vision |
CN114981138A (en) * | 2020-02-13 | 2022-08-30 | 华为技术有限公司 | Method and device for detecting vehicle travelable region |
WO2021159397A1 (en) * | 2020-02-13 | 2021-08-19 | 华为技术有限公司 | Vehicle travelable region detection method and detection device |
CN112036227A (en) * | 2020-06-10 | 2020-12-04 | 北京中科慧眼科技有限公司 | Method and device for detecting road surface travelable area and automatic driving system |
CN112036227B (en) * | 2020-06-10 | 2024-01-16 | 北京中科慧眼科技有限公司 | Road surface drivable area detection method, device and automatic driving system |
CN111951334A (en) * | 2020-08-04 | 2020-11-17 | 郑州轻工业大学 | Identification and positioning method and lifting method for stacking steel billets based on binocular vision technology |
CN111951334B (en) * | 2020-08-04 | 2023-11-21 | 郑州轻工业大学 | Identification and positioning method and lifting method for stacked billets based on binocular vision technology |
CN114119700A (en) * | 2021-11-26 | 2022-03-01 | 山东科技大学 | Obstacle ranging method based on U-V disparity map |
CN114119700B (en) * | 2021-11-26 | 2024-03-29 | 山东科技大学 | Obstacle ranging method based on U-V disparity map |
Also Published As
Publication number | Publication date |
---|---|
CN105550665B (en) | 2019-01-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105550665A (en) | Method for detecting pilotless automobile through area based on binocular vision | |
CN105206109B (en) | A kind of vehicle greasy weather identification early warning system and method based on infrared CCD | |
CN108960183B (en) | Curve target identification system and method based on multi-sensor fusion | |
CN107392103B (en) | Method and device for detecting road lane line and electronic equipment | |
CN107341454B (en) | Method and device for detecting obstacles in scene and electronic equipment | |
CN111563412B (en) | Rapid lane line detection method based on parameter space voting and Bessel fitting | |
CN105184852B (en) | A kind of urban road recognition methods and device based on laser point cloud | |
WO2018058356A1 (en) | Method and system for vehicle anti-collision pre-warning based on binocular stereo vision | |
CN103123722B (en) | Road object detection method and system | |
CN102270301B (en) | Method for detecting unstructured road boundary by combining support vector machine (SVM) and laser radar | |
CN109829403A (en) | A kind of vehicle collision avoidance method for early warning and system based on deep learning | |
CN107389084B (en) | Driving path planning method and storage medium | |
CN107590470B (en) | Lane line detection method and device | |
CN104902261B (en) | Apparatus and method for the road surface identification in low definition video flowing | |
CN104916163A (en) | Parking space detection method | |
CN107886034B (en) | Driving reminding method and device and vehicle | |
CN110197173B (en) | Road edge detection method based on binocular vision | |
CN104915642B (en) | Front vehicles distance measuring method and device | |
CN112115889B (en) | Intelligent vehicle moving target detection method based on vision | |
CN104700072A (en) | Lane line historical frame recognition method | |
Pantilie et al. | Real-time obstacle detection using dense stereo vision and dense optical flow | |
CN109635737A (en) | Automobile navigation localization method is assisted based on pavement marker line visual identity | |
Sun | Vision based lane detection for self-driving car | |
CN107220632B (en) | Road surface image segmentation method based on normal characteristic | |
CN103679121A (en) | Method and system for detecting roadside using visual difference image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |