CN105550665B - Binocular-vision-based traversable region detection method for driverless vehicles - Google Patents
- Publication number
- CN105550665B CN105550665B CN201610027922.0A CN201610027922A CN105550665B CN 105550665 B CN105550665 B CN 105550665B CN 201610027922 A CN201610027922 A CN 201610027922A CN 105550665 B CN105550665 B CN 105550665B
- Authority
- CN
- China
- Prior art keywords
- disparity map
- dense
- carried out
- rejecting
- obtains
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
- G01C11/02—Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
- G06T2207/20032—Median filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
- G06T2207/30256—Lane; Road marking
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Remote Sensing (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Image Processing (AREA)
- Traffic Control Systems (AREA)
- Image Analysis (AREA)
Abstract
The present invention provides a binocular-vision-based traversable region detection method for driverless vehicles. The process is as follows: obtain the left and right images of the scene ahead of the vehicle, captured by a binocular camera mounted on the driverless car, as the original recognition images; preprocess the left and right images to obtain a processed dense disparity map; compute the corresponding U-disparity map from the dense disparity map; obtain an obstacle map from the U-disparity map; use the obstacle map to remove obstacles from the dense disparity map, yielding a disparity map with most obstacles removed; from this disparity map, compute a standardized V-disparity map; from the V-disparity map, obtain the upper boundary of the road region; remove the portion of the dense disparity map above the road region's upper boundary, obtaining a disparity map with non-road regions removed; perform a further round of obstacle removal on this disparity map, then invert the grayscale of the resulting obstacle map to obtain the traversable region map.
Description
Technical field
The invention belongs to the technical field of image processing, and in particular relates to a binocular-vision-based traversable region detection method for driverless vehicles.
Background art
With the development of society, the automobile has become an irreplaceable means of transport in people's daily lives. However, the safety problems it brings have become increasingly prominent, and the rapid development of vehicle intelligence technology provides a powerful means of solving them. In recent years, the world's major automobile manufacturers have been actively participating in the intelligent-vehicle industrial revolution, so that driverless driving is no longer a mere concept: many mature intelligence technologies have been applied in the automotive industry and have achieved significant economic and social benefits. Meanwhile, research on unmanned ground vehicle technology in fields such as the military and security has also made breakthroughs, and equipment such as unmanned bomb-disposal robots plays a great role in major areas such as public security and national security.
At present, obstacle detection in driverless vehicle technology is mainly based on active sensors such as lidar, millimeter-wave radar, or ultrasonic detectors, which are typically expensive, power-hungry, and prone to mutual interference. The mature driverless technologies currently used in the automotive industry are mainly adaptive cruise systems, lane keeping systems, and autonomous parking systems; these systems generally work with both radar and image information. Specifically, obstacles are detected by radar while the camera detects lane lines or other road information, and the two streams are then fused. If all these functions could be realized with the camera alone, equipment cost would be reduced, sensor power consumption lowered, and system service life increased. Therefore, relying solely on a binocular camera to meet requirements such as lane detection, obstacle detection, and driving recording has broad market prospects.
As a key technology in driverless systems and driver assistance systems, obstacle detection plays a major role in vehicle safety and driving comfort. How to detect obstacles accurately and efficiently and obtain the traversable region has a vital influence on driverless driving and driver assistance systems. In general, image-based traversable region detection falls into three categories: texture-feature-based, color-feature-based, and depth-feature-based. At present, some researchers use a monocular camera to obtain color images and identify the traversable region by texture or color segmentation, but the results are often unsatisfactory and strongly affected by the environment; such methods are mainly applicable to well-structured urban traffic environments and unsuitable for unstructured environments. Depth-information-based methods mainly obtain an environment disparity map with a binocular or multi-view camera, use the disparity map to obtain dense or sparse depth information, and estimate a ground model. Such methods are applicable to complex road environments, but they are often computationally expensive and poor in real-time performance, so they cannot readily be applied to unmanned ground vehicles. It can be seen that detecting the traversable region efficiently and in real time from image information is a great challenge, while also having very high application value for the development of unmanned ground vehicles and driver assistance systems.
Summary of the invention
To solve the above problems, the present invention provides a binocular-vision-based traversable region detection method for driverless vehicles. The method has strong applicability: it can work stably under complicated weather conditions such as rain and snow and in various road environments, rural and urban alike, and it has good real-time performance, so it can be widely applied to unmanned ground vehicles and driver assistance systems.
The technical scheme of the present invention is realized as follows:
A binocular-vision-based traversable region detection method for driverless vehicles, comprising:
Step 1: obtain the left and right images of the scene ahead of the vehicle, captured by the onboard binocular camera mounted on the driverless car, as the original recognition images;
Step 2: preprocess the left and right images: first apply color enhancement to both images with a color constancy method, then convert the color-enhanced images to grayscale; next perform binocular stereo matching on the grayscale images with the SGM method to obtain a dense disparity map with a disparity range of 0-128; finally apply median filtering, dilation, and erosion to the dense disparity map to obtain the processed dense disparity map;
Step 3: perform grayscale statistics over all pixels of each column of the processed dense disparity map obtained in step 2 to obtain the corresponding U-disparity map, and standardize its value range to 0-255 to obtain the standardized U-disparity map;
Step 4: perform boundary extraction on the standardized U-disparity map obtained in step 3 with the Canny operator to obtain a binarized boundary U-disparity map; traverse the boundary U-disparity map, and whenever the value of pixel (i, j) is not 0, traverse all pixels of column j of the dense disparity map, find the pixels whose disparity equals i or differs from i by less than a set threshold δ1, and mark them as obstacle pixels, thereby obtaining a rough obstacle map; then apply median filtering, dilation, and erosion to the obstacle map to obtain the processed obstacle map;
Step 5: use the processed obstacle map obtained in step 4 to remove obstacles from the processed dense disparity map of step 2, obtaining a disparity map with most obstacles removed (a small number of obstacles may remain);
Step 6: perform grayscale statistics over all pixels of each row of the obstacle-removed disparity map obtained in step 5 (the statistical disparity range is 0-128) to obtain the corresponding V-disparity map, and standardize its value range to 0-255 to obtain the standardized V-disparity map;
Step 7: perform boundary extraction on the standardized V-disparity map obtained in step 6 with the Canny operator to obtain a binarized boundary V-disparity map; detect line segments in the boundary V-disparity map with the Hough transform, select from them the segment corresponding to the road region of the processed dense disparity map of step 2, and take the horizontal line through its intersection with the vertical axis of the boundary V-disparity map as the upper boundary of the road region;
Step 8: according to the road region upper boundary obtained in step 7, remove the portion above the upper boundary from the processed dense disparity map of step 2, obtaining a disparity map with non-road regions removed;
Step 9: for the disparity map with non-road regions removed, obtain the final obstacle map in the manner of steps 3-4, except that when carrying out the procedure of step 3, the pixels above the road region upper boundary are no longer counted in the disparity statistics, and when carrying out the procedure of step 4, the portion above the road region upper boundary (the upper boundary obtained in step 7) in the resulting obstacle map is entirely marked as obstacle;
Step 10: invert the grayscale of the obstacle map obtained in step 9 to obtain the traversable region map.
The present invention further comprises the following steps after step 10:
Step 11: perform outermost contour detection on the traversable region map obtained in step 10 to obtain all outermost contours, each of which is a potential traversable region;
Step 12: screen the contours obtained in step 11, and select the contour that is nearest the vehicle's front wheels and has the largest area as the final traversable region.
Beneficial effects
First, by obtaining a dense disparity map and recognizing obstacles and the traversable region directly from statistics on the disparity map, the present invention improves processing speed while guaranteeing recognition accuracy.
Because the present invention serves an unmanned ground vehicle platform, the requirements on recognition speed are high. Typically, with the unmanned platform moving forward at 40 km/h, the traversable region must be recognized at a frame rate of about 8 Hz. This method processes 5 to 7 frames per second on an Intel dual-core 2.6 GHz CPU, a clear speed advantage over other methods with similar functionality; if ported to an embedded hardware platform such as a DSP, the processing speed is sufficient to meet the unmanned vehicle's requirements.
The reason recognition is fast is that this method recognizes the traversable region directly by statistical analysis on the disparity map; unlike other methods, it does not convert the dense disparity information into a 3D point cloud and then fit a ground model with the 3D information, which eliminates a large number of nonlinear operations. Moreover, because the present invention is based on statistical information, its sensitivity to noise is low, and a stable traversable region detection result can be obtained.
Second, the present invention obtains the U-disparity map and V-disparity map by computing statistics on the disparity map; with the UV-disparity maps, obstacle features and road features are mapped to straight-line features.
The U-disparity map is obtained by grayscale statistics on each column of the image, while the V-disparity map is obtained by grayscale statistics on each row. An obstacle's disparity values are concentrated along the column direction, and the taller the obstacle, the more concentrated they are; the road's disparity values are concentrated along the row direction, increasing gradually from near (the vehicle front) to far, with consistently high concentration. Consequently, horizontal lines in the U-disparity map correspond one-to-one to obstacles in the original disparity map, and the slanted straight line in the V-disparity map corresponds to the road region in the original disparity map. Using the UV-disparity maps therefore allows obstacles and the road to be distinguished more accurately and more quickly.
Third, the present invention performs edge detection on the U-disparity and V-disparity maps with the Canny operator, binarizing both maps; this reduces the noise produced by non-obstacles in the U-disparity map and the noise produced by obstacles in the V-disparity map.
The traditional method of binarizing UV-disparity maps uses a fixed binarization threshold, set according to the height of common obstacles, with no adaptivity. The present invention first standardizes the UV-disparity maps to 0-255 and then binarizes them with the Canny operator, which adapts to varying environments and obstacles of different heights.
Fourth, the present invention first uses the U-disparity map to remove the interference of a large number of obstacles from the V-disparity map, improving the detection accuracy of the road region's mapped straight line in the V-disparity map; at the same time, the approximate road region detected from the V-disparity map is used to obtain a more accurate U-disparity map, and thereby an accurate obstacle map.
Obstacles produce interference in the V-disparity map, making the road's mapped straight line harder to detect. This method first obtains a rough obstacle map from the U-disparity map, uses it to remove a large amount of obstacle interference from the disparity map, and then computes the V-disparity map from the obstacle-removed disparity map for road region estimation, thereby reducing the influence of obstacles on the V-disparity map. Meanwhile, because invalid obstacles above the road's upper boundary interfere in the U-disparity map with valid obstacles at the road boundary or on the road, this method estimates the rough road region with the V-disparity map and then computes statistics only over the estimated road region range to obtain a fine U-disparity map, and in turn a fine obstacle map.
Fifth, the present invention performs outermost contour detection on the obtained traversable region map, which efficiently rejects invalid traversable regions, selects the traversable region that is valid for the vehicle, and eliminates the influence of small obstacles inside the traversable region, further increasing recognition stability.
This method makes full use of the general characteristics of the traversable region, using outermost contour detection to eliminate noise such as invalid traversable regions and obtain the traversable region that is truly valid for driving the unmanned vehicle. Contour processing further reduces the edge noise of the traversable region and increases recognition stability. The traversable region contour is stored as a point set, which is convenient for later traversable region projection (such as IPM image projection) and further processing, reducing the complexity of subsequent work.
Brief description of the drawings
Fig. 1 is the overall recognition flow of the binocular-vision-based traversable region detection method for driverless vehicles of the present invention;
Fig. 2 is the original recognition image used in the method;
Fig. 3 shows the original dense disparity map (bottom) and the U-disparity map obtained in the first pass (top);
Fig. 4 is the boundary U-disparity map obtained by processing the first U-disparity map;
Fig. 5 shows the rough obstacle map obtained in the first pass from the original disparity map using the boundary U-disparity map;
Fig. 6 shows the dense disparity map after removing most obstacles (left) and the V-disparity map obtained from it (right);
Fig. 7 shows the boundary V-disparity map obtained from the V-disparity map (middle) and the detected straight line corresponding to the road, taken as the road region upper boundary (left);
Fig. 8 shows the U-disparity map and boundary U-disparity map obtained by counting only the part of the dense disparity map below the road region upper boundary;
Fig. 9 shows the traversable region map obtained from the second U-disparity map (left) and the result of contour detection on it (right);
Fig. 10 is the traversable region detection result.
Specific embodiments
The present invention is described in detail below with reference to the accompanying drawings and specific embodiments.
As shown in Fig. 1, the binocular-vision-based traversable region detection method for driverless vehicles of the present invention comprises the following steps:
Step 1: obtain the left and right images of the scene ahead of the vehicle, captured by the onboard binocular camera mounted on the driverless car, as the original recognition images.
Step 2: preprocess the left and right images acquired in step 1 and perform stereo matching to obtain the dense disparity map:
(201) preprocess the left and right images with a color constancy method to make the images' color features more distinct, then convert them to grayscale images;
(202) perform SGM binocular stereo matching on the images obtained in step (201), with the maximum disparity chosen here as 128, to obtain the dense disparity map;
(203) apply median filtering to the dense disparity map obtained in step (202), followed by one dilation and one erosion, to obtain the processed disparity map.
Step 3: compute the U-disparity map from the processed disparity map of step 2:
(301) count all pixels of each column of the dense disparity map. Since the disparity range chosen above is 0 to 128, the U-disparity map has 128 rows, and its number of columns equals that of the dense disparity map; the value Val(i, j) of each pixel (i, j) of the resulting U-disparity map is the number of pixels in column j of the dense disparity map whose disparity is i;
(302) standardize the U-disparity map obtained in step (301): subtract the minimum of all elements of the image, divide by the difference between the maximum and the minimum, then multiply by 255, standardizing the range of all elements to 0-255 and yielding the standardized U-disparity map.
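Steps (301)-(302) amount to a per-column disparity histogram followed by min-max normalization; a NumPy sketch, with array shapes following the description (128 rows, one column per image column):

```python
import numpy as np

def u_disparity(disp, max_disp=128):
    """U-disparity: row i, column j counts pixels in image column j
    whose disparity value equals i."""
    h, w = disp.shape
    udisp = np.zeros((max_disp, w), dtype=np.int32)
    for j in range(w):
        udisp[:, j] = np.bincount(disp[:, j], minlength=max_disp)[:max_disp]
    return udisp

def normalize_255(img):
    """Step (302): subtract the minimum, divide by the range, multiply by 255."""
    img = img.astype(np.float64)
    rng = img.max() - img.min()
    if rng == 0:
        return np.zeros(img.shape, dtype=np.uint8)
    return ((img - img.min()) / rng * 255).astype(np.uint8)
```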
Step 4: next, obtain the rough obstacle map from the standardized U-disparity map of step 3, specifically:
(401) perform boundary detection on the standardized U-disparity map with the Canny operator to obtain the binarized boundary U-disparity map;
(402) traverse the boundary U-disparity map obtained in (401); whenever the value of pixel (i, j) is not 0, traverse all pixels of column j of the dense disparity map, find the pixels whose disparity equals i or differs from i by less than a set threshold δ1, and mark these pixels as obstacle pixels, thereby obtaining the rough obstacle map; in the present embodiment the threshold δ1 is 7;
(403) apply median filtering to the obstacle map obtained in step (402), followed by one dilation and one erosion, to obtain the processed obstacle map.
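The screening rule of (402) can be sketched as follows. The binary boundary map is assumed to come from Canny edge detection on the standardized U-disparity map, and the row index i of an edge pixel is interpreted as the disparity value it represents:

```python
import numpy as np

def obstacle_map(disp, u_edges, delta=7):
    """For each nonzero edge pixel (i, j) of the boundary U-disparity map,
    mark pixels in column j of the dense disparity map whose disparity is
    within delta of i as obstacle pixels (value 255)."""
    h, w = disp.shape
    obst = np.zeros((h, w), dtype=np.uint8)
    for i, j in zip(*np.nonzero(u_edges)):
        close = np.abs(disp[:, j].astype(np.int32) - int(i)) < delta
        obst[close, j] = 255
    return obst
```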
Step 5: use the processed obstacle map obtained in step 4 to remove obstacles from the processed disparity map obtained in step 2; specifically, traverse the obstacle map, and whenever a pixel is an obstacle pixel, set the disparity at the same position in the disparity map to 0, thereby obtaining the disparity map with most obstacles removed.
Step 6: compute the V-disparity map from the obstacle-removed disparity map obtained in step 5:
(601) count all pixels of each row of that disparity map. Since the disparity range chosen above is 0 to 128, the V-disparity map has 128 columns, and its number of rows equals that of the corresponding disparity map; the value Val(i, j) of each pixel (i, j) of the resulting V-disparity map is the number of pixels in row i of the corresponding disparity map whose disparity is j;
(602) standardize the V-disparity map obtained in step (601) in the same way: subtract the minimum of all elements, divide by the difference between the maximum and the minimum, then multiply by 255, standardizing the range to 0-255 and yielding the standardized V-disparity map.
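Step (601) is the row-wise analogue of the U-disparity statistics; a NumPy sketch (the `normalize_255` step of (602) is the same min-max normalization as in (302)):

```python
import numpy as np

def v_disparity(disp, max_disp=128):
    """V-disparity: row i, column j counts pixels in image row i
    whose disparity value equals j."""
    h, _ = disp.shape
    vdisp = np.zeros((h, max_disp), dtype=np.int32)
    for i in range(h):
        vdisp[i] = np.bincount(disp[i], minlength=max_disp)[:max_disp]
    return vdisp
```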
Step 7: determine the road region upper boundary from the standardized V-disparity map of step 6, specifically:
(701) perform boundary extraction on the standardized V-disparity map of step 6 with the Canny operator to obtain the binarized boundary V-disparity map;
(702) detect all line segments in this boundary V-disparity map with the Hough transform; among them, one slanted segment corresponds to the road region in the disparity map;
(703) select from all detected segments those satisfying the following four conditions: (one) slanted at the set angle; (two) at no more than a set distance from the lower-left corner of the V-disparity map; (three) the lower end of the segment at no more than a set distance from the bottom of the V-disparity map; (four) confidence above a set threshold. If several segments satisfy the requirements, all are retained;
(704) for the line obtained in step (703), take the horizontal line through its intersection with the vertical axis as the road region upper boundary; if step (703) yields several results, take the horizontal line through the mean of their intersections with the vertical axis as the road region upper boundary.
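A sketch of the segment screening and intercept averaging of (703)-(704), assuming segments in (x1, y1, x2, y2) form from a probabilistic Hough transform. Only conditions (one) and (three) are modeled, with `min_slope` and `max_bottom_dist` as assumed parameters; image rows grow downward, so the road line (disparity increasing toward the bottom of the image) has positive slope dy/dx:

```python
def road_upper_edge(segments, img_h, min_slope=0.5, max_bottom_dist=40):
    """Keep slanted segments whose lower end is near the bottom of the
    V-disparity map, and average their intersections with the vertical
    axis x = 0 to get the road region upper boundary (a row index)."""
    ys = []
    for x1, y1, x2, y2 in segments:
        if x2 == x1:                       # vertical segment: an obstacle, not road
            continue
        slope = (y2 - y1) / (x2 - x1)
        low_y = max(y1, y2)
        if slope >= min_slope and img_h - low_y <= max_bottom_dist:
            ys.append(y1 - x1 * slope)     # y-value of the line at x = 0
    return int(round(sum(ys) / len(ys))) if ys else None
```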
Step 8: according to the road region upper boundary obtained in step 7, remove the portion above the upper boundary from the processed dense disparity map of step 2, obtaining the disparity map after removal.
Step 9: for the disparity map with non-road regions removed, obtain the final obstacle map following steps 3-4 with more accurate parameters, specifically:
(901) perform grayscale statistics over all pixels below the road region upper boundary in each column of the disparity map with non-road regions removed, obtaining a U-disparity map identical in format to the U-disparity map of (301) in step 3;
(902) standardize the U-disparity map obtained in step (901): subtract the minimum of all elements, divide by the difference between the maximum and the minimum, then multiply by 255, standardizing the range to 0-255 and yielding a standardized U-disparity map;
(903) perform Canny boundary detection on the standardized U-disparity map obtained in step (902) to obtain a binarized boundary U-disparity map;
(904) traverse the boundary U-disparity map obtained in (903); whenever the value of pixel (i, j) is not 0, traverse all pixels of column j of the disparity map with non-road regions removed, and find the pixels whose disparity equals i or differs from i by less than a set threshold δ2. Note that, unlike step (402), a stricter threshold parameter is chosen here: in the present embodiment δ2 = 3. Mark these pixels as obstacle pixels, thereby obtaining an obstacle map more accurate than that of (402);
(905) apply median filtering to the obstacle map obtained in step (904), followed by one dilation and one erosion, to obtain the processed obstacle map;
(906) mark the pixels above the road region upper boundary (the upper boundary obtained in step 7) in the obstacle map of step (905) as obstacle pixels, thereby obtaining the final obstacle map.
Step 10: invert the grayscale of the obstacle map obtained in step 9 to obtain the traversable region map;
Step 11: perform outermost contour detection on the traversable region map obtained in step 10 to obtain all outermost contours, each of which is a potential traversable region;
Step 12: screen the contours obtained in step 11, and select the contour that is nearest the vehicle's front wheels and has the largest area as the final traversable region.
Embodiment one
The specific steps of traversable region detection for a driverless vehicle with this method are as follows:
Step 1: a Point Grey BumblebeeX3 trinocular stereo camera is mounted at the front of the vehicle. In the present embodiment, the resolution of the left and right images obtained from the onboard camera is set to 800 × 600, in RGB color mode. In this example the leftmost and rightmost sensors of the camera are used to obtain the binocular images; the camera baseline length is 0.23998500 m, the focal length is 1002.912048 pixels, and the transmission frame rate is 15 Hz. The acquired original image is shown in Fig. 2;
Step 2: after preprocessing the left and right images, SGM stereo matching is performed; the resulting dense disparity map then undergoes median filtering with a 3-pixel window, dilation with a 3-pixel parameter, and erosion with a 3-pixel parameter; the result is shown in the lower image of Fig. 3;
Step 3: the first U-disparity map is computed from the processed dense disparity map of step 2; the result is shown in the upper image of Fig. 3, with a resolution of 800 × 128;
Step 4: edge extraction is performed on the standardized U-disparity map of step 3 with the Canny operator to obtain the binarized boundary U-disparity map, as shown in the lower image of Fig. 4. The binarized boundary U-disparity map is then used to screen obstacle pixels in the dense disparity map, obtaining the first obstacle map; the processing result is shown in the right image of Fig. 5;
Step 5: using the obstacle map obtained in step 4, obstacles are removed from the processed dense disparity map of step 2, obtaining the disparity map with most obstacles removed (a small number may remain); the processing result is shown in the left image of Fig. 6;
Step 6: the V-disparity map is computed from the obstacle-removed disparity map and then normalized; the result is shown in the right image of Fig. 6;
Step 7: boundary extraction is performed on the standardized V-disparity map of step 6 with the Canny operator to obtain the binarized boundary V-disparity map; the Hough transform is then used to find line segments in the V-disparity map, the segments satisfying the conditions are screened out, their intercepts with the vertical axis of the V-disparity map are found, and their average is taken as the road region upper boundary, as shown in Fig. 7, where the middle image is the Hough line detection result, the right image is the standardized V-disparity map, the horizontal line in the middle image is the road region upper boundary, and its mapping back onto the original disparity map is shown in the left image of Fig. 7;
Step 8: according to the upper boundary obtained in step 7, the portion above the upper boundary is removed from the dense disparity map; the removed disparity map is then processed again following steps 3 and 4, with the threshold for obstacle screening in this pass of step 4 set to 2 (it was 6 in the first pass of step 4), i.e., a pixel of the dense disparity map is treated as an obstacle pixel when its disparity differs by no more than 2 from the disparity corresponding to a white pixel of the binarized boundary disparity map; the obstacle map thus obtained then undergoes median filtering, dilation, and erosion, with the same parameters as in step 2, to obtain the final obstacle map. Finally the grayscale is inverted to obtain the traversable region map. In this step, the second U-disparity map is shown in the middle image of Fig. 8, and the second boundary U-disparity map in the upper image of Fig. 8. The traversable region map finally obtained in this step is shown in the left image of Fig. 9;
Step 9: outermost-contour detection is performed on the passable-area map obtained in step 8 to obtain all outermost contours, each of which is a potential passable area; the contour-detection result is shown in the right figure of Fig. 9;
Step 10: the contours obtained in step 9 are screened, and the contour that is nearest the vehicle's front wheels and has the largest area is selected as the final passable area. The final detection result is shown in Fig. 10: the left figure of Fig. 10 shows the effect after applying the obstacle MASK, and the right figure of Fig. 10 shows the resulting passable area.
Of course, the present invention may also have various other embodiments. Without departing from the spirit and substance of the present invention, those skilled in the art may make various corresponding changes and modifications according to the present invention, and all such corresponding changes and modifications shall fall within the protection scope of the appended claims of the present invention.
Claims (3)
1. A passable-area detection method for a driverless vehicle based on binocular vision, characterized in that it comprises:
Step 1, acquiring the left and right images of the scene in front of the vehicle, captured by a binocular camera mounted on the driverless vehicle, as the original images for recognition;
Step 2, preprocessing the left and right images to obtain a processed dense disparity map;
Step 3, performing gray-level statistics on all pixels of each column of the dense disparity map to obtain the corresponding U-disparity map, and normalizing the disparity range of the U-disparity map to between 0 and 255 to obtain the normalized U-disparity map;
Step 4, performing boundary extraction on the normalized U-disparity map obtained in step 3 using the Canny operator to obtain the binarized boundary U-disparity map; traversing the boundary U-disparity map, and, whenever the value of a pixel (i, j) is not 0, traversing all pixels of column j of the dense disparity map to find those pixels whose disparity is identical to that represented by pixel (i, j) or differs from it by less than a given threshold δ1, and marking them as obstacle pixels, thereby obtaining a rough obstacle map; then applying median filtering, dilation and erosion to the obstacle map to obtain the processed obstacle map;
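A minimal sketch of the screening in step 4, assuming integer disparities and a boundary U-disparity map indexed as (disparity row i, image column j). The function name and the vectorized column test are illustrative, and the subsequent median filtering and dilation/erosion (e.g. `cv2.medianBlur`, `cv2.dilate`, `cv2.erode`) are omitted:

```python
import numpy as np

def screen_obstacles(dense_disp, boundary_u, delta=6):
    """Mark obstacle pixels from the binarized boundary U-disparity map.

    A nonzero pixel (i, j) of boundary_u asserts an obstacle at disparity i
    in image column j; every pixel of column j of dense_disp whose disparity
    differs from i by less than delta (the threshold called δ1 in claim 1;
    the embodiment uses 6 in the first pass and 2 in the second) is marked.
    Returns a uint8 mask with 255 at obstacle pixels.
    """
    mask = np.zeros(dense_disp.shape, dtype=np.uint8)
    for i, j in zip(*np.nonzero(boundary_u)):
        col = dense_disp[:, j].astype(np.int64)
        mask[np.abs(col - i) < delta, j] = 255
    return mask
```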
Step 5, using the processed obstacle map obtained in step 4, rejecting obstacles from the processed dense disparity map obtained in step 2 to obtain a disparity map with most obstacles removed;
Step 6, performing gray-level statistics on all pixels of each row of the disparity map obtained in step 5 to obtain the corresponding V-disparity map, and normalizing the disparity range of the V-disparity map to between 0 and 255 to obtain the normalized V-disparity map;
Step 7, determining the upper edge of the road area, comprising the following sub-steps:
(701) performing boundary extraction on the normalized V-disparity map obtained in step 6 using the Canny operator to obtain the binarized boundary V-disparity map;
(702) detecting all line segments in the boundary V-disparity map using the Hough transform;
(703) selecting from all detected line segments those satisfying the following four conditions: (1) the segment is tilted to the left by a set angle; (2) the segment lies near the lower-left corner of the boundary V-disparity map; (3) the lower end of the segment is close to the bottom of the boundary V-disparity map; (4) the confidence of the segment is higher than a set threshold;
(704) for the line segment selected in step (703), taking the horizontal line through its intersection with the vertical axis of the V-disparity map as the upper edge of the road area; if multiple line segments are selected in step (703), taking the horizontal line through the mean of their intersections with the vertical axis of the V-disparity map as the upper edge of the road area;
Step 8, according to the road-area upper edge obtained in step 7, rejecting the part above the upper edge from the processed dense disparity map obtained in step 2, thereby obtaining a disparity map with the non-road region removed;
Step 9, for the disparity map with the non-road region removed, obtaining the final obstacle map in the manner of steps 3 and 4, wherein, when step 4 is executed, the threshold is set to δ2, with δ1 > δ2;
Step 10, performing gray-scale inversion on the obstacle map obtained in step 9 to obtain the passable-area map.
2. The passable-area detection method for a driverless vehicle based on binocular vision according to claim 1, characterized in that step 2 comprises: first performing color enhancement on the left and right images using a color-constancy method, and converting the enhanced images to gray-scale images; then performing binocular stereo matching on the gray-scale images using the SGM method to obtain a dense disparity map with a disparity range between 0 and 128; and finally applying median filtering, dilation and erosion to the dense disparity map to obtain the processed dense disparity map.
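The preprocessing of claim 2 can be sketched as below. The claim names only "a color-constancy method"; the gray-world correction used here is one common choice, an assumption rather than the patent's stated algorithm. The stereo-matching stage itself would follow with semi-global matching, e.g. OpenCV's `cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)`, giving a dense disparity map in the claimed 0-128 range:

```python
import numpy as np

def preprocess_pair(left_rgb, right_rgb):
    """Color-enhance both camera images and convert them to gray scale."""
    def gray_world(img):
        # scale each channel so that all channel means match the global mean
        img = img.astype(np.float32)
        gain = img.mean() / np.maximum(img.reshape(-1, 3).mean(axis=0), 1e-6)
        return np.clip(img * gain, 0.0, 255.0)

    def to_gray(img):
        # ITU-R BT.601 luma weights, as used by common RGB-to-gray conversions
        return img @ np.array([0.299, 0.587, 0.114], dtype=np.float32)

    return to_gray(gray_world(left_rgb)), to_gray(gray_world(right_rgb))
```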
3. The passable-area detection method for a driverless vehicle based on binocular vision according to claim 1, characterized in that the method further comprises the following steps:
Step 11, performing outermost-contour detection on the passable-area map obtained in step 10 to obtain all outermost contours, each of which is a potential passable area;
Step 12, screening the contours obtained in step 11, and selecting the contour that is nearest the vehicle's front wheels and has the largest area as the final passable area.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610027922.0A CN105550665B (en) | 2016-01-15 | 2016-01-15 | A kind of pilotless automobile based on binocular vision can lead to method for detecting area |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105550665A CN105550665A (en) | 2016-05-04 |
CN105550665B true CN105550665B (en) | 2019-01-25 |
Family
ID=55829848
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610027922.0A Active CN105550665B (en) | 2016-01-15 | 2016-01-15 | A kind of pilotless automobile based on binocular vision can lead to method for detecting area |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105550665B (en) |
Families Citing this family (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105955303A (en) * | 2016-07-05 | 2016-09-21 | 北京奇虎科技有限公司 | UAV autonomous obstacle avoidance method and device |
CN106228134A (en) * | 2016-07-21 | 2016-12-14 | 北京奇虎科技有限公司 | Drivable region detection method based on pavement image, Apparatus and system |
CN106228138A (en) * | 2016-07-26 | 2016-12-14 | 国网重庆市电力公司电力科学研究院 | A kind of Road Detection algorithm of integration region and marginal information |
CN106295607A (en) * | 2016-08-19 | 2017-01-04 | 北京奇虎科技有限公司 | Roads recognition method and device |
CN106446785A (en) * | 2016-08-30 | 2017-02-22 | 电子科技大学 | Passable road detection method based on binocular vision |
CN108243623B (en) * | 2016-09-28 | 2022-06-03 | 驭势科技(北京)有限公司 | Automobile anti-collision early warning method and system based on binocular stereo vision |
CN107517592B (en) * | 2016-09-28 | 2021-07-02 | 驭势科技(北京)有限公司 | Real-time detection method and system for automobile driving area |
CN106651785A (en) * | 2016-10-13 | 2017-05-10 | 西北工业大学 | Color constancy method based on color edge moments and anchoring neighborhood regularized regression |
CN106651836B (en) * | 2016-11-04 | 2019-10-22 | 中国科学院上海微***与信息技术研究所 | A kind of ground level detection method based on binocular vision |
CN107506711B (en) * | 2017-08-15 | 2020-06-30 | 江苏科技大学 | Convolutional neural network-based binocular vision barrier detection system and method |
CN107564053B (en) * | 2017-08-28 | 2020-06-09 | 海信集团有限公司 | Parallax value correction method and device applied to road scene |
CN107729856B (en) * | 2017-10-26 | 2019-08-23 | 海信集团有限公司 | A kind of obstacle detection method and device |
CN107909010B (en) * | 2017-10-27 | 2022-03-18 | 北京中科慧眼科技有限公司 | Road obstacle detection method and device |
CN108009511A (en) * | 2017-12-14 | 2018-05-08 | 元橡科技(北京)有限公司 | Method for detecting area and device are exercised based on RGBD |
CN108573215B (en) * | 2018-03-16 | 2021-08-03 | 海信集团有限公司 | Road reflective area detection method and device and terminal |
CN108596899B (en) * | 2018-04-27 | 2022-02-18 | 海信集团有限公司 | Road flatness detection method, device and equipment |
CN108875640B (en) * | 2018-06-20 | 2022-04-05 | 长安大学 | Method for testing cognitive ability of passable area in end-to-end unsupervised scene |
CN110633600B (en) * | 2018-06-21 | 2023-04-25 | 海信集团有限公司 | Obstacle detection method and device |
CN110807347B (en) * | 2018-08-06 | 2023-07-25 | 海信集团有限公司 | Obstacle detection method, obstacle detection device and terminal |
CN111243003B (en) * | 2018-11-12 | 2023-06-20 | 海信集团有限公司 | Vehicle-mounted binocular camera and method and device for detecting road height limiting rod |
CN109508673A (en) * | 2018-11-13 | 2019-03-22 | 大连理工大学 | It is a kind of based on the traffic scene obstacle detection of rodlike pixel and recognition methods |
CN109741306B (en) * | 2018-12-26 | 2021-07-06 | 北京石油化工学院 | Image processing method applied to dangerous chemical storehouse stacking |
CN110197173B (en) * | 2019-06-13 | 2022-09-23 | 重庆邮电大学 | Road edge detection method based on binocular vision |
WO2021159397A1 (en) * | 2020-02-13 | 2021-08-19 | 华为技术有限公司 | Vehicle travelable region detection method and detection device |
CN112036227B (en) * | 2020-06-10 | 2024-01-16 | 北京中科慧眼科技有限公司 | Road surface drivable area detection method, device and automatic driving system |
CN111951334B (en) * | 2020-08-04 | 2023-11-21 | 郑州轻工业大学 | Identification and positioning method and lifting method for stacked billets based on binocular vision technology |
CN114119700B (en) * | 2021-11-26 | 2024-03-29 | 山东科技大学 | Obstacle ranging method based on U-V disparity map |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101852609A (en) * | 2010-06-02 | 2010-10-06 | 北京理工大学 | Ground obstacle detection method based on binocular stereo vision of robot |
CN102313536A (en) * | 2011-07-21 | 2012-01-11 | 清华大学 | Method for barrier perception based on airborne binocular vision |
CN103714532A (en) * | 2013-12-09 | 2014-04-09 | 广西科技大学 | Method for automatically detecting obstacles based on binocular vision |
CN104166834A (en) * | 2013-05-20 | 2014-11-26 | 株式会社理光 | Pavement detection method and pavement detection device |
Non-Patent Citations (4)
Title |
---|
Research on Dense Disparity Algorithms for Binocular Stereo Vision; Liu Xiaoli; Wanfang Data; 2013-07-24; pp. 1-62 |
Research on Obstacle Recognition Based on Binocular Stereo Vision; Liu Xu; Wanfang Data; 2015-08-17; pp. 1-71 |
An Edge Detection Algorithm Based on Binocular Vision; Chen Zhibo et al.; Optics & Optoelectronic Technology; 2008-08-31; pp. 71-74 |
Research on Vision-Based Passable-Area Recognition for Mobile Robots in Outdoor Environments; Zhu Xiaozhou; China Master's Theses Full-text Database, Information Science and Technology; 2015-01-15; pp. 15-48 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105550665B (en) | A kind of pilotless automobile based on binocular vision can lead to method for detecting area | |
CN108596129B (en) | Vehicle line-crossing detection method based on intelligent video analysis technology | |
CN107330376B (en) | Lane line identification method and system | |
CN105206109B (en) | A kind of vehicle greasy weather identification early warning system and method based on infrared CCD | |
CN111563412B (en) | Rapid lane line detection method based on parameter space voting and Bessel fitting | |
Yuan et al. | Robust lane detection for complicated road environment based on normal map | |
CN107590470B (en) | Lane line detection method and device | |
CN109460709A (en) | The method of RTG dysopia analyte detection based on the fusion of RGB and D information | |
CN104036246B (en) | Lane line positioning method based on multi-feature fusion and polymorphism mean value | |
CN104700072B (en) | Recognition methods based on lane line historical frames | |
CN110738121A (en) | front vehicle detection method and detection system | |
CN107862290A (en) | Method for detecting lane lines and system | |
CN107341454A (en) | The detection method and device of barrier, electronic equipment in a kind of scene | |
CN105488454A (en) | Monocular vision based front vehicle detection and ranging method | |
CN106022243B (en) | A kind of retrograde recognition methods of the car lane vehicle based on image procossing | |
Li et al. | Nighttime lane markings recognition based on Canny detection and Hough transform | |
CN104902261B (en) | Apparatus and method for the road surface identification in low definition video flowing | |
CN112115889B (en) | Intelligent vehicle moving target detection method based on vision | |
CN103996198A (en) | Method for detecting region of interest in complicated natural environment | |
CN103413308A (en) | Obstacle detection method and device | |
CN104915642B (en) | Front vehicles distance measuring method and device | |
CN117094914B (en) | Smart city road monitoring system based on computer vision | |
CN107909047A (en) | A kind of automobile and its lane detection method and system of application | |
CN109101932A (en) | The deep learning algorithm of multitask and proximity information fusion based on target detection | |
CN107220632B (en) | Road surface image segmentation method based on normal characteristic |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | |