CN107729856A - Obstacle detection method and device - Google Patents

Obstacle detection method and device

Info

Publication number
CN107729856A
Authority
CN
China
Prior art keywords
line segment
straight line
row
obstacle
horizontal line
Prior art date
Legal status
Granted
Application number
CN201711012442.8A
Other languages
Chinese (zh)
Other versions
CN107729856B (en)
Inventor
冯谨强
Current Assignee
Hisense Group Co Ltd
Original Assignee
Hisense Group Co Ltd
Priority date
Filing date
Publication date
Application filed by Hisense Group Co Ltd
Priority to CN201711012442.8A
Publication of CN107729856A
Priority to PCT/CN2018/096079
Application granted
Publication of CN107729856B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present application provides an obstacle detection method and device, including: obtaining a sparse disparity map and a U-disparity map that do not include the road surface; detecting first line segments based on the U-disparity map; laterally connecting the first line segments to obtain a second line segment, and determining the imaged width of an obstacle and its position in the sparse disparity map according to the second line segment; in the sparse disparity map, counting the number of valid disparity values in each row of the column region corresponding to the second line segment; determining the upper boundary row and lower boundary row of the obstacle in the sparse disparity map according to the number of valid disparity values in each row and a preset count threshold, and determining the imaged height of the obstacle according to the upper and lower boundary rows; and determining the target obstacle in the first image or the second image according to the imaged width and imaged height of the obstacle and its position in the sparse disparity map. With this method, both the efficiency of obstacle detection based on a sparse disparity map and the accuracy of the detection result can be improved.

Description

Obstacle detection method and device
Technical field
The present application relates to the technical field of image processing, and in particular to an obstacle detection method and device.
Background
Binocular stereo vision (Binocular Stereo Vision) is an important form of machine vision. Based on the parallax principle, an imaging device such as a binocular camera acquires two images of a measured object from different positions, and the three-dimensional geometric information of the object is obtained by calculating the position deviation between corresponding points in the two images. On this basis, in the field of driver assistance, obstacles on the road surface can be detected using binocular stereo vision technology.
In the related art, when obstacle detection is performed based on a sparse disparity map of an original image, valid disparity values exist only at feature points or feature lines of the sparse disparity map. Therefore, among the pixels corresponding to an obstacle, only some pixels carry valid disparity values while the other pixels do not, which may cause a single obstacle to be detected as two or more obstacles. To improve the accuracy of the detection result, the two or more detected obstacles then need to be further merged. It can be seen that obstacle detection based on a sparse disparity map in the related art involves a large amount of computation, resulting in low detection efficiency.
Summary of the invention
In view of this, the present application provides an obstacle detection method and device, so as to improve the efficiency of obstacle detection based on a sparse disparity map while improving the accuracy of the detection result.
Specifically, the present application is achieved through the following technical solutions:
According to a first aspect of the embodiments of the present application, an obstacle detection method is provided, the method including:
obtaining, based on a first image and a second image captured by a binocular camera, a sparse disparity map and a U-disparity map that do not include the road surface;
detecting, based on the U-disparity map, a plurality of first line segments that are discontinuous in the same horizontal direction;
laterally connecting the plurality of first line segments to obtain a second line segment without breakpoints, so as to determine, according to the second line segment, the imaged width of an obstacle and the position of the obstacle in the sparse disparity map;
in the sparse disparity map, counting the number of valid disparity values belonging to a set disparity distribution in each row of the column region corresponding to the second line segment;
determining an upper boundary row and a lower boundary row of the obstacle in the sparse disparity map according to the number of valid disparity values in each row and a preset count threshold, so as to determine the imaged height of the obstacle according to the upper boundary row and the lower boundary row;
determining a target obstacle in the first image or the second image according to the imaged width and the imaged height of the obstacle and the position of the obstacle in the sparse disparity map.
According to a second aspect of the embodiments of the present application, an obstacle detection device is provided, the device including:
a disparity map acquisition module, configured to obtain, based on a first image and a second image captured by a binocular camera, a sparse disparity map and a U-disparity map that do not include the road surface;
a detection module, configured to detect, based on the U-disparity map, a plurality of first line segments that are discontinuous in the same horizontal direction;
a merging module, configured to laterally connect the plurality of first line segments to obtain a second line segment without breakpoints, so as to determine, according to the second line segment, the imaged width of an obstacle and the position of the obstacle in the sparse disparity map;
a counting module, configured to count, in the sparse disparity map, the number of valid disparity values belonging to a set disparity distribution in each row of the column region corresponding to the second line segment;
a height determination module, configured to determine an upper boundary row and a lower boundary row of the obstacle in the sparse disparity map according to the number of valid disparity values in each row and a preset count threshold, so as to determine the imaged height of the obstacle according to the upper boundary row and the lower boundary row;
a target determination module, configured to determine a target obstacle in the first image or the second image according to the imaged width and the imaged height of the obstacle and the position of the obstacle in the sparse disparity map.
As can be seen from the above embodiments, after the plurality of first line segments representing an obstacle are detected based on the U-disparity map, the first line segments are laterally connected to obtain a second line segment whose length represents the imaged width of the obstacle; the imaged height of the obstacle is then determined from the second line segment in combination with the sparse disparity map; and finally the target obstacle is determined in the original images captured by the binocular camera based on the imaged width, the imaged height, and the position of the obstacle in the sparse disparity map. In this detection process, the line segments representing the same obstacle are laterally connected before the target obstacle is determined, so that, compared with the prior art, the situation in which a single obstacle is detected as two or more obstacles is avoided, which improves the accuracy of the detection result and also avoids the additional merging of two or more detected obstacles that would otherwise be needed to improve that accuracy. Since the lateral connection of line segments is simple and involves little computation, the method proposed in the present application can improve the efficiency of obstacle detection based on a sparse disparity map.
Brief description of the drawings
Fig. 1A is a flowchart of an embodiment of the obstacle detection method of the present application;
Fig. 1B is an example of an original image captured by a binocular camera;
Fig. 1C is an example of a sparse disparity map that does not include the road surface;
Fig. 1D is an example of a U-disparity map that does not include the road surface;
Fig. 1E is an example of a UZ-disparity map;
Fig. 1F is a schematic diagram of the horizontal segments detected in the U-disparity map;
Fig. 1G is an example of the overlapping region between two vertically adjacent horizontal segments;
Fig. 1H is a schematic diagram of vertically merging horizontal segments in the U-disparity map to obtain a first line segment;
Fig. 1I is a schematic diagram of the obstacle detection effect of the present application;
Fig. 2 is a flowchart of another embodiment of the obstacle detection method of the present application;
Fig. 3 is a flowchart of a further embodiment of the obstacle detection method of the present application;
Fig. 4 is a hardware structure diagram of a network device in which the obstacle detection device of the present application is located;
Fig. 5 is a block diagram of an embodiment of the obstacle detection device of the present application.
Detailed description of the embodiments
Exemplary embodiments will be described in detail here, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present application as detailed in the appended claims.
The terms used in the present application are for the purpose of describing particular embodiments only and are not intended to limit the present application. The singular forms "a", "said" and "the" used in the present application and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the present application to describe various pieces of information, such information should not be limited to these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the present application, first information may also be referred to as second information, and similarly, second information may also be referred to as first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon" or "in response to determining".
With the rapid growth of the number of vehicles, traffic safety problems are becoming increasingly serious, and the recognition and detection of road obstacles has become a research hotspot in the field of traffic safety. In the related art, a binocular camera is mounted on a vehicle to capture road images in real time while the vehicle is traveling, including the road image captured by the "left eye" of the binocular camera and the road image captured by its "right eye". A stereo matching algorithm is then used to obtain the disparity map of the two road images, and the disparity map is analyzed based on binocular stereo vision technology to detect obstacles on the road surface.
Stereo matching algorithms can be classified according to the matching primitives they use, one class being feature-based stereo matching algorithms. A feature-based stereo matching algorithm estimates disparity mainly for geometric feature information, such as edges, contours, interest points, lines, and corner points. Therefore, in the disparity map produced by a feature-based stereo matching algorithm, valid disparity values (usually non-zero disparity values) exist only at feature points or feature lines. On this basis, in the related art, the disparity map produced by a feature-based stereo matching algorithm is referred to as a sparse disparity map.
From the sparse disparity map described above, those skilled in the art will understand that, precisely because valid disparity values exist only at feature points or feature lines in a sparse disparity map, among the pixels corresponding to an obstacle, only some pixels carry valid disparity values while the other pixels do not. As a result, when obstacle detection is performed based on the sparse disparity map, a single obstacle may be detected as multiple obstacles. To improve the accuracy of the detection result, the multiple detected obstacles then need to be further merged. It can be seen that obstacle detection based on a sparse disparity map in the related art involves a large amount of computation, resulting in low detection efficiency.
On this basis, the present application provides an obstacle detection method and device, so as to improve the efficiency of obstacle detection based on a sparse disparity map while improving the accuracy of the detection result.
The obstacle detection method provided by the present application is described below with reference to the following embodiments.
Referring to Fig. 1A, which is a flowchart of an embodiment of the obstacle detection method of the present application, the method includes the following steps:
Step 101: obtaining, based on a first image and a second image captured by a binocular camera, a sparse disparity map and a U-disparity map that do not include the road surface.
In the embodiments of the present application, for convenience of description, the two original images captured by the binocular camera are referred to as the first image and the second image, respectively. Fig. 1B shows an example of an original image captured by the binocular camera. In the embodiments of the present application, the sparse disparity map can be obtained by processing the first image and the second image with a feature-based stereo matching algorithm.
Further, in order to reduce the subsequent computational complexity, the road surface disparities can be removed from the above sparse disparity map to obtain a sparse disparity map that does not include the road surface. For example, a V-disparity map can be further obtained from the sparse disparity map computed from the first image and the second image; one or more straight lines are detected in the V-disparity map using the Hough transform and fitted to the line equation of the road in the V-disparity map; and the disparity points satisfying the line equation, namely the disparity points on the road surface, are then deleted from the above sparse disparity map, thereby obtaining the sparse disparity map that does not include the road surface. Fig. 1C shows an example of a sparse disparity map that does not include the road surface.
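By way of illustration only (the following sketch is not part of the patent text), the road-surface removal described above can be outlined as follows, assuming the road line fitted in the V-disparity map has the form v = a * d + b (v: image row, d: disparity value) and using hypothetical parameter names:

```python
import numpy as np

def remove_road_surface(sparse_disp, a, b, tol=3.0):
    """Delete disparity points that lie on the fitted road line.

    sparse_disp: HxW array, 0 where no valid disparity exists.
    a, b: coefficients of the road line v = a * d + b fitted in the V-disparity map.
    tol: row tolerance (in pixels) around the fitted line; an assumed value.
    """
    out = sparse_disp.copy()
    rows = np.arange(out.shape[0])[:, None]      # image row index v for each pixel
    expected_v = a * out + b                     # row predicted from each pixel's disparity
    on_road = (out > 0) & (np.abs(rows - expected_v) <= tol)
    out[on_road] = 0                             # remove road-surface disparity points
    return out
```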
In the embodiments of the present application, after the sparse disparity map that does not include the road surface is obtained, the U-disparity map that does not include the road surface can be further obtained; for example, Fig. 1D shows an example of a U-disparity map that does not include the road surface. How to obtain the U-disparity map exemplified in Fig. 1D can be learned from the related art by those skilled in the art and is not described in detail in the present application.
Step 102: detecting, based on the U-disparity map, a plurality of first line segments that are discontinuous in the same horizontal direction.
In one embodiment, the U-disparity map exemplified in Fig. 1D can first be converted into a UZ-disparity map, an example of which is shown in Fig. 1E. Straight-line detection is then performed row by row in the UZ-disparity map exemplified in Fig. 1E to obtain a plurality of first line segments that are discontinuous in the same horizontal direction, and these first line segments represent obstacles.
The detailed process of this embodiment is shown in Fig. 2, which is a flowchart of another embodiment of the obstacle detection method of the present application. The method shown in Fig. 2 focuses on how to obtain the UZ-disparity map and how to detect the plurality of first line segments based on the UZ-disparity map, and includes the following steps:
Step 201: calculating the correspondence between distance values and disparity values based on the maximum detection distance of the binocular camera and the disparity values of the pixels in the sparse disparity map.
Step 202: obtaining the UZ-disparity map based on the U-disparity map and the correspondence.
Step 201 and step 202 are described as follows:
In the embodiments of the present application, the correspondence between distance values and disparity values is first calculated based on the maximum detection distance of the binocular camera and the disparity values of the pixels in the sparse disparity map. For example, the correspondence between distance values and disparity values can be calculated according to the following formula (1): d = BF / z.
In the above formula (1), d denotes the disparity value, z denotes the distance value, and BF denotes the parameter of the binocular camera (typically the product of the baseline and the focal length).
As a concrete example, assuming that the maximum detection distance of the binocular camera is 100 meters, the correspondence between distance values and disparity values can be calculated at 1-meter intervals; that is, the disparity values corresponding to 1 meter, 2 meters, 3 meters, ..., up to 100 meters can each be calculated by the above formula (1).
In an optional implementation, the calculated correspondence between distance values and disparity values can be stored in a one-dimensional array A with 100 elements, where the value of A[0] represents the disparity value corresponding to 1 meter, the value of A[1] represents the disparity value corresponding to 2 meters, the value of A[2] represents the disparity value corresponding to 3 meters, and so on, with the value of A[99] representing the disparity value corresponding to 100 meters.
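By way of illustration only, a minimal sketch of building such a lookup array from formula (1) is given below; the value of the camera parameter BF used in the example call is an assumption:

```python
import numpy as np

def build_distance_disparity_table(bf, max_distance_m=100):
    """Array A: A[i] is the disparity value corresponding to a distance of (i + 1) meters,
    computed from formula (1): d = BF / z."""
    distances = np.arange(1, max_distance_m + 1, dtype=np.float64)  # 1 m, 2 m, ..., 100 m
    return bf / distances

# Example usage with an assumed camera parameter BF.
A = build_distance_disparity_table(bf=624.0)
# A[0] -> disparity at 1 m, A[99] -> disparity at 100 m
```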
In the embodiments of the present application, a UZ-disparity map is first created. The horizontal axis of the UZ-disparity map is the same as that of the U-disparity map; the difference is that the vertical axis of the UZ-disparity map represents distance values, where the maximum distance value represented by the vertical axis is the maximum detection distance of the binocular camera. In the initial state, the pixel value of each pixel in the newly created UZ-disparity map can be 0.
The newly created UZ-disparity map is then processed based on the U-disparity map exemplified in Fig. 1D in combination with the above array A. The specific processing procedure is as follows:
In the U-disparity map exemplified in Fig. 1D, the disparity range to which the disparity value represented by each pixel belongs is determined column by column; the distance range can be determined based on the disparity range, and the distance value corresponding to the disparity value of the pixel can be determined based on the distance range; then, in the UZ-disparity map, the pixel value of the pixel at the coordinate position corresponding to this distance value, in the column to which the pixel belongs, is increased by 1.
For example, assume that A[3] = 156 and A[4] = 256, and that the disparity value of a pixel in the 2nd column of the U-disparity map is 168. Assume that, by searching the array A, the disparity range to which the disparity value of this pixel belongs is found to be 156 to 256, and the corresponding distance range is 4 meters to 5 meters. In an optional implementation, the lower limit of the corresponding distance range can be determined as the distance value corresponding to the disparity value of the pixel; for example, the distance value corresponding to this pixel is 4 meters, and then the pixel value of the pixel at coordinate position (2, 4) in the UZ-disparity map is increased by 1.
As described above, in the finally obtained UZ-disparity map, the pixel value of each pixel represents the number of pixels, in the column to which the pixel belongs, whose corresponding distance value is the same as that of the pixel. For example, assume that the pixel value at coordinate position (2, 4) of the finally obtained UZ-disparity map is 125; it can then be considered that in the 2nd column of the sparse disparity map described in step 101 there are 125 pixels whose corresponding distance value is 4 meters.
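By way of illustration only, a minimal sketch of this construction is given below. It assumes the U-disparity map is stored with disparity values as rows and image columns as columns, that each U-disparity pixel value is the count of sparse-disparity pixels with that (column, disparity value), and that the lookup array A is monotonically decreasing according to formula (1); these storage and accumulation conventions are assumptions rather than statements of the patent:

```python
import numpy as np

def build_uz_disparity_map(u_disp, A, max_distance_m=100):
    """u_disp[d, u]: number of pixels in image column u with disparity d.
    A[i]: disparity corresponding to a distance of (i + 1) meters (A is decreasing).
    Returns uz[z, u]: count of pixels in column u whose distance falls in the bin
    whose lower limit is (z + 1) meters, as in the example above."""
    num_disp, width = u_disp.shape
    uz = np.zeros((max_distance_m, width), dtype=np.int32)
    for d in range(1, num_disp):                 # disparity 0 carries no depth information
        cols = np.nonzero(u_disp[d])[0]
        if cols.size == 0:
            continue
        # Find the distance bin z whose disparity range contains d: A[z] >= d > A[z + 1].
        z = int(np.searchsorted(-A, -d, side="right")) - 1
        z = min(max(z, 0), max_distance_m - 1)
        uz[z, cols] += u_disp[d, cols]
    return uz
```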
In summary, comparing the U-disparity map exemplified in Fig. 1D with the UZ-disparity map exemplified in Fig. 1E, it can be seen that the essence of the UZ-disparity map is that the pixels of the U-disparity map have been vertically merged, which makes the disparity points on an obstacle more compact and more concentrated.
Step 203: detecting row by row in the UZ-disparity map to obtain a plurality of first line segments that are discontinuous in the same horizontal direction.
In the embodiments of the present application, the UZ-disparity map exemplified in Fig. 1E can be detected row by row, and the detected line segments are determined as the first line segments representing obstacles.
Taking the detection of one row as an example, the process of obtaining a first line segment in the UZ-disparity map exemplified in Fig. 1E is described as follows:
The row is scanned from left to right until the first valid pixel is detected; a valid pixel here refers to a pixel whose pixel value is greater than a set first pixel threshold, and this first valid pixel is taken as the left endpoint of the first line segment. Then, based on a set first tolerance, a current detection range starting at the left endpoint is set; within this current detection range, the pixels are examined one by one, and when a valid pixel is detected, the currently detected valid pixel is taken as the terminating point. Then, based on the above first tolerance, a new current detection range starting at the terminating point is set, and the pixels within the newly set current detection range are again examined one by one. If a valid pixel is still detected, the current detection range continues to be reset and the pixel-by-pixel detection continues, until no valid pixel exists within the set current detection range, at which point the terminating point obtained last is determined as the right endpoint of the first line segment.
It should be noted that, based on the "near objects look large, far objects look small" characteristic of the vision field, a different first tolerance can be set for each row of the UZ-disparity map, and the first tolerance of each row can be set according to the principle that the farther the distance, namely the smaller the disparity value, the smaller the first tolerance.
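By way of illustration only, a minimal sketch of this row-wise segment detection is given below; the threshold and tolerance values in the example call are assumptions:

```python
def detect_segments_in_row(row, pixel_threshold, tolerance):
    """Detect line segments in one row of the UZ-disparity map.

    row: 1-D sequence of pixel values (counts).
    pixel_threshold: a pixel is 'valid' if its value is greater than this threshold.
    tolerance: maximum gap (in pixels) allowed between two valid pixels of one segment.
    Returns a list of (left_endpoint, right_endpoint) column indices."""
    segments = []
    left = None
    end = None
    for u, value in enumerate(row):
        if value > pixel_threshold:
            if left is None:
                left = u           # first valid pixel: left endpoint of a new segment
            end = u                # latest valid pixel: candidate terminating point
        elif left is not None and u - end > tolerance:
            segments.append((left, end))   # no valid pixel within the detection range: close segment
            left, end = None, None
    if left is not None:
        segments.append((left, end))
    return segments

# Example usage on one row, with assumed threshold and tolerance values:
# segments = detect_segments_in_row(uz[4], pixel_threshold=20, tolerance=3)
```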
In one embodiment, the U-disparity map exemplified in Fig. 1D can also be detected row by row to obtain horizontal segments; for example, Fig. 1F is a schematic diagram of the horizontal segments detected in the U-disparity map.
In order to make the points on an obstacle more compact and concentrated, so as to facilitate the subsequent obstacle detection process, in the embodiments of the present application the horizontal segments shown in Fig. 1F are further superimposed vertically to obtain the first line segments representing obstacles.
The detailed process of this embodiment is shown in Fig. 3, which is a flowchart of a further embodiment of the obstacle detection method of the present application. The method shown in Fig. 3 focuses on how to detect the plurality of first line segments based on the U-disparity map, and includes the following steps:
Step 301: detecting row by row in the U-disparity map to obtain the horizontal segments in each row.
In the embodiments of the present application, the U-disparity map exemplified in Fig. 1D can be detected row by row to obtain the horizontal segments exemplified in Fig. 1F.
Taking the detection of one row as an example, the process of obtaining a horizontal segment in the U-disparity map exemplified in Fig. 1D is described as follows:
The row is scanned from left to right until the first valid pixel is detected; a valid pixel here refers to a pixel whose pixel value is greater than a set second pixel threshold, and this first valid pixel is taken as the left endpoint of the horizontal segment. Then, based on a set second tolerance, a current detection range starting at the left endpoint is set; within this current detection range, the pixels are examined one by one, and when a valid pixel is detected, the currently detected valid pixel is taken as the terminating point. Then, based on the above second tolerance, a new current detection range starting at the terminating point is set, and pixel-by-pixel detection continues within the reset current detection range. If a valid pixel is still detected, the current detection range continues to be reset and the pixel-by-pixel detection continues, until no valid pixel exists within the set current detection range, at which point the terminating point obtained last is determined as the right endpoint of the horizontal segment.
It should be noted that, based on the "near objects look large, far objects look small" characteristic of the vision field, a different second tolerance can be set for each row of the U-disparity map, and the second tolerance of each row can be set according to the principle that the farther the distance, namely the smaller the disparity value, the smaller the second tolerance.
It should also be noted that, because the UZ-disparity map is essentially a vertical merging of the U-disparity map, the above first pixel threshold can be greater than the above second pixel threshold.
Step 302: detecting row by row downwards starting from the horizontal segment located in the first row; when a horizontal segment satisfying a first preset condition is detected within the current row range, superimposing the horizontal segment located in the first row with the horizontal segment satisfying the first preset condition to obtain a superimposed horizontal segment.
Step 303: detecting row by row downwards from the superimposed horizontal segment; when a horizontal segment satisfying the first preset condition is detected within the current setting range, performing step 304; otherwise, performing step 305.
Step 304: continuing to superimpose the superimposed horizontal segment with the horizontal segment satisfying the first preset condition to obtain a new superimposed horizontal segment, and returning to step 303.
Step 305: determining the finally obtained superimposed horizontal segment as a first line segment.
Steps 303 to 305 are described as follows:
In the embodiments of the present application, in the U-disparity map exemplified in Fig. 1D, detection is performed row by row downwards starting from the horizontal segment located in the first row. The "first row" here refers to the topmost horizontal segment within a suspected obstacle region in the U-disparity map (for example, region 1 and region 2 exemplified in Fig. 1F). Based on the tolerance of the first row, a current row range starting from this horizontal segment is set, and detection is performed row by row within the current row range. When a horizontal segment satisfying the first preset condition is detected within the current row range, the horizontal segment located in the first row is superimposed with the currently detected horizontal segment satisfying the first preset condition to obtain a superimposed horizontal segment.
Subsequently, based on the tolerance of the row to which the superimposed horizontal segment belongs, a current row range starting from the superimposed horizontal segment is set, and detection is performed row by row within this current row range. When a horizontal segment satisfying the above first preset condition is detected within the current row range, the previous superimposed horizontal segment is further superimposed with the currently detected horizontal segment satisfying the first preset condition to obtain a new superimposed horizontal segment. Detection then continues from this new superimposed horizontal segment, until no horizontal segment satisfying the first preset condition is detected within the current row range, at which point the superimposed horizontal segment obtained last is determined as a first line segment representing an obstacle.
It should be noted that, in the superimposition process described above, the pixel values of the pixels in the superimposed horizontal segment are obtained by accumulating the pixel values of the corresponding pixels of the two horizontal segments being superimposed.
In an optional implementation, the above first preset condition can be: the alignment ratio between the currently detected horizontal segment and the horizontal segment in a row above it reaches a preset ratio threshold, where the alignment ratio can refer to the proportion of the "overlapping region" between the two vertically adjacent horizontal segments to each of the two horizontal segments. For example, Fig. 1G shows an example of the overlapping region between two vertically adjacent horizontal segments.
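By way of illustration only, a minimal sketch of such an alignment-ratio check is given below; requiring the ratio to be reached with respect to both segments is an assumption:

```python
def alignment_ratio_ok(seg_a, seg_b, ratio_threshold):
    """seg_a, seg_b: (start_column, end_column) of two vertically adjacent horizontal segments.
    Returns True when the overlapping region covers at least ratio_threshold of each segment."""
    start = max(seg_a[0], seg_b[0])
    end = min(seg_a[1], seg_b[1])
    overlap = max(0, end - start + 1)
    len_a = seg_a[1] - seg_a[0] + 1
    len_b = seg_b[1] - seg_b[0] + 1
    return overlap / len_a >= ratio_threshold and overlap / len_b >= ratio_threshold

# Example: segments spanning columns 10-30 and 14-28, with an assumed threshold of 0.7.
# alignment_ratio_ok((10, 30), (14, 28), 0.7)
```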
In order to make the superimposition process described above clearer to those skilled in the art, it is illustrated with reference to the horizontal segments exemplified in Fig. 1H:
As shown in Fig. 1H, there are 3 horizontal segments, namely horizontal segments 11 to 13. As described above, detection is performed row by row downwards starting from horizontal segment 11. If a horizontal segment 12 satisfying the above first preset condition is detected within the current row range, horizontal segment 11 and horizontal segment 12 are superimposed to obtain a superimposed horizontal segment (not shown in Fig. 1H). The superimposed horizontal segment is located in the same row as horizontal segment 12; its starting column is the smaller of the starting columns of horizontal segment 11 and horizontal segment 12, and its ending column is the larger of the ending columns of horizontal segment 11 and horizontal segment 12. Subsequently, detection continues row by row downwards from this superimposed horizontal segment. When a horizontal segment 13 satisfying the above first preset condition is detected within the current row range, the superimposed horizontal segment is further superimposed with horizontal segment 13 to obtain a new superimposed horizontal segment (not shown in Fig. 1H). The new superimposed horizontal segment is located in the same row as horizontal segment 13; its starting column is the smaller of the starting columns of the previous superimposed horizontal segment and horizontal segment 13, and its ending column is the larger of their ending columns. At this point, the new superimposed horizontal segment is the finally obtained first line segment representing the obstacle.
It should be noted that, in Fig. 1H, in order to show the first line segment intuitively, the first line segment is drawn aligned with horizontal segment 13; those skilled in the art will understand that this treatment does not affect the technical solution proposed in the present application.
It should also be noted that, based on the "near objects look large, far objects look small" characteristic of the vision field, the tolerance of each row in the U-disparity map can be set in sections; for example, the tolerance of rows 1 to 5 can be set to a1, the tolerance of rows 6 to 10 can be set to a2, and so on, and the tolerance of each row can be set according to the principle that the farther the distance, namely the smaller the disparity value, the smaller the tolerance.
By comparing the methods exemplified in Fig. 2 and Fig. 3, it can be found that, because the UZ-disparity map is inherently a vertical compression of the U-disparity map and its size is smaller than that of the U-disparity map, the efficiency of detecting line segments in the UZ-disparity map is higher than that of detecting line segments in the U-disparity map. Moreover, the line segments detected in the UZ-disparity map can be used directly as the first line segments representing obstacles, whereas after the horizontal segments are detected in the U-disparity map they still need to be further superimposed to obtain the first line segments representing obstacles. It can be seen that detecting obstacles based on the UZ-disparity map is more efficient and more accurate.
Step 103: laterally connecting the plurality of first line segments to obtain a second line segment without breakpoints, so as to determine, according to the second line segment, the imaged width of an obstacle and the position of the obstacle in the sparse disparity map.
From the horizontal segments exemplified in Fig. 1F, combined with the characteristics of the sparse disparity map described above (valid disparity values exist only at edges, contours, and the like), it can be seen that in Fig. 1F the same obstacle is essentially "split" into a left part and a right part. Therefore, in the embodiments of the present application, for two first line segments that are adjacent on the left and right, if the two first line segments satisfy a second preset condition, the two first line segments are laterally connected to obtain a second line segment representing the obstacle. The left endpoint of the second line segment is the left endpoint of the first line segment located on the left of the two first line segments, and the right endpoint of the second line segment is the right endpoint of the first line segment located on the right of the two first line segments. It can be seen that the imaged width of the obstacle can be determined from the second line segment. It should be noted that, in order to distinguish it from the actual width of the obstacle, the width of the obstacle as presented in the original image captured by the binocular camera is referred to here as the imaged width.
It should be noted that, for the UZ-disparity map, "adjacent" as described above means that two first line segments are located in the same row and, in the horizontal direction, there is no other first line segment between them; in that case the two first line segments are said to be adjacent. For the U-disparity map, because a first line segment is obtained by superimposing horizontal segments from multiple rows, two first line segments are said to be adjacent if they contain horizontal segments located in the same row and, in the vertical direction, there is no other first line segment between them that contains a horizontal segment in the same row as either of the two first line segments.
In one embodiment, combined with the characteristics of the sparse disparity map (valid disparity values exist only at edges, contours, and the like), it can be seen that a first line segment representing an obstacle exhibits the characteristic of being "heavy at both ends and light in the middle" in the U-disparity map or the UZ-disparity map; this characteristic means that the sum of the pixel values of the pixels at the left and right ends is greater than the sum of the pixel values of the pixels in the middle part. On this basis, the above second preset condition can include:
(1) For the first line segment on the left (hereinafter referred to as the left line segment), the sum of the pixel values of the pixels in its left half is greater than the sum of the pixel values of the pixels in its right half;
correspondingly, for the first line segment on the right (hereinafter referred to as the right line segment), the sum of the pixel values of the pixels in its right half is greater than the sum of the pixel values of the pixels in its left half.
(2) In the same row, the difference between the pixel value sum L of the N pixels located in the left half of the left line segment and the pixel value sum R of the N pixels located in the right half of the right line segment is less than a set difference threshold, where the set difference threshold can be 1/4 of the greater of L and R.
Here, N is the smaller of the number of pixels in the left half of the left line segment and the number of pixels in the right half of the right line segment. For example, assuming that the left half of the left line segment has 5 pixels and the right half of the right line segment has 3 pixels, N is 3.
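By way of illustration only, a minimal sketch of checking this second preset condition for two adjacent first line segments taken from a single row is given below; representing each segment as a one-dimensional array of pixel values, splitting it at its midpoint, and the choice of which N pixels are compared are assumptions:

```python
import numpy as np

def second_condition_ok(left_seg, right_seg):
    """left_seg, right_seg: 1-D arrays of pixel values of two left/right adjacent first line segments.
    Checks condition (1), a 'heavy outer end' on each segment, and condition (2),
    that the two outer-end sums are close."""
    left_seg = np.asarray(left_seg, dtype=np.float64)
    right_seg = np.asarray(right_seg, dtype=np.float64)
    l_mid = len(left_seg) // 2
    r_mid = len(right_seg) // 2
    left_outer = left_seg[:l_mid]        # left half of the left segment (outer edge of the obstacle)
    right_outer = right_seg[r_mid:]      # right half of the right segment (outer edge of the obstacle)

    # Condition (1): each segment is heavier at its outer end.
    cond1 = (left_outer.sum() > left_seg[l_mid:].sum()
             and right_outer.sum() > right_seg[:r_mid].sum())

    # Condition (2): the two outer-end sums over N pixels differ by less than the set
    # difference threshold, taken as 1/4 of the greater of L and R.
    n = min(len(left_outer), len(right_outer))
    L = left_outer[:n].sum()
    R = right_outer[-n:].sum()
    cond2 = abs(L - R) < max(L, R) / 4.0

    return bool(cond1 and cond2)
```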
It should be noted that, in general, the same obstacle is essentially "split" into a left part and a right part, but the case in which the same obstacle is "split" into several parts cannot be excluded. In that case, assuming that there are three first line segments A, B and C in the U-disparity map or UZ-disparity map starting from the leftmost side, if A and B satisfy the above second preset condition, A and B are laterally connected; assuming that B' is obtained after the lateral connection, and there also exists a segment C adjacent to B' such that B' and C satisfy the above second preset condition, B' and C continue to be laterally connected, and the first line segment obtained after the final lateral connection is the second line segment representing the obstacle.
Step 104: in the sparse disparity map, counting the number of valid disparity values belonging to a set disparity distribution in each row of the column region corresponding to the second line segment.
In the embodiments of the present application, after the second line segment representing the obstacle is obtained in the U-disparity map or the UZ-disparity map, the second line segment can be placed into the sparse disparity map exemplified in Fig. 1C, and the imaged height of the obstacle is obtained based on the position of the second line segment in the sparse disparity map and the imaged width of the obstacle determined above.
It should be noted that, in order to distinguish it from the actual height of the obstacle, the height of the obstacle as presented in the original image captured by the binocular camera is referred to in the present application as the imaged height.
The process of determining the imaged height of the obstacle is described as follows:
Assume that, in the sparse disparity map exemplified in Fig. 1C, the left endpoint of the second line segment is in the 8th column and the right endpoint is in the 42nd column; then the region between the 8th column and the 42nd column (including the 8th column and the 42nd column) is the column region corresponding to the second line segment.
Assume that the distance value corresponding to the second line segment is 30 meters; then, with reference to the above one-dimensional array A, the disparity value range represented by A[29] to A[30] can be used as the set disparity distribution.
Then, in the column region corresponding to the above second line segment, the number of disparity values belonging to the above set disparity distribution is counted row by row for each row. For convenience of description, the disparity values belonging to the above set disparity distribution are referred to in the present application as valid disparity values.
In an optional implementation, the number of valid disparity values in each row can be stored in a one-dimensional array B. For example, assuming that the height of the sparse disparity map is 100 pixel rows, the one-dimensional array B can include 100 elements, where, in order from top to bottom, B[0] represents the number of valid disparity values in the 1st row, B[1] represents the number of valid disparity values in the 2nd row, B[2] represents the number of valid disparity values in the 3rd row, and so on, with B[99] representing the number of valid disparity values in the 100th row.
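By way of illustration only, a minimal sketch of this per-row counting is given below; the inclusive handling of the disparity bounds and the index values in the example call are assumptions:

```python
import numpy as np

def count_valid_disparities_per_row(sparse_disp, col_left, col_right, d_low, d_high):
    """Array B: B[v] is the number of pixels in row v, columns col_left..col_right (inclusive),
    whose disparity value lies within the set disparity distribution [d_low, d_high]."""
    region = sparse_disp[:, col_left:col_right + 1]
    valid = (region >= d_low) & (region <= d_high)
    return valid.sum(axis=1)

# Example usage for the column region of the second line segment (columns 8 to 42)
# and a set disparity distribution taken from the lookup array A (assumed indexing):
# B = count_valid_disparities_per_row(sparse_disp, 8, 42, A[30], A[29])
```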
Step 105: determining an upper boundary row and a lower boundary row of the obstacle in the sparse disparity map according to the number of valid disparity values in each row and a preset count threshold, so as to determine the imaged height of the obstacle according to the upper boundary row and the lower boundary row.
In the embodiments of the present application, detection can be performed row by row downwards starting from the topmost row of the sparse disparity map. When a row in which the number of valid disparity values is greater than the preset threshold is detected, the currently detected row is determined as the upper boundary row of the obstacle in the sparse disparity map. A row detection range starting from the upper boundary row is then set based on a set tolerance, and detection is performed row by row within this row detection range. When a row in which the number of valid disparity values is not greater than the preset threshold is detected, the row preceding the currently detected row is determined as the lower boundary row of the obstacle in the sparse disparity map.
Specifically, since the number of valid disparity values in each row has been stored in the one-dimensional array B in step 104, the one-dimensional array B can be traversed. When the traversed element value is greater than the preset threshold, the row corresponding to this element is determined as the upper boundary row of the obstacle in the sparse disparity map, for example the 15th row. Afterwards, within the row detection range, for example from the 15th row to the 30th row (including the 30th row), detection is performed row by row, namely B[14] to B[29] are traversed. If the traversed element value is greater than the preset threshold, the traversal continues; if the traversed element value is not greater than the preset threshold, the row corresponding to the element preceding the traversed element is determined as the lower boundary row of the obstacle in the sparse disparity map, for example the 20th row.
On this basis, it can be obtained that the imaged height of the obstacle in the sparse disparity map is 5 rows.
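By way of illustration only, a minimal sketch of determining the two boundary rows from the array B is given below; the length of the row detection range below the upper boundary row is an assumption:

```python
def find_boundary_rows(B, count_threshold, row_range=15):
    """Returns (upper_boundary_row, lower_boundary_row) as 0-based row indices,
    or None if no row exceeds the preset count threshold.

    B: per-row counts of valid disparity values.
    row_range: length of the row detection range below the upper boundary row (assumed)."""
    upper = None
    for v, count in enumerate(B):
        if count > count_threshold:
            upper = v                      # first row exceeding the threshold: upper boundary row
            break
    if upper is None:
        return None
    lower = upper
    for v in range(upper + 1, min(upper + row_range + 1, len(B))):
        if B[v] > count_threshold:
            lower = v                      # still part of the obstacle
        else:
            break                          # the preceding row is the lower boundary row
    return upper, lower

# Imaged height in rows, following the example above:
# upper, lower = find_boundary_rows(B, count_threshold=10)
# height = lower - upper
```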
Step 106: determining the target obstacle in the first image or the second image according to the imaged width and imaged height of the obstacle and the position of the obstacle in the sparse disparity map.
Through the above steps, the imaged width and imaged height of the obstacle and the position of the obstacle in the sparse disparity map are obtained. Since the sparse disparity map has the same width and height as the first image and the second image, the target obstacle can be determined in the first image or the second image. Fig. 1I is a schematic diagram of the obstacle detection effect of the present application.
As can be seen from the above embodiments, after the plurality of first line segments representing an obstacle are detected based on the U-disparity map, the first line segments are laterally connected to obtain a second line segment whose length represents the imaged width of the obstacle; the imaged height of the obstacle is then determined from the second line segment in combination with the sparse disparity map; and finally the target obstacle is determined in the original images captured by the binocular camera based on the imaged width, the imaged height, and the position of the obstacle in the sparse disparity map. In this detection process, the line segments representing the same obstacle are laterally connected before the target obstacle is determined, so that, compared with the prior art, the situation in which a single obstacle is detected as two or more obstacles is avoided, which improves the accuracy of the detection result and also avoids the additional merging of two or more detected obstacles that would otherwise be needed to improve that accuracy. Since the lateral connection of line segments is simple and involves little computation, the method proposed in the present application can improve the efficiency of obstacle detection based on a sparse disparity map.
Corresponding to the embodiments of the obstacle detection method described above, the present application also provides embodiments of an obstacle detection device.
The embodiments of the obstacle detection device of the present application can be applied to a network device. The device embodiments can be implemented by software, or by hardware, or by a combination of software and hardware. Taking software implementation as an example, as a device in the logical sense, it is formed by the processor of the network device in which it is located reading the corresponding computer program instructions from the non-volatile memory into the internal memory and running them. In terms of hardware, Fig. 4 shows a hardware structure diagram of a network device in which the obstacle detection device of the present application is located. In addition to the processor 41, internal memory 42, network interface 43, and non-volatile memory 44 shown in Fig. 4, the network device in which the device of the embodiments is located may also include other hardware according to its actual functions, which is not described here again.
Referring to Fig. 5, which is a block diagram of an embodiment of the obstacle detection device of the present application, the device can include: a disparity map acquisition module 51, a detection module 52, a merging module 53, a counting module 54, a height determination module 55, and a target determination module 56.
The disparity map acquisition module 51 can be configured to obtain, based on a first image and a second image captured by a binocular camera, a sparse disparity map and a U-disparity map that do not include the road surface;
the detection module 52 can be configured to detect, based on the U-disparity map, a plurality of first line segments that are discontinuous in the same horizontal direction;
the merging module 53 can be configured to laterally connect the plurality of first line segments to obtain a second line segment without breakpoints, so as to determine, according to the second line segment, the imaged width of an obstacle and the position of the obstacle in the sparse disparity map;
the counting module 54 can be configured to count, in the sparse disparity map, the number of valid disparity values belonging to a set disparity distribution in each row of the column region corresponding to the second line segment;
the height determination module 55 can be configured to determine an upper boundary row and a lower boundary row of the obstacle in the sparse disparity map according to the number of valid disparity values in each row and a preset count threshold, so as to determine the imaged height of the obstacle according to the upper boundary row and the lower boundary row;
the target determination module 56 can be configured to determine a target obstacle in the first image or the second image according to the imaged width and imaged height of the obstacle and the position of the obstacle in the sparse disparity map.
In one embodiment, the detection module 52 can include (not shown in Fig. 5):
a first detection submodule, configured to detect row by row in the U-disparity map to obtain the horizontal segments in each row;
a second detection submodule, configured to detect row by row downwards starting from the horizontal segment located in the first row;
a first merging submodule, configured to, when a horizontal segment satisfying a first preset condition is detected within the current row range, superimpose the horizontal segment located in the first row with the horizontal segment satisfying the first preset condition to obtain a superimposed horizontal segment;
a third detection submodule, configured to detect row by row downwards from the superimposed horizontal segment;
a second merging submodule, configured to, when a horizontal segment satisfying the first preset condition is detected within the current setting range, continue to superimpose the superimposed horizontal segment with the horizontal segment satisfying the first preset condition to obtain a new superimposed horizontal segment, and return to the step, performed by the third detection submodule, of detecting row by row downwards from the superimposed horizontal segment;
a determination submodule, configured to, if no horizontal segment satisfying the first preset condition is detected within the current setting range, determine the finally obtained superimposed horizontal segment as a first line segment.
In one embodiment, the detection module 52 can include (not shown in Fig. 5):
a calculation submodule, configured to calculate the correspondence between distance values and disparity values based on the maximum detection distance of the binocular camera and the disparity values of the pixels in the sparse disparity map;
a UZ map determination submodule, configured to obtain a UZ-disparity map based on the U-disparity map and the correspondence, the horizontal axis of the UZ-disparity map being the same as that of the U-disparity map and the vertical axis of the UZ-disparity map representing detection distance values;
a fourth detection submodule, configured to detect row by row in the UZ-disparity map to obtain a plurality of first line segments that are discontinuous in the same horizontal direction.
In one embodiment, the merging module 53 is specifically configured to:
in the disparity map from which the first line segments are obtained, for two first line segments that are adjacent on the left and right, if the two first line segments satisfy a second preset condition, laterally connect the two first line segments to obtain a connected first line segment;
if, in the disparity map from which the first line segments are obtained, there exists another first line segment horizontally adjacent to the connected first line segment, and the connected first line segment and the other first line segment satisfy the second preset condition, laterally connect the connected first line segment and the other first line segment to obtain a new connected first line segment, until no other first line segment adjacent to the connected first line segment exists, at which point the finally obtained connected first line segment is determined as the second line segment.
In one embodiment, the height determining module 55 can include (not shown in Fig. 5):
a fifth detection sub-module, configured to detect downwards row by row in the column region, starting from the top row;
an upper-boundary-row determination sub-module, configured to, when a row in which the number of valid disparity values is greater than a preset threshold is detected, determine the currently detected row as the upper boundary row of the obstacle in the sparse disparity map;
a range determination sub-module, configured to determine a row detection range based on the upper boundary row;
a sixth detection sub-module, configured to detect row by row within the row detection range;
a lower-boundary-row determination sub-module, configured to, when a row in which the number of valid disparity values is not greater than the preset threshold is detected, determine the row preceding the currently detected row as the lower boundary row of the obstacle in the sparse disparity map.
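The boundary-row search performed by these sub-modules can be sketched as follows; the count threshold, the size of the row detection range and the helper name find_vertical_extent are assumptions for the example, not part of the embodiment:

```python
import numpy as np

def find_vertical_extent(sparse_disp, col_start, col_end, count_threshold=3, range_rows=60):
    """Scan the column region [col_start, col_end] of the sparse disparity map from the top row
    downwards: the first row whose number of valid disparity values exceeds `count_threshold`
    is taken as the upper boundary row; scanning then continues within a row detection range of
    `range_rows` rows, and the row before the first row falling back to the threshold or below
    is taken as the lower boundary row."""
    counts = (sparse_disp[:, col_start:col_end + 1] > 0).sum(axis=1)   # valid disparities per row
    top = next((r for r, c in enumerate(counts) if c > count_threshold), None)
    if top is None:
        return None, None                                              # no obstacle in this column region
    bottom = min(top + range_rows, len(counts) - 1)                    # default if no drop is found
    for r in range(top + 1, min(top + range_rows, len(counts))):
        if counts[r] <= count_threshold:
            bottom = r - 1                                             # previous row is the lower boundary
            break
    return top, bottom
```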
For the functions and effects of the units in the above device, reference may be made to the implementation of the corresponding steps in the above method, and the details are not repeated here.
Since the device embodiment essentially corresponds to the method embodiment, for related details reference may be made to the description of the method embodiment. The device embodiment described above is merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the application. Those of ordinary skill in the art can understand and implement the solution without creative effort.
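To show how the modules described above fit together, a rough end-to-end sketch (reusing the hypothetical helpers defined in the earlier snippets and, for brevity, treating each U disparity row independently and omitting the vertical superposition step) could be:

```python
def detect_obstacles(sparse_disp, u_disp, max_gap=5, count_threshold=3):
    """Rough end-to-end sketch: U disparity rows -> first straight line segments -> lateral
    connection into second straight line segments -> column region -> upper and lower boundary
    rows -> bounding box of the target obstacle in the first or second image. Reuses the
    hypothetical helpers row_segments, connect_segments and find_vertical_extent from the
    earlier snippets."""
    boxes = []
    for u_row in range(u_disp.shape[0]):
        firsts = row_segments(u_disp[u_row])
        if not firsts:
            continue
        for col_start, col_end in connect_segments(firsts, max_gap=max_gap):
            top, bottom = find_vertical_extent(sparse_disp, col_start, col_end,
                                               count_threshold=count_threshold)
            if top is not None:
                # Shooting width = col_end - col_start + 1, shooting height = bottom - top + 1;
                # (col_start, top, col_end, bottom) locates the target obstacle in the image.
                boxes.append((col_start, top, col_end, bottom))
    return boxes
```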
The foregoing is merely the preferred embodiments of the application and is not intended to limit the application. Any modification, equivalent substitution, improvement and the like made within the spirit and principles of the application shall fall within the scope of protection of the application.

Claims (10)

1. An obstacle detection method, characterised in that the method comprises:
obtaining, based on a first image and a second image collected by a binocular camera, a sparse disparity map and a U disparity map that do not include the road surface;
detecting, based on the U disparity map, a plurality of first straight line segments that are discontinuous in the same horizontal direction;
laterally connecting the plurality of first straight line segments to obtain a second straight line segment without breakpoints, so as to determine, according to the second straight line segment, the shooting width of an obstacle and the position of the obstacle in the sparse disparity map;
in the sparse disparity map, counting, for each row of the column region corresponding to the second straight line segment, the number of valid disparity values that belong to a set disparity distribution;
determining, according to the number of valid disparity values in each row and a preset number threshold, an upper boundary row and a lower boundary row of the obstacle in the sparse disparity map, so as to determine the shooting height of the obstacle according to the upper boundary row and the lower boundary row;
determining a target obstacle in the first image or the second image according to the shooting width and the shooting height of the obstacle and the position of the obstacle in the sparse disparity map.
2. The method according to claim 1, characterised in that detecting, based on the U disparity map, the plurality of first straight line segments that are discontinuous in the same horizontal direction comprises:
detecting row by row in the U disparity map to obtain horizontal line segments in each row;
detecting downwards row by row starting from a horizontal line segment located in the first row, and, when a horizontal line segment meeting a first set condition is detected within the range of the current row, superposing the horizontal line segment located in the first row with the horizontal line segment meeting the first set condition to obtain a superposed horizontal line segment;
detecting downwards row by row starting from the superposed horizontal line segment, and, when a horizontal line segment meeting the first set condition is detected within a current set range, continuing to superpose the superposed horizontal line segment with the horizontal line segment meeting the first set condition to obtain a new superposed horizontal line segment, and returning to the step of detecting downwards row by row starting from the superposed horizontal line segment;
if no horizontal line segment meeting the first set condition is detected within the current set range, determining the finally obtained superposed horizontal line segment as a first straight line segment.
3. The method according to claim 1, characterised in that detecting, based on the U disparity map, the plurality of first straight line segments that are discontinuous in the same horizontal direction comprises:
calculating a correspondence between distance values and disparity values based on the maximum detection distance of the binocular camera and the disparity values of the pixels in the sparse disparity map;
obtaining a UZ disparity map based on the U disparity map and the correspondence, wherein the horizontal axis of the UZ disparity map is identical to that of the U disparity map, and the vertical axis of the UZ disparity map represents detection distance values;
detecting row by row in the UZ disparity map to obtain the plurality of first straight line segments that are discontinuous in the same horizontal direction.
4. The method according to claim 1 or 3, characterised in that laterally connecting the plurality of first straight line segments to obtain a second straight line segment without breakpoints comprises:
in the disparity map from which the first straight line segments are obtained, for two first straight line segments that are adjacent from left to right, if the two first straight line segments meet a second set condition, laterally connecting the two first straight line segments to obtain a connected first straight line segment;
if, in the disparity map from which the first straight line segments are obtained, there is another first straight line segment horizontally adjacent to the connected first straight line segment, and the connected first straight line segment and the other first straight line segment meet the second set condition, laterally connecting the connected first straight line segment and the other first straight line segment to obtain a new connected first straight line segment, until no other first straight line segment adjacent to the connected first straight line segment exists, and determining the finally obtained connected first straight line segment as the second straight line segment.
5. The method according to claim 1, characterised in that determining, according to the number of valid disparity values in each row and the preset number threshold, the upper boundary row and the lower boundary row of the obstacle in the sparse disparity map comprises:
detecting downwards row by row in the column region starting from the top row, and, when a row in which the number of valid disparity values is greater than a preset threshold is detected, determining the currently detected row as the upper boundary row of the obstacle in the sparse disparity map;
determining a row detection range based on the upper boundary row;
detecting row by row within the row detection range, and, when a row in which the number of valid disparity values is not greater than the preset threshold is detected, determining the row preceding the currently detected row as the lower boundary row of the obstacle in the sparse disparity map.
6. An obstacle detection device, characterised in that the device comprises:
a disparity map acquisition module, configured to obtain, based on a first image and a second image collected by a binocular camera, a sparse disparity map and a U disparity map that do not include the road surface;
a detection module, configured to detect, based on the U disparity map, a plurality of first straight line segments that are discontinuous in the same horizontal direction;
a merging treatment module, configured to laterally connect the plurality of first straight line segments to obtain a second straight line segment without breakpoints, so as to determine, according to the second straight line segment, the shooting width of an obstacle and the position of the obstacle in the sparse disparity map;
a statistical module, configured to count, in the sparse disparity map, for each row of the column region corresponding to the second straight line segment, the number of valid disparity values that belong to a set disparity distribution;
a height determining module, configured to determine, according to the number of valid disparity values in each row and a preset number threshold, an upper boundary row and a lower boundary row of the obstacle in the sparse disparity map, so as to determine the shooting height of the obstacle according to the upper boundary row and the lower boundary row;
a target determination module, configured to determine a target obstacle in the first image or the second image according to the shooting width and the shooting height of the obstacle and the position of the obstacle in the sparse disparity map.
7. The device according to claim 6, characterised in that the detection module comprises:
a first detection sub-module, configured to detect row by row in the U disparity map to obtain horizontal line segments in each row;
a second detection sub-module, configured to detect downwards row by row starting from a horizontal line segment located in the first row;
a first merging sub-module, configured to, when a horizontal line segment meeting a first set condition is detected within the range of the current row, superpose the horizontal line segment located in the first row with the horizontal line segment meeting the first set condition to obtain a superposed horizontal line segment;
a third detection sub-module, configured to detect downwards row by row starting from the superposed horizontal line segment;
a second merging sub-module, configured to, when a horizontal line segment meeting the first set condition is detected within a current set range, continue to superpose the superposed horizontal line segment with the horizontal line segment meeting the first set condition to obtain a new superposed horizontal line segment, and return to the step, performed by the third detection sub-module, of detecting downwards row by row starting from the superposed horizontal line segment;
a determination sub-module, configured to, if no horizontal line segment meeting the first set condition is detected within the current set range, determine the finally obtained superposed horizontal line segment as a first straight line segment.
8. The device according to claim 6, characterised in that the detection module comprises:
a calculating sub-module, configured to calculate a correspondence between distance values and disparity values based on the maximum detection distance of the binocular camera and the disparity values of the pixels in the sparse disparity map;
a UZ-disparity-map determination sub-module, configured to obtain a UZ disparity map based on the U disparity map and the correspondence, wherein the horizontal axis of the UZ disparity map is identical to that of the U disparity map, and the vertical axis of the UZ disparity map represents detection distance values;
a fourth detection sub-module, configured to detect row by row in the UZ disparity map to obtain a plurality of first straight line segments that are discontinuous in the same horizontal direction.
9. The device according to claim 6 or 8, characterised in that the merging treatment module is specifically configured to:
in the disparity map from which the first straight line segments are obtained, for two first straight line segments that are adjacent from left to right, if the two first straight line segments meet a second set condition, laterally connect the two first straight line segments to obtain a connected first straight line segment;
if, in the disparity map from which the first straight line segments are obtained, there is another first straight line segment horizontally adjacent to the connected first straight line segment, and the connected first straight line segment and the other first straight line segment meet the second set condition, laterally connect the connected first straight line segment and the other first straight line segment to obtain a new connected first straight line segment, until no other first straight line segment adjacent to the connected first straight line segment exists, and determine the finally obtained connected first straight line segment as the second straight line segment.
10. The device according to claim 6, characterised in that the height determining module comprises:
a fifth detection sub-module, configured to detect downwards row by row in the column region, starting from the top row;
an upper-boundary-row determination sub-module, configured to, when a row in which the number of valid disparity values is greater than a preset threshold is detected, determine the currently detected row as the upper boundary row of the obstacle in the sparse disparity map;
a range determination sub-module, configured to determine a row detection range based on the upper boundary row;
a sixth detection sub-module, configured to detect row by row within the row detection range;
a lower-boundary-row determination sub-module, configured to, when a row in which the number of valid disparity values is not greater than the preset threshold is detected, determine the row preceding the currently detected row as the lower boundary row of the obstacle in the sparse disparity map.
CN201711012442.8A 2017-10-26 2017-10-26 A kind of obstacle detection method and device Active CN107729856B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201711012442.8A CN107729856B (en) 2017-10-26 2017-10-26 A kind of obstacle detection method and device
PCT/CN2018/096079 WO2019080557A1 (en) 2017-10-26 2018-07-18 Obstacle detection method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711012442.8A CN107729856B (en) 2017-10-26 2017-10-26 A kind of obstacle detection method and device

Publications (2)

Publication Number Publication Date
CN107729856A (en) 2018-02-23
CN107729856B (en) 2019-08-23

Family

ID=61212849

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711012442.8A Active CN107729856B (en) 2017-10-26 2017-10-26 A kind of obstacle detection method and device

Country Status (2)

Country Link
CN (1) CN107729856B (en)
WO (1) WO2019080557A1 (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108596899A (en) * 2018-04-27 2018-09-28 海信集团有限公司 Road flatness detection method, device and equipment
CN109446886A (en) * 2018-09-07 2019-03-08 百度在线网络技术(北京)有限公司 Obstacle detection method, device, equipment and storage medium based on unmanned vehicle
WO2019080557A1 (en) * 2017-10-26 2019-05-02 海信集团有限公司 Obstacle detection method and apparatus
CN110069990A (en) * 2019-03-18 2019-07-30 北京中科慧眼科技有限公司 A kind of height-limiting bar detection method, device and automated driving system
CN110378168A (en) * 2018-04-12 2019-10-25 海信集团有限公司 The method, apparatus and terminal of polymorphic type barrier fusion
CN110472486A (en) * 2019-07-03 2019-11-19 北京三快在线科技有限公司 A kind of shelf obstacle recognition method, device, equipment and readable storage medium storing program for executing
CN110633600A (en) * 2018-06-21 2019-12-31 海信集团有限公司 Obstacle detection method and device
CN111243003A (en) * 2018-11-12 2020-06-05 海信集团有限公司 Vehicle-mounted binocular camera and method and device for detecting road height limiting rod
CN111310663A (en) * 2020-02-17 2020-06-19 北京三快在线科技有限公司 Road fence detection method, device, equipment and storage medium
CN112116644A (en) * 2020-08-28 2020-12-22 辽宁石油化工大学 Vision-based obstacle detection method and device and obstacle distance calculation method and device
WO2022041737A1 (en) * 2020-08-28 2022-03-03 北京石头世纪科技股份有限公司 Distance measuring method and apparatus, robot, and storage medium
US20220063608A1 (en) * 2020-08-26 2022-03-03 Carvi Inc. Method of recognizing median strip and predicting risk of collision through analysis of image

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112149458A (en) * 2019-06-27 2020-12-29 商汤集团有限公司 Obstacle detection method, intelligent driving control method, device, medium, and apparatus

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5937079A (en) * 1996-09-05 1999-08-10 Daimler-Benz Ag Method for stereo image object detection
CN103679127A (en) * 2012-09-24 2014-03-26 株式会社理光 Method and device for detecting drivable area of road pavement
CN103955948A (en) * 2014-04-03 2014-07-30 西北工业大学 Method for detecting space moving object in dynamic environment
CN104112268A (en) * 2013-04-22 2014-10-22 株式会社理光 Sparse parallax image processing method, sparse parallax image processing device, object detection method, and object detection device
CN104217208A (en) * 2013-06-03 2014-12-17 株式会社理光 Target detection method and device
CN104899855A (en) * 2014-03-06 2015-09-09 株式会社日立制作所 Three-dimensional obstacle detection method and apparatus
CN105550665A (en) * 2016-01-15 2016-05-04 北京理工大学 Method for detecting pilotless automobile through area based on binocular vision
CN105740802A (en) * 2016-01-28 2016-07-06 北京中科慧眼科技有限公司 Disparity map-based obstacle detection method and device as well as automobile driving assistance system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107729856B (en) * 2017-10-26 2019-08-23 海信集团有限公司 A kind of obstacle detection method and device

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5937079A (en) * 1996-09-05 1999-08-10 Daimler-Benz Ag Method for stereo image object detection
CN103679127A (en) * 2012-09-24 2014-03-26 株式会社理光 Method and device for detecting drivable area of road pavement
CN104112268A (en) * 2013-04-22 2014-10-22 株式会社理光 Sparse parallax image processing method, sparse parallax image processing device, object detection method, and object detection device
CN104217208A (en) * 2013-06-03 2014-12-17 株式会社理光 Target detection method and device
CN104899855A (en) * 2014-03-06 2015-09-09 株式会社日立制作所 Three-dimensional obstacle detection method and apparatus
CN103955948A (en) * 2014-04-03 2014-07-30 西北工业大学 Method for detecting space moving object in dynamic environment
CN105550665A (en) * 2016-01-15 2016-05-04 北京理工大学 Method for detecting pilotless automobile through area based on binocular vision
CN105740802A (en) * 2016-01-28 2016-07-06 北京中科慧眼科技有限公司 Disparity map-based obstacle detection method and device as well as automobile driving assistance system

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019080557A1 (en) * 2017-10-26 2019-05-02 海信集团有限公司 Obstacle detection method and apparatus
CN110378168B (en) * 2018-04-12 2023-05-30 海信集团有限公司 Method, device and terminal for fusing multiple types of barriers
CN110378168A (en) * 2018-04-12 2019-10-25 海信集团有限公司 The method, apparatus and terminal of polymorphic type barrier fusion
CN108596899A (en) * 2018-04-27 2018-09-28 海信集团有限公司 Road flatness detection method, device and equipment
CN110633600A (en) * 2018-06-21 2019-12-31 海信集团有限公司 Obstacle detection method and device
CN110633600B (en) * 2018-06-21 2023-04-25 海信集团有限公司 Obstacle detection method and device
US11043002B2 (en) 2018-09-07 2021-06-22 Baidu Online Network Technology Co., Ltd. Obstacle detecting method and obstacle detecting apparatus based on unmanned vehicle, and device, and storage medium
CN109446886B (en) * 2018-09-07 2020-08-25 百度在线网络技术(北京)有限公司 Obstacle detection method, device, equipment and storage medium based on unmanned vehicle
CN109446886A (en) * 2018-09-07 2019-03-08 百度在线网络技术(北京)有限公司 Obstacle detection method, device, equipment and storage medium based on unmanned vehicle
CN111243003A (en) * 2018-11-12 2020-06-05 海信集团有限公司 Vehicle-mounted binocular camera and method and device for detecting road height limiting rod
CN110069990A (en) * 2019-03-18 2019-07-30 北京中科慧眼科技有限公司 A kind of height-limiting bar detection method, device and automated driving system
CN110472486B (en) * 2019-07-03 2021-05-11 北京三快在线科技有限公司 Goods shelf obstacle identification method, device, equipment and readable storage medium
CN110472486A (en) * 2019-07-03 2019-11-19 北京三快在线科技有限公司 A kind of shelf obstacle recognition method, device, equipment and readable storage medium storing program for executing
CN111310663A (en) * 2020-02-17 2020-06-19 北京三快在线科技有限公司 Road fence detection method, device, equipment and storage medium
US20220063608A1 (en) * 2020-08-26 2022-03-03 Carvi Inc. Method of recognizing median strip and predicting risk of collision through analysis of image
US11634124B2 (en) * 2020-08-26 2023-04-25 Carvi Inc. Method of recognizing median strip and predicting risk of collision through analysis of image
CN112116644A (en) * 2020-08-28 2020-12-22 辽宁石油化工大学 Vision-based obstacle detection method and device and obstacle distance calculation method and device
WO2022041737A1 (en) * 2020-08-28 2022-03-03 北京石头世纪科技股份有限公司 Distance measuring method and apparatus, robot, and storage medium

Also Published As

Publication number Publication date
CN107729856B (en) 2019-08-23
WO2019080557A1 (en) 2019-05-02

Similar Documents

Publication Publication Date Title
CN107729856B (en) A kind of obstacle detection method and device
CN105225482B (en) Vehicle detecting system and method based on binocular stereo vision
CN108520536B (en) Disparity map generation method and device and terminal
JP6131704B2 (en) Detection method for continuous road segment and detection device for continuous road segment
CN104079912B (en) Image processing apparatus and image processing method
CN111462503B (en) Vehicle speed measuring method and device and computer readable storage medium
JP2013109760A (en) Target detection method and target detection system
JP6111745B2 (en) Vehicle detection method and apparatus
CN110956069B (en) Method and device for detecting 3D position of pedestrian, and vehicle-mounted terminal
CN107909036A (en) A kind of Approach for road detection and device based on disparity map
CN111553252A (en) Road pedestrian automatic identification and positioning method based on deep learning and U-V parallax algorithm
EP3605460A1 (en) Information processing method and apparatus, cloud processing device and computer program product
CN105404888A (en) Saliency object detection method integrated with color and depth information
JP2006236104A (en) Object determining device
CN108460333B (en) Ground detection method and device based on depth map
CN108510540A (en) Stereoscopic vision video camera and its height acquisition methods
CN109493373B (en) Stereo matching method based on binocular stereo vision
US20230394832A1 (en) Method, system and computer readable media for object detection coverage estimation
CN105678720A (en) Image matching judging method and image matching judging device for panoramic stitching
CN104243970A (en) 3D drawn image objective quality evaluation method based on stereoscopic vision attention mechanism and structural similarity
CN109961092B (en) Binocular vision stereo matching method and system based on parallax anchor point
CN107977649A (en) A kind of obstacle recognition method, device and terminal
CN107578419B (en) Stereo image segmentation method based on consistency contour extraction
CN105138979A (en) Method for detecting the head of moving human body based on stereo visual sense
CN104252707B (en) Method for checking object and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant