CN108269281B - Obstacle avoidance technical method based on binocular vision - Google Patents


Info

Publication number
CN108269281B
CN108269281B (application CN201611252687.3A)
Authority
CN
China
Prior art keywords
point cloud
camera
ground
obstacle
height
Prior art date
Legal status
Active
Application number
CN201611252687.3A
Other languages
Chinese (zh)
Other versions
CN108269281A (en)
Inventor
孟伟
范柘
Current Assignee
Shanghai Aware Information Technology Co ltd
Wuxi Dingshi Technology Co ltd
Original Assignee
Wuxi Dingshi Technology Co ltd
Shanghai Aware Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Wuxi Dingshi Technology Co ltd, Shanghai Aware Information Technology Co ltd filed Critical Wuxi Dingshi Technology Co ltd
Priority to CN201611252687.3A priority Critical patent/CN108269281B/en
Publication of CN108269281A publication Critical patent/CN108269281A/en
Application granted granted Critical
Publication of CN108269281B publication Critical patent/CN108269281B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20228Disparity calculation for image-based rendering
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T90/00Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation

Landscapes

  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)

Abstract

The invention uses a binocular camera to reconstruct, in real time, a three-dimensional point cloud of the scene in the direction of travel of a rubber-tired gantry (RTG) crane, and accurately segments the noise, ground, and obstacle point clouds through projection statistics and non-maximum suppression of the point cloud data along different dimensions. Ground repositioning effectively suppresses the influence of crane shaking on detection precision and robustness, improving both the detection accuracy and the detection range for obstacles. By learning the camera pose and monitoring the number and height of the points, abnormalities of the obstacle avoidance system are detected in real time. The method is of significance for improving port automation and safe port operation.

Description

Obstacle avoidance technical method based on binocular vision
Technical Field
The invention relates to a real-time on-site obstacle avoidance technology based on binocular vision, in particular to a binocular vision obstacle avoidance technique applicable to rubber-tired gantry crane (RTG) obstacle avoidance.
Background
With the rapid development of economic globalization, the demand for port automation is becoming increasingly urgent, and the rapid dispatch of containers directly affects port throughput. In highly automated port operations, vision technology has seen growing use in recent years because of its speed, precision, non-contact nature, and high degree of automation. Vision systems can replace much manual labor, raise the level of production automation, and improve monitoring precision, and they offer an effective solution where many conventional measurement methods fail.
Compared with a traditional vision system, a binocular vision system acquires not only image information but also timely and accurate depth information about the environment, enabling real-time, accurate three-dimensional reconstruction of the surrounding scene. This is significant for automatic container handling by the gantry crane: obstacles can be detected in real time, helping prevent losses of personnel and property caused by crane collisions and supporting safe port operation.
At present, binocular vision obstacle avoidance is widely applied to robots and unmanned aerial vehicles, but its measurement distance and measurement accuracy remain very limited. The present method extends both, so that small, distant obstacles can be found in time and accidents prevented in advance. In addition, the installation pose of the binocular camera is monitored in real time through consistency constraints, so that damage or displacement of the camera caused by unavoidable factors (jolts or impacts) is detected immediately.
Disclosure of Invention
The invention aims to apply binocular stereoscopic vision reconstruction to obstacle avoidance for the RTG crane. By applying projection statistics and non-maximum suppression to the reconstructed point cloud data along different dimensions, and by repositioning the ground, the obstacle, ground, and noise point clouds are accurately separated, achieving long-range real-time monitoring of small obstacles. Because jolts or other unavoidable factors during cart travel can change the camera installation pose substantially and degrade the analysis precision of the obstacle avoidance system, real-time monitoring of the binocular camera pose is designed: when the camera pose exceeds the system tolerance, the system is notified and ground learning is requested again, guaranteeing obstacle detection precision. The binocular camera installation pose can thus be monitored in real time. According to the technical scheme provided by the invention, the binocular vision obstacle avoidance method comprises the following steps:
First, real-time three-dimensional point cloud reconstruction of the scene in the crane's advancing direction is carried out with a binocular camera.
Second, camera attitude angles are learned and the point cloud is converted from the camera coordinate system (Oc-XcYcZc) to the ground coordinate system (Ow-XwYwZw).
Third, the obstacle, noise, and ground point clouds are separated; the width and height of the obstacle are measured; and width/height thresholds further distinguish obstacles from abnormal noise, ensuring the detection precision for small, distant obstacles.
Fourth, the abnormal state of the binocular camera is monitored in real time using the relation between the initial camera attitude angles and the ground point cloud.
The camera attitude angle learning and the conversion of the point cloud from the camera coordinate system (Oc-XcYcZc) to the ground coordinate system (Ow-XwYwZw) comprise the following steps:
(2.1) Fig. 1 shows the binocular camera installation. (Oc-XcYcZc) takes the left camera as the origin, the camera optical axis as the Zc axis, the horizontal direction of the imaging plane as the Xc axis, and the vertical direction of the imaging plane as the Yc axis. (Ow-XwYwZw) takes the ground directly below the left camera as the origin, the forward direction of the cart as the Zw axis, the height direction as the Yw axis, and the width direction as the Xw axis. Three-dimensional reconstruction using the disparity map yields the point cloud shown in fig. 3. A straight line is fitted to the point cloud by least squares, and the included angle ωr between the line and the OcXc axis is calculated; this approximates the roll angle of the camera:

ω3 ≈ ωr
(2.2) From ω3, the rotation matrix about the Zc axis is calculated:

R(ω3) = | cos ω3  -sin ω3  0 |
        | sin ω3   cos ω3  0 |
        |   0        0     1 |
The point cloud corrected by the rotation matrix is P = Ps R(ω3). For each image row, the mean Y coordinate of its reconstructed points in the rotated cloud is computed; a straight line is fitted to these means by least squares, and the included angle ω2 between the line and the Zc axis is the pitch angle;
(2.3) Using ω2, the rotation matrix about the Xc axis is computed:

R(ω2) = | 1    0        0     |
        | 0  cos ω2  -sin ω2 |
        | 0  sin ω2   cos ω2 |

Pw = P R(ω2)

The finally corrected ground point cloud and reference coordinate system are shown in fig. 4: the ground point cloud lies in the XwZw plane, and Yw represents its height;
(2.4) Averaging the y coordinates of all points Pw gives the camera mounting height H; the finally converted point cloud is Pg = Pw - H.
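Steps (2.1)-(2.4) can be sketched as follows. This is a simplified illustration, not the patent's implementation: the pitch line is fitted on all points rather than on per-row Y means, and the rotation signs assume the axis conventions of fig. 1.

```python
import math

def fit_slope(xs, ys):
    """Least-squares slope of y against x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

def learn_pose(cloud):
    """Estimate roll w3, pitch w2, and camera height H from a ground
    point cloud (list of (X, Y, Z) in camera coordinates), then level
    the cloud into the ground frame as in steps (2.1)-(2.4)."""
    # (2.1) roll: slope of Y against X approximates the angle wr to the OcXc axis
    w3 = math.atan(fit_slope([p[0] for p in cloud], [p[1] for p in cloud]))
    c3, s3 = math.cos(-w3), math.sin(-w3)          # undo the roll: rotate by -w3 about Zc
    P = [(c3 * x - s3 * y, s3 * x + c3 * y, z) for x, y, z in cloud]
    # (2.2) pitch: slope of the roll-corrected Y against Z gives the angle w2 to Zc
    w2 = math.atan(fit_slope([p[2] for p in P], [p[1] for p in P]))
    c2, s2 = math.cos(w2), math.sin(w2)            # undo the pitch: rotate about Xc
    Pw = [(x, c2 * y - s2 * z, s2 * y + c2 * z) for x, y, z in P]
    # (2.4) camera mounting height H = mean Y; subtract it so ground sits at Y = 0
    H = sum(p[1] for p in Pw) / len(Pw)
    Pg = [(x, y - H, z) for x, y, z in Pw]
    return w3, w2, H, Pg
```

Feeding a synthetic tilted ground plane through `learn_pose` recovers the tilt angles and returns a cloud whose ground height is centered at zero.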
The separation of the obstacle, noise, and ground point clouds, the measurement of obstacle width and height, and the further distinction of obstacles from abnormal noise using width/height thresholds comprise the following steps:
(3.1) For a point cloud containing the ground and obstacles as shown in fig. 5, histogram statistics are computed on the Z coordinate, i.e., on the distance from each point to the RTG cart, giving the histogram of fig. 6. The local maxima D_maxpress of the histogram (the unfilled bars in fig. 6) are found; a large number of dense points at a local maximum marks a suspected obstacle;
(3.2) The position block is extracted as follows: all pixels whose point in the cloud Pg has a Z coordinate near the local extremum D_maxpress are searched out, giving the white block shown on the left of fig. 7;
(3.3) If the Y coordinate of the point P in the Pg point cloud corresponding to the white block shown on the left of fig. 7 is greater than -30 cm and less than 15 cm, P is listed as a suspected ground point, finally giving the suspected ground point cloud P_gdubious corresponding to the white block. Histogram statistics on the Y coordinate of this point cloud give the graph of fig. 8; when the graph has one and only one local extremum P_gdubiouspress (the white bar in fig. 8), the ground is considered well repositioned, and the local ground height is H_i = P_gdubiouspress;
(3.4) Points of P_gdubious whose Y coordinate is less than H_i are removed, giving the final obstacle block shown on the right of fig. 7;
(3.5) The height mean Y_r of the point cloud corresponding to the R-th pixel row of the white block shown on the right of fig. 7 is calculated, as is the width mean X_c of the point cloud of the C-th column. Least squares gives the linear equations aR + bY + c = 0 (row value versus height) and dC + eX + f = 0 (column value versus width). Substituting the maximum and minimum row and column values into the equations yields the width extremes Xmax, Xmin and the height extremes Ymax, Ymin. The obstacle width is Wo = Xmax - Xmin and the height is Ho = Ymax - Ymin;
(3.6) A larger width/height filtering threshold is used at long range to ensure no false alarms, and a smaller one at close range to ensure no missed obstacles: W_threshold = alpha*Dist, H_threshold = beta*Dist, where Dist is the mean distance corresponding to the white block shown on the right of fig. 7 and alpha, beta are manually set values. If Wo > W_threshold and Ho > H_threshold, the target is determined to be an obstacle.
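Steps (3.1)-(3.6) can be sketched as follows. This is a simplified illustration: the bin width, the ground margin, and alpha/beta are assumed values, the median stands in for the Y-histogram peak of step (3.3), and width and height come from coordinate extremes rather than the least-squares row/column fit of step (3.5).

```python
def detect_obstacles(Pg, z_bin=0.5, y_lo=-0.30, y_hi=0.15,
                     alpha=0.02, beta=0.02, margin=0.05, min_pts=10):
    """Histogram-based obstacle segmentation on a converted point cloud Pg
    (list of (X, Y, Z) in metres). Returns (distance, width, height) tuples."""
    zs = [p[2] for p in Pg]
    z0 = min(zs)
    nbins = int((max(zs) - z0) / z_bin) + 1
    hist = [0] * nbins
    for z in zs:
        hist[int((z - z0) / z_bin)] += 1
    obstacles = []
    for i in range(1, nbins - 1):
        # (3.1) a local maximum of the Z histogram marks a dense candidate slice
        if not (hist[i] > hist[i - 1] and hist[i] >= hist[i + 1]):
            continue
        lo, hi = z0 + (i - 1) * z_bin, z0 + (i + 2) * z_bin
        block = [p for p in Pg if lo <= p[2] < hi]       # (3.2) candidate block
        # (3.3) ground relocation: points with -30 cm < Y < 15 cm are suspected ground
        gnd = sorted(p[1] for p in block if y_lo < p[1] < y_hi)
        h_i = gnd[len(gnd) // 2] if gnd else 0.0         # median as the Y-peak stand-in
        # (3.4) keep only points clearly above the relocated ground
        obj = [p for p in block if p[1] > h_i + margin]
        if len(obj) < min_pts:
            continue
        wo = max(p[0] for p in obj) - min(p[0] for p in obj)
        ho = max(p[1] for p in obj) - min(p[1] for p in obj)
        dist = sum(p[2] for p in obj) / len(obj)
        # (3.6) distance-scaled width/height thresholds: Wo > alpha*Dist, Ho > beta*Dist
        if wo > alpha * dist and ho > beta * dist:
            obstacles.append((dist, wo, ho))
    return obstacles
```

On a synthetic flat ground with a 1 m wide box at 10 m, the sketch reports exactly one obstacle near that distance while the random ground fluctuations are rejected by the point-count and threshold tests.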
The real-time monitoring of the abnormal state of the binocular camera using the relation between the initial camera attitude angles and the ground point cloud comprises the following steps:
(4.1) All points in the point cloud with height greater than -50 cm are monitored; if their number becomes extremely small, the binocular camera may have shifted or been damaged.
(4.2) The mean height H of points at distances greater than dist is monitored against the manually set ground floating thresholds Hlow and Hhigh; if H < Hlow or H > Hhigh, the camera is judged to have shifted, and ground learning needs to be performed again.
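Steps (4.1)-(4.2) amount to two cheap checks on the converted point cloud. A minimal sketch, with the point-count floor, the distance dist, and the floating band [h_low, h_high] all being assumed illustrative values:

```python
def check_camera_state(Pg, min_points=500, dist=15.0, h_low=-0.10, h_high=0.10):
    """Flag a shifted or damaged binocular camera from the converted
    point cloud Pg (list of (X, Y, Z) in metres)."""
    # (4.1) far too few points above -50 cm suggests the camera moved or broke
    visible = [p for p in Pg if p[1] > -0.50]
    if len(visible) < min_points:
        return "camera offset or damaged"
    # (4.2) the mean ground height beyond `dist` must stay inside the floating band
    far = [p[1] for p in Pg if p[2] > dist]
    h = sum(far) / len(far) if far else 0.0
    if h < h_low or h > h_high:
        return "camera shifted: redo ground learning"
    return "ok"
```

A healthy ground cloud passes, a cloud whose far-field height drifts outside the band triggers ground relearning, and a nearly empty cloud triggers the damage alarm.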
Drawings
FIG. 1 is a schematic view of the binocular camera installation
FIG. 2 is a schematic view of the binocular camera attitude angles
FIG. 3 is a point cloud reconstructed from a row of pixels in the disparity map
FIG. 4 is a schematic diagram of the coordinate-converted point cloud and its reference coordinate system
FIG. 5 is a schematic view of a ground point cloud and an obstacle point cloud
FIG. 6 is a Z-coordinate statistical histogram of the ground point cloud
FIG. 7 is a schematic diagram of an obstacle block and the obstacle block after ground repositioning
FIG. 8 is a Y-coordinate statistical histogram of the ground point cloud
FIG. 9 is a schematic diagram of the process flow of the obstacle avoidance technique of the present invention
Detailed Description
The invention uses a binocular camera to reconstruct in real time a three-dimensional point cloud of the scene in the RTG crane's advancing direction; classifies obstacle and noise point clouds by histogram statistical clustering and non-maximum suppression; further separates ground and obstacle point clouds by ground repositioning; improves the accuracy and robustness of target height and width measurement with least squares, ensuring accurate detection of small, distant obstacles; and, by learning the camera mounting pose and monitoring the ground point cloud in real time, detects abnormalities of the binocular camera and of the obstacle detection system, improving the reliability of the obstacle avoidance system.
The invention is further described below with reference to the drawings and examples.
The invention uses a binocular camera to reconstruct, in real time, a three-dimensional point cloud of the scene in the crane's advancing direction. Stereo calibration yields the calibration parameters [1], which are then used to stereo-rectify the left and right camera images [1]. Stereo matching on the rectified images produces a depth map [2], from which the three-dimensional point cloud of the scene is reconstructed, giving the point set Ps [1].
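The reconstruction step can be illustrated with the standard pinhole reprojection that turns a rectified disparity map into the point set Ps. The calibration values used here (focal length f in pixels, baseline B in metres, principal point cx, cy) are illustrative placeholders, not the patent's calibration:

```python
def disparity_to_point_cloud(disp, f=700.0, B=0.12, cx=320.0, cy=240.0):
    """Reproject a rectified disparity map (list of pixel rows) to 3-D
    points in the left-camera frame using the pinhole relations
    Z = f*B/d, X = (u - cx)*Z/f, Y = (v - cy)*Z/f."""
    points = []
    for v, row in enumerate(disp):
        for u, d in enumerate(row):
            if d <= 0:            # zero disparity: no stereo match at this pixel
                continue
            Z = f * B / d
            points.append(((u - cx) * Z / f, (v - cy) * Z / f, Z))
    return points                  # the point set Ps
```

For example, a single matched pixel at the principal point with disparity 8.4 reprojects to a point 10 m straight ahead on the optical axis.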
The invention realizes camera attitude angle learning and conversion of the point cloud from the camera coordinate system (Oc-XcYcZc) to the ground coordinate system (Ow-XwYwZw): the ground point cloud is used to calculate the camera's roll angle, pitch angle, and height, and the real-time point cloud is then rotated and translated so that its reference frame changes from the camera to the ground coordinate system.
The invention separates the obstacle, noise, and ground point clouds, measures obstacle width and height, and uses width/height thresholds to further distinguish obstacles from abnormal noise, ensuring the detection precision for small, distant obstacles. The real-time point cloud is projected in different directions, and non-maximum suppression together with least squares improves the accuracy and robustness of obstacle detection.
The invention monitors the abnormal state of the binocular camera in real time using the relation between the initial camera attitude angles and the ground point cloud: the transformed point cloud is obtained from the learned initial camera pose, and the abnormal state of the system is then judged in real time from the height and number of the points.
The working process of the present invention, shown in fig. 9, is specifically described as follows:
the method comprises the steps of firstly, carrying out real-time three-dimensional point cloud reconstruction of a scene in the advancing direction of a field tire gantry crane by using a binocular camera.
Three-dimensional calibration to obtain calibration parameters [1] Then, the calibration parameters are utilized to carry out three-dimensional correction on the left and right camera images [1] The method comprises the steps of carrying out a first treatment on the surface of the Stereo matching is carried out on the stereo corrected pictures to obtain depth maps [2] Three-dimensional point cloud reconstruction of a scene is completed by utilizing a depth map to obtain a point cloud set P s [1]
[1] Richard Hartley, Andrew Zisserman. Multiple View Geometry in Computer Vision [M]. New York: Cambridge University Press, 2003: 237-360.
[2] A. Geiger, M. Roser, R. Urtasun. Efficient Large-Scale Stereo Matching [J]. Springer Berlin Heidelberg, 2010, 6492: 25-38.
Second, camera attitude angles are learned and the point cloud is converted from the camera coordinate system (Oc-XcYcZc) to the ground coordinate system (Ow-XwYwZw).
Third, the obstacle, noise, and ground point clouds are separated; the width and height of the obstacle are measured; and width/height thresholds further distinguish obstacles from abnormal noise, ensuring the detection precision for small, distant obstacles.
Fourth, the abnormal state of the binocular camera is monitored in real time using the relation between the initial camera attitude angles and the ground point cloud.

Claims (2)

1. An obstacle avoidance technical method based on binocular vision, characterized by comprising the following steps:
first, using a binocular camera, carrying out real-time three-dimensional point cloud reconstruction of the scene in the advancing direction of the rubber-tired gantry crane;
second, performing camera attitude angle learning and converting the point cloud from the camera coordinate system (Oc-XcYcZc) to the ground coordinate system (Ow-XwYwZw);
third, separating the obstacle point cloud, noise point cloud and ground point cloud; measuring the width and height of the obstacle; and using width/height thresholds to further distinguish obstacles from abnormal noise, ensuring the detection precision for small, distant obstacles;
fourth, monitoring the abnormal state of the binocular camera in real time using the relation between the initial camera attitude angles and the ground point cloud;
the camera attitude angle learning and point cloud image are obtained from a camera coordinate system (O c -X c Y c Z c ) To the ground coordinate system (O w -X w Y w Z w ) The conversion of (a) comprises the steps of:
(2.1) as shown in fig. 1, (Oc-XcYcZc) takes the left camera as the origin, the camera optical axis as the Zc axis, the horizontal direction of the imaging plane as the Xc axis, and the vertical direction of the imaging plane as the Yc axis; (Ow-XwYwZw) takes the ground directly below the left camera as the origin, the forward direction of the cart as the Zw axis, the height direction as the Yw axis, and the width direction as the Xw axis; three-dimensional reconstruction using the disparity map yields the point cloud shown in fig. 3; fitting a straight line to the point cloud by least squares and calculating the included angle ωr between the line and the OcXc axis approximates the roll angle of the camera:

ω3 ≈ ωr
(2.2) from ω3, calculating the rotation matrix about the Zc axis

R(ω3) = | cos ω3  -sin ω3  0 |
        | sin ω3   cos ω3  0 |
        |   0        0     1 |
the point cloud corrected by the rotation matrix is P = Ps R(ω3); for each image row, the mean Y coordinate of its reconstructed points in the rotated cloud is computed, a straight line is fitted to these means by least squares, and the included angle ω2 between the line and the Zc axis is the pitch angle;
(2.3) using ω2, computing the rotation matrix about the Xc axis

R(ω2) = | 1    0        0     |
        | 0  cos ω2  -sin ω2 |
        | 0  sin ω2   cos ω2 |

Pw = P R(ω2)

the finally corrected ground point cloud and reference coordinate system are shown in fig. 4: the ground point cloud lies in the XwZw plane, and Yw represents its height;
(2.4) averaging the y coordinates of all Pw gives the camera mounting height H, and the finally converted point cloud is Pg = Pw - H;
wherein the separation of the obstacle point cloud, noise point cloud and ground point cloud, the measurement of the width and height of the obstacle, and the further distinction of obstacles from abnormal noise using width/height thresholds comprise the following steps:
(3.1) carrying out histogram statistics on the Z coordinate of the point cloud containing the ground and obstacles as shown in fig. 5, i.e., on the distance from each point to the RTG cart, to obtain a histogram; finding the local maxima D_maxpress of the histogram, the unfilled bars in the histogram; a large number of dense points at a local maximum marks a suspected obstacle;
(3.2) extracting the position block as follows: searching out all pixels whose point in the cloud Pg has a Z coordinate near the local extremum D_maxpress, obtaining the white block shown on the left of fig. 7;
(3.3) if the Y coordinate of the point P in the Pg point cloud corresponding to the white block shown on the left of fig. 7 is greater than -30 cm and less than 15 cm, listing P as a suspected ground point, finally obtaining the suspected ground point cloud P_gdubious corresponding to the white block; carrying out histogram statistics on the Y coordinate of this point cloud as shown in fig. 8; when the graph has one and only one local extremum P_gdubiouspress (the white bar in fig. 8), the ground is considered well repositioned, and the local ground height is H_i = P_gdubiouspress;
(3.4) removing points of P_gdubious whose Y coordinate is less than H_i, obtaining the final obstacle block shown on the right of fig. 7;
(3.5) calculating the height mean Y_r of the point cloud corresponding to the R-th pixel row of the white block shown on the right of fig. 7 and the width mean X_c of the point cloud of the C-th column; obtaining the linear equations aR + bY + c = 0 and dC + eX + f = 0 of row value versus height and column value versus width by least squares; substituting the maximum and minimum row and column values into the equations to obtain the width extremes Xmax, Xmin and the height extremes Ymax, Ymin; the obstacle width is Wo = Xmax - Xmin and the height is Ho = Ymax - Ymin;
(3.6) using a larger width/height filtering threshold at long range to ensure no false alarms and a smaller one at close range to ensure no missed obstacles: W_threshold = alpha*Dist, H_threshold = beta*Dist, where Dist is the mean distance corresponding to the white block shown on the right of fig. 7 and alpha, beta are manually set values; if Wo > W_threshold and Ho > H_threshold, the target is determined to be an obstacle.
2. The binocular-vision-based obstacle avoidance method of claim 1, wherein
the real-time monitoring of the abnormal state of the binocular camera using the relation between the initial camera attitude angles and the ground point cloud comprises the following steps:
(4.1) monitoring all points in the point cloud with height greater than -50 cm; if their number becomes extremely small, the binocular camera may have shifted or been damaged;
(4.2) monitoring the mean height H of points at distances greater than dist against the manually set ground floating thresholds Hlow and Hhigh;
if H < Hlow or H > Hhigh, it is determined that the camera has shifted, and ground learning needs to be performed again.
CN201611252687.3A 2016-12-30 2016-12-30 Obstacle avoidance technical method based on binocular vision Active CN108269281B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611252687.3A CN108269281B (en) 2016-12-30 2016-12-30 Obstacle avoidance technical method based on binocular vision


Publications (2)

Publication Number Publication Date
CN108269281A CN108269281A (en) 2018-07-10
CN108269281B true CN108269281B (en) 2023-06-13

Family

ID=62754061

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611252687.3A Active CN108269281B (en) 2016-12-30 2016-12-30 Obstacle avoidance technical method based on binocular vision

Country Status (1)

Country Link
CN (1) CN108269281B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3789001A4 (en) * 2018-07-13 2022-07-06 Whill, Inc. Electric mobility apparatus
CN109141364B (en) * 2018-08-01 2020-11-03 北京进化者机器人科技有限公司 Obstacle detection method and system and robot
CN110378915A (en) * 2019-07-24 2019-10-25 西南石油大学 A kind of climbing robot obstacle detection method based on binocular vision
CN110928301B (en) * 2019-11-19 2023-06-30 北京小米智能科技有限公司 Method, device and medium for detecting tiny obstacle
CN113678136A (en) * 2019-12-30 2021-11-19 深圳元戎启行科技有限公司 Obstacle detection method and device based on unmanned technology and computer equipment
CN111260773B (en) * 2020-01-20 2023-10-13 深圳市普渡科技有限公司 Three-dimensional reconstruction method, detection method and detection system for small obstacle
CN111890358B (en) * 2020-07-01 2022-06-14 浙江大华技术股份有限公司 Binocular obstacle avoidance method and device, storage medium and electronic device
CN112330808B (en) * 2020-10-30 2024-04-02 珠海一微半导体股份有限公司 Optimization method based on local map and visual robot
CN112965486A (en) * 2021-02-04 2021-06-15 天津港第二集装箱码头有限公司 Binocular vision and radar-based field bridge obstacle avoidance system and method

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN102175222B (en) * 2011-03-04 2012-09-05 南开大学 Crane obstacle-avoidance system based on stereoscopic vision
CN103955920B (en) * 2014-04-14 2017-04-12 桂林电子科技大学 Binocular vision obstacle detection method based on three-dimensional point cloud segmentation
CN105222760A (en) * 2015-10-22 2016-01-06 一飞智控(天津)科技有限公司 The autonomous obstacle detection system of a kind of unmanned plane based on binocular vision and method


Similar Documents

Publication Publication Date Title
CN108269281B (en) Obstacle avoidance technical method based on binocular vision
CN112418103B (en) Bridge crane hoisting safety anti-collision system and method based on dynamic binocular vision
CN107767423B (en) mechanical arm target positioning and grabbing method based on binocular vision
CN108444390B (en) Unmanned automobile obstacle identification method and device
CN107133985B (en) Automatic calibration method for vehicle-mounted camera based on lane line vanishing point
CN112686938B (en) Power transmission line clear distance calculation and safety alarm method based on binocular image ranging
CN107703951B (en) A kind of unmanned plane barrier-avoiding method and system based on binocular vision
WO2015024407A1 (en) Power robot based binocular vision navigation system and method based on
CN107590836A (en) A kind of charging pile Dynamic Recognition based on Kinect and localization method and system
CN111046776A (en) Mobile robot traveling path obstacle detection method based on depth camera
CN102622767A (en) Method for positioning binocular non-calibrated space
CN104626206A (en) Robot operation pose information measuring method under non-structural environment
CN109263637B (en) Collision prediction method and device
CN111260773A (en) Three-dimensional reconstruction method, detection method and detection system for small obstacles
CN107796373B (en) Distance measurement method based on monocular vision of front vehicle driven by lane plane geometric model
CN111260715B (en) Depth map processing method, small obstacle detection method and system
US20170113611A1 (en) Method for stereo map generation with novel optical resolutions
Momeni-k et al. Height estimation from a single camera view
CN113205604A (en) Feasible region detection method based on camera and laser radar
CN111860321B (en) Obstacle recognition method and system
CN111046809B (en) Obstacle detection method, device, equipment and computer readable storage medium
CN114119729A (en) Obstacle identification method and device
CN111047636B (en) Obstacle avoidance system and obstacle avoidance method based on active infrared binocular vision
CN113724335B (en) Three-dimensional target positioning method and system based on monocular camera
CN113610910B (en) Obstacle avoidance method for mobile robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210722

Address after: 200000 4th floor, building 23, Lane 2777, Jinxiu East Road, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai

Applicant after: SHANGHAI AWARE INFORMATION TECHNOLOGY Co.,Ltd.

Applicant after: WUXI DINGSHI TECHNOLOGY Co.,Ltd.

Address before: Jinxi road Binhu District 214125 Jiangsu city of Wuxi province No. 100

Applicant before: WUXI DINGSHI TECHNOLOGY Co.,Ltd.

GR01 Patent grant