CN109269478A - Obstacle detection method for container terminal yard cranes based on binocular vision - Google Patents
Obstacle detection method for container terminal yard cranes based on binocular vision
- Publication number: CN109269478A (application CN201811243435.3A)
- Authority: CN (China)
- Legal status: Pending (assumed; Google has not performed a legal analysis)
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
- G01C11/04—Interpretation of pictures
Abstract
The invention discloses a binocular-vision-based obstacle detection method for container terminal yard cranes, comprising: step 1, acquiring left and right images from a mounted binocular camera; step 2, rectifying the left and right images to remove distortion; step 3, taking the left image as the reference image and detecting the lane lines (region of interest, ROI) on it; step 4, running the obstacle detection module to detect obstacles; step 5, judging whether either camera is occluded; step 6, tracking each obstacle in a simple manner and raising an alarm if it moves toward the inside of the lane region (ROI).
Description
Technical field
The present invention relates to a binocular-vision-based obstacle detection method for container terminal yard cranes.
Background technique
Rubber-tyred gantry cranes (RTG, Rubber Tyre Gantry) and rail-mounted gantry cranes (RMG), collectively called yard cranes ("field bridges"), are among the large machines of a container terminal and play a crucial role in terminal operations. Their efficiency, safety, and operational correctness strongly affect terminal throughput. A yard crane mainly transfers containers between the stacks in the yard and the horizontal transport equipment (container trucks or automated guided vehicles, AGVs, running in the truck lane). Its working environment is complex, the danger coefficient is high, the driver's field of view is poor, and operation depends heavily on the driver.
During operation, because the driver's cab sits high up, lighting conditions below are often poor and the spreader creates blind spots in the field of view, so the driver sometimes cannot observe the stacked containers below promptly and completely. When the terminal is busy, or when drivers work long continuous shifts or at night, the situation is worse. The trolley and hoisting mechanism move quickly, and the load carried by the spreader often weighs several to tens of tonnes. If the crane cannot decelerate or stop in time because stacked containers or truck height were not observed correctly, a light consequence is lowering too fast or striking a container, damaging it; a severe consequence is a "bowling" accident in which stacked containers topple, cargo is destroyed, and nearby personnel may be struck, injured, or killed.
Most yard-crane anti-collision systems currently on the market are detection systems based on lidar, but the existing lidar-based collision detection systems have the following drawbacks:
1. Lidar is costly. The equipment cost of lidar is very high: a 32-beam lidar is priced around 20,000 US dollars, and a 64-beam lidar can cost up to 80,000 US dollars.
2. Lane lines cannot be detected without modifying the site. A lidar-based obstacle detection system relies on reflective markers: unless a special coating is applied to the lane lines, the system cannot detect lane markings, so the detection region must either be delimited manually or the lane lines must be specially coated, which imposes considerable labor cost on each terminal. Reflective markers are also difficult to maintain once installed, further increasing cost.
3. Obstacles are hard to classify and identify. Lidar has considerable difficulty classifying detected obstacles and offers no way to identify them.
There are also obstacle detection systems based partly on monocular deep learning, which use a deep-learning model for object detection. Their specific drawbacks are as follows:
1. Computing obstacle distance is cumbersome. To compute obstacle distance, a monocular-vision obstacle detection system must mount the camera at a fixed height and orientation, and even then the distance can only be estimated rather than computed accurately, leaving considerable deviation.
2. Planar images cause misrecognition. A monocular obstacle detection system can misrecognize planar patterns on the ground, such as water stains, reflections, or even figures painted on the pavement, which may lead to frequent false alarms.
3. Camera occlusion cannot be detected. If an obstacle is extremely close to the camera the system should raise an alarm, but a monocular-vision obstacle detection system usually cannot work normally in this abnormally dangerous situation, leaving a serious safety risk.
4. Only obstacles of particular categories are detected. Deep-learning-based object detection can only detect the specific classes present in the training data; obstacles outside the training data cannot be detected.
Summary of the invention
Purpose of the invention: to remedy the shortcomings of the monocular-vision obstacle detection and lidar obstacle detection in the prior art, an RTG obstacle detection system based on binocular vision is proposed. It detects the RTG lane and obstacles in real time, raises an alarm if an obstacle appears on the travel route, and issues an early warning to reduce the RTG's speed if an obstacle inside the pre-warning region moves toward the inside of the lane region.
The method comprises the following steps:
Step 1: acquire images of the lane ahead of the yard crane with the binocular camera, including a left image L and a right image R. Calibrate the left and right cameras with Zhang's calibration method to obtain the intrinsic parameters I, which include the left camera focal length fx and the right camera focal length fy (two identical cameras are generally used in this implementation, so fx = fy = f), the principal point coordinates x0, y0, the axis skew parameter s, and the distortion parameters.
Step 2: calibrate the left and right cameras with a stereo calibration method to obtain the extrinsic parameters E of the camera pair, including the rotation matrix R and the translation matrix T.
Step 3: using the obtained intrinsic parameters I and extrinsic parameters E, rectify each frame's left image L and right image R to obtain the rectified left image L2 and right image R2.
Step 4: every frame, or every n frames (generally 3 to 5), feed the left image L or L2 into the lane detection module, detect straight lines with the Hough transform, take the two detected lane lines to mark the region between them (ROI, region of interest) A1, and extend it an appropriate amount to both sides to obtain the pre-warning region A2.
Step 5: feed the rectified left and right images L2, R2 into the obstacle detection module to obtain the disparity map D of the image pair, the class C of each obstacle, and the size B1 and coordinates B2 of the bounding box that outlines each obstacle.
Step 6: using the bounding-box coordinates B2 obtained in step 5, compute the distance J of each obstacle with the perspective-transform method or the binocular ranging method.
Step 7: track obstacles based on the obstacle information C, B1, B2, D obtained in steps 5 and 6; if an obstacle is in the pre-warning region and moving toward the lane region (ROI), issue an early warning.
Step 8: using the disparity map D obtained in step 5, perform occlusion judgment to decide whether an obstacle is close enough to block the left or right camera.
Step 9: using the obstacle information B1, B2 obtained in step 5, judge whether any obstacle is inside the lane region (ROI); if so, raise an alarm.
In step 1, the binocular camera is installed in a parallel configuration facing the lane along which the yard crane travels; the left and right image planes should be as parallel as possible, and both cameras must be fixed-focus. Using a calibration board, the left and right cameras are each calibrated with Zhang's method, yielding the intrinsic parameters I1 of the left camera and I2 of the right camera. The left and right cameras then capture images of the lane. Reference: Zhang, Zhengyou. A flexible new technique for camera calibration [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(11): 1330-1334.
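The intrinsic parameters named above (focal length f, principal point x0, y0, skew s) form the standard 3×3 camera matrix of Zhang's model. As an illustrative sketch — not the patent's own code, and with arbitrary example values — the following assembles that matrix and projects a hypothetical 3-D point into pixel coordinates:

```python
import numpy as np

def camera_matrix(f, x0, y0, s=0.0):
    """Assemble the 3x3 intrinsic matrix from Zhang-calibration parameters."""
    return np.array([[f,   s,   x0],
                     [0.0, f,   y0],
                     [0.0, 0.0, 1.0]])

def project(K, point_3d):
    """Project a 3-D point in camera coordinates to pixel coordinates."""
    p = K @ np.asarray(point_3d, dtype=float)
    return p[0] / p[2], p[1] / p[2]

# Example values only; real values come from the calibration-board procedure.
K = camera_matrix(f=1000.0, x0=640.0, y0=480.0)
u, v = project(K, [0.5, 0.25, 5.0])  # 0.5 m right, 0.25 m down, 5 m ahead
print(round(u, 1), round(v, 1))  # 740.0 530.0
```

In a real deployment the distortion parameters would also be estimated and applied; they are omitted here for brevity.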
In step 2, the left and right cameras are stereo-calibrated with the stereoCalibrate function in OpenCV to obtain the extrinsic parameters E of the camera pair. In the present invention, rectification setup is completed with OpenCV's stereoRectify function (the left camera focal length fx, right camera focal length fy, principal point coordinates x0, y0, axis skew parameter s, and distortion parameters are the input parameters of the two OpenCV calls stereoRectify and initUndistortRectifyMap).
In step 3, the left and right images captured by the cameras are rectified with the intrinsic parameters I1, I2 and the extrinsic parameters E, yielding the rectified left image L2 and right image R2. In the present invention, rectification is completed with OpenCV's initUndistortRectifyMap and remap functions. Reference: Feng Jin. Research on obstacle detection technology for mobile robots based on binocular stereo vision [D]. China University of Mining and Technology, 2015.
In step 4, L or L2 is first converted to grayscale, and the Canny operator detects the image edges, giving edge image L3. Hough-transform line detection is applied to L3, and the detected lines are screened by length and angle to obtain the lane lines; the region between the two lane lines (ROI) A1 is then marked, and the lane lines are extended outward by a manually set amount to obtain the pre-warning region A2. In the present implementation, a line with slope in 0.5-0.6 and length above 100 pixels whose intersection with the image bottom edge has the leftmost abscissa is taken as the left lane line; a line with slope in 2.4-2.7, length above 100 pixels, and the rightmost such intersection is the right lane line. The extension distance is set to 140 pixels. References: Canny J F. A Variational Approach to Edge Detection [C] // AAAI Conference on Artificial Intelligence. AAAI Press, 1983: 54-58. Ballard D H. Generalizing the Hough transform to detect arbitrary shapes [C] // Readings in Computer Vision: Issues, Problems, Principles, and Paradigms. Morgan Kaufmann Publishers Inc., 1981: 111-122.
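The slope/length screening of Hough segments described above can be sketched as a simple filter. This is an illustration under assumed segment endpoints, not the patent's implementation; the slope ranges follow the values given in the text:

```python
import math

def filter_lane_lines(segments, slope_range, min_len=100.0):
    """Keep Hough segments whose slope magnitude and length match a
    lane-line profile. Each segment is (x1, y1, x2, y2) in pixels."""
    kept = []
    for x1, y1, x2, y2 in segments:
        if x2 == x1:
            continue  # vertical line: slope undefined, not a lane candidate here
        slope = abs((y2 - y1) / (x2 - x1))
        length = math.hypot(x2 - x1, y2 - y1)
        if slope_range[0] <= slope <= slope_range[1] and length >= min_len:
            kept.append((x1, y1, x2, y2))
    return kept

segments = [(0, 0, 200, 110),   # slope 0.55, length ~228: left-lane candidate
            (0, 0, 50, 27),     # slope 0.54 but too short
            (0, 0, 100, 260)]   # slope 2.60: right-lane candidate
left = filter_lane_lines(segments, (0.5, 0.6))
right = filter_lane_lines(segments, (2.4, 2.7))
print(len(left), len(right))  # 1 1
```

Among the surviving candidates, the implementation would then pick the leftmost and rightmost bottom-edge intersections as the two lane lines.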
In step 5, obstacle detection on the rectified left and right images L2, R2 is divided into binocular-based detection and monocular deep-learning object detection; the two parts can execute in parallel. References: Juárez D H, Chacón A, Espinosa A, et al. Embedded Real-time Stereo Estimation via Semi-Global Matching on the GPU [J]. Procedia Computer Science, 2016, 80(C): 143-153. Wang B, Florez S A R, Frémont V. Multiple obstacle detection and tracking using stereo vision: Application and analysis [C] // International Conference on Control Automation Robotics and Vision. IEEE, 2015: 142-153.
For the binocular-based detection, the specific steps are as follows:
Step 5-1-1: from the rectified left and right images L2, R2, compute a disparity map D with a parallel SGM (Semi-Global Matching) algorithm. Reference: Juárez D H, Chacón A, Espinosa A, et al. Embedded Real-time Stereo Estimation via Semi-Global Matching on the GPU [J].
Step 5-1-2: segment the road-surface region out of the disparity map D with the v-disparity method, and detect obstacles, together with the size B1 and coordinates B2 of their bounding boxes, with the u-disparity method. Reference: Hu Z, Uchimura K. U-V-disparity: an efficient algorithm for stereovision based scene analysis [C]. Intelligent Vehicles Symposium, 2005. Proceedings. IEEE, 2005: 48-54.
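The v-disparity representation used in step 5-1-2 is a per-row histogram of disparity values, in which the road surface appears as a slanted line. A minimal NumPy sketch on a toy disparity map (not the patent's code) illustrates the construction:

```python
import numpy as np

def v_disparity(disparity, max_disp=64):
    """Accumulate a v-disparity histogram: for each image row v, count how
    many pixels take each integer disparity value. The ground plane shows
    up as a slanted line of high counts in this histogram."""
    h, _ = disparity.shape
    hist = np.zeros((h, max_disp), dtype=np.int32)
    for v in range(h):
        row = disparity[v]
        valid = row[(row >= 0) & (row < max_disp)]
        np.add.at(hist[v], valid.astype(np.int64), 1)  # unbuffered counting
    return hist

# Toy "ground plane": disparity grows linearly with the row index.
disp = np.tile(np.arange(8).reshape(8, 1), (1, 10))  # row v has disparity v
hist = v_disparity(disp, max_disp=16)
print([int(np.argmax(hist[v])) for v in range(8)])  # [0, 1, 2, 3, 4, 5, 6, 7]
```

Fitting a line to the dominant bins per row recovers the road profile; pixels whose disparity deviates from that profile are obstacle candidates, which the u-disparity (per-column) histogram then groups into boxes.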
Step 5-1-3: according to the disparity map D, pass each detected obstacle through an SVM (Support Vector Machine) classifier to judge whether it is planar, removing false detections such as water stains. Reference: Zhang Xuegong. On statistical learning theory and support vector machines [J]. Acta Automatica Sinica, 2000, 26(1): 32-42.
The deep-learning object detection module uses a trained object detection model and proceeds as follows:
Step 5-2-1: detect objects with the trained object detection model, obtaining the obstacle class information C and the size B1 and coordinates B2 of each obstacle's bounding box.
Step 5-2-2: integrate the obstacle information obtained in steps 5-1-1 to 5-1-3 and 5-2-1 into complete obstacle information.
In step 6, the perspective-transform method proceeds as follows: first choose four points on the lane lines (the four vertices of the lane-region quadrilateral in the image) and project them onto a rectangle by a perspective transform; then, from the ordinate Y1 of the bottom edge of the obstacle's bounding box B2, compute the ordinate Y2 of B2 after the perspective transform, and multiply by a proportionality coefficient K to obtain the obstacle distance J. K is tuned by trial until the result approaches the true distance; in the present implementation K is 7.0.
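The point mapping at the heart of this method can be sketched with a 3×3 homography. The matrix below is a hypothetical placeholder (a pure y-scaling) standing in for the transform obtained from the four lane-region corners (e.g. via cv2.getPerspectiveTransform in a real deployment); only the mechanics of mapping the box's bottom-edge ordinate and scaling by K are illustrated:

```python
import numpy as np

def apply_homography(H, x, y):
    """Map an image point through a 3x3 perspective-transform matrix H."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Hypothetical transform: scales y by 0.5. A real H would come from the
# four chosen lane-line points projected onto a rectangle.
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.5, 0.0],
              [0.0, 0.0, 1.0]])
K = 7.0      # empirically tuned proportionality coefficient, as in the text
y1 = 400.0   # ordinate Y1 of the obstacle box's bottom edge
_, y2 = apply_homography(H, 320.0, y1)
distance = K * y2
print(distance)  # 1400.0
```

How Y2 relates to metric distance depends on where the rectified rectangle's origin sits; the text leaves that convention implicit, so the scaling here is purely illustrative.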
The binocular ranging method proceeds as follows: take statistics of the pixel values inside each obstacle's bounding box B on the disparity map D to obtain their mode M, then apply the binocular ranging formula z = f·b / M to obtain the obstacle distance J, where f is the camera focal length (contained in the camera intrinsics obtained in step 1), b is the baseline length, i.e. the distance between the two camera optical centers, and z is the z-axis coordinate in the three-dimensional world coordinate system, i.e. the depth coordinate, representing distance.
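The ranging formula with the box's disparity mode can be sketched in a few lines of standard-library Python (illustrative values only; f is in pixels, b in metres):

```python
from statistics import mode

def stereo_distance(box_disparities, f, b):
    """Distance from the binocular ranging formula z = f*b/d, where d is
    taken as the mode of the disparity values inside the obstacle box."""
    m = mode(box_disparities)
    if m <= 0:
        raise ValueError("non-positive disparity: invalid or at infinity")
    return f * b / m

# Disparities sampled from an obstacle's box; mode is 40 px.
d = stereo_distance([39, 40, 40, 40, 41, 42], f=1000.0, b=0.4)
print(round(d, 2))  # 10.0
```

Using the mode rather than the mean makes the estimate robust to disparity outliers at the box borders, which typically belong to the background.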
In step 7, the obstacle information (class, confidence, and location of each obstacle) of n consecutive frames (generally 3 to 5) is stored. Similarity is computed between the obstacles of different frames; the most similar obstacles are regarded as the same obstacle, and the bounding-box coordinates are taken as the obstacle's coordinates, from which the moving direction and approximate speed of each obstacle over these n frames can be computed. For example, if in frame n-1 an obstacle of class car has box center (x1, y1) and distance z1 computed as in step 6, and in frame n an obstacle of class car is detected nearby with box center (x2, y2) and distance z2, the two are regarded as the same obstacle; assuming the current processing speed is s frames per second, the object's speed is approximately the displacement between the two positions multiplied by s. The center of an obstacle's bounding box (the box is a rectangle; in step 5-2-1 its size is B1 = (width, height) and its coordinates are B2 = (x, y)) is (x + 1/2·width, y + 1/2·height), where width and height denote the box width and height. If the box center is in the pre-warning region and moving toward the lane region (ROI), an early warning is issued.
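The speed estimate and box-center computation described above can be sketched as follows. The exact speed formula is not fully legible in the translated text, so this uses the simplest reading — change in ranged distance times the frame rate — as a stated assumption:

```python
def approx_speed(z1, z2, fps):
    """Approximate radial speed of a tracked obstacle between two
    consecutive frames: distance change times the frame rate. A sketch
    of the idea; the patent's exact formula is not reproduced here."""
    return abs(z2 - z1) * fps

def box_center(x, y, width, height):
    """Center of a detection box given its top-left corner and size,
    matching (x + 1/2*width, y + 1/2*height) from the description."""
    return (x + 0.5 * width, y + 0.5 * height)

v = approx_speed(z1=12.0, z2=11.7, fps=10)
print(round(v, 2))  # 3.0
print(box_center(100, 50, 40, 20))  # (120.0, 60.0)
```

A fuller estimate would also include the lateral displacement of the box center converted to metres; the radial term alone already suffices to flag obstacles closing in on the crane.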
In step 8, using the disparity map D obtained in step 5, m fixed small boxes on D are taken and their mean and variance computed. If any box's mean exceeds threshold t1 or its variance exceeds threshold t2, an obstacle is regarded as blocking the left or right camera, and an alarm is raised. In the present invention, t1 = 50 and t2 = 20. The small boxes are placed as close to the center of the picture as possible; in the present invention m is 6, and each box is 30 × 30 pixels.
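The occlusion test reduces to per-box statistics on the disparity map. A minimal NumPy sketch with the thresholds from the text (t1 = 50, t2 = 20) and two example sampling boxes:

```python
import numpy as np

def occlusion_check(disp, boxes, t1=50.0, t2=20.0):
    """Flag a blocked camera if any sampling box on the disparity map has
    a mean above t1 or a variance above t2 (thresholds per the text)."""
    for x, y, w, h in boxes:
        patch = disp[y:y + h, x:x + w].astype(float)
        if patch.mean() > t1 or patch.var() > t2:
            return True
    return False

disp = np.full((540, 960), 10.0)            # far-away scene: low disparity
boxes = [(300, 200, 30, 30), (500, 200, 30, 30)]
print(occlusion_check(disp, boxes))         # False
disp[200:260, 480:560] = 120.0              # something very close to the lens
print(occlusion_check(disp, boxes))         # True
```

The rationale: an object pressed against one lens produces uniformly huge disparities (high mean) or unmatched noise (high variance) in those regions, either of which trips the alarm.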
In step 9, the center coordinates of each obstacle's bounding box are examined; if a center falls inside the lane region (ROI), an obstacle is regarded as blocking the RTG's travel, and an alarm is raised.
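Because the ROI is bounded by the two detected lane lines, the containment test can be done by comparing the box center's abscissa with the lane lines' abscissas at the same image row. A sketch with hypothetical lane-line endpoints:

```python
def lane_x_at(line, y):
    """x-coordinate of a lane line (given by two endpoints) at image row y."""
    (x1, y1), (x2, y2) = line
    t = (y - y1) / (y2 - y1)
    return x1 + t * (x2 - x1)

def in_roi(center, left_line, right_line):
    """True if an obstacle box center lies between the two lane lines."""
    x, y = center
    return lane_x_at(left_line, y) <= x <= lane_x_at(right_line, y)

# Hypothetical endpoints of the two detected lane lines.
left = ((100.0, 0.0), (300.0, 540.0))
right = ((860.0, 0.0), (660.0, 540.0))
print(in_roi((480.0, 270.0), left, right))  # True
print(in_roi((150.0, 270.0), left, right))  # False
```

Shifting both comparisons outward by the 140-pixel extension gives the corresponding test for the pre-warning region A2.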
Beneficial effects: the notable advantages of the invention are that it assists RTG driving and reduces accidents by detecting obstacles in the RTG lane and issuing early warnings for obstacles approaching the ROI. Compared with radar-based systems its cost is low; compared with monocular-vision detection methods its detection quality is better, and it can detect untrained obstacles. This detection system is also compatible with existing detection systems, such as lidar detection systems, to which it can serve as a supplement.
Detailed description of the invention
The present invention is further illustrated below with reference to the drawings and the detailed description; the above and other advantages of the invention will become apparent from them.
Fig. 1 is the operational flow chart of the present invention.
Fig. 2 is a schematic diagram of the camera installation in the present invention.
Fig. 3 is a photograph of the actual camera installation in the present invention.
Fig. 4a is an image acquired by the left camera.
Fig. 4b is the disparity map of the image acquired by the left camera.
Fig. 5a is the left-camera image in the unoccluded case.
Fig. 5b is the right-camera image in the unoccluded case.
Fig. 5c is the disparity map in the unoccluded case.
Fig. 5d is the left-camera image in the occluded case, where the right camera is occluded and the left camera is not.
Fig. 5e is the right-camera image in the occluded case, where the right camera is occluded and the left camera is not.
Fig. 5f is the disparity map in the occluded case, where the right camera is occluded and the left camera is not.
Fig. 6 is an obstacle-detection result image of the present invention.
Specific embodiment
The present invention will be further described with reference to the accompanying drawings and embodiments.
Fig. 1 is the operational flow chart of the invention, comprising 9 steps; steps that do not depend on each other can execute in parallel.
In step 1, the cameras are installed as in Fig. 2: the left and right cameras are required to be substantially parallel and to face the lane along which the yard crane travels, and both are fixed-focus cameras. Images of the lane are acquired with the left and right cameras. Fig. 3 is an example of the installation. The left and right cameras are then each calibrated with the calibration board of Zhang's method, yielding the intrinsic parameters I1 of the left camera and I2 of the right camera.
In step 2, using the method for stereo calibration to left and right camera calibration, the outer parameter E of left and right camera is obtained.
In step 3, the left and right images captured by the cameras are rectified with the intrinsic parameters I1, I2 and the extrinsic parameters E, yielding the rectified left and right images L2, R2.
In step 4, L or L2 is first converted to grayscale, and the Canny operator detects the image edges, giving edge image L3. Hough-transform line detection is applied to L3; the detected lines are screened by length and angle to obtain the lane lines, the lane region (ROI) A1 is marked according to the lane lines, and the lane lines are extended outward by a manually set amount to obtain the pre-warning region A2.
In step 5, the obstacle detection module is divided into two parts: one is the binocular-based detection, and the other is the monocular deep-learning object detection; the two parts can execute in parallel.
For the binocular detection, the specific steps are as follows:
Step 5-1-1: compute a disparity map D from the rectified left and right images L2, R2 with a parallel SGM algorithm.
Step 5-1-2: segment the road-surface region out of the disparity map D with the v-disparity method, and detect obstacles and the size B1 and coordinates B2 of their bounding boxes with the u-disparity method.
Step 5-1-3: according to the disparity map D, pass each detected obstacle through an SVM to judge whether it is planar, removing false detections such as water stains.
The deep-learning object detection module uses a trained object detection model:
Step 5-2: detect objects with the trained model, obtaining the obstacle information C together with B1 and B2.
Finally the obstacle information is integrated:
Step 5-3: integrate the obstacle information from steps 5-1 and 5-2 into complete obstacle information.
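The integration in step 5-3 must reconcile boxes from the two detectors. One plausible sketch — an assumption, since the patent does not spell out the matching rule — assigns each stereo-detected box the class of a sufficiently overlapping deep-learning detection (by intersection-over-union), and labels the rest "unknown", matching the embodiment's labeling:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x, y, width, height)."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def merge_detections(stereo_boxes, dl_detections, thresh=0.5):
    """Label each stereo-detected box with the class of an overlapping
    deep-learning detection, or 'unknown' if none overlaps enough."""
    merged = []
    for box in stereo_boxes:
        label = "unknown"
        for cls, dbox in dl_detections:
            if iou(box, dbox) >= thresh:
                label = cls
                break
        merged.append((label, box))
    return merged

stereo = [(100, 100, 50, 80), (400, 200, 60, 60)]
dl = [("car", (105, 102, 48, 78))]
print(merge_detections(stereo, dl))
# [('car', (100, 100, 50, 80)), ('unknown', (400, 200, 60, 60))]
```

This keeps the binocular branch's ability to flag untrained obstacles while attaching class labels where the deep-learning branch agrees.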
In step 6, if the obstacle distance is computed by perspective transform, four points on the lane lines are first projected onto a rectangle by the perspective transform; then the ordinate Y2 of the transformed box is computed from the ordinate Y1 of the bottom edge of the obstacle box B2, and multiplied by a proportionality coefficient K to give the obstacle distance J. If the binocular ranging method is used, the pixel values inside each obstacle's box B on the disparity map D are tallied to obtain their mode M, and the binocular ranging formula z = f·b / M gives the obstacle distance J, where f is the camera focal length (contained in the camera intrinsics obtained in step 1) and b is the baseline length, i.e. the distance between the two camera optical centers.
In step 7, the obstacle information of n consecutive frames (generally 3 to 5) is stored and similarity is computed between the obstacles of different frames; the most similar obstacles are regarded as the same obstacle, whose coordinates are those of its bounding box, from which the moving direction and approximate speed of each obstacle over these frames are computed. If an obstacle's box center is in the pre-warning region and moving toward the lane region (ROI), an early warning is issued.
In step 8, fixed regions of the disparity map D obtained in step 5 are sampled and their mean and variance computed; if a mean exceeds threshold t1 or a variance exceeds threshold t2, an obstacle is regarded as blocking the left or right camera, and an alarm is raised. The principle is as follows: the disparity map D is obtained by matching the left and right images, and the gray value of each pixel represents the difference in abscissa between its coordinate in the left (reference) image and in the right image. Fig. 4a is an image acquired by the left camera, and Fig. 4b is its disparity map. As shown in Figs. 5a-5f, if one or both cameras are occluded, the disparity map looks like Fig. 5f; if no camera is occluded, it resembles Fig. 5c.
In step 9, the center coordinates of each obstacle's bounding box are computed; if a center falls inside the lane region (ROI), an obstacle is regarded as blocking the RTG's travel, and an alarm is raised.
Embodiment
To run the system well, the cameras are first installed as required above. For convenience, this embodiment uses cameras of the same model, all fixed-focus; fixed focus is required so that the binocular model can compute obstacle distance from disparity. After installation, the left and right cameras are each calibrated with the calibration board of Zhang's method, yielding the intrinsic parameters I1 of the left camera and I2 of the right camera. The left and right images acquired in this embodiment all have a resolution of 1280*960 pixels. To speed up subsequent processing, apart from rectification, the disparity map uses a resolution of 960*540 pixels and all other functions compute at 640*480 pixels.
This example then applies stereo rectification to the data acquired by the binocular camera, using the rectification functions in OpenCV; this correction makes the generated disparity map better, and the camera extrinsics E are obtained.
Afterwards, for the video stream, the present invention processes each frame individually. After the left and right images are rectified, the left image is fed into the lane detection module: the image is converted to grayscale, edges are detected with the Canny operator, an erosion-dilation operation is applied in this embodiment to remove some noise, and the Hough transform then detects straight lines. Since the RTG's travel route is relatively fixed, the approximate position of its lane lines is essentially the same once the cameras are installed, so lines of particular angle and length are selected as the lane lines. The region formed by the left and right lane lines is exactly the region in which the RTG travels. In this embodiment, the left and right lines are each extended outward by about 140 pixels to form the pre-warning region, which provides the basis for the subsequent obstacle early-warning function.
The rectified left and right images are then fed into the obstacle detection module, which mainly runs two detection methods that can work in parallel or be used alone. The first is the binocular-based method, whose basic steps are:
1. Stereo-match the left and right images with the SGM algorithm to obtain the disparity map.
2. Segment the road surface on the disparity map with the u-disparity and v-disparity methods, leaving the obstacles.
3. Pass the image patch of each detected obstacle's box through an SVM to judge whether it is a false detection such as a water stain. The SVM model here is trained on previously acquired images.
The second detection method is based on deep-learning object detection: a trained model detects objects of the trained particular categories, such as people and vehicles, yielding the coordinates of each obstacle's box and the class confidence. In this example a confidence threshold of 45 is used; only detections above the threshold are regarded as objects.
The obstacles obtained by the two methods are then unified; obstacles detected only by the first method carry no class and are uniformly labeled "unknown" in this embodiment. If an obstacle is inside the ROI ahead, an alarm is raised.
Camera occlusion is judged by sampling the disparity map in fixed regions. In this embodiment, six 30*30 sampling boxes are placed roughly over the road area and the part of the image just above it. The mean and variance of the disparity pixels inside the six boxes are computed; if either exceeds the threshold of 50, the camera is considered blocked. The threshold can be chosen in the range 20-70; in testing, 50 gave good results with no false alarms.
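The occlusion check above amounts to computing patch statistics on the disparity map. A minimal sketch, with the box layout and names being illustrative; a single threshold is used for both mean and variance as in this embodiment, while claim 9 allows separate thresholds t1 and t2:

```python
import numpy as np

def camera_blocked(disp, boxes, threshold=50):
    """Sample fixed boxes on the disparity map; if the mean or variance of
    the disparities in any box exceeds the threshold (50 in the embodiment,
    tunable in 20-70), something close is blocking the camera."""
    for (x, y, w, h) in boxes:
        patch = disp[y:y + h, x:x + w].astype(np.float64)
        if patch.mean() > threshold or patch.var() > threshold:
            return True
    return False
```

A nearby occluder produces uniformly large disparities (high mean), while partial smears tend to raise the variance, so checking both catches either case.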
The distance to each obstacle is then computed; this embodiment uses the perspective-transform method. Each obstacle already has the coordinates and size of its bounding box, and the ordinate of the box's bottom edge serves as the reference. Using a perspective-transform matrix prepared in advance, that point is mapped to its coordinate in a top-down view of the road, and the obstacle's distance is computed from a scale factor measured beforehand.
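The bottom-edge-to-distance mapping can be sketched as follows, assuming the precomputed perspective transform is available as a 3x3 homography H and the pre-measured scale is a metres-per-pixel factor in the top-down view. The names and conventions here are illustrative assumptions:

```python
import numpy as np

def distance_from_bottom_edge(H, u, v, scale_m_per_px, y_origin):
    """Map the bottom-edge point (u, v) of an obstacle box through the
    precomputed road homography H into the top-down view, then convert the
    top-down ordinate into metres with a pre-measured scale factor.
    y_origin is the top-down ordinate of the camera/crane baseline."""
    p = H @ np.array([u, v, 1.0])
    y_topdown = p[1] / p[2]  # normalize the homogeneous coordinate
    return (y_origin - y_topdown) * scale_m_per_px
```

In practice H would come from four measured road points (e.g. via OpenCV's getPerspectiveTransform); the identity matrix in the test below just exercises the arithmetic.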
The invention tracks obstacles by storing detections from several consecutive frames and issues early warnings accordingly. In this embodiment, the obstacle detections of the last 5 frames are stored. For each detection, an obstacle in an adjacent frame with the same class and a nearby position is treated as the same obstacle, and its approximate speed and direction of motion are computed from the change in its coordinates. If the obstacle is inside the pre-warning area and moving toward the ROI, an early warning is raised.
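The frame-to-frame association can be sketched as a nearest-neighbour match on class and box centre. This is a minimal illustration; the distance threshold and names are assumptions, and the "similarity" of claim 8 may combine further cues:

```python
from collections import deque

def match_and_velocity(history, det, max_dist=80):
    """Match a detection against the previous frame's detections (same
    class, nearest centre within max_dist pixels) and estimate per-frame
    velocity from the coordinate change.

    history: deque of per-frame lists of (cls, cx, cy) detections.
    det: the current detection (cls, cx, cy).
    Returns (vx, vy) in pixels per frame, or None if unmatched."""
    cls, cx, cy = det
    best, best_d = None, max_dist
    for pcls, px, py in (history[-1] if history else []):
        if pcls != cls:
            continue
        d = ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5
        if d < best_d:
            best, best_d = (px, py), d
    if best is None:
        return None
    return (cx - best[0], cy - best[1])
```

Velocities accumulated over the stored frames give the motion direction used in the pre-warning decision.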
Fig. 6 shows a demonstration of the detection results: the two obstacles, a person and a truck, are accurately detected and boxed in the image. Above each obstacle box, the first field is the obstacle class, the second is the class confidence, and the third is the computed distance of the obstacle from the camera, in meters.
In tests, the invention detects obstacles and raises alarms stably in most conditions, including daytime, night, and rain, and the computed obstacle distances are reasonably accurate. Common obstacles such as people, vehicles, toolboxes, and safety helmets are identified well. Camera occlusion is detected and alarmed reliably, and road-line detection is likewise accurate.
The present invention provides a binocular-vision-based obstacle detection method for container terminal yard cranes. There are many ways to implement this technical solution; the above is only a preferred embodiment of the invention. It should be noted that those of ordinary skill in the art may make various improvements and modifications without departing from the principle of the invention, and such improvements and modifications should also be regarded as falling within the protection scope of the invention. Any component not specified in this embodiment can be implemented with existing technology.
Claims (10)
1. A binocular-vision-based obstacle detection method for a container terminal yard crane, characterized by comprising the following steps:
Step 1: acquire images of the lane ahead of the traveling yard crane with a binocular camera, comprising a left image L and a right image R, and calibrate the left and right images to obtain the intrinsic parameters I of the left and right cameras;
Step 2: calibrate the left and right cameras to obtain the extrinsic parameters E of the cameras;
Step 3: for each video frame captured by the cameras, rectify the left image L and right image R with the obtained intrinsic parameters I and extrinsic parameters E, obtaining the rectified left image L2 and right image R2;
Step 4: for each video frame, or every n frames, perform road detection on the left image L or L2: detect straight lines with the Hough transform to obtain the two road lines, mark the region A1 between the two road lines, and extend it to the left and right to obtain the pre-warning area A2;
Step 5: perform obstacle detection on the rectified left and right images L2, R2 to obtain their disparity map D, and obtain for each obstacle its class C and the size B1 and coordinates B2 of the box outlining the obstacle;
Step 6: using the box coordinates B2 obtained in step 5, compute the distance J of each obstacle by the perspective-transform method or the binocular ranging method;
Step 7: based on the obstacle information C, B1, B2, D obtained in steps 5 and 6, perform obstacle tracking; if an obstacle is in the pre-warning area and moving toward the region between the road lines, issue an early warning;
Step 8: using the disparity map D obtained in step 5, perform occlusion detection to judge whether an obstacle is close enough to block the left or right camera, and if so, raise an alarm;
Step 9: using the obstacle information B1, B2 obtained in step 5, judge whether an obstacle is inside the region between the road lines, and if so, raise an alarm.
2. The method according to claim 1, wherein in step 1 the binocular camera is mounted in parallel, facing the lane along which the yard crane travels; the left and right imaging planes of the binocular camera are parallel and both cameras are fixed-focus; the left and right cameras are calibrated separately with a calibration board using Zhang's calibration method, obtaining the intrinsic parameters I1 of the left camera and the intrinsic parameters I2 of the right camera.
3. The method according to claim 2, wherein in step 2 the left and right cameras are calibrated with the stereoCalibrate function in OpenCV to obtain the extrinsic parameters E of the left and right cameras.
4. The method according to claim 3, wherein in step 3 the left and right images captured by the cameras are rectified using the intrinsic parameters I1, I2 and the extrinsic parameters E, obtaining the rectified left image L2 and right image R2.
5. The method according to claim 4, wherein in step 4 the image L or L2 is first converted to grayscale; edges of the grayscale image are detected with the Canny operator to obtain image L3; Hough-transform line detection is performed on L3, and the detected lines are screened by length and angle, those meeting the requirements being kept as the road lines; the region A1 between the road lines is then marked, and the road lines are extended outward by a certain distance on both sides to obtain the pre-warning area A2.
6. The method according to claim 5, wherein in step 5 the obstacle detection on the rectified images L2, R2 is divided into binocular-based detection and monocular deep-learning object detection. The binocular-based detection comprises the following steps:
Step 5-1-1: compute a disparity map D from the rectified images L2, R2 with a parallel SGM algorithm;
Step 5-1-2: segment the road-surface region from the disparity map D with the V-disparity method, and detect the obstacles, i.e. the size B1 and coordinates B2 of each obstacle box, with the U-disparity method;
Step 5-1-3: based on the disparity map D, pass each detected obstacle through an SVM to judge whether it is a planar region, so as to remove false detections.
The monocular deep-learning object detection comprises the following steps:
Step 5-2-1: detect objects with a trained object-detection model, obtaining the obstacle class information C and the size B1 and coordinates B2 of each obstacle box;
Step 5-2-2: merge the obstacle information obtained in steps 5-1-1 to 5-1-3 and step 5-2-1 to obtain the complete obstacle information.
7. The method according to claim 6, wherein in step 6, the perspective-transform method specifically comprises: first choose four points on the road lines and project them onto a rectangle by a perspective transform; then, from the ordinate Y1 of the bottom edge of the obstacle box coordinates B2, compute the ordinate Y2 of B2 after the perspective transform, and obtain the obstacle distance J by multiplying by a proportionality coefficient K;
the binocular ranging method specifically comprises: collect pixel statistics over the box B of each obstacle in the disparity map D to obtain its mode M, then obtain the obstacle distance J from the binocular ranging formula z = f·b/M, where f is the focal length of the camera, contained in the camera intrinsic parameters obtained in step 1; b is the baseline length, i.e. the distance between the optical centers of the two cameras; and z is the z-axis coordinate in the three-dimensional world coordinate system, i.e. the depth coordinate, which represents the distance.
8. The method according to claim 7, wherein in step 7 the obstacle information of n consecutive frames is stored; similarity is computed between obstacles in different frames, the most similar obstacles are treated as the same obstacle, and the box coordinates are taken as the obstacle's coordinates; the moving direction and speed of each obstacle over the n frames are thereby computed; if the center coordinate of an obstacle's box is in the pre-warning area and the obstacle is moving toward the region between the road lines, an early warning is issued.
9. The method according to claim 8, wherein in step 8, using the disparity map D obtained in step 5, m small boxes of fixed size are sampled on D and their mean and variance are computed; if any mean exceeds a threshold t1 or any variance exceeds a threshold t2, it is considered that an obstacle is blocking the left or right camera, and an alarm is raised.
10. The method according to claim 8, wherein in step 9 the judgment is made with the center coordinate of the obstacle box: if that coordinate falls inside the region between the road lines, it is considered that an obstacle is obstructing the RTG's travel, and an alarm is raised.
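The binocular ranging formula in claim 7 is the standard stereo depth relation z = f·b/d, with the box's disparity mode M playing the role of d. A minimal numeric sketch; the unit conventions and the guard clause are my additions:

```python
def binocular_range(f_px, baseline_m, disparity_px):
    """Stereo ranging: depth z = f * b / d, with f the focal length in
    pixels, b the baseline in metres (distance between the two optical
    centres), and d the disparity in pixels (here the mode M of the
    disparities inside the obstacle box)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px
```

For example, with a 1000 px focal length, a 0.12 m baseline, and a disparity mode of 24 px, the obstacle is about 5 m away.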
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811243435.3A CN109269478A (en) | 2018-10-24 | 2018-10-24 | A kind of container terminal based on binocular vision bridge obstacle detection method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811243435.3A CN109269478A (en) | 2018-10-24 | 2018-10-24 | A kind of container terminal based on binocular vision bridge obstacle detection method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109269478A true CN109269478A (en) | 2019-01-25 |
Family
ID=65194306
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811243435.3A Pending CN109269478A (en) | 2018-10-24 | 2018-10-24 | A kind of container terminal based on binocular vision bridge obstacle detection method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109269478A (en) |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110765970A (en) * | 2019-10-31 | 2020-02-07 | 北京地平线机器人技术研发有限公司 | Method and device for determining nearest obstacle, storage medium and electronic equipment |
CN110765922A (en) * | 2019-10-18 | 2020-02-07 | 华南理工大学 | AGV is with two mesh vision object detection barrier systems |
CN110864670A (en) * | 2019-11-27 | 2020-03-06 | 苏州智加科技有限公司 | Method and system for acquiring position of target obstacle |
CN111290386A (en) * | 2020-02-20 | 2020-06-16 | 北京小马慧行科技有限公司 | Path planning method and device and carrying tool |
CN111443704A (en) * | 2019-12-19 | 2020-07-24 | 苏州智加科技有限公司 | Obstacle positioning method and device for automatic driving system |
CN111551920A (en) * | 2020-04-16 | 2020-08-18 | 重庆大学 | Three-dimensional target real-time measurement system and method based on target detection and binocular matching |
CN111951334A (en) * | 2020-08-04 | 2020-11-17 | 郑州轻工业大学 | Identification and positioning method and lifting method for stacking steel billets based on binocular vision technology |
CN112115889A (en) * | 2020-09-23 | 2020-12-22 | 成都信息工程大学 | Intelligent vehicle moving target detection method based on vision |
CN112215794A (en) * | 2020-09-01 | 2021-01-12 | 北京中科慧眼科技有限公司 | Method and device for detecting dirt of binocular ADAS camera |
CN113283273A (en) * | 2020-04-17 | 2021-08-20 | 上海锐明轨交设备有限公司 | Front obstacle real-time detection method and system based on vision technology |
CN113674407A (en) * | 2021-07-15 | 2021-11-19 | 中国地质大学(武汉) | Three-dimensional terrain reconstruction method and device based on binocular vision image and storage medium |
CN114972541A (en) * | 2022-06-17 | 2022-08-30 | 北京国泰星云科技有限公司 | Tire crane three-dimensional anti-collision method based on three-dimensional laser radar and binocular camera fusion |
CN115690061A (en) * | 2022-11-08 | 2023-02-03 | 北京国泰星云科技有限公司 | Container terminal truck collection detection method based on vision |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103411536A (en) * | 2013-08-23 | 2013-11-27 | 西安应用光学研究所 | Auxiliary driving obstacle detection method based on binocular stereoscopic vision |
CN103745433A (en) * | 2013-12-05 | 2014-04-23 | 莱阳市科盾通信设备有限责任公司 | Vehicle safety auxiliary video image processing method |
CN104299219A (en) * | 2013-07-19 | 2015-01-21 | 株式会社理光 | Object detection method and device |
CN105225482A (en) * | 2015-09-02 | 2016-01-06 | 上海大学 | Based on vehicle detecting system and the method for binocular stereo vision |
CN105678787A (en) * | 2016-02-03 | 2016-06-15 | 西南交通大学 | Heavy-duty lorry driving barrier detection and tracking method based on binocular fisheye camera |
CN106228538A (en) * | 2016-07-12 | 2016-12-14 | 哈尔滨工业大学 | Binocular vision indoor orientation method based on logo |
CN106250816A (en) * | 2016-07-19 | 2016-12-21 | 武汉依迅电子信息技术有限公司 | A kind of Lane detection method and system based on dual camera |
CN107347151A (en) * | 2016-05-04 | 2017-11-14 | 深圳众思科技有限公司 | binocular camera occlusion detection method and device |
CN107389026A (en) * | 2017-06-12 | 2017-11-24 | 江苏大学 | A kind of monocular vision distance-finding method based on fixing point projective transformation |
CN107609486A (en) * | 2017-08-16 | 2018-01-19 | 中国地质大学(武汉) | To anti-collision early warning method and system before a kind of vehicle |
CN107767687A (en) * | 2017-09-26 | 2018-03-06 | 中国科学院长春光学精密机械与物理研究所 | Free parking space detection method and system based on binocular stereo vision |
CN108205658A (en) * | 2017-11-30 | 2018-06-26 | 中原智慧城市设计研究院有限公司 | Detection of obstacles early warning system based on the fusion of single binocular vision |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110765922A (en) * | 2019-10-18 | 2020-02-07 | 华南理工大学 | AGV is with two mesh vision object detection barrier systems |
CN110765922B (en) * | 2019-10-18 | 2023-05-02 | 华南理工大学 | Binocular vision object detection obstacle system for AGV |
CN110765970A (en) * | 2019-10-31 | 2020-02-07 | 北京地平线机器人技术研发有限公司 | Method and device for determining nearest obstacle, storage medium and electronic equipment |
CN110765970B (en) * | 2019-10-31 | 2022-08-09 | 北京地平线机器人技术研发有限公司 | Method and device for determining nearest obstacle, storage medium and electronic equipment |
CN110864670A (en) * | 2019-11-27 | 2020-03-06 | 苏州智加科技有限公司 | Method and system for acquiring position of target obstacle |
WO2021120574A1 (en) * | 2019-12-19 | 2021-06-24 | Suzhou Zhijia Science & Technologies Co., Ltd. | Obstacle positioning method and apparatus for autonomous driving system |
CN111443704A (en) * | 2019-12-19 | 2020-07-24 | 苏州智加科技有限公司 | Obstacle positioning method and device for automatic driving system |
CN111443704B (en) * | 2019-12-19 | 2021-07-06 | 苏州智加科技有限公司 | Obstacle positioning method and device for automatic driving system |
CN111290386B (en) * | 2020-02-20 | 2023-08-04 | 北京小马慧行科技有限公司 | Path planning method and device and carrier |
CN111290386A (en) * | 2020-02-20 | 2020-06-16 | 北京小马慧行科技有限公司 | Path planning method and device and carrying tool |
CN111551920A (en) * | 2020-04-16 | 2020-08-18 | 重庆大学 | Three-dimensional target real-time measurement system and method based on target detection and binocular matching |
CN113283273B (en) * | 2020-04-17 | 2024-05-24 | 上海锐明轨交设备有限公司 | Method and system for detecting front obstacle in real time based on vision technology |
CN113283273A (en) * | 2020-04-17 | 2021-08-20 | 上海锐明轨交设备有限公司 | Front obstacle real-time detection method and system based on vision technology |
CN111951334A (en) * | 2020-08-04 | 2020-11-17 | 郑州轻工业大学 | Identification and positioning method and lifting method for stacking steel billets based on binocular vision technology |
CN111951334B (en) * | 2020-08-04 | 2023-11-21 | 郑州轻工业大学 | Identification and positioning method and lifting method for stacked billets based on binocular vision technology |
CN112215794B (en) * | 2020-09-01 | 2022-09-20 | 北京中科慧眼科技有限公司 | Method and device for detecting dirt of binocular ADAS camera |
CN112215794A (en) * | 2020-09-01 | 2021-01-12 | 北京中科慧眼科技有限公司 | Method and device for detecting dirt of binocular ADAS camera |
CN112115889B (en) * | 2020-09-23 | 2022-08-30 | 成都信息工程大学 | Intelligent vehicle moving target detection method based on vision |
CN112115889A (en) * | 2020-09-23 | 2020-12-22 | 成都信息工程大学 | Intelligent vehicle moving target detection method based on vision |
CN113674407A (en) * | 2021-07-15 | 2021-11-19 | 中国地质大学(武汉) | Three-dimensional terrain reconstruction method and device based on binocular vision image and storage medium |
CN113674407B (en) * | 2021-07-15 | 2024-02-13 | 中国地质大学(武汉) | Three-dimensional terrain reconstruction method, device and storage medium based on binocular vision image |
CN114972541A (en) * | 2022-06-17 | 2022-08-30 | 北京国泰星云科技有限公司 | Tire crane three-dimensional anti-collision method based on three-dimensional laser radar and binocular camera fusion |
CN114972541B (en) * | 2022-06-17 | 2024-01-26 | 北京国泰星云科技有限公司 | Tire crane stereoscopic anti-collision method based on fusion of three-dimensional laser radar and binocular camera |
CN115690061A (en) * | 2022-11-08 | 2023-02-03 | 北京国泰星云科技有限公司 | Container terminal truck collection detection method based on vision |
CN115690061B (en) * | 2022-11-08 | 2024-01-05 | 北京国泰星云科技有限公司 | Vision-based container terminal truck collection detection method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109269478A (en) | A kind of container terminal based on binocular vision bridge obstacle detection method | |
CN107031623B (en) | A kind of road method for early warning based on vehicle-mounted blind area camera | |
CN107463890B (en) | A kind of Foregut fermenters and tracking based on monocular forward sight camera | |
US20200285864A1 (en) | Barrier and guardrail detection using a single camera | |
CN106778593A (en) | A kind of track level localization method based on the fusion of many surface marks | |
CN105835880B (en) | Lane following system | |
EP1671216B1 (en) | Moving object detection using low illumination depth capable computer vision | |
CN106096525A (en) | A kind of compound lane recognition system and method | |
US20100322476A1 (en) | Vision based real time traffic monitoring | |
CN104129389A (en) | Method for effectively judging and recognizing vehicle travelling conditions and device thereof | |
KR20140109990A (en) | Determining a vertical profile of a vehicle environment by means of a 3d camera | |
CN108364466A (en) | A kind of statistical method of traffic flow based on unmanned plane traffic video | |
CN106324618A (en) | System for detecting lane line based on laser radar and realization method thereof | |
CN109635737A (en) | Automobile navigation localization method is assisted based on pavement marker line visual identity | |
Sehestedt et al. | Efficient lane detection and tracking in urban environments | |
CN113518995A (en) | Method for training and using neural networks to detect self-component position | |
Yoneda et al. | Simultaneous state recognition for multiple traffic signals on urban road | |
JP2007280387A (en) | Method and device for detecting object movement | |
Philipsen et al. | Day and night-time drive analysis using stereo vision for naturalistic driving studies | |
Álvarez et al. | Perception advances in outdoor vehicle detection for automatic cruise control | |
Seo et al. | Use of a monocular camera to analyze a ground vehicle’s lateral movements for reliable autonomous city driving | |
Ben Romdhane et al. | A lane detection and tracking method for driver assistance system | |
Alcantarilla et al. | Automatic daytime road traffic control and monitoring system | |
Suganuma et al. | Fast dynamic object extraction using stereovision based on occupancy grid maps and optical flow | |
Wu et al. | A vision-based collision warning system by surrounding vehicles detection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20190125 |